<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: muckitymuck</title>
    <description>The latest articles on DEV Community by muckitymuck (@muckitymuck).</description>
    <link>https://dev.to/muckitymuck</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F445473%2Fb8eb48c6-6578-4b4f-8ece-a9e59bede71c.jpeg</url>
      <title>DEV Community: muckitymuck</title>
      <link>https://dev.to/muckitymuck</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/muckitymuck"/>
    <language>en</language>
    <item>
      <title>Simple Express server</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Tue, 17 Jan 2023 17:40:53 +0000</pubDate>
      <link>https://dev.to/muckitymuck/simple-express-server-3d4l</link>
      <guid>https://dev.to/muckitymuck/simple-express-server-3d4l</guid>
      <description>&lt;p&gt;demo:&lt;a href="https://github.com/muckitymuck/simpleServer"&gt;simpleServer&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const express = require('express') // including the express library
const app = express() // creating an express application instance
const port = 3123 // setting the port for the app to run on

const cluster = require('cluster') // including the cluster library
const numCPUs = require('os').cpus().length; // getting the number of CPUs available on the machine

if (cluster.isMaster) { // isMaster is a deprecated alias of isPrimary in Node 16+
  console.log(`Master ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i &amp;lt; numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) =&amp;gt; {
    console.log(`worker ${worker.process.pid} died`);
  });
} else if (cluster.isWorker) {
  // worker processes log their id here, then continue to the server code below
  console.log(`Client ID: ${cluster.worker.id} running`)
}

// worker id (empty string in the master process), declared at module scope
// so the route handlers below can see it
const clientid = cluster.isWorker ? cluster.worker.id : ''

const bodyParser = require("body-parser"); // including the body-parser library
app.use(bodyParser.json()); // parsing JSON request bodies (Express 4.16+ bundles this as express.json())

app.get('/', (req, res) =&amp;gt; {
    const timestamp = new Date().toISOString(); // timestamp generated per request, not once at startup
    console.log(`OK: ${timestamp}`) // logging the timestamp of the request

    console.log(`data received: ${JSON.stringify(req.body)}`) // logging the data received in the request
    console.log(`${clientid}:${timestamp}:${JSON.stringify(req.body)}`) // logging the clientid, timestamp and data (stringified so an object doesn't print as [object Object])
    res.send(`OK: ${timestamp}`) // sending a response to the client

})
// note: middleware only applies to routes registered after it, so these CORS
// headers cover the POST route below but not the GET route above
app.use((req, res, next) =&amp;gt; {
    res.header("Access-Control-Allow-Origin", "*"); // allowing all origins to access the server
    res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept"); // allowing certain headers
    next();
});
app.post('/', ( req, res ) =&amp;gt; {
    console.log(req.body) // logging the data received in post request
    res.json({
        message: `OK ${new Date().toISOString()}`, // sending back a message with the current timestamp
        data: req.body // sending back the data received in the request
    });
})

app.listen(port, () =&amp;gt; {
    console.log(`example app listening on port ${port}`) // logging that the server is listening on a specific port
})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AI Generated:&lt;/p&gt;

&lt;p&gt;In this code, we are using the cluster module in Node.js to create a cluster of worker processes that listen on the same port and handle incoming requests. The cluster module allows for easy creation of child processes that share server ports, which can greatly improve the performance of a Node.js application.&lt;/p&gt;

&lt;p&gt;The first thing we do is require the cluster and os modules. Then, we use os.cpus().length to get the number of CPU cores on the machine, which we will use to determine how many worker processes to create.&lt;/p&gt;

&lt;p&gt;Next, we check if the current process is the master process using cluster.isMaster. If it is, we log that the master process is running and then use a for loop to create worker processes equal to the number of CPU cores. We also set up an event listener for when a worker process exits, which logs the worker's process ID.&lt;/p&gt;

&lt;p&gt;If the current process is not the master process, we then set a variable clientid to be an empty string. Then we check if the process is a worker with cluster.isWorker. If it is, we log that the worker process with the corresponding ID is running and set clientid to be the worker's ID.&lt;/p&gt;

&lt;p&gt;After that, we use the body-parser middleware to parse the incoming request body as JSON. Then, we set up routes for handling GET and POST requests to the server. For GET requests, we log the timestamp and any data received in the request body. For POST requests, we log the request body and send a JSON response containing a message with the current timestamp and the data received in the request.&lt;/p&gt;
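&lt;p&gt;The clientid:timestamp:data log line can be isolated as a small pure helper. A sketch (the function name is mine) that also shows why the body needs JSON.stringify, since a plain object in a template literal renders as [object Object]:&lt;/p&gt;

```javascript
// builds the "clientid:timestamp:data" log line used by the GET handler
function formatLogLine(clientId, timestamp, body) {
  // JSON.stringify, because `${body}` on a plain object prints "[object Object]"
  return `${clientId}:${timestamp}:${JSON.stringify(body)}`;
}

console.log(formatLogLine(2, '2023-01-17T17:40:53.000Z', { name: 'test' }));
```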

&lt;p&gt;Finally, we have the server listen on the specified port.&lt;/p&gt;

&lt;p&gt;In short, this code uses the Node.js cluster module to create worker processes that listen on the same port and handle incoming requests. This improves performance by allowing multiple requests to be handled simultaneously. The code also logs the worker ID and any data received in the request body, which can be useful for debugging and for keeping track of which worker is handling which requests.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>node</category>
    </item>
    <item>
      <title>ChatGPT pair programming</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Mon, 09 Jan 2023 20:04:13 +0000</pubDate>
      <link>https://dev.to/muckitymuck/chatgpt-pair-programming-2433</link>
      <guid>https://dev.to/muckitymuck/chatgpt-pair-programming-2433</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/muckitymuck/OpenAI-Mystery-Generator"&gt;github&lt;/a&gt;&lt;br&gt;
&lt;a href="https://open-ai-mystery-generator.vercel.app/"&gt;live demo&lt;/a&gt;&lt;br&gt;
I know it's been a bit (too long) since I wrote a blog on what I was doing.  Recently, I was hired for an OpenAI project and started researching to prepare myself.  The best way to learn is to do, so I started building my own OpenAI project.&lt;br&gt;
This project is a small-scope generative AI text service that writes the opening setting of a crime mystery novel in the chosen style of famous authors, constrained to locations entered by the user.  For example, if you enter a city like NYC, it will set the crime in a typical setting like a back alley, subway station, or Manhattan penthouse.  A temperature setting also affects how random the generated setting is.  The author style setting affects not only the location but also the characters that appear in the setting.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SFh-TDpJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gj2pbdr4cf3guq5mfvaj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SFh-TDpJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gj2pbdr4cf3guq5mfvaj.PNG" alt="OpenAI Mystery demo" width="880" height="474"&gt;&lt;/a&gt;&lt;br&gt;
As a bonus to the project, I worked alongside ChatGPT to fill out the app's basic code.  I would say Chat got it about 80% right in any given output.  It understood how the backend works with the frontend, down to naming conventions, but misused syntax would randomly pop in.&lt;/p&gt;



&lt;p&gt;In this conversation, we worked on a project that is a web application that uses OpenAI's GPT-3 API to generate mystery scenes in different writing styles. The project was built using a combination of Next.js, the OpenAI API, React, Node.js, and Express.&lt;/p&gt;

&lt;p&gt;We first added a range slider to the web page that allows the user to change the temperature of the GPT-3 model. We also added a display on the webpage to show the current temperature.&lt;/p&gt;

&lt;p&gt;Next, we modified the generate.js file to include the value set on the range slider as the temperature in the request to the OpenAI API.&lt;/p&gt;
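&lt;p&gt;Because the range slider delivers its value as a string, it helps to parse and clamp it to the API's 0&amp;ndash;1 range before sending. A minimal sketch (function name and fallback value are my own):&lt;/p&gt;

```javascript
// parse a slider value (string or number) into a temperature in [0, 1]
function parseTemperature(raw, fallback = 0.5) {
  const t = Number(raw);
  if (Number.isFinite(t) === false) return fallback; // non-numeric input
  return Math.min(1, Math.max(0, t));                // clamp into range
}

console.log(parseTemperature('0.7')); // 0.7
console.log(parseTemperature('abc')); // 0.5
```

Note that clamping with Math.min/Math.max keeps a legitimate slider value of 0, which a simple `Number(raw) || fallback` pattern would silently discard.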

&lt;p&gt;We also added a dropdown menu to the frontend to allow the user to switch the writing style in the prompt. To do this, we added a state variable to store the value of the selected style and passed it to the value prop of the select element.&lt;/p&gt;

&lt;p&gt;Finally, we wrote a README.md file for the project, which provides an overview of the technology used and instructions for running the project locally.&lt;/p&gt;

&lt;p&gt;Overall, this conversation demonstrated how to use OpenAI's GPT-3 API to generate text in different styles and how to incorporate user input in the form of a range slider and dropdown menu in a web application.&lt;/p&gt;

&lt;p&gt;generate.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export default async function (req, res) {
  if (!configuration.apiKey) {
    res.status(500).json({
      error: {
        message: "OpenAI API key not configured, please follow instructions in README.md",
      }
    });
    return;
  }

  const scene = req.body.scene || '';
  if (scene.trim().length === 0) {
    res.status(400).json({
      error: {
        message: "Please enter a valid location",
      }
    });
    return;
  }

  try {
    const parsed = Number(req.body.temperature);
    const temperature = Number.isFinite(parsed) ? parsed : 0.5; // keep 0; `|| 0.5` would discard a slider value of 0
    const completion = await openai.createCompletion({
      model: "text-davinci-003",
      prompt: generatePrompt(scene, req.body.style, req.body.character),
      max_tokens: 200,
      temperature: temperature,
    });
    res.status(200).json({ result: completion.data.choices[0].text });
  } catch(error) {
    // Consider adjusting the error handling logic for your use case
    if (error.response) {
      console.error(error.response.status, error.response.data);
      res.status(error.response.status).json(error.response.data);
    } else {
      console.error(`Error with OpenAI API request: ${error.message}`);
      res.status(500).json({
        error: {
          message: 'An error occurred during your request.',
        }
      });
    }
  }
}

function generatePrompt(scene, style, character) {

  let prompt = `Suggest a setting and initial characters with names the cops meet at a crime scene in the 
  style of famous authors`

  // every style branch of the original if/else chain appended the same block,
  // so a single template covers all of them
  prompt += 
  `
  Setting: ${scene}
  Character: ${character}
  style: ${style}
  `;

  return prompt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The generate.js file is a server-side file that exports a function that handles an HTTP POST request to the /api/generate endpoint. When this function is called, it does the following:&lt;/p&gt;

&lt;p&gt;It first checks whether the OpenAI API key is configured. If it is not configured, it sends a JSON response with an error message to the client.&lt;br&gt;
If the API key is configured, it retrieves the scene value from the request body. If the scene value is not provided or is an empty string, it sends a JSON response with an error message to the client.&lt;br&gt;
If the scene value is provided, it creates a completion using the OpenAI API, passing in the model, prompt, max_tokens, and temperature values as parameters.&lt;br&gt;
If the completion is created successfully, it sends a JSON response with the result of the completion to the client.&lt;br&gt;
If there is an error during the request to the OpenAI API, it sends a JSON response with an error message to the client.&lt;br&gt;
This function is used to generate a mystery scene based on the scene input provided by the client and send the generated text back to the client as a response.&lt;br&gt;
In the generate.js file, prompt is a parameter passed to the createCompletion method of the OpenAI API. It represents the text that the API should use as a starting point for generating text.&lt;/p&gt;

&lt;p&gt;In the code provided, prompt is generated by calling the generatePrompt function and passing it the scene value as a parameter. The generatePrompt function returns a string that prompts the GPT-3 model to generate a mystery scene, with a setting and initial characters with names that the police meet at a crime scene, in the style of one of the authors offered in the dropdown (Arthur Canon Doyle, Agatha Christie, Raymond Chandler, Dashiell Hammet, or P.D. James).&lt;/p&gt;

&lt;p&gt;The prompt value is used by the OpenAI API as the starting point for generating text, which is then returned to the client as a response.&lt;/p&gt;

&lt;p&gt;index.js&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Head from "next/head";
import { useState } from "react";
import styles from "./index.module.css";


export default function Home() {
  const [temperature, setTemperature] = useState(0.5);
  const [sceneInput, setSceneInput] = useState("");
  const [style, setStyle] = useState("");
  const [result, setResult] = useState();
  console.log(result);

  async function onSubmit(event) {
    event.preventDefault();
    try {
      const response = await fetch("/api/generate", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ 
          scene: sceneInput,
          temperature: temperature, 
          style: style, // send the selected author style along with the scene
        }),
      });

      const data = await response.json();
      if (response.status !== 200) {
        throw data.error || new Error(`Request failed with status ${response.status}`);
      }

      setResult(data.result);
      setSceneInput("");
    } catch(error) {
      // Consider implementing your own error handling logic here
      console.error(error);
      alert(error.message);
    }
  }

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;Head&amp;gt;
        &amp;lt;title&amp;gt;OpenAI Mystery Generator&amp;lt;/title&amp;gt;
        &amp;lt;link rel="icon" href="/magglass.png" /&amp;gt;
      &amp;lt;/Head&amp;gt;

      &amp;lt;main className={styles.main}&amp;gt;
        &amp;lt;img src="/magglass.png" className={styles.icon} /&amp;gt;
        &amp;lt;h3&amp;gt;Set a Mystery&amp;lt;/h3&amp;gt;
        &amp;lt;form onSubmit={onSubmit}&amp;gt;
          &amp;lt;input
            type="text"
            name="scene"
            placeholder="Enter a location"
            value={sceneInput}
            onChange={(e) =&amp;gt; setSceneInput(e.target.value)}
          /&amp;gt;
          {/* Add the range slider */}
          &amp;lt;label htmlFor="temperature"&amp;gt;Temperature:&amp;lt;/label&amp;gt;
          &amp;lt;input
            type="range"
            min="0"
            max="1"
            step="0.1"
            id="temperature"
            value={temperature}
            onChange={(e) =&amp;gt; setTemperature(e.target.value)}
          /&amp;gt;
          &amp;lt;span&amp;gt;{temperature}&amp;lt;/span&amp;gt;

          {/* Add a Styles dropdown menu */}
          &amp;lt;label htmlFor="style"&amp;gt;Style:&amp;lt;/label&amp;gt;
          &amp;lt;select id="style" value="{style}" onChange={(e) =&amp;gt; setStyle(e.target.value)}&amp;gt;
            &amp;lt;option value="Arthur Canon Doyle"&amp;gt;Arthur Canon Doyle&amp;lt;/option&amp;gt;
            &amp;lt;option value="Agatha Christie"&amp;gt;Agatha Christie&amp;lt;/option&amp;gt;
            &amp;lt;option value="Raymond Chandler"&amp;gt;Raymond Chandler&amp;lt;/option&amp;gt;
            &amp;lt;option value="Dashiell Hammet"&amp;gt;Dashiell Hammet&amp;lt;/option&amp;gt;
            &amp;lt;option value="P.D. James"&amp;gt;P.D. James&amp;lt;/option&amp;gt;
          &amp;lt;/select&amp;gt;
          &amp;lt;span id="selectStyle"&amp;gt;{style}&amp;lt;/span&amp;gt;

          &amp;lt;input type="submit" value="Generate scene" /&amp;gt;

        &amp;lt;/form&amp;gt;
        &amp;lt;div className={styles.result}&amp;gt;{result}&amp;lt;/div&amp;gt;



      &amp;lt;/main&amp;gt;
    &amp;lt;/div&amp;gt;
  );
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The index.js file exports a functional component that represents the home page of the web application. The component is built from the following pieces:&lt;/p&gt;

&lt;p&gt;Head: This is a Next.js component that allows you to add elements to the head of the document. In this case, it is used to add a title and a link to a favicon to the head of the document.&lt;/p&gt;

&lt;p&gt;useState: This is a React hook that allows you to add state to functional components. In this case, it is used to add four state variables: temperature, sceneInput, style, and result. sceneInput stores the value of the text input field, temperature and style store the slider and dropdown selections, and result stores the result of the API request.&lt;/p&gt;

&lt;p&gt;onSubmit: This is an async function that is called when the form is submitted. It prevents the default form submission behavior, makes a POST request to the /api/generate endpoint with the sceneInput value, and stores the response in the result state variable. If there is an error during the request, it logs the error and displays an alert.&lt;/p&gt;
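&lt;p&gt;The status check inside onSubmit can be captured as a tiny pure helper. A sketch with a hypothetical name, mirroring the throw data.error || new Error(...) pattern:&lt;/p&gt;

```javascript
// mirrors the response handling in onSubmit: non-200 responses throw,
// successful ones yield the generated text
function unwrapResponse(status, data) {
  if (status !== 200) {
    throw data.error || new Error(`Request failed with status ${status}`);
  }
  return data.result;
}

console.log(unwrapResponse(200, { result: 'A foggy pier at midnight...' }));
```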

&lt;p&gt;The main layout of the page consists of an image, a form with a text input field and a submit button, and a div element to display the result of the API request. The form has an onSubmit event handler that calls the onSubmit function, and the text input field has an onChange event handler that updates the sceneInput state variable with the current value of the input field.&lt;/p&gt;

&lt;p&gt;Overall, this component allows the user to enter a location in the text input field, submit the form, and display the result of the API request on the page.&lt;/p&gt;




&lt;p&gt;The above wasn't just code generated by ChatGPT; the descriptions were also its output.  One of the best features of working with Chat is its ability to explain code and answer questions.&lt;br&gt;&lt;br&gt;
It also feels very personal.  The interface is seductive to use.  I even find myself giving it the nickname "Chat", as though it has a name.&lt;br&gt;&lt;br&gt;
I encourage you to give it a try on a project, if only to see the novelty.  Below you can find my project details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/muckitymuck/OpenAI-Mystery-Generator"&gt;github&lt;/a&gt;&lt;br&gt;
&lt;a href="https://open-ai-mystery-generator.vercel.app/"&gt;live demo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>nextjs</category>
    </item>
    <item>
      <title>Prometheus and Grafana, Part 2</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Mon, 22 Nov 2021 20:13:46 +0000</pubDate>
      <link>https://dev.to/muckitymuck/prometheus-and-grafana-part-2-47bg</link>
      <guid>https://dev.to/muckitymuck/prometheus-and-grafana-part-2-47bg</guid>
      <description>&lt;p&gt;The Prometheus.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

scrape_configs:
  # The job name is added as a label `job=&amp;lt;job_name&amp;gt;` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    metrics_path: /metrics

    static_configs:
      - targets:
        - &amp;lt;PUBLIC IP&amp;gt;:9100
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is a good starting place for a monitoring system for Linux boxes.&lt;/p&gt;

&lt;p&gt;Running the services in Docker is also advised, as it simplifies the rollout.&lt;/p&gt;

&lt;p&gt;Docker node exporter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker run --restart always -d --net="host" --pid="host"  -v "/:/host:ro,rslave"  prom/node-exporter:latest  --path.rootfs=/host
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Docker prometheus:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo docker service create --replicas 1 --name dockerprometheus     --mount type=bind,source=/home/ubuntu/prometheus/prometheus.yml,destination=/etc/prometheus/prometheus.yml     --publish published=9090,target=9090,protocol=tcp     prom/prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you get around to making panels in Grafana, these queries are very useful:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;100 - ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) /            node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
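&lt;p&gt;The arithmetic behind that panel query is simple enough to sanity-check by hand: used percent is 100 minus the available share. A sketch in JavaScript (the PromQL itself runs inside Prometheus; this just mirrors the formula):&lt;/p&gt;

```javascript
// mirrors the Grafana disk panel query:
// 100 - (node_filesystem_avail_bytes * 100 / node_filesystem_size_bytes)
function diskUsedPercent(availBytes, sizeBytes) {
  return 100 - (availBytes * 100) / sizeBytes;
}

console.log(diskUsedPercent(25, 100)); // 75
```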





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;node_filesystem_avail_bytes

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or you could just set up a template for Node Exporter:&lt;br&gt;
To import a new dashboard for Node Exporter, go to Dashboards -&amp;gt; Manage&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foije3p8c31dk0q4e9l1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foije3p8c31dk0q4e9l1r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;405&lt;/strong&gt; for the Grafana ID.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmznb1185qy8e232p8a2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmznb1185qy8e232p8a2o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Confirm the name and choose a Data Source for the dashboard.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83rpzsyv4arll5w16mz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83rpzsyv4arll5w16mz8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make an alert, start a NEW dashboard and NEW panel. Template dashboards like above will not work. &lt;/p&gt;

&lt;p&gt;Enter the query for the item you want to track and press the Alert tab.&lt;br&gt;
That should do it for a basic monitoring service.&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>linux</category>
    </item>
    <item>
      <title>Boto3: Connecting your AWS with Python, EC2 edition</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Mon, 22 Nov 2021 16:24:10 +0000</pubDate>
      <link>https://dev.to/muckitymuck/boto3-connecting-your-aws-with-python-ec2-edition-na4</link>
      <guid>https://dev.to/muckitymuck/boto3-connecting-your-aws-with-python-ec2-edition-na4</guid>
      <description>&lt;p&gt;One way to automate your AWS administration is using Python 3 and its Boto3 library.  It can be more secure if you make an IAM role with limited programmatic access and temporary credentials to limit session time.&lt;br&gt;
It is possible to pull information out of EC2 instances regarding services, configs, or settings.  We will go through the complete script below and explain the different parts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

sts_client = boto3.client('sts')
assumed_role_object=sts_client.assume_role(RoleArn="arn:aws:iam::&amp;lt;ACCOUNTNUMBER&amp;gt;:role/AuthorizedRole",RoleSessionName="AssumeRoleSession1")
credentials=assumed_role_object['Credentials']
client = boto3.client('servicediscovery')  # not used below

ec2_resource=boto3.resource('ec2',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
)
productionbox = []
intbox = []
for instance in ec2_resource.instances.all():

        print(instance.id)

        public_ip = instance.public_ip_address or "no public IP"  # None for stopped or private instances
        for box in instance.tags or []:  # tags can be None on untagged instances
                if box['Key'] == 'environment' and box['Value'] == 'production':
                        productionbox.append(instance.id + " : " + public_ip)
                elif box['Key'] == 'environment' and box['Value'] != 'production':
                        intbox.append(instance.id + " : " + public_ip)
print("Critical: Production")
print(productionbox)
print("Other Boxes")
print(intbox)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the first chunk deals with the initial connection settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sts_client = boto3.client('sts')
assumed_role_object=sts_client.assume_role(RoleArn="arn:aws:iam::&amp;lt;ACCOUNTNUMBER&amp;gt;:role/AuthorizedRole",RoleSessionName="AssumeRoleSession1")
credentials=assumed_role_object['Credentials']
client = boto3.client('servicediscovery')  # not used below
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;STS is the &lt;a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html"&gt;AWS Security Token Service (STS)&lt;/a&gt;.  It issues temporary, limited-privilege credentials for the session.&lt;br&gt;&lt;br&gt;
Using STS, you can call assume_role to take on an authorized role and give the session a RoleSessionName.&lt;br&gt;
From there, you can store the credentials of the assumed role for use later.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ec2_resource=boto3.resource('ec2',
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken'],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates the EC2 resource using the temporary credentials.  These keys and tokens change every time you connect, which preserves security.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;productionbox = []
intbox = []
for instance in ec2_resource.instances.all():

        print(instance.id)

        public_ip = instance.public_ip_address or "no public IP"  # None for stopped or private instances
        for box in instance.tags or []:  # tags can be None on untagged instances
                if box['Key'] == 'environment' and box['Value'] == 'production':
                        productionbox.append(instance.id + " : " + public_ip)
                elif box['Key'] == 'environment' and box['Value'] != 'production':
                        intbox.append(instance.id + " : " + public_ip)
print("Critical: Production")
print(productionbox)
print("Other Boxes")
print(intbox)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This part is pretty straightforward.  ec2_resource.instances.all() lets you iterate over all the EC2 instances and filter what you need.  The rest of the script goes through the tags on each instance and collects the instances into separate lists: the production environment and the other environments.  You can go further and separate them by type or other settings.&lt;/p&gt;
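&lt;p&gt;The partitioning step is independent of Boto3 itself. Sketched here in JavaScript with hypothetical field names (id, publicIp, tags), it is just a split over the environment tag, with a guard for instances that have no public IP:&lt;/p&gt;

```javascript
// splits instances into production and non-production buckets based on an
// "environment" tag; field names (id, publicIp, tags) are illustrative only
function partitionByEnvironment(instances) {
  const production = [];
  const other = [];
  for (const inst of instances) {
    const envTag = (inst.tags || []).find((t) => t.Key === 'environment');
    if (envTag === undefined) continue; // untagged instances are skipped
    const label = `${inst.id} : ${inst.publicIp || 'no public IP'}`;
    if (envTag.Value === 'production') production.push(label);
    else other.push(label);
  }
  return { production, other };
}

const demo = partitionByEnvironment([
  { id: 'i-111', publicIp: '1.2.3.4', tags: [{ Key: 'environment', Value: 'production' }] },
  { id: 'i-222', publicIp: null, tags: [{ Key: 'environment', Value: 'staging' }] },
]);
console.log(demo.production); // [ 'i-111 : 1.2.3.4' ]
console.log(demo.other);      // [ 'i-222 : no public IP' ]
```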

&lt;p&gt;Hope this helps someone out there.&lt;/p&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Github Actions: Build+Test in Github VM, Deploy on Cloud</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Tue, 16 Nov 2021 20:07:28 +0000</pubDate>
      <link>https://dev.to/muckitymuck/github-actions-buildtest-in-gitubvm-deploy-on-cloud-545j</link>
      <guid>https://dev.to/muckitymuck/github-actions-buildtest-in-gitubvm-deploy-on-cloud-545j</guid>
      <description>&lt;p&gt;An interesting new way to build and test in the Github environment before deploying is possible using Artifacts.&lt;br&gt;&lt;br&gt;
It involves splitting your workflow yml into two jobs.&lt;br&gt;&lt;br&gt;
You will use&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;actions/upload-artifact@v2
actions/download-artifact@v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;as a means to pass the artifact between the jobs.&lt;br&gt;
Here, I am using NodeJS for the build, so all I need to move is the dist folder, minus the Markdown files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- name: Archive production artifacts
        uses: actions/upload-artifact@v2
        with:
          name: dist-without-markdown
          path: |
            dist
            !dist/**/*.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You also need to remember to name the artifact, in case the workflow produces multiple artifacts.&lt;/p&gt;

&lt;p&gt;Finished workflow is below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: CI

# Controls when the workflow will run
on: 
  push:
    branches: [prod]

jobs:
  build:
    runs-on: ubuntu-20.04      
    steps:
      ### new workflow artifact
      - name: Make envfile
        uses: SpicyPizza/create-envfile@v1
        with:
          envkey_DEBUG: false
          envkey_SECRET_KEY: ${{ secrets.ENV_FILE }}
          file_name: .env
      - name: Checkout respository
        uses: actions/checkout@v2
      - name: npm install, build and test
        run: |
          npm install
          npm run build
          npm test 

      - name: Archive production artifacts
        uses: actions/upload-artifact@v2
        with:
          name: dist-without-markdown
          path: |
            dist
            !dist/**/*.md
  deploy:
    # needs makes this job wait for the build job to finish
    needs: build
    runs-on: self-hosted 
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Make envfile
        uses: SpicyPizza/create-envfile@v1
        with:
          envkey_DEBUG: false
          envkey_SECRET_KEY: ${{ secrets.ENV_FILE }}
          file_name: .env

      - name: Download production artifacts
        uses: actions/download-artifact@v2
        with:
          name: dist-without-markdown
          path: /path/to/deploy/dist
      # Each run step starts in a fresh shell, so a lone `cd` step
      # does nothing; chain it with the commands that need it.
      - run: |
          cd /path/to/deploy
          npm ci
          pm2 restart PROCESSNAME

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>github</category>
      <category>cloud</category>
      <category>devops</category>
      <category>npm</category>
    </item>
    <item>
      <title>Github Actions, env file creation, docker silliness</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Fri, 12 Nov 2021 16:20:32 +0000</pubDate>
      <link>https://dev.to/muckitymuck/github-actions-env-file-creation-docker-silliness-504k</link>
      <guid>https://dev.to/muckitymuck/github-actions-env-file-creation-docker-silliness-504k</guid>
      <description>&lt;p&gt;To trigger on a push to the branch you want to target, make a workflow yml file and include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on: 
  push:
    branches: [prod]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add an env file to your build when you deploy via Github Actions and still protect the data in your Secrets.&lt;br&gt;&lt;br&gt;
First, take your .env file and put its contents in Secrets under any name (here it is ENV_FILE).&lt;br&gt;
Then add this to your workflow.yml in the build area:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;      - name: Make envfile
        uses: SpicyPizza/create-envfile@v1
        with:
          envkey_DEBUG: false
          envkey_SECRET_KEY: ${{ secrets.ENV_FILE }}
          file_name: .env
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If along the way you get this error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try these (note: chmod 666 on the Docker socket is a quick but insecure workaround):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo groupadd docker
sudo usermod -aG docker ${USER}
sudo chmod 666 /var/run/docker.sock
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
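
&lt;p&gt;The group change only applies to new login sessions; to pick it up in the current shell without logging out and back in, you can also try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newgrp docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;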



</description>
      <category>docker</category>
      <category>npm</category>
    </item>
    <item>
      <title>Prometheus Additional Configs</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Tue, 19 Oct 2021 21:55:30 +0000</pubDate>
      <link>https://dev.to/muckitymuck/prometheus-additional-configs-1am4</link>
      <guid>https://dev.to/muckitymuck/prometheus-additional-configs-1am4</guid>
      <description>&lt;p&gt;This is for the prometheus.service unit that was created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
User=ubuntu
Group=ubuntu
Type=simple
ExecStart=/usr/bin/prometheus \
    --config.file /etc/prometheus/prometheus.yml \
    --storage.tsdb.path /var/lib/prometheus/ \
    --storage.tsdb.retention=2d \
    --web.console.templates=/etc/prometheus/consoles \
    --web.console.libraries=/etc/prometheus/console_libraries

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;&lt;em&gt;--storage.tsdb.path&lt;/em&gt;&lt;/strong&gt; sets where the time-series data is stored&lt;br&gt;
&lt;strong&gt;&lt;em&gt;--storage.tsdb.retention&lt;/em&gt;&lt;/strong&gt; sets how long the data is kept (newer Prometheus releases use &lt;em&gt;--storage.tsdb.retention.time&lt;/em&gt; instead)&lt;/p&gt;

&lt;p&gt;Other settings in prometheus.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;global:
  scrape_interval: 15s
  evaluation_interval: 15s
# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
    - "/etc/prometheus/mem_over.yml" # An alert rule, noted below

scrape_configs:
  - job_name: 'prometheus'
    # scrape frequency
    scrape_interval: 1m
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    # scrape frequency
    scrape_interval: 1m
    static_configs:
      - targets: ['localhost:9100']

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Rule that can send alerts under conditions, mem_over.yml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;groups:
- name: Memory alarm rules
  rules:
  - alert: MemoryUsageAlarm # alert names must be valid metric names, so no spaces
    expr: (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes+node_memory_Buffers_bytes+node_memory_Cached_bytes )) / node_memory_MemTotal_bytes * 100 &amp;gt; 80
    for: 1m
    labels:
      user: prometheus
      severity: warning
    annotations:
      description: "Server memory usage is over 80%! (Current value: {{ $value }}%)"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
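
&lt;p&gt;You can sanity-check a rule file and the main config before reloading Prometheus (promtool ships in the Prometheus tarball):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;promtool check rules /etc/prometheus/mem_over.yml
promtool check config /etc/prometheus/prometheus.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;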



</description>
      <category>monitoring</category>
      <category>linux</category>
    </item>
    <item>
      <title>AWS S3, Cloudfront, Certificates, and GoDaddy?</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Fri, 15 Oct 2021 19:36:28 +0000</pubDate>
      <link>https://dev.to/muckitymuck/aws-s3-cloudfront-certificates-and-godaddy-4cbp</link>
      <guid>https://dev.to/muckitymuck/aws-s3-cloudfront-certificates-and-godaddy-4cbp</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnqa8umgg7i5kuq06wjy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftnqa8umgg7i5kuq06wjy.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
So we are creating a static website with the following features:&lt;br&gt;
-Hosted on an AWS S3 bucket&lt;br&gt;
-Routed through Cloudfront&lt;br&gt;
-SSL certs through ACM&lt;/p&gt;

&lt;p&gt;and as a real kicker:&lt;br&gt;
-Domain acquired on GoDaddy (not Route53)&lt;/p&gt;

&lt;p&gt;First and foremost, let's fix the GoDaddy issue, as managing DNS on 2 separate services is unwieldy.  &lt;/p&gt;

&lt;p&gt;Log into GoDaddy, go to Domains, and Manage All.&lt;/p&gt;

&lt;p&gt;Find your domain and go into it.  Scroll down to Additional Settings -&amp;gt; Manage DNS.&lt;/p&gt;

&lt;p&gt;Now in a separate tab, go to your AWS account and into Route53.  Create a Hosted Zone with the same name as the domain on GoDaddy.&lt;br&gt;
Once you finish that wizard step, make note of the NS records it creates for you:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv30vh9o4yl98n70nptf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv30vh9o4yl98n70nptf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to your GoDaddy account and scroll down to the Nameservers section.  Click on Change:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxkeh3xchilijubmuq20.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbxkeh3xchilijubmuq20.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Change over to "I'll use my own nameservers" and add at least 2 nameserver FQDNs from AWS Route53:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrftx6imf2t8kqdzpw1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrftx6imf2t8kqdzpw1x.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
After it goes through you should see this.  Do Not Panic.&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qhd153y12rk2cf8knfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0qhd153y12rk2cf8knfp.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
This just means your DNS is now managed at AWS, and since the resources are hosted there as well, this simplifies things.&lt;/p&gt;

&lt;p&gt;Now we need to have content for the website.&lt;br&gt;
Helpful: &lt;br&gt;
&lt;a href="https://medium.com/tensult/aws-hosting-static-website-on-s3-using-a-custom-domain-cd2782758b2c" rel="noopener noreferrer"&gt;https://medium.com/tensult/aws-hosting-static-website-on-s3-using-a-custom-domain-cd2782758b2c&lt;/a&gt;&lt;br&gt;
Make two S3 buckets.  One will be named after the naked domain and the other after the www subdomain.&lt;/p&gt;

&lt;p&gt;The configuration for this is to TURN OFF Block Public Access and to add a Bucket Policy similar to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version":"2012-10-17",
  "Statement":[{
    "Sid":"PublicReadGetObject",
    "Effect":"Allow",
    "Principal": "*",
    "Action":["s3:GetObject"],
    "Resource":["arn:aws:s3:::example.com/*"]
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also, Enable Static Website Hosting.  On the naked domain S3 bucket, change the Hosting Type to Redirect requests for an object.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofk6aivqpcz1dekx1yvk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fofk6aivqpcz1dekx1yvk.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Set the Host Name to the www bucket's website endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;BUCKETNAME.s3-website-us-east-1.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That should make them public facing and route requests to the www bucket. &lt;br&gt;
For 403 errors:&lt;br&gt;
&lt;a href="https://stackoverflow.com/questions/56244709/static-web-hosting-on-aws-s3-giving-me-403-permission-denied" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/56244709/static-web-hosting-on-aws-s3-giving-me-403-permission-denied&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we need to make Cloudfront Distribution to help serve up the SSL.  &lt;/p&gt;

&lt;p&gt;This time around, I made two Distributions for the two S3 buckets.  Set each Origin Name to match its bucket's Origin Domain.  Under General, add the naked domain and the www domain as Alternate Domain Names.&lt;br&gt;
If you open the Distribution link (dxxxxxx.cloudfront.net), you should get an SSL protected site.  If you get errors, sometimes you just have to wait; other times you have to fix the configs.&lt;br&gt;
The Just Wait error:&lt;br&gt;
&lt;a href="https://stackoverflow.com/questions/42251745/aws-cloudfront-access-denied-to-s3-bucket/42285049" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/42251745/aws-cloudfront-access-denied-to-s3-bucket/42285049&lt;/a&gt;&lt;br&gt;
Now that those are running, we need to make some certs.&lt;br&gt;
Head over to ACM and request 1 cert for each distribution.  You could probably do this with a single cert, but 1 each worked for me, so try a single cert if you care to.&lt;/p&gt;

&lt;p&gt;You will need to make a CNAME record in Route53 for each cert you made.  Copy everything before the domain name into the Name field, and the entire value into the Value field.  The certificates should change to Issued once validation completes.&lt;/p&gt;
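
&lt;p&gt;As a sketch (the record values here are made up; use the ones ACM shows you), a validation record looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Type:  CNAME
Name:  _0123456789abcdef.www.example.com
Value: _fedcba9876543210.acm-validations.aws.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;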

&lt;p&gt;Head back to Cloudfront and find your respective Distributions.  Go under Settings and update Custom SSL certificates to the certificates you made previously.&lt;br&gt;
If all goes well, this should be working.&lt;/p&gt;

&lt;p&gt;A very convoluted way to get a website up and running.  Honestly, an instance or a docker image is easier to set up.  But the cost is near zero except for the cost of the domain.  And because you purchased that on GoDaddy, the price should be down a ways.&lt;/p&gt;

&lt;p&gt;Happy Building.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>webdev</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Prometheus and Grafana</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Mon, 04 Oct 2021 19:27:28 +0000</pubDate>
      <link>https://dev.to/muckitymuck/prometheus-and-grafana-4k1a</link>
      <guid>https://dev.to/muckitymuck/prometheus-and-grafana-4k1a</guid>
      <description>&lt;p&gt;Monitoring services is a big part of Cloud Infrastructure Management.  Here we will go through one way to install Prometheus on Ubuntu and make pretty dashboards with Grafana on a separate machine.&lt;/p&gt;

&lt;p&gt;Go grab this download and extract to start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/prometheus/releases/download/v2.30.3/prometheus-2.30.3.linux-amd64.tar.gz

tar -xvzf prometheus-2.30.3.linux-amd64.tar.gz

cd prometheus-2.30.3.linux-amd64/

./prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Should be great success:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyg0gcyw80np2wbafu7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxyg0gcyw80np2wbafu7s.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open a web browser and head over to http://{IP}:9090/graph&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf1gxpq8sist9us0p3nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbf1gxpq8sist9us0p3nc.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
So far, So Good&lt;br&gt;
You can see the raw metrics by going to http://{IP}:9090/metrics&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73x83itpkwcdvivjjlit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73x83itpkwcdvivjjlit.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
You can check what is up by going to /targets&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4ivc0cf1yi79cricj4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4ivc0cf1yi79cricj4e.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's run it as a service.  Create this file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/systemd/system/prometheus.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And give it these basic settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target

[Service]
User=root
Restart=on-failure

#Change this path to wherever you extracted
#Prometheus; systemd requires an absolute path (no ~)

ExecStart=/home/{user}/prometheus/prometheus --storage.tsdb.path=/var/lib/prometheus/data/ --web.external-url=http://myurl.com:9090

[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to reload it and start it up:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl start prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
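
&lt;p&gt;To have it start at boot as well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable prometheus
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;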



&lt;p&gt;There are Exporters you can add to Prometheus to increase the utility.  Node Exporter is one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://github.com/prometheus/node_exporter/releases/download/v0.18.1/node_exporter-0.18.1.linux-amd64.tar.gz
tar -xvzf node_exporter-0.18.1.linux-amd64.tar.gz
mv node_exporter-0.18.1.linux-amd64 node_exporter
cd node_exporter
./node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can open the service in a browser at http://{IP}:9100&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa77wggxmbhruu11dyk49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa77wggxmbhruu11dyk49.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can add the node exporter to Prometheus in /etc/prometheus/prometheus.yml&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmxttpbe4r5h7moyqozj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgmxttpbe4r5h7moyqozj.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's make it into a service.  Create a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo vi /etc/systemd/system/node_exporter.service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[Unit]
Description=Node Exporter
After=network.target
[Service]
User={user}
Group={user}
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go ahead and get this going:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl status node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can see the two services on /targets&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgfg7ope658khvwn6ukz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flgfg7ope658khvwn6ukz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If all is running well, enable this service to start at boot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl enable node_exporter
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's enough for today.  I will continue this in a second part for Grafana.  &lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>linux</category>
    </item>
    <item>
      <title>PM2, node, NextJS, Express</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Fri, 03 Sep 2021 19:02:48 +0000</pubDate>
      <link>https://dev.to/muckitymuck/pm2-node-nextjs-express-51p8</link>
      <guid>https://dev.to/muckitymuck/pm2-node-nextjs-express-51p8</guid>
      <description>&lt;p&gt;A quick collection of building and monitoring commands in Linux.&lt;/p&gt;

&lt;p&gt;To get ExpressJS started under pm2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/home/ec2-user/expressjs/bin

pm2 start --name expressjs www
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;PM2&lt;br&gt;
List the processes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;NPM build and start commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm ci
npm test --if-present
npm run build
npm run start
pm2 start npm -- start
pm2 start --name AppName npm -- start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restart the processes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 restart all
pm2 restart &amp;lt;app name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
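
&lt;p&gt;Not in the list above, but commonly paired with these: make the process list survive a reboot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm2 save      # snapshot the current process list
pm2 startup   # print the command that registers pm2 as a boot service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;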



</description>
      <category>nextjs</category>
      <category>node</category>
      <category>pm2</category>
      <category>express</category>
    </item>
    <item>
      <title>GITHUB! Jenkins, pipeline</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Tue, 31 Aug 2021 19:31:41 +0000</pubDate>
      <link>https://dev.to/muckitymuck/github-jenkins-pipeline-2jh5</link>
      <guid>https://dev.to/muckitymuck/github-jenkins-pipeline-2jh5</guid>
      <description>&lt;p&gt;As a record of how to set up a new CI/CD pipeline from Github to Jenkins:&lt;/p&gt;

&lt;p&gt;A good first step is to install Jenkins on a separate machine or VM to get this started.&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.jenkins.io/doc/book/installing/"&gt;https://www.jenkins.io/doc/book/installing/&lt;/a&gt;&lt;br&gt;
You can pick Linux, Docker, Kubernetes.  Whatever is your preference, as long as you can get the webserver running.&lt;/p&gt;

&lt;p&gt;We will start by assuming you have a github repo and you set up the credentials such as tokens correctly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vqbN3pS3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ir8w68qo9a28w71uevou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vqbN3pS3--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ir8w68qo9a28w71uevou.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Open a new pipeline and choose Github.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--b0z6cD1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3wdks1hqn46juxwwuopu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--b0z6cD1u--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3wdks1hqn46juxwwuopu.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter your github token to access it initially for the wizard.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zJ1HxltG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06qencqd6fg1vqhnxb2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zJ1HxltG--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06qencqd6fg1vqhnxb2r.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
If you are associated with multiple accounts or organizations, pick the one of your choosing and then the repo from the list provided.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--U3P8K4eX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhtf12t8mdhbbzvunz5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--U3P8K4eX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhtf12t8mdhbbzvunz5e.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
The fun part starts: you can begin adding steps and actions.  It even comes with a choice for Git.&lt;br&gt;&lt;br&gt;
Once you are done you can run it, but it probably won't work if the repo is private, so we will need to add the login info.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1zDsw4nf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4e1808zj8jph853vkx3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1zDsw4nf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4e1808zj8jph853vkx3u.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
You will find this under Configure of your Pipeline.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FF_w0Ki9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gv7lrrw0kdntqxokxbjw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FF_w0Ki9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gv7lrrw0kdntqxokxbjw.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Because of Github security you will need your Token once more.  It will be protected much like Github Secrets.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qezdrQrU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pn51htptv4uy8ckpxb3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qezdrQrU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pn51htptv4uy8ckpxb3.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will add the target machine.  Go to the main page of Jenkins.&lt;br&gt;
And click on Manage Jenkins.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--uS8aLqlj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yfnaa4v5w4kqqppk40k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--uS8aLqlj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4yfnaa4v5w4kqqppk40k.png" alt="image"&gt;&lt;/a&gt;&lt;br&gt;
Give it a new name.  Go back to Main Dashboard and Manage Nodes.  Click on the new node.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tZjPvtF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67qo45wcmlahguaz0u18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tZjPvtF5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/67qo45wcmlahguaz0u18.png" alt="jenkinsMachine"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go under Configure and give it a name.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7GSneTRI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/01oqm4xgm35oxpfmy899.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7GSneTRI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/01oqm4xgm35oxpfmy899.png" alt="jenkinsMachine2"&gt;&lt;/a&gt;&lt;br&gt;
Add the IP address and credentials for the SSH.&lt;br&gt;&lt;br&gt;
That should be everything needed to pull from Github and deploy to the target machine.&lt;/p&gt;

</description>
      <category>github</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>Terraform Cheat Sheet</title>
      <dc:creator>muckitymuck</dc:creator>
      <pubDate>Fri, 13 Aug 2021 11:06:01 +0000</pubDate>
      <link>https://dev.to/muckitymuck/terraform-cheat-sheet-11b5</link>
      <guid>https://dev.to/muckitymuck/terraform-cheat-sheet-11b5</guid>
      <description>&lt;p&gt;Will add more later:&lt;br&gt;
As Main.tf&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feafy6dhdnq7nsnekpvuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feafy6dhdnq7nsnekpvuv.png" alt="Terraform reference - provider.tf"&gt;&lt;/a&gt;&lt;br&gt;
Alternatively, You can break it up into separate files:&lt;br&gt;
 &lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e2u8rswths245kiq8x5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8e2u8rswths245kiq8x5.png" alt="Terraform reference - Page 2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs" rel="noopener noreferrer"&gt;Terraform AWS resource list&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Commands:&lt;br&gt;
&lt;code&gt;terraform init&lt;/code&gt;&lt;br&gt;
initialize a working directory containing Terraform configuration files&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform fmt&lt;/code&gt;&lt;br&gt;
rewrites configuration files into the canonical Terraform format and style&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform validate&lt;/code&gt;&lt;br&gt;
checks config files but does not check remote services&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan&lt;/code&gt;&lt;br&gt;
creates an execution plan showing what Terraform will change&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;br&gt;
executes the actions proposed in a Terraform plan&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform destroy&lt;/code&gt;&lt;br&gt;
tears down the Terraform-managed infrastructure&lt;/p&gt;

&lt;p&gt;For the Terraform Import method:&lt;br&gt;
&lt;a href="https://spacelift.io/blog/importing-exisiting-infrastructure-into-terraform" rel="noopener noreferrer"&gt;https://spacelift.io/blog/importing-exisiting-infrastructure-into-terraform&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
