<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Victor Peter</title>
    <description>The latest articles on DEV Community by Victor Peter (@veektor_v).</description>
    <link>https://dev.to/veektor_v</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F369701%2F26381866-42cf-4287-9566-b86459ff9276.jpg</url>
      <title>DEV Community: Victor Peter</title>
      <link>https://dev.to/veektor_v</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/veektor_v"/>
    <language>en</language>
    <item>
      <title>Pagination with Mongoose Paginate V2 (ExpressJS and Typescript)</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Tue, 07 Jan 2025 19:12:23 +0000</pubDate>
      <link>https://dev.to/veektor_v/pagination-with-mongoose-paginate-v2-expressjs-and-typescript-19li</link>
      <guid>https://dev.to/veektor_v/pagination-with-mongoose-paginate-v2-expressjs-and-typescript-19li</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;When writing a service whose data will grow (from hundreds to thousands of records in a collection), it is unwise to send your user every document each time the server is queried. To manage that, you have to split the documents into pages.&lt;/p&gt;

&lt;p&gt;A page is simply a subdivision of the total documents the database holds. For example, if I have a collection of posts with 100 documents in it, I can paginate it by sending 20 documents per query. This means I'll have a total of 5 pages, each with 20 posts.&lt;/p&gt;

&lt;p&gt;A page usually has a skip, which is the number of documents to skip, and a limit, which is the number of documents for the page requested. So, to get page 1 with 20 posts, the skip value will be 0 and the limit value will be 20. To get page 2, the skip value will be 20 (this skips the first 20 records) and the limit will be 20.&lt;/p&gt;
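The skip/limit arithmetic above can be sketched as a small helper (a hypothetical function name, just to illustrate the math):

```typescript
// Hypothetical helper: derive skip/limit for a 1-based page number.
function pageToSkipLimit(page: number, pageSize: number) {
  return { skip: (page - 1) * pageSize, limit: pageSize };
}

console.log(pageToSkipLimit(1, 20)); // { skip: 0, limit: 20 }
console.log(pageToSkipLimit(2, 20)); // { skip: 20, limit: 20 }
```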

&lt;h2&gt;
  
  
  Mongoose paginate v2
&lt;/h2&gt;

&lt;p&gt;There's a better way to manage pagination on your MongoDB database, and that is with the &lt;code&gt;mongoose-paginate-v2&lt;/code&gt; plugin.&lt;/p&gt;

&lt;p&gt;Here are the steps to start using mongoose-paginate-v2.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install mongoose and mongoose-paginate-v2 in your Express project
&lt;/h3&gt;

&lt;p&gt;Run the command below to install mongoose and mongoose-paginate-v2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i mongoose mongoose-paginate-v2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Connect to mongodb
&lt;/h3&gt;

&lt;p&gt;Make sure you have MongoDB installed on your PC (you can download the MongoDB server &lt;a href="https://www.mongodb.com/try/download/community" rel="noopener noreferrer"&gt;here&lt;/a&gt;), or you can use MongoDB Atlas as your cloud database (register for MongoDB Atlas &lt;a href="https://www.mongodb.com/cloud/atlas/register" rel="noopener noreferrer"&gt;here&lt;/a&gt; and get your database URL) if you don't want to install MongoDB locally. You can connect to MongoDB by putting this code snippet in your project's entry point file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongoose
  .connect('mongodb://localhost:27017/my_test_database')
  .then(() =&amp;gt; {
    console.log("Connected to database");
  })
  .catch((error) =&amp;gt; {
    console.log(error);
  });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace "my_test_database" with the name of the database you'll like to use. Don't worry, if the database does not exist, it will be created automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Write your schema and initialize the mongoose paginate plugin.
&lt;/h3&gt;

&lt;p&gt;Next, you'll have to write a schema to use with your project and initialize the plugin for the collection you want to paginate with mongoose-paginate-v2. You can choose to initialize the paginate plugin only on the collections you'd like to paginate. Here is how to do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {Schema, InferSchemaType, model, PaginateModel} from "mongoose";
import paginate from "mongoose-paginate-v2";

const postSchema = new Schema({
    title: {
        type: String,
        default: "" // default should match the field type (was 0)
    },
    body: {
        type: String,
        default: ""
    }
}, {timestamps: true});

type postCollectionType = InferSchemaType&amp;lt;typeof postSchema&amp;gt;;

postSchema.plugin(paginate);

const postCollection = model&amp;lt;postCollectionType, PaginateModel&amp;lt;postCollectionType&amp;gt;&amp;gt;("posts", postSchema);

export {postCollection, postCollectionType};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain some lines of code:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{timestamps: true}&lt;/code&gt;&lt;br&gt;
This option includes the createdAt and updatedAt timestamps and updates the updatedAt timestamp whenever a record changes.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;type postCollectionType = InferSchemaType&amp;lt;typeof postSchema&amp;gt;;&lt;/code&gt;&lt;br&gt;
This line of code creates a type for my schema automatically, so I don't have to write a type or interface to use with the collection records. Easy peasy... ;)&lt;/p&gt;

&lt;p&gt;&lt;code&gt;postSchema.plugin(paginate);&lt;/code&gt;&lt;br&gt;
This registers the plugin with the schema; the schema now knows there's a paginate plugin that will handle paginating the data.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&amp;lt;postCollectionType, PaginateModel&amp;lt;postCollectionType&amp;gt;&amp;gt;&lt;/code&gt;&lt;br&gt;
This gives your model the pagination type, so that you can access the properties the mongoose-paginate-v2 plugin adds.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 4: Query and paginate the result of your query
&lt;/h3&gt;

&lt;p&gt;Here is how you can query the posts model from any of your Express routes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Response, Router, NextFunction, Request } from "express";
import { postCollection } from "../models/postModel.ts";

const postRoutes = Router();

postRoutes.get("/posts/:page/:limit", async (req: Request, res: Response, next: NextFunction) =&amp;gt; {
    try {

    const {page, limit} = req.params;

    const paginatedPost = await postCollection.paginate({}, {
        page: Number(page),   // route params are strings, so convert them
        limit: Number(limit)
    });

    res.send({paginatedPost});

    } catch (error) {
        next(error);
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Calling the endpoint (for example, &lt;code&gt;GET /posts/1/10&lt;/code&gt;) returns a result like the one below.&lt;/p&gt;

&lt;h2&gt;
  
  
  Result
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "paginatedPost": {
        "docs": [],
        "totalDocs": 0,
        "limit": 10,
        "totalPages": 1,
        "page": 1,
        "pagingCounter": 1,
        "hasPrevPage": false,
        "hasNextPage": false,
        "prevPage": null,
        "nextPage": null
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;docs&lt;/strong&gt; the array of records returned for the requested page.&lt;br&gt;
&lt;strong&gt;totalDocs&lt;/strong&gt; the total number of documents matching the query across all pages.&lt;br&gt;
&lt;strong&gt;limit&lt;/strong&gt; the number of documents you want per page.&lt;br&gt;
&lt;strong&gt;totalPages&lt;/strong&gt; the total number of pages available.&lt;br&gt;
&lt;strong&gt;page&lt;/strong&gt; the current page you're viewing.&lt;br&gt;
&lt;strong&gt;pagingCounter&lt;/strong&gt; the serial number of the first document on the current page (for page 1 with a limit of 10 it's 1, for page 2 it's 11).&lt;br&gt;
&lt;strong&gt;hasPrevPage&lt;/strong&gt; boolean indicating whether there is a previous page.&lt;br&gt;
&lt;strong&gt;hasNextPage&lt;/strong&gt; boolean indicating whether there is a next page.&lt;br&gt;
&lt;strong&gt;prevPage&lt;/strong&gt; the previous page number, or null if there is none.&lt;br&gt;
&lt;strong&gt;nextPage&lt;/strong&gt; the next page number, or null if there is none.&lt;/p&gt;
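As a rough sketch of how those metadata fields relate (my own illustration, not code from the plugin), assuming totalDocs, limit and page are known:

```typescript
// Illustration only: deriving the pagination metadata fields.
function pageMeta(totalDocs: number, limit: number, page: number) {
  const totalPages = Math.max(Math.ceil(totalDocs / limit), 1);
  return {
    totalPages,
    pagingCounter: (page - 1) * limit + 1, // serial number of the page's first doc
    hasPrevPage: page > 1,
    hasNextPage: totalPages > page,
  };
}

console.log(pageMeta(0, 10, 1));
// { totalPages: 1, pagingCounter: 1, hasPrevPage: false, hasNextPage: false }
```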

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Apart from pagination, you can sort, populate, select, etc. the data you want to paginate. You can read more and see more examples of how to use the mongoose-paginate-v2 plugin on the &lt;a href="https://www.npmjs.com/package/mongoose-paginate-v2" rel="noopener noreferrer"&gt;Mongoose paginate npm page&lt;/a&gt;. At the end of this article you should be able to paginate your documents.&lt;/p&gt;
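For instance, here is a hedged sketch of passing extra options (option names taken from the mongoose-paginate-v2 docs; adjust the field names to your own schema):

```typescript
// Options object for paginate(); sort and select are optional extras.
const options = {
  page: 1,
  limit: 20,
  sort: { createdAt: -1 },    // newest first
  select: "title createdAt",  // return only these fields
};

// Usage (assuming the postCollection model from the schema step):
// const result = await postCollection.paginate({}, options);
```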

</description>
    </item>
    <item>
      <title>Two ways/methods to parse a CSV file content to JSON (Typescript)</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Tue, 31 Dec 2024 17:56:45 +0000</pubDate>
      <link>https://dev.to/veektor_v/two-waysmethods-to-parse-a-csv-file-content-to-json-typescript-35l9</link>
      <guid>https://dev.to/veektor_v/two-waysmethods-to-parse-a-csv-file-content-to-json-typescript-35l9</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Let's say you have a CSV file which you want to read data from and convert to JSON using TypeScript. How do you go about it?&lt;/p&gt;

&lt;h3&gt;
  
  
  First Method: Using csv-parser package
&lt;/h3&gt;

&lt;p&gt;First of all, you'll need to install the &lt;strong&gt;csv-parser&lt;/strong&gt; package in your project by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install csv-parser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing &lt;em&gt;csv-parser&lt;/em&gt;, you can use the code snippet below to extract and convert the data in a CSV file to JSON.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import fs from "fs";
import csv from "csv-parser"

async function parseCSV(filePath: string): Promise&amp;lt;any[]&amp;gt; {
    const results: any[] = [];

    return new Promise&amp;lt;any[]&amp;gt;((resolve, reject) =&amp;gt; {
        fs.createReadStream(filePath)
            .pipe(csv())
            .on('data', (data) =&amp;gt; results.push(data))
            .on('end', () =&amp;gt; {
                // Note: this deletes the CSV file after parsing;
                // remove the unlink call if you want to keep the file.
                fs.unlink(filePath, (error) =&amp;gt; {
                    if (error) {
                        reject(error);
                    } else {
                        resolve(results);
                    }
                });
            })
            .on('error', (error) =&amp;gt; reject(error));
    });
}

export {
    parseCSV
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To use this code snippet, call parseCSV and pass the path to the CSV file as an argument. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const JSONResult = await parseCSV("/path/to/file");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Second method: Using papaparse package
&lt;/h3&gt;

&lt;p&gt;You can also convert the content of a CSV file to JSON using the &lt;strong&gt;papaparse&lt;/strong&gt; package. You can install it in your project by running the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install papaparse
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, use the following code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import Papa from "papaparse";

Papa.parse("path/to/file", {
  complete: (result: any) =&amp;gt; {
    console.log("JSON data", result.data);
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are more ways to convert the contents of a CSV file to JSON, but these are the two methods that work perfectly for me. I hope they also work for you. Enjoy!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>CRUD with ExpressJS and MongoDB (Typescript)</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Sat, 28 Dec 2024 21:29:57 +0000</pubDate>
      <link>https://dev.to/veektor_v/crud-with-expressjs-and-mongodb-typescript-1gn8</link>
      <guid>https://dev.to/veektor_v/crud-with-expressjs-and-mongodb-typescript-1gn8</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;As a backend developer seeking to create APIs, you'll want to learn how to store and retrieve data from the database your API works with. Learning all this may seem overwhelming, but I will show you how to write a CRUD API in this article. Before we get our hands dirty, let's talk about a few terms we need to be clear about.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is an API?
&lt;/h3&gt;

&lt;p&gt;API stands for Application Programming Interface. An API is simply an interface an application program can use to communicate with a service. For example, your phone application needs to connect to a service to retrieve information. Usually, that service is not written in the same language your mobile app is written in, so there needs to be an interface your mobile app can use to make the service understand the requests it is making. That interface is the API, and your mobile app is the program interacting with the service's interface.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is the backend?
&lt;/h3&gt;

&lt;p&gt;The backend comprises the API, the database, load balancers, process managers and other technologies that enable or enhance the functionality of the service.&lt;/p&gt;

&lt;h3&gt;
  
  
  What are RESTful APIs?
&lt;/h3&gt;

&lt;p&gt;REST stands for Representational State Transfer, and a RESTful API is one that ensures stateless interactions with the programs using it.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Let me explain: In real life, an example of a stateless transaction would be going to your office without an ID card when the policy at your office demands that you show your ID card before you're allowed into the premises. In this case, even if the gatekeeper knows you're an employee, he/she still won't let you in until you show your ID card. And even if you have your ID card and you step out for just a minute, to get back into the company's premises you have to show your ID card again.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's exactly how RESTful APIs behave. They don't keep any context/record about your previous interactions with them, once you send a request to a RESTful API and it sends back a response, the connection established when you sent the request is closed. When another request comes in, it's treated as a fresh request (a new connection is established between the program and the server and is closed after the server sends a response).&lt;/p&gt;

&lt;p&gt;Some features of RESTful APIs are: HTTP methods, request headers, request payloads, endpoints, response headers, response data, status codes, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database
&lt;/h3&gt;

&lt;p&gt;A database is simply a server dedicated to storing information permanently, until that information is deleted. Your API isn't designed to keep information permanently, so there has to be a place to store it; that's where the database comes in.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB
&lt;/h3&gt;

&lt;p&gt;MongoDB is coined from the word "humongous", which reflects its ability to handle huge amounts of data. It's a schemaless database, which means you can store your data however you like.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mongoose
&lt;/h3&gt;

&lt;p&gt;MongoDB stores data as documents and groups documents into collections. Mongoose is an ODM (Object Document Mapper) that enables developers to query MongoDB and maintain application-level schemas to use with the database.&lt;/p&gt;

&lt;h3&gt;
  
  
  CRUD Operations
&lt;/h3&gt;

&lt;p&gt;CRUD stands for Create, Read, Update and Delete. These are the basic operations you'll come across as a backend developer/engineer.&lt;/p&gt;
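As a quick reference, the four operations map onto HTTP methods and Mongoose calls roughly like this (my own summary, not an exhaustive mapping):

```typescript
// Rough CRUD-to-HTTP/Mongoose mapping (illustrative).
const crudMap = {
  create: { http: "POST",      mongoose: "Model.create()" },
  read:   { http: "GET",       mongoose: "Model.find() / Model.findById()" },
  update: { http: "PUT/PATCH", mongoose: "Model.findByIdAndUpdate()" },
  delete: { http: "DELETE",    mongoose: "Model.findByIdAndDelete()" },
};

console.log(crudMap.read.http); // "GET"
```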

&lt;h3&gt;
  
  
  ExpressJS
&lt;/h3&gt;

&lt;p&gt;ExpressJS (or Express) is a framework built to enable developers like you and me to build APIs seamlessly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's begin setting up.
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Initialize a project/package
&lt;/h3&gt;

&lt;p&gt;You initialize a project or package by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a package.json file that will be used by npm to manage your project's dependencies.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Install the necessary dependencies: morgan, &lt;a href="https://expressjs.com/" rel="noopener noreferrer"&gt;ExpressJS&lt;/a&gt;, &lt;a href="https://mongoosejs.com/" rel="noopener noreferrer"&gt;Mongoose&lt;/a&gt;, cors and nodemon
&lt;/h3&gt;

&lt;p&gt;I've already talked about ExpressJS and Mongoose, so let's briefly cover the other dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/expressjs/morgan" rel="noopener noreferrer"&gt;Morgan&lt;/a&gt; is used to generate logs that can be used for different purposes, like printing out the logs to the console when the server is running.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/expressjs/cors" rel="noopener noreferrer"&gt;Cors&lt;/a&gt; is an acronym that means Cross-origin resource sharing, this package/dependency is necessary when you want to restrict which program can interact with your interface and what type of request methods, etc the programs can send to your API.&lt;br&gt;
This is important when your the program and your API are runnng on different platforms or different ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nodemon.io/" rel="noopener noreferrer"&gt;Nodemon&lt;/a&gt; is a dependency that monitors your project files for changes and automatically restarts the server once there is a change to any file being watched by nodemon. Think of nodemon as &lt;em&gt;node monitor&lt;/em&gt; ;).&lt;br&gt;
Note: You shouldn't use nodemon on a production server, except you're part of the developers that attempt fix bug in production, lol.&lt;/p&gt;

&lt;p&gt;TypeScript provides static types and interfaces for your code.&lt;/p&gt;

&lt;p&gt;You install morgan, cors, nodemon, express, mongoose and typescript by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install morgan cors nodemon typescript mongoose express
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also run the command below to install the type definitions for morgan, express and cors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i --save-dev @types/morgan @types/express @types/cors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, initialize TypeScript on the project by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tsc --init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Create the files necessary for the project
&lt;/h3&gt;

&lt;p&gt;On your project's root folder (the folder where the package.json is located) create the following files: index.ts, crud.ts and crudModel.ts.&lt;/p&gt;

&lt;p&gt;The index.ts file will be used as the entry point file (the file that gets executed first when the service is starting up).&lt;/p&gt;

&lt;p&gt;The crud.ts file will hold all the routes and controllers that users will make requests to.&lt;/p&gt;

&lt;p&gt;The crudModel.ts file will keep the schema used to enforce data consistency for the collection of records we want to work with.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Paste the following into your index.ts file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import express, {Application} from "express";
import mongoose from "mongoose";
import logger from "morgan";
import cors from "cors";
import crudRoutes from "./crud";

const app: Application = express();

mongoose.connect("mongodb://localhost:27017/crud").then(() =&amp;gt; {
    console.log("Connected to database");
}).catch((error) =&amp;gt; {
    console.log("Error:", error);
});

app.use(logger("dev"));
app.use(cors({ origin: "*" }));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

app.use("/crud", crudRoutes);

app.use(function (req, res, next) {
    res.status(404).send({
        message: "Route not found"
    });
  });

app.listen(4000, () =&amp;gt; {
    console.log("The server is up...");
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Paste the following into your crud.ts file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {Router, Request, Response} from "express";
import { contactsCollection } from "./crudModel";

const routes = Router();

routes.post("/contact", async (req: Request, res: Response) =&amp;gt; {
    const {fullName, phoneNumber} = req.body;

    const contact = await contactsCollection.create({fullName, phoneNumber});

    res.status(201).send({
        message: "New contact created successfully",
        data: contact
    });
});

routes.get("/contacts", async (req: Request, res: Response) =&amp;gt; {
    const contact = await contactsCollection.find({});

    res.send({
        message: "All contacts retrieved successfully",
        data: contact
    });
});

routes.get("/contacts/:id", async (req: Request, res: Response) =&amp;gt; {
    const {id} = req.params;

    const contact = await contactsCollection.findById(id);

    res.send({
        message: "Contact retrieved successfully",
        data: contact
    });
});

// Note: this path must differ from "/contacts/:id" above; with the same
// "/contacts/:x" pattern, Express would always match the first route.
routes.get("/contacts/phone/:phoneNumber", async (req: Request, res: Response) =&amp;gt; {
    const {phoneNumber} = req.params;

    const contact = await contactsCollection.findOne({phoneNumber});

    res.send({
        message: "Contact retrieved successfully",
        data: contact
    });
});

routes.put("/contacts/:id", async (req: Request, res: Response) =&amp;gt; {
    const {id} = req.params;

    const {fullName, phoneNumber} = req.body;

    const contact = await contactsCollection.findByIdAndUpdate(id, {
        fullName, phoneNumber
    }, {new: true});

    res.send({
        message: "Contact updated successfully",
        data: contact
    });
});

routes.patch("/contacts/:id", async (req: Request, res: Response) =&amp;gt; {
    const {id} = req.params;

    const {fullName} = req.body;

    const contact = await contactsCollection.findByIdAndUpdate(id, {
        fullName
    }, {new: true});

    res.send({
        message: "Contact updated successfully",
        data: contact
    });
});

routes.delete("/contacts/:id", async (req: Request, res: Response) =&amp;gt; {
    const {id} = req.params;


    const contact = await contactsCollection.findByIdAndDelete(id);

    res.send({
        message: "Contact deleted successfully",
        data: contact
    });
});

export default routes;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Paste the following code into your crudModel.ts file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import {Schema, model} from "mongoose";

const myContactsSchema = new Schema({
    fullName: {
        type: String,
        required: true
    },
    phoneNumber: {
        type: String,
        required: true
    }
}, {timestamps: true});

const contactsCollection = model("contacts", myContactsSchema);

export {
    contactsCollection
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 7: Modify your package.json file
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"scripts": {
-    "test": "echo \"Error: no test specified\" &amp;amp;&amp;amp; exit 1"
+    "start": "nodemon index.js"
  }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 8: Install the "Rest Client" extension.
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdujxa13s5ln6p4u9anvl.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdujxa13s5ln6p4u9anvl.jpg" alt="Image description" width="538" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 9: Open up your terminal and type &lt;code&gt;npm start&lt;/code&gt;. This should start up your server. The result should look like this:
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r5r5cgav70v9l5vdfa9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r5r5cgav70v9l5vdfa9.jpg" alt="Image description" width="343" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 10: Test the API.
&lt;/h3&gt;

&lt;p&gt;You could use Postman to test the API, but a more convenient option (IMO) is the Rest Client extension. In your root folder, create a test.rest file and paste the following into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST http://localhost:4000/crud/contact
Content-Type: application/json

{
    "fullName": "Jimmy wire wire",
    "phoneNumber": "081122222211111"
}

###

GET http://localhost:4000/crud/contacts
Content-Type: application/json

###

GET http://localhost:4000/crud/contacts/676f654d7872dbd2f49aee48
Content-Type: application/json

###

PUT http://localhost:4000/crud/contacts/676f654d7872dbd2f49aee48
Content-Type: application/json

{
    "fullName": "Victor Ukok",
    "phoneNumber": "0811111111211"
}

###

PATCH http://localhost:4000/crud/contacts/676f654d7872dbd2f49aee48
Content-Type: application/json

{
    "fullName": "Victor Ukok edited"
}

###

DELETE http://localhost:4000/crud/contacts/676f6638b3383ee1f50955f1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm503ymj6bhka1qbmva7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgm503ymj6bhka1qbmva7.jpg" alt="Image description" width="624" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This API can be deployed on platforms like render.com, documented on Postman and made available to the public if you want to take it that far. I hope you learnt how to perform CRUD operations with an API. Hit me up if you have any questions.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>express</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Send emails using Nodemailer (Typescript)</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Fri, 27 Dec 2024 21:39:23 +0000</pubDate>
      <link>https://dev.to/veektor_v/send-emails-using-nodemailer-typescript-4763</link>
      <guid>https://dev.to/veektor_v/send-emails-using-nodemailer-typescript-4763</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever wondered how emails are sent from a TypeScript file? It is possible to send emails using NodeJS, TypeScript and Nodemailer.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, what is Nodemailer?
&lt;/h2&gt;

&lt;p&gt;Nodemailer is simply a package (module) that can be used to send emails from JavaScript/TypeScript.&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;You will need to download and install the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://code.visualstudio.com/download" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/download/package-manager/current" rel="noopener noreferrer"&gt;NodeJS&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How do we start sending emails using nodemailer?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Create a folder
&lt;/h3&gt;

&lt;p&gt;Navigate to your desktop (or any folder or location on your PC) and create a folder. I'm naming mine "sendemail". After that, open the folder with VSCode.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Initialize a package/project
&lt;/h3&gt;

&lt;p&gt;In VSCode, open the terminal and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm init -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that a package.json file was created? That package.json will be used to manage the packages/dependencies your project uses.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Install and initialize typescript on your project
&lt;/h3&gt;

&lt;p&gt;Next, you have to install the typescript package globally if you haven't already. You can do so by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g typescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installing typescript, run the following to initialize typescript on your project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;tsc --init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create a tsconfig.json file which you can use to configure how typescript should behave for your package/project.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Install Nodemailer
&lt;/h3&gt;

&lt;p&gt;After initializing the package, you'll then need to install nodemailer. You can do so by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install nodemailer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Also install the type declarations for nodemailer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm i --save-dev @types/nodemailer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're using the &lt;em&gt;--save-dev&lt;/em&gt; flag because the type declarations won't be needed in the production environment, in case you ever deploy the script to a production server.&lt;/p&gt;

&lt;p&gt;After running this command, if you check your &lt;em&gt;package.json&lt;/em&gt; file you will notice that the dependencies object has "nodemailer" and the version of nodemailer you installed. You should also see "@types/nodemailer" in the dev dependencies object in the &lt;em&gt;package.json&lt;/em&gt; file.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 5: Create an index.ts file
&lt;/h3&gt;

&lt;p&gt;In your "sendemail" folder (where the package.json file is located) create a file named "index.ts"&lt;/p&gt;

&lt;p&gt;There are platforms like Mailtrap or Google's Gmail which can be used to send emails. For this example I'll be using Gmail.&lt;br&gt;
Head over to your Gmail account (it needs to have two-factor authentication switched on) and search for "app password", then create an app password and use it as the value of the pass property in the config below (the password should not contain spaces).&lt;/p&gt;

&lt;p&gt;Make sure to keep your app password a secret and don't share it. Anyone who knows your email and app password can use your account to send emails.&lt;/p&gt;
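&lt;p&gt;One way to keep the password out of your source code is to read the credentials from environment variables. Here is a small sketch (the variable names GMAIL_USER and GMAIL_APP_PASSWORD are just examples I chose, not anything nodemailer requires):&lt;/p&gt;

```javascript
// Build the transporter config from environment variables instead of
// hard-coding credentials. GMAIL_USER and GMAIL_APP_PASSWORD are
// example names; export them in your shell before running the script.
function buildTransportConfig(env) {
  const user = env.GMAIL_USER;
  const pass = env.GMAIL_APP_PASSWORD;
  if (!user || !pass) {
    throw new Error("Missing GMAIL_USER or GMAIL_APP_PASSWORD");
  }
  return {
    service: "gmail",
    host: "smtp.gmail.com",
    auth: { user, pass },
    secure: true,
    port: 465
  };
}

// Usage:
// const transporter = nodemailer.createTransport(buildTransportConfig(process.env));
```

&lt;p&gt;This way the app password never lands in version control.&lt;/p&gt;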

&lt;p&gt;Inside the &lt;em&gt;index.ts&lt;/em&gt; file, write the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import nodemailer from "nodemailer";

const transporter = nodemailer.createTransport({
    service: "gmail",
    host: "smtp.gmail.com",
    auth: {
        user: "", // your email
        pass: "" // the app password you generated, paste without spaces
    },
    secure: true,
    port: 465
});
(async () =&amp;gt; {
    await transporter.sendMail({
        from: "", // your email
        to: "", // the email address you want to send an email to
        subject: "", // the title or subject of the email
        html: "" // I like sending my emails as HTML; you can send
                 // emails as HTML or as plain text
    });

    console.log("Email sent");
})();

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Execute the index.ts file
&lt;/h3&gt;

&lt;p&gt;Before executing the index.ts file, install &lt;em&gt;ts-node&lt;/em&gt; to enable you to execute TypeScript files directly. Run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install -g ts-node
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to execute the &lt;em&gt;index.ts&lt;/em&gt; file by running the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ts-node index.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;There are lots of ways you can use this code. You can use it as a utility in your Express.js project for sending emails, you can create HTML email templates that depend on the index.ts file, etc. I hope this article has taught you how to send emails using NodeJS, TypeScript and Nodemailer.&lt;/p&gt;
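&lt;p&gt;As a sketch of the utility idea above (all names here are illustrative, not a fixed API): you can wrap sendMail in a small helper that accepts any transporter object, which also makes it easy to exercise with a fake transporter before wiring in the real one:&lt;/p&gt;

```javascript
// A small reusable helper around any object exposing a sendMail method,
// such as a nodemailer transporter. Names are illustrative.
async function sendEmail(transporter, { from, to, subject, html }) {
  if (!to || !subject) {
    throw new Error("'to' and 'subject' are required");
  }
  return transporter.sendMail({ from, to, subject, html });
}

// Trying it out with a fake transporter (no real emails sent):
const fakeTransporter = {
  async sendMail(options) {
    return { accepted: [options.to] };
  }
};

sendEmail(fakeTransporter, {
  from: "me@example.com",
  to: "you@example.com",
  subject: "Hello",
  html: "Hi there"
}).then(info => console.log(info.accepted)); // [ 'you@example.com' ]
```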

</description>
      <category>node</category>
      <category>express</category>
      <category>typescript</category>
      <category>nodemailer</category>
    </item>
    <item>
      <title>Difference between Call, Send and Transfer in Solidity.</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Sat, 31 Aug 2024 15:03:43 +0000</pubDate>
      <link>https://dev.to/veektor_v/difference-between-call-send-and-transfer-in-solidity-65g</link>
      <guid>https://dev.to/veektor_v/difference-between-call-send-and-transfer-in-solidity-65g</guid>
<description>&lt;p&gt;Solidity is a language developed with the aim of making the writing of smart contracts easy for developers. After being compiled to bytecode, the code can be deployed and executed by the Ethereum Virtual Machine (EVM). One key feature of the language is transferring ether (Ethereum's native currency) from one account to another. Send, transfer and call can all be used to achieve this, but what are the key differences between them, and why is the call function unique? Let's see.&lt;/p&gt;

&lt;p&gt;Send:&lt;br&gt;
The send function came first. It was used to send ether from one account to another and, though its use is now discouraged, it served as a simple way for a contract to transfer ether. The send function returns a boolean: true if the transfer succeeded, false if it did not. It forwards only 2300 gas.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bool isSent = &amp;lt;address to transfer to&amp;gt;.send(msg.value);
require(sent, "Transfer failed");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transfer:&lt;br&gt;
The transfer function is still in use. It sends ether from one account to another but throws an error (reverting the transaction) if the transfer is not successful, which protects users who don't handle the failure case themselves. This function also forwards only 2300 gas, which reduces the possibility of reentrancy attacks. Here is what a transfer looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;address to transfer to&amp;gt;.transfer(msg.value);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Call:&lt;br&gt;
The call function is the recommended way to transfer ether from one account to another. Once a call transferring ether succeeds, a boolean of true is returned; otherwise false is returned. One thing unique about the call function is that it can also invoke a function on the contract it's interacting with, and it can forward a specified amount of gas to execute that function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;(bool sent, memory data) = &amp;lt;address to transfer to&amp;gt;.call{value: msg.value}("");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Call with a function to execute

(bool sent, memory data) = &amp;lt;address to transfer to&amp;gt;.call{gas: 1000, value: msg.value}("functionSignature()");
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>ERC20 tokens</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Tue, 27 Aug 2024 14:55:21 +0000</pubDate>
      <link>https://dev.to/veektor_v/erc20-tokens-1j54</link>
      <guid>https://dev.to/veektor_v/erc20-tokens-1j54</guid>
<description>&lt;p&gt;ERC20 is a standard interface for tokens on the Ethereum blockchain. It defines a common set of rules that all Ethereum-based tokens must adhere to, making it easier to interact with different tokens. ERC means Ethereum Request for Comment. ERC20 tokens are fungible: every token is identical to and interchangeable with every other token from the same contract, rather than unique.&lt;/p&gt;

&lt;p&gt;Prior to the introduction of ERC-20, the process of creating tokens on the Ethereum network lacked standardization. Developers could create their own tokens, but these tokens were not always compatible with each other. This lack of a unified standard made it challenging to use or exchange tokens across different platforms and applications.&lt;/p&gt;

&lt;p&gt;Creating your own token on the Ethereum blockchain is quite simple if you know what to do. All you need to do is create and deploy your ERC20 contract. You can do this using OpenZeppelin's wizard, or you can write an ERC20 contract that inherits its features from an ERC20 interface.&lt;/p&gt;

&lt;p&gt;But a simpler and safer approach would be to use OpenZeppelin, because their contracts are audited and widely battle-tested. Anyway, I wrote a simple ERC20 contract, available at &lt;a href="https://github.com/victuk/ERC20-Assignment/blob/main/MyERC20.sol" rel="noopener noreferrer"&gt;https://github.com/victuk/ERC20-Assignment/blob/main/MyERC20.sol&lt;/a&gt;, whose features I'll explain.&lt;/p&gt;

&lt;p&gt;_name (private string variable): This represents the name of the token you are creating.&lt;/p&gt;

&lt;p&gt;_symbol (private string variable): This represents the symbol of the token you are creating. It's usually the short form of the token name.&lt;/p&gt;

&lt;p&gt;_balances (mapping): This represents the mapping of the token owner to the balance he/she holds.&lt;/p&gt;

&lt;p&gt;_allowed (2D mapping): This maps the owner of the tokens to a spender, and then to the amount that the spender is allowed to spend on behalf of the token owner.&lt;/p&gt;

&lt;p&gt;Transfer (Event): This event is used to keep a log of activities involving the transfer of tokens. It takes in a from address, which is the address of the initiator; a to address, which is the address of the recipient; and the amount of tokens being transferred, which has to be less than or equal to the initiator's balance.&lt;/p&gt;

&lt;p&gt;Approval (Event): The approval event is used to log the allowances owners have granted to spenders. It takes in an owner, who owns the tokens being made spendable; a spender, who is approved to spend the owner's tokens; and an amount, which is how much the owner has allocated for the spender to use from his wallet.&lt;/p&gt;

&lt;p&gt;totalSupply (Function): This returns the total amount of tokens in existence for the ERC20 contract. If the token is not mintable, the total supply will always be a fixed amount.&lt;/p&gt;

&lt;p&gt;balanceOf (Function): This function is used to retrieve the amount of tokens held by the address passed as its parameter.&lt;/p&gt;

&lt;p&gt;allowance (Function): This returns the total amount of tokens a token owner has allowed a spender to use or spend on his behalf.&lt;/p&gt;

&lt;p&gt;transfer (Function): This is used by the initiator of the transaction to transfer an amount from his/her token balance to another user's token balance.&lt;/p&gt;

&lt;p&gt;approve (Function): This is used to approve a user to spend an amount of tokens on behalf of the owner. For example, I can approve you to spend 20000 of my tokens; that amount will be tracked in the allowance mapping on the contract. The _allowed mapping maps from the owner to the spender to the amount the spender is allowed to use.&lt;/p&gt;

&lt;p&gt;transferFrom (Function): The transferFrom function is used by the spender to move part of his assigned allowance from the owner's wallet to a recipient's wallet. For example, if I allow you to spend 20000 of my tokens, then calling transferFrom with an amount less than or equal to 20000 will move that amount from my balance to the recipient's address specified, reducing your allowance accordingly.&lt;/p&gt;
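&lt;p&gt;To make the approve/transferFrom bookkeeping concrete, here is a toy model of the balance and allowance mappings in plain JavaScript (an illustration only, not Solidity and not a real ERC20 implementation):&lt;/p&gt;

```javascript
// Toy model of ERC20 balance and allowance bookkeeping.
const balances = { owner: 100000, spender: 0, recipient: 0 };
const allowed = { owner: { spender: 0 } }; // owner -> spender -> amount

function approve(owner, spender, amount) {
  allowed[owner][spender] = amount;
}

function transferFrom(spender, owner, recipient, amount) {
  if (amount > allowed[owner][spender]) throw new Error("allowance exceeded");
  if (amount > balances[owner]) throw new Error("balance too low");
  allowed[owner][spender] -= amount; // spend part of the allowance
  balances[owner] -= amount;
  balances[recipient] += amount;
}

approve("owner", "spender", 20000);
transferFrom("spender", "owner", "recipient", 5000);

console.log(balances.owner);        // 95000
console.log(balances.recipient);    // 5000
console.log(allowed.owner.spender); // 15000
```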

&lt;p&gt;I hope this article has made it clearer what each feature of an ERC20 contract represents and how it is used. Thank you for reading, and have a wonderful day/night.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Convert a 2D array to an object in JavaScript</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Fri, 22 Sep 2023 15:36:57 +0000</pubDate>
      <link>https://dev.to/veektor_v/convert-a-2d-array-to-an-object-in-javascript-3h32</link>
      <guid>https://dev.to/veektor_v/convert-a-2d-array-to-an-object-in-javascript-3h32</guid>
<description>&lt;p&gt;Here is a brief tutorial on how to convert a JavaScript 2D array to an array of objects.&lt;/p&gt;

&lt;p&gt;Here is the array we want to convert:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;let arrToObj = [
  ["name", "Victor"],
  ["language", "JavaScript"],
  ["country", "Nigeria"],
  ["mood", "Happy Mode"]
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the code to convert the 2D array to an array of objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const result = arrToObj.map(arr =&amp;gt; {
    let obj = {};
    obj[arr[0]] = arr[1];
    return obj;
});

console.log(result);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The result will be:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  { name: 'Victor' },
  { language: 'JavaScript' },
  { country: 'Nigeria' },
  { mood: 'Happy Mode' }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
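&lt;p&gt;If you instead want one single object holding all the pairs (rather than an array of one-key objects), Object.fromEntries, available since ES2019, does it in one call:&lt;/p&gt;

```javascript
const arrToObj = [
  ["name", "Victor"],
  ["language", "JavaScript"],
  ["country", "Nigeria"],
  ["mood", "Happy Mode"]
];

// Object.fromEntries turns an array of [key, value] pairs into an object.
const merged = Object.fromEntries(arrToObj);

console.log(merged);
// { name: 'Victor', language: 'JavaScript', country: 'Nigeria', mood: 'Happy Mode' }
```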



</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Alternative Data, what is it about?</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Mon, 14 Nov 2022 13:00:49 +0000</pubDate>
      <link>https://dev.to/veektor_v/alternative-data-what-is-it-about-1en5</link>
      <guid>https://dev.to/veektor_v/alternative-data-what-is-it-about-1en5</guid>
<description>&lt;p&gt;Alternative data is a form of non-traditional data that is here to stay. It can provide information about how well a company, an individual or an asset may perform in the future, and it is getting more popular by the day. Unlike traditional data, which comes from sources such as press releases and financial statements, alternative data is gathered through non-traditional means.&lt;/p&gt;

&lt;p&gt;This form of data is mostly used in the investment sector by investment analysts, quant traders, fund managers, etc. to guide their investment strategy, so as to reduce risk and improve profits. Proxycurl provides this alternative form of data by setting up technologies that make the collection of data from different sources (social media, email receipts, mobile applications, etc.) simple and fast. Using Proxycurl's &lt;a href="https://nubela.co/proxycurl/docs#company-api-employee-listing-endpoint" rel="noopener noreferrer"&gt;Employee Listing&lt;/a&gt; and &lt;a href="https://nubela.co/proxycurl/docs#people-api-person-profile-endpoint" rel="noopener noreferrer"&gt;Person Profile&lt;/a&gt; API &lt;a href="https://nubela.co/proxycurl/docs#overview" rel="noopener noreferrer"&gt;endpoints&lt;/a&gt;, companies can identify existing employees of FAANG companies, then track their job-change history all in one place.&lt;/p&gt;

&lt;p&gt;You can find out more about alternative data from &lt;a href="https://nubela.co/blog/the-ultimate-guide-to-alternative-data-what-is-it-really/" rel="noopener noreferrer"&gt;Proxycurl's alternative data article&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hashing and comparing hashed passwords with bcryptjs (synchronous method)</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Sat, 29 Oct 2022 20:25:37 +0000</pubDate>
      <link>https://dev.to/veektor_v/hashing-and-comparing-hashed-passwords-with-bcryptjs-synchronous-method-56h2</link>
      <guid>https://dev.to/veektor_v/hashing-and-comparing-hashed-passwords-with-bcryptjs-synchronous-method-56h2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Your passwords are not stored as plain text on any website you've signed up for. The reason for this is that if someone gets access to the database, they will find it difficult to know the passwords of any account they have seen on the database.&lt;/p&gt;

&lt;p&gt;Hashing is the process of converting plain text into another value. A plain-text guess can be compared against the hash, but a hash can't be converted back to plain text. Also, a good password-hashing setup generates unique hashes even when the same plain text is hashed twice, thanks to a random salt. This means if I hash a word like "comfortable" twice, it should produce a different value each time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is bcryptjs?
&lt;/h2&gt;

&lt;p&gt;Bcryptjs is a library that can be used to hash passwords before storing them, and to compare plain-text passwords with hashed passwords to see if they match.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup bcryptjs
&lt;/h2&gt;

&lt;p&gt;Create a folder and name it "bcrypt-tutorial". Inside the folder, open a terminal and type "npm init -y" to create a project, then type "touch index.js" to create the index.js file we'll write our code in.&lt;br&gt;
After this, open "bcrypt-tutorial" with your favorite code editor; in this article I'll use VS Code.&lt;/p&gt;

&lt;p&gt;Then install bcryptjs using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm install bcryptjs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So far, you've created a folder for the project, created a project so that your dependencies can be managed from within the folder and you've installed "bcryptjs", well done.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to hash a password
&lt;/h2&gt;

&lt;p&gt;Let's say the password we want to hash is "comfortable".&lt;/p&gt;

&lt;p&gt;Inside the index.js file you created after requiring/importing bcryptjs, type the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const bcrypt = require("bcryptjs");

const password = "comfortable";

const salt = bcrypt.genSaltSync(10);

const hashedPassword = bcrypt.hashSync(password, salt);

console.log(hashedPassword);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain the lines of code we have written so far:&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const bcrypt = require("bcryptjs");&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This line of code imports the bcryptjs package to be used inside the index.js file.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const password = "comfortable";&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The password we want to hash.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const salt = bcrypt.genSaltSync(10);&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This generates the salt used to hash the plain password. The salt itself is a random value mixed into the hash so that identical passwords produce different hashes. The "10" passed as an argument is the cost factor: it controls how much work the hash function does, and incrementing it by one doubles the time taken to calculate the hash.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const hashedPassword = bcrypt.hashSync(password, salt);&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This line of code actually hashes the password using the salt generated with "bcrypt.genSaltSync()".&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;console.log(hashedPassword);&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This line of code shows the result of hashing the password.&lt;/p&gt;

&lt;p&gt;This is the result I got: &lt;strong&gt;$2a$10$qADzCMgxKJjE7gLNdh0M6.cWWYyyDDHUCQWMAXdk87pFuOrkCSpQO&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Up next, I guess you will want to know how to compare the hashed password with the plain password, let's see how to do that.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to compare a plain password with a hashed password.
&lt;/h2&gt;

&lt;p&gt;Let's say you want to compare the original password (which is "comfortable") with the hashed password to see if they match, here is how to do it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const bcrypt = require("bcryptjs");

const password = "comfortable";

// The hash of the password from earlier
// Yours will be different
const hashedPassword = "$2a$10$qADzCMgxKJjE7gLNdh0M6.cWWYyyDDHUCQWMAXdk87pFuOrkCSpQO";

const passwordsMatch = bcrypt.compareSync(password, hashedPassword);

console.log(passwordsMatch);

if(passwordsMatch) {
    console.log("Passwords match");
} else {
    console.log("Passwords don't match");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me explain what we've written so far.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const hashedPassword = "...";&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The value of the password that was hashed.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;const passwordsMatch = bcrypt.compareSync(password, hashedPassword);&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;We use this line of code to compare the plain password with the hashed password to see if they match. This will return a Boolean of true if the passwords match and false if the passwords don't match.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if(passwordsMatch) {
    console.log("Passwords match");
} else {
    console.log("Passwords don't match");
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This block takes the Boolean value stored in passwordsMatch and uses it to print whether or not the passwords match.&lt;/p&gt;

&lt;p&gt;The result I got:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;true
Passwords match
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This method can be used when setting up a registration route on your server. Before saving any user's password, always make sure you hash it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Get information about any school's LinkedIn profile by making a simple HTTP request to Proxycurl request</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Sat, 29 Oct 2022 14:58:55 +0000</pubDate>
      <link>https://dev.to/veektor_v/get-information-about-any-schools-linkedin-profile-by-making-a-simple-http-request-to-proxycurl-request-524j</link>
      <guid>https://dev.to/veektor_v/get-information-about-any-schools-linkedin-profile-by-making-a-simple-http-request-to-proxycurl-request-524j</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;As a web scraper, data scientist, or hobbyist, you may want to know more about a school by simply making an API call but don't know which API can offer that service. Proxycurl has an API called the School API. This API is used to get information about a school as long as the school has a profile on LinkedIn.&lt;/p&gt;

&lt;p&gt;The School API has an endpoint called the School Profile endpoint which is used to fetch information about a school's name, description, website URL, locations, etc.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Interact with the School Profile Endpoint
&lt;/h3&gt;

&lt;p&gt;You can interact with the School Profile endpoint by simply making a GET request to &lt;a href="https://nubela.co/proxycurl/api/linkedin/school" rel="noopener noreferrer"&gt;https://nubela.co/proxycurl/api/linkedin/school&lt;/a&gt; with the following parameters:&lt;br&gt;
&lt;strong&gt;url&lt;/strong&gt; (required): the URL of the school profile, for example, &lt;a href="https://www.linkedin.com/school/national-university-of-singapore" rel="noopener noreferrer"&gt;https://www.linkedin.com/school/national-university-of-singapore&lt;/a&gt;.&lt;br&gt;
&lt;strong&gt;use_cache&lt;/strong&gt; (optional): This parameter accepts either &lt;strong&gt;"if-present"&lt;/strong&gt; (the default value) or &lt;strong&gt;"if-recent"&lt;/strong&gt;. &lt;strong&gt;"if-present"&lt;/strong&gt; fetches the profile from cache regardless of the age of the profile; if the profile is not available in cache, the API will attempt to source the profile from external sources.&lt;br&gt;
With the &lt;strong&gt;"if-recent"&lt;/strong&gt; value, the API will make its best effort to return a fresh profile no older than 29 days. This costs an extra 1 credit.&lt;br&gt;
Note: Each successful request made with the School Profile endpoint costs 1 credit.&lt;br&gt;
Here is how to make the API call using JavaScript's fetch API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const parameters = new URLSearchParams({
 url: 'https://www.linkedin.com/school/national-university-of-singapore',
 use_cache: 'if-present'
});
fetch("https://nubela.co/proxycurl/api/linkedin/profile/resolve?" + parameters, {
 method: "GET",
 headers: {
 Authorization: "Bearer &amp;lt;your API key&amp;gt;"
 }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is how to make the API call using Python's requests package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests
response = requests.get("https://nubela.co/proxycurl/api/linkedin/school",
 params = {
 'url': 'https://www.linkedin.com/school/national-university-of-singapore',
 'use_cache': 'if-present'
 },
 headers={'Authorization': 'Bearer &amp;lt;your API key&amp;gt;'})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response should look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "background_cover_image_url": "https://media-exp1.licdn.com/dms/image/C4D1BAQH9RnIm5udicQ/company-background_10000/0/1519796779731?e=1654585200\u0026v=beta\u0026t=OpKFclmBc0ERcn8EiRImFXJrsVmNXlXOD9JpJx5NJQA",
    "company_size": [
        5001,
        10000
    ],
    "company_size_on_linkedin": 15199,
    "company_type": "EDUCATIONAL_INSTITUTION",
    "description": "At NUS, we are shaping the future through our people and our pursuit of new frontiers in knowledge. In a single century, we have become a university of global influence and an Asian thought leader. Our location at the crossroads of Asia informs our mission and gives us a tremendous vantage point to help create opportunities and address the pressing issues facing Singapore, Asia and the world.\r\rAt NUS, we believe in education, research and service that change lives.",
    "follower_count": 470304,
    "founded_year": 1905,
    "hq": {
        "city": "Singapore",
        "country": "SG",
        "is_hq": true,
        "line_1": "21 Lower Kent Ridge Road, Singapore",
        "postal_code": "119077",
        "state": null
    },
    "industry": "Higher Education",
    "linkedin_internal_id": "5524",
    "locations": [
        {
            "city": "Singapore",
            "country": "SG",
            "is_hq": true,
            "line_1": "21 Lower Kent Ridge Road, Singapore",
            "postal_code": "119077",
            "state": null
        }
    ],
    "name": "National University of Singapore",
    "profile_pic_url": "https://media-exp1.licdn.com/dms/image/C4D0BAQGvBq9cz6AIIQ/company-logo_400_400/0/1519856127538?e=1661990400\u0026v=beta\u0026t=K6ND2NedE8iKNY9YKoQXhDCl773XebFKY0VbX-5sATA",
    "search_id": "5524",
    "similar_companies": [
        {
            "industry": "Education Management",
            "link": "https://www.linkedin.com/school/nus-business-school/",
            "location": null,
            "name": "NUS Business School"
        },
        {
            "industry": "Higher Education",
            "link": "https://www.linkedin.com/school/nusfass/",
            "location": null,
            "name": "NUS Faculty of Arts and Social Sciences"
        },
        {
            "industry": "Research",
            "link": "https://www.linkedin.com/company/solar-energy-research-institute-of-singapore-seris",
            "location": null,
            "name": "Solar Energy Research Institute of Singapore"
        },
        {
            "industry": "Higher Education",
            "link": "https://www.linkedin.com/school/duke-nus/",
            "location": null,
            "name": "Duke-NUS Medical School"
        },
        {
            "industry": "Professional Training \u0026 Coaching",
            "link": "https://www.linkedin.com/company/iss_nus",
            "location": null,
            "name": "Institute of Systems Science, National University of Singapore"
        },
        {
            "industry": "Higher Education",
            "link": "https://www.linkedin.com/company/nusfst",
            "location": null,
            "name": "NUS Department of Food Science and Technology"
        },
        {
            "industry": "Education Management",
            "link": "https://www.linkedin.com/company/centre-for-future-ready-graduates",
            "location": null,
            "name": "NUS Centre for Future-ready Graduates"
        }
    ],
    "specialities": [
        "education",
        "research",
        "broad-based curriculum",
        "cross-faculty enrichment"
    ],
    "tagline": null,
    "universal_name_id": "national-university-of-singapore",
    "updates": [],
    "website": "http://nus.edu.sg"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
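&lt;p&gt;Once you have the parsed JSON, you will typically pick out just the fields you need. Here is a small sketch (the field names are taken from the sample response above; the helper function is my own, not part of the API):&lt;/p&gt;

```javascript
// Extract a compact summary from a School Profile response object.
function summarizeSchool(profile) {
  return {
    name: profile.name,
    founded: profile.founded_year,
    website: profile.website,
    city: profile.hq ? profile.hq.city : null
  };
}

// Using a trimmed-down version of the sample response:
const sample = {
  name: "National University of Singapore",
  founded_year: 1905,
  website: "http://nus.edu.sg",
  hq: { city: "Singapore" }
};

console.log(summarizeSchool(sample));
```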



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;As we can see, a simple API request to Proxycurl's School Profile endpoint can give us information about the school's profile we are working with. More at &lt;a href="https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/" rel="noopener noreferrer"&gt;https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>All you need to know about Proxycurl API</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Sat, 29 Oct 2022 00:30:22 +0000</pubDate>
      <link>https://dev.to/veektor_v/p-4l06</link>
      <guid>https://dev.to/veektor_v/p-4l06</guid>
      <description>&lt;p&gt;&lt;strong&gt;Content&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Getting Started&lt;/li&gt;
&lt;li&gt;Proxycurl's APIs&lt;/li&gt;
&lt;li&gt;Proxycurl's Python SDK&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;br&gt;
Have you thought about how to get insights on jobs, people, companies, etc. on LinkedIn without worrying about scaling a web-scraping and data-science team? &lt;a href="https://nubela.co/proxycurl/" rel="noopener noreferrer"&gt;Proxycurl&lt;/a&gt; offers APIs and libraries that can be used to look up people and companies, enrich people's profiles, and look up contact information about a business or person (link to Proxycurl here: &lt;a href="https://nubela.co/proxycurl/" rel="noopener noreferrer"&gt;https://nubela.co/proxycurl/&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proxycurl's APIs&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://nubela.co/proxycurl/" rel="noopener noreferrer"&gt;Proxycurl&lt;/a&gt; has the following APIs which can be used for different purposes, the APIs are as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#company-api" rel="noopener noreferrer"&gt;Company API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: This API contains endpoints that can be used to get a company's list of employees, employee count, office locations, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#contact-api" rel="noopener noreferrer"&gt;Contact API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: With this API, you can get a person's contact information: work email address, profile URL, list of contact numbers, list of email addresses, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#jobs-api" rel="noopener noreferrer"&gt;Jobs API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: The Jobs API lets you get details about a job and the list of open positions in a company.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#people-api" rel="noopener noreferrer"&gt;People API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: The People API can be used to extract a person's information from LinkedIn, such as their profile data (picture, job history, etc.) and their LinkedIn profile URL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#school-api" rel="noopener noreferrer"&gt;School API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: The School API can be used to get information about a school (its location, profile picture, etc.).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#reveal-api" rel="noopener noreferrer"&gt;Reveal API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: This API contains an endpoint that reveals the company behind an IPv4 address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;u&gt;&lt;b&gt;&lt;a href="https://nubela.co/proxycurl/docs#meta-api-view-credit-balance-endpoint" rel="noopener noreferrer"&gt;Meta API&lt;/a&gt;&lt;/b&gt;&lt;/u&gt;: This API is not really about LinkedIn data; it is used to check the credit balance of your Proxycurl account.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
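&lt;p&gt;To illustrate, here is a minimal Python sketch of calling one of these APIs over HTTP. Treat it as a hedged example: the endpoint path ("linkedin/company") and the "url" parameter are assumptions based on Proxycurl's docs, and the helper name build_request is hypothetical. The real network call is left commented out because it consumes credits.&lt;/p&gt;

```python
import os
from urllib.parse import urlencode

# Hypothetical helper: endpoint path and parameter names are assumptions;
# check Proxycurl's docs (https://nubela.co/proxycurl/docs) before use.
API_BASE = "https://nubela.co/proxycurl/api"

def build_request(endpoint, params):
    """Build the full URL and auth headers for a Proxycurl API call."""
    url = API_BASE + "/" + endpoint.lstrip("/") + "?" + urlencode(params)
    headers = {"Authorization": "Bearer " + os.environ.get("PROXYCURL_API_KEY", "")}
    return url, headers

url, headers = build_request(
    "linkedin/company",
    {"url": "https://www.linkedin.com/company/microsoft"},
)
print(url)

# Uncomment to make the real call with the requests package (uses credits):
# import requests
# print(requests.get(url, headers=headers).json())
```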

&lt;p&gt;When you sign up with Proxycurl, you instantly get 10 free credits to make API calls. See the &lt;a href="https://nubela.co/proxycurl/pricing" rel="noopener noreferrer"&gt;pricing&lt;/a&gt; page for the cost of each available endpoint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proxycurl's Python SDK&lt;/strong&gt;&lt;br&gt;
Apart from these APIs, Proxycurl also provides a Python SDK, which can be used for looking up people and companies, enriching people's profiles, etc.&lt;/p&gt;

&lt;p&gt;It can be installed using the command &lt;code&gt;pip install proxycurl-py&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The SDK supports the asyncio, gevent, and twisted concurrency models.&lt;/p&gt;

&lt;p&gt;Each can be installed with the matching extra:&lt;/p&gt;

&lt;p&gt;With asyncio&lt;br&gt;
&lt;code&gt;$ pip install 'proxycurl-py[asyncio]'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With gevent&lt;br&gt;
&lt;code&gt;$ pip install 'proxycurl-py[gevent]'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;With twisted&lt;br&gt;
&lt;code&gt;$ pip install 'proxycurl-py[twisted]'&lt;/code&gt;&lt;/p&gt;
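&lt;p&gt;As a sketch of why the concurrency models matter: the asyncio flavour lets you enrich many profiles at once. The commented-out SDK call below follows the proxycurl-py README, but treat the class and method names as assumptions that may differ across versions; the runnable part uses a stub fetcher to show the concurrent pattern itself.&lt;/p&gt;

```python
import asyncio

# Hypothetical SDK usage (names assumed from the proxycurl-py README):
#
# from proxycurl.asyncio import Proxycurl
# proxycurl = Proxycurl()  # reads the API key from the environment
# person = asyncio.run(proxycurl.linkedin.person.get(
#     url="https://www.linkedin.com/in/some-profile/"
# ))

async def enrich_all(urls, fetch):
    """Run one fetch coroutine per URL concurrently and gather the results."""
    return await asyncio.gather(*(fetch(u) for u in urls))

# Demo with a stub fetcher standing in for the real SDK call:
async def fake_fetch(url):
    return {"profile_url": url}

results = asyncio.run(enrich_all(["a", "b"], fake_fetch))
print(results)  # one stub profile per input URL, in order
```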

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With Proxycurl, you can build and scale data-driven applications about people and companies without worrying about scaling a web scraping and data-science team. This can help accelerate sales and marketing automation for growth-stage startups. Find out more here: &lt;a href="https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/" rel="noopener noreferrer"&gt;https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>proxycurl</category>
      <category>api</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Get a company's LinkedIn jobs listings by using Proxycurl's Jobs API</title>
      <dc:creator>Victor Peter</dc:creator>
      <pubDate>Fri, 28 Oct 2022 21:29:57 +0000</pubDate>
      <link>https://dev.to/veektor_v/get-a-linkedin-companys-jobs-listings-by-using-proxycurls-jobs-api-16m2</link>
      <guid>https://dev.to/veektor_v/get-a-linkedin-companys-jobs-listings-by-using-proxycurls-jobs-api-16m2</guid>
      <description>&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Introduction&lt;/li&gt;
&lt;li&gt;Endpoints Concerned with Proxycurl's Jobs API
&lt;ul&gt;
&lt;li&gt;Job Listing Endpoint&lt;/li&gt;
&lt;li&gt;Job Profile Endpoint&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Proxycurl simplifies building data-driven applications and web scraping by providing simple but powerful APIs that make getting information about a person or a company as easy as possible.&lt;/p&gt;

&lt;p&gt;Today, we are going to look into Proxycurl's Jobs API. The Jobs API makes it simple to fetch the list of jobs a company has posted on LinkedIn and to get structured data from a LinkedIn job profile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Endpoints Concerned with Proxycurl's Jobs API
&lt;/h2&gt;

&lt;p&gt;Proxycurl has two endpoints under its Jobs API:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Job listing endpoint&lt;/li&gt;
&lt;li&gt;Job profile endpoint&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;1. Job Listing Endpoint&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This endpoint accepts the &lt;em&gt;search_id&lt;/em&gt; of the company as its (required) parameter and sends back a list of jobs posted by the company. The &lt;em&gt;search_id&lt;/em&gt; value can be obtained from Proxycurl's Company Profile API.&lt;/p&gt;

&lt;p&gt;Request example using JavaScript's fetch API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const parameters = new URLSearchParams({
  search_id: "1035"
});

fetch("https://nubela.co/proxycurl/api/v2/linkedin/company/job?" + parameters, {
  method: "GET",
  headers: {
    Authorization: "Bearer &amp;lt;your API key&amp;gt;"
  }
})
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data); });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Request example using Python's requests package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

response = requests.get("https://nubela.co/proxycurl/api/v2/linkedin/company/job",
      params = {
          'search_id': "1035"
      },
      headers={'Authorization': 'Bearer &amp;lt;your API key&amp;gt;'})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response will contain a job array/list along with next_page_no, next_page_api_url, previous_page_no and previous_page_api_url for pagination. Each element of the job array/list is an object/dictionary that contains the following:&lt;/p&gt;

&lt;p&gt;company: The company name.&lt;/p&gt;

&lt;p&gt;company_url: The company's URL.&lt;/p&gt;

&lt;p&gt;job_title: The job title.&lt;/p&gt;

&lt;p&gt;job_url: The job's URL.&lt;/p&gt;

&lt;p&gt;list_date: Date the job was published.&lt;/p&gt;

&lt;p&gt;location: The location associated with the Job.&lt;/p&gt;

&lt;p&gt;Example response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "job": [
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Security Analyst (DART) - Detection and Response Team",
            "job_url": "https://www.linkedin.com/jobs/view/3099474469",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Technology Specialists BizApps",
            "job_url": "https://www.linkedin.com/jobs/view/3073299528",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Customer Success Management IC4",
            "job_url": "https://www.linkedin.com/jobs/view/3088297589",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Web Sales \u0026 Customer Support Advisor - Consumer Customers",
            "job_url": "https://www.linkedin.com/jobs/view/3073408044",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Digital Advisor",
            "job_url": "https://www.linkedin.com/jobs/view/3091944620",
            "list_date": null,
            "location": "Manchester, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Digital Advisor",
            "job_url": "https://www.linkedin.com/jobs/view/3091945492",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Meeting Room Global Black Belt Sales Professional",
            "job_url": "https://www.linkedin.com/jobs/view/3073695907",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Customer Success Manager - Modern Work",
            "job_url": "https://www.linkedin.com/jobs/view/3092059527",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Business Administrator",
            "job_url": "https://www.linkedin.com/jobs/view/3073292283",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Account Delivery Management IC6",
            "job_url": "https://www.linkedin.com/jobs/view/3073466352",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Executive Producer-Xbox Publishing",
            "job_url": "https://www.linkedin.com/jobs/view/3089096967",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Digital Advisor",
            "job_url": "https://www.linkedin.com/jobs/view/3091944622",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Full Time Opportunities - AI Researcher",
            "job_url": "https://www.linkedin.com/jobs/view/2982837417",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Escalation Engineer Technical Support - Azure/Telco",
            "job_url": "https://www.linkedin.com/jobs/view/3095506397",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Intune Support Engineer",
            "job_url": "https://www.linkedin.com/jobs/view/3073267700",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Customer Success Account Mgmt - Application Development",
            "job_url": "https://www.linkedin.com/jobs/view/3085586525",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Security Analyst (DART) - Detection and Response Team",
            "job_url": "https://www.linkedin.com/jobs/view/3099476216",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Researcher in User Experience Design",
            "job_url": "https://www.linkedin.com/jobs/view/3073409764",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Product Manager",
            "job_url": "https://www.linkedin.com/jobs/view/3097096699",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Internship opportunities: Confidential Computing",
            "job_url": "https://www.linkedin.com/jobs/view/3043459039",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Internship opportunities: Machine Learning for Visual Communication",
            "job_url": "https://www.linkedin.com/jobs/view/3037128666",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Solution Area Specialists Cyber Security",
            "job_url": "https://www.linkedin.com/jobs/view/3073693634",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Studio Discipline Lead",
            "job_url": "https://www.linkedin.com/jobs/view/3094117901",
            "list_date": null,
            "location": "London, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Studio Discipline Lead",
            "job_url": "https://www.linkedin.com/jobs/view/3094119661",
            "list_date": null,
            "location": "Reading, England, United Kingdom"
        },
        {
            "company": "Microsoft",
            "company_url": "https://www.linkedin.com/company/microsoft",
            "job_title": "Studio Discipline Lead",
            "job_url": "https://www.linkedin.com/jobs/view/3094124026",
            "list_date": null,
            "location": "Cambridge, England, United Kingdom"
        }
    ],
    "next_page_api_url": "http://proxycurl-web.127.0.0.1.nip.io:5002/proxycurl-dev/proxycurl-dev/api/v2/linkedin/company/job?pagination=eyJwYWdlIjogMX0\u0026search_id=1035",
    "next_page_no": 1,
    "previous_page_api_url": null,
    "previous_page_no": null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
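&lt;p&gt;Because results are paginated, a client typically follows next_page_api_url until it is null. Below is a small Python sketch of that loop; iter_jobs and fetch_page are hypothetical names, and the demo uses in-memory dictionaries shaped like the response above instead of real API calls.&lt;/p&gt;

```python
def iter_jobs(first_page, fetch_page):
    """Yield every job across pages, following next_page_api_url links."""
    page = first_page
    while page is not None:
        for job in page.get("job", []):
            yield job
        next_url = page.get("next_page_api_url")
        # fetch_page stands in for an HTTP GET of next_url with your API key
        page = fetch_page(next_url) if next_url else None

# Demo with two in-memory pages standing in for real API responses:
page2 = {"job": [{"job_title": "Product Manager"}], "next_page_api_url": None}
page1 = {"job": [{"job_title": "Data Engineer"}], "next_page_api_url": "page-2"}

jobs = list(iter_jobs(page1, lambda url: page2))
print([j["job_title"] for j in jobs])  # titles from both pages, in order
```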



&lt;h3&gt;
  
  
  &lt;strong&gt;2. Job Profile Endpoint&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The job profile endpoint is used to get more information about a specific job that is posted. This endpoint accepts a URL parameter which is the URL of the LinkedIn Job Profile to target and returns the following in its response:&lt;/p&gt;

&lt;p&gt;linkedin_internal_id: The identifier concerned with a specific job.&lt;/p&gt;

&lt;p&gt;job_description: The description of the job associated with the internal identifier.&lt;/p&gt;

&lt;p&gt;apply_url: The URL to apply for the job.&lt;/p&gt;

&lt;p&gt;title: The job title.&lt;/p&gt;

&lt;p&gt;location: The location for the job.&lt;/p&gt;

&lt;p&gt;company: The name of the company issuing the job.&lt;/p&gt;

&lt;p&gt;seniority_level: The level of experience/competence expected of the candidates applying for the job.&lt;/p&gt;

&lt;p&gt;industry: The industry the job is concerned with.&lt;/p&gt;

&lt;p&gt;employment_type: The employment type (part-time or full-time).&lt;/p&gt;

&lt;p&gt;job_functions: The functions/duties expected of applicants.&lt;/p&gt;

&lt;p&gt;total_applicants: Total number of applicants that have applied.&lt;/p&gt;

&lt;p&gt;Request example using JavaScript's fetch API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const parameters = new URLSearchParams({
  url: "https://www.linkedin.com/jobs/view/3046202003"
});

fetch("https://nubela.co/proxycurl/api/linkedin/job?" + parameters, {
  method: "GET",
  headers: {
    Authorization: "Bearer &amp;lt;your API key&amp;gt;"
  }
})
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data); });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Request example using Python's requests package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import requests

response = requests.get("https://nubela.co/proxycurl/api/linkedin/job",
      params = {
          "url": "https://www.linkedin.com/jobs/view/3046202003"
      },
      headers={'Authorization': 'Bearer &amp;lt;your API key&amp;gt;'})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "apply_url": null,
    "company": {
        "logo": "https://media-exp1.licdn.com/dms/image/C4D0BAQHiNSL4Or29cg/company-logo_400_400/0/1519856215226?e=1661385600\u0026v=beta\u0026t=rUecQpduLPDavL3JswjLsJAUNgSu1Q2l3JS5sGp8nHk",
        "name": "Google",
        "url": "https://www.linkedin.com/company/google"
    },
    "employment_type": "Full-time",
    "industry": [
        "Internet"
    ],
    "job_description": "This role may also be located in our Playa Vista, CA campus.\n\nNote: By applying to this position you will have an opportunity to share your preferred working location from the following: Redwood City, CA, USA; Ann Arbor, MI, USA; Chicago, IL, USA; New York, NY, USA; Los Angeles, CA, USA.\n\nMinimum qualifications:\nBachelor\u2019s degree in Engineering, Computer Science, Information Systems, Statistics, Economics, Mathematics, Finance, a related quantitative field, or equivalent practical experience.2 years of experience in business intelligence, data engineering, data modeling, or using analytics with SQL.Experience with programming languages (e.g. Python).Experience partnering or working with stakeholders across organizational boundaries.\n\nPreferred qualifications:\n4 years of experience designing and building scalable data pipelines to enable data-driven business selections.Experience in statistical tools (e.g., R, SPSS, MATLAB, etc.).Experience in machine learning models (e.g., scikit-learn, TensorFlow, etc.).Knowledge of commercial and other reporting tools and technologies (e.g., Tableau, QlikView, D3, Microstrategy, BusinessObjects, Cognos, etc.).Ability to manage multiple projects, and communicate findings/reports to stakeholders and non-techincal audiences. Excellent verbal and written communication skills.\n\nAbout The Job\n\nAs a Data Engineer, you will use an analytical, data-driven approach to drive understanding of business changes. You will build data pipelines that enable engineers, analysts, and other stakeholders across the organization. You will also build data models to deliver insightful analytics while ensuring data integrity.\n\nWhen our millions of advertisers and publishers are happy, so are we! 
Our Google Customer Solutions (GCS) team of entrepreneurial, enthusiastic and client-focused members are the \"human face\" of Google, helping entrepreneurs both individually and broadly build their online presence and grow their businesses. We are dedicated to growing the unique needs of advertising companies. Our teams of strategists, analysts, advisers and support specialists collaborate closely to spot and analyze customer needs and trends. In collaboration, we create and implement business plans broadly for all types of businesses.\n\nResponsibilities\n\nBuild data pipelines, reports, best practices, and frameworks that enable analysts and stakeholders across the organization.Recognize and adopt best practices in developing pipelines and analytical insights, including data integrity, test design, analysis, validation, and documentation.Design and develop scalable and actionable solutions that provide insights to help advertisers grow.Work with stakeholders to understand feature and tool gaps and innovate on behalf of our customers.\n\nGoogle is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google\u0027s EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .",
    "job_functions": [],
    "linkedin_internal_id": "3046202003",
    "location": null,
    "seniority_level": null,
    "title": "Data Engineer, Google Customer Solutions",
    "total_applicants": null
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
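&lt;p&gt;Note that fields like apply_url, location and seniority_level can come back null (as in the sample above), so client code should handle missing values. The following Python sketch (summarize_job is a hypothetical helper, not part of any SDK) condenses a profile response into one line while tolerating nulls.&lt;/p&gt;

```python
def summarize_job(profile):
    """Build a short human-readable summary from a job profile dict."""
    company = (profile.get("company") or {}).get("name", "Unknown company")
    parts = [profile.get("title") or "Untitled role", "at " + company]
    if profile.get("employment_type"):
        parts.append("(" + profile["employment_type"] + ")")
    if profile.get("location"):  # skipped when the endpoint returns null
        parts.append("in " + profile["location"])
    return " ".join(parts)

sample = {
    "title": "Data Engineer, Google Customer Solutions",
    "company": {"name": "Google"},
    "employment_type": "Full-time",
    "location": None,
}
print(summarize_job(sample))
# Data Engineer, Google Customer Solutions at Google (Full-time)
```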



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With Proxycurl's Jobs API, you can confidently fetch a company's job listings and get detailed information about any specific job. Find out more here: &lt;a href="https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/" rel="noopener noreferrer"&gt;https://nubela.co/blog/linkdb-an-exhaustive-dataset-of-linkedin-members-and-companies/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>jobs</category>
      <category>linkedin</category>
      <category>proxycurl</category>
    </item>
  </channel>
</rss>
