<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MongoDB</title>
    <description>The latest articles on DEV Community by MongoDB (@mongodb_staff).</description>
    <link>https://dev.to/mongodb_staff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F56177%2F3a0504e3-1139-4110-b903-08949636010a.jpg</url>
      <title>DEV Community: MongoDB</title>
      <link>https://dev.to/mongodb_staff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mongodb_staff"/>
    <language>en</language>
    <item>
      <title>MongoDB Enterprise Running on OpenShift</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Fri, 13 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/mongodb-enterprise-running-on-openshift-76k</link>
      <guid>https://dev.to/mongodb/mongodb-enterprise-running-on-openshift-76k</guid>
<description>&lt;p&gt;In order to compete and get products to market rapidly, enterprises today leverage cloud-ready and cloud-enabled technologies. Platforms as a Service (PaaS) provide out-of-the-box capabilities that let application developers focus on their business logic and users instead of infrastructure and interoperability. This ability separates successful projects from those that drown in tangential work that never ends.&lt;/p&gt;

&lt;p&gt;In this blog post, we'll cover MongoDB's general PaaS and cloud enablement strategy as well as touch upon some new features of &lt;a href="https://www.openshift.com/" rel="noopener noreferrer"&gt;Red Hat’s OpenShift&lt;/a&gt; which enable you to run production-ready MongoDB clusters. We're also excited to announce the developer preview of MongoDB Enterprise Server running on OpenShift. This preview allows you to test out how your applications will interact with MongoDB running on OpenShift.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration Approach for MongoDB and PaaS
&lt;/h3&gt;

&lt;p&gt;Platforms as a Service are increasingly popular, especially for those of you charged with building "cloud-enabled" or "cloud-ready" applications but required to use private data center deployments today. Integrating a database with a PaaS must be done carefully to ensure that database instances can be deployed, configured, and administered properly.&lt;/p&gt;

&lt;p&gt;There are two common components of any production-ready cloud-enabled database deployment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A modern, rock-solid database (like MongoDB).&lt;/li&gt;
&lt;li&gt;Tooling to enable telemetry, access and authorization, and backups (not to mention things like proactive alerting that integrates with your chosen issue tracking system, complete REST-based APIs for automation, and a seamless transition to hosted services). For MongoDB, this is &lt;a href="https://www.mongodb.com/download-center?jmp=partners_OpenShift#ops-manager" rel="noopener noreferrer"&gt;MongoDB Ops Manager&lt;/a&gt;.
&lt;center&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage3-uf9i1l440w.png" width="366" height="177"&gt;&lt;/center&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A deep integration of MongoDB Ops Manager is core to our approach of integrating MongoDB with popular PaaS offerings. The design follows the "separation of concerns" principle: the chosen PaaS handles the physical or virtual machines, CPU and RAM allotment, persistent storage requirements, and machine-level access control, while MongoDB Ops Manager controls all aspects of run-time database deployments.&lt;/p&gt;

&lt;p&gt;This strategy enables system administrators to quickly deploy "MongoDB as a Solution" offerings within their own data centers. In turn, enterprise developers can easily self-service their own database needs.&lt;/p&gt;

&lt;p&gt;If you haven't already, download &lt;a href="https://www.mongodb.com/download-center?jmp=partners_OpenShift#ops-manager" rel="noopener noreferrer"&gt;MongoDB Ops Manager&lt;/a&gt; for the best way to run MongoDB.&lt;/p&gt;

&lt;h3&gt;
  
  
  MongoDB Enterprise Server OpenShift Developer Preview
&lt;/h3&gt;

&lt;p&gt;Our "developer preview" for MongoDB on OpenShift can be found here: &lt;a href="https://github.com/jasonmimick/mongodb-openshift-dev-preview" rel="noopener noreferrer"&gt;https://github.com/jasonmimick/mongodb-openshift-dev-preview&lt;/a&gt;. The preview allows provisioning of both MongoDB replica sets and "agent-only" nodes (for easy later use as MongoDB instances) directly through OpenShift. The deployments automatically register themselves with an instance of &lt;a href="https://www.mongodb.com/download-center?jmp=partners_OpenShift#ops-manager" rel="noopener noreferrer"&gt;MongoDB Ops Manager&lt;/a&gt;. All the technical details and notes of getting started can be found right in the repo. Here we'll just describe some of functionality and technology used.&lt;/p&gt;

&lt;p&gt;The preview requires access to an OpenShift cluster running version 3.7 or later and takes advantage of the new Kubernetes Service Catalog features. Specifically, we're using the &lt;a href="https://github.com/openshift/ansible-service-broker" rel="noopener noreferrer"&gt;Ansible Service Broker&lt;/a&gt; and have built an &lt;a href="https://github.com/ansibleplaybookbundle/ansible-playbook-bundle" rel="noopener noreferrer"&gt;Ansible Playbook Bundle&lt;/a&gt; which installs an icon into your OpenShift console. The preview also contains an example of an OpenShift template which supports replica sets and similar functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Tour: Deploying Your First Cluster
&lt;/h3&gt;

&lt;p&gt;Once you have your development environment ready (see notes in the developer preview GitHub repository) and have configured an instance of &lt;a href="https://www.mongodb.com/download-center?jmp=partners_OpenShift#ops-manager" rel="noopener noreferrer"&gt;MongoDB Ops Manager&lt;/a&gt;, you're ready to start deploying MongoDB Enterprise Server.&lt;/p&gt;

&lt;center&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage2-ilu5vbwte9.png" width="434" height="222"&gt;&lt;/center&gt;

&lt;p&gt;Clusters can be provisioned through the OpenShift web console or via command line. The web console provides an intuitive "wizard-like" interface in which users specify values for various parameters, such as MongoDB version, storage size allocation, and MongoDB Ops Manager Organization/Project to name a few.&lt;/p&gt;

&lt;p&gt;Command line installs are also available in which parameter values can be scripted or predefined. This extensibility allows for automation and integration with various Continuous Integration and Continuous Delivery technologies.&lt;/p&gt;
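&lt;p&gt;A scripted provisioning step might look like the following sketch. The template file name and parameter names below are illustrative only, not taken from the developer preview repository (check the repo for the real ones), and the command is composed and printed rather than executed, since it needs a live OpenShift cluster:&lt;/p&gt;

```shell
# Sketch of scripted provisioning with the OpenShift CLI ('oc process'
# renders a template with parameter values, 'oc create' applies it).
# Template file and parameter names here are illustrative only.
TEMPLATE="mongodb-enterprise-template.yaml"
PARAMS="-p MONGODB_VERSION=3.6.3 -p DISK_SIZE_GB=10"

# Compose the command a CI/CD job would run (printed, not executed, here):
CMD="oc process -f ${TEMPLATE} ${PARAMS} | oc create -f -"
echo "${CMD}"
```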

&lt;p&gt;A future post will detail cluster configuration and various management scenarios, such as upgrades, performance tuning, and troubleshooting connectivity, so stay tuned.&lt;/p&gt;

&lt;p&gt;We're excited to introduce simple and efficient ways to manage your MongoDB deployments with tools such as OpenShift and Kubernetes. Please try out the developer preview and drop us a line on Twitter with #mongodb-openshift or email &lt;a href="mailto:bd@mongodb.com"&gt;bd@mongodb.com&lt;/a&gt; for more information.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Modern Distributed Application Deployment with Kubernetes and MongoDB Atlas</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Thu, 05 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/modern-distributed-application-deployment-with-kubernetes-and-mongodb-atlas-159j</link>
      <guid>https://dev.to/mongodb/modern-distributed-application-deployment-with-kubernetes-and-mongodb-atlas-159j</guid>
<description>&lt;p&gt;Storytelling is one of the parts of being a Developer Advocate that I enjoy. Sometimes the stories are about the special moments when the team comes together to keep a system running or build it faster. But there are also less-than-glorious tales to be told about the software deployments I’ve been involved in, and when we needed to deploy several times a day, those tales turn into nightmares.&lt;/p&gt;

&lt;p&gt;For some time, I worked at a company that believed that deploying to production several times a day was ideal for project velocity. Our team was working to ensure that advertising software across our media platform was always being updated and released. One of the issues was a lack of real automation in the process of applying new code to our application servers.&lt;/p&gt;

&lt;p&gt;What both ops and development teams had in common was a desire for improved ease and agility around application and configuration deployments. In this article, I’ll present some of my experiences and cover how MongoDB Atlas and Kubernetes can be leveraged together to simplify the process of deploying and managing applications and their underlying dependencies.&lt;/p&gt;

&lt;p&gt;Let's talk about how a typical software deployment unfolded:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The developer would send in a ticket asking for the deployment&lt;/li&gt;
&lt;li&gt;The developer and I would agree upon a time to deploy the latest software revision&lt;/li&gt;
&lt;li&gt;We would modify an existing bash script with the appropriate git repository version info&lt;/li&gt;
&lt;li&gt;We’d need to manually back up the old deployment&lt;/li&gt;
&lt;li&gt;We’d need to manually create a backup of our current database&lt;/li&gt;
&lt;li&gt;We’d watch the bash script perform this "Deploy" on about six servers in parallel&lt;/li&gt;
&lt;li&gt;&lt;a href="https://en.wikipedia.org/wiki/Waving_a_dead_chicken_(over_it)" rel="noopener noreferrer"&gt;Wave a dead chicken over my keyboard&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some of these deployments would fail, requiring a return to the previous version of the application code. This process of rolling back to a prior version involved manually copying the repository to the older version, performing manual database restores, and finally confirming with the team that used this system that all was working properly. It was a real mess and I really wasn't in a position to change it.&lt;/p&gt;

&lt;p&gt;I eventually moved into a position which gave me greater visibility into what other teams of developers, specifically those in the open source space, were doing for software deployments. I noticed that — surprise! — people were no longer interested in doing the same work over and over again.&lt;/p&gt;

&lt;p&gt;Developers and their supporting ops teams have been given keys to a whole new world in the last few years by utilizing containers and automation platforms. Rather than doing manual work required to produce the environment that your app will live in, you can deploy applications quickly thanks to tools like Kubernetes.&lt;/p&gt;

&lt;h4&gt;
  
  
  What's Kubernetes?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes can help reduce the amount of work your team will have to do when deploying your application. Along with MongoDB Atlas, you can build scalable and resilient applications that stand up to high traffic or can easily be scaled down to reduce costs. Kubernetes runs just about anywhere and can use almost any infrastructure. If you're using a public cloud, a hybrid cloud or even a bare metal solution, you can leverage Kubernetes to quickly deploy and scale your applications.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://cloud.google.com/kubernetes-engine/" rel="noopener noreferrer"&gt;Google Kubernetes Engine&lt;/a&gt; is built into the Google Cloud Platform and helps you quickly deploy your containerized applications.&lt;/p&gt;

&lt;p&gt;For the purposes of this tutorial, I will upload our &lt;a href="https://docs.docker.com/engine/reference/commandline/images/" rel="noopener noreferrer"&gt;image&lt;/a&gt; to GCP and then deploy to a Kubernetes cluster so I can quickly scale up or down our application as needed. When I create new versions of our app or make incremental changes, I can simply create a new image and deploy again with Kubernetes.&lt;/p&gt;
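&lt;p&gt;That build-tag-push-redeploy cycle can be scripted. Here is a minimal sketch for rolling out a hypothetical v2 image; the commands are printed rather than executed, since they need Docker, gcloud, and a live cluster, and the project ID matches the one used later in this tutorial:&lt;/p&gt;

```shell
# Sketch of shipping a new app version to an existing Kubernetes Deployment.
# Commands are echoed, not run; they require docker, gcloud and kubectl.
PROJECT_ID="jaygordon-mongodb"
VERSION="v2"
IMAGE="gcr.io/${PROJECT_ID}/mern-crud:${VERSION}"

echo "docker build -t ${IMAGE} ."
echo "gcloud docker -- push ${IMAGE}"
# 'kubectl set image' triggers a rolling update of the 'mern-crud' container:
echo "kubectl set image deployment/mern-crud mern-crud=${IMAGE}"
```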

&lt;h4&gt;
  
  
  Why Atlas with Kubernetes?
&lt;/h4&gt;

&lt;p&gt;By using these tools together for your MongoDB Application, &lt;a href="https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=kubernetes&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;you can quickly produce and deploy applications&lt;/a&gt; without worrying much about infrastructure management. Atlas provides you with a persistent data-store for your application data without the need to manage the actual database software, replication, upgrades, or monitoring. All of these features are delivered out of the box, allowing you to build and then deploy quickly.&lt;/p&gt;

&lt;p&gt;In this tutorial, I will build a MongoDB Atlas cluster where our data will live for a simple Node.js application. I will then turn the app and configuration data for Atlas into a container-ready image with Docker.&lt;/p&gt;

&lt;p&gt;MongoDB Atlas is available across most regions on GCP so no matter where your application lives, you can keep your data close by (or distributed) across the cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8j7xs95d5gq9am0loyb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8j7xs95d5gq9am0loyb.png" alt="Figure 1: MongoDB Atlas runs in most GCP regions" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Requirements
&lt;/h4&gt;

&lt;p&gt;To follow along with this tutorial, you will need the following to get started:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud Platform Account&lt;/a&gt; (billing enabled or credits)&lt;/li&gt;
&lt;li&gt;MongoDB Atlas Account (M10+ dedicated cluster)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/install/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nodejs.org/en/download/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://git-scm.com/" rel="noopener noreferrer"&gt;Git&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;First, I will download the repository for the code I will use. In this case, it's a basic record keeping app using MongoDB, Express, React, and Node (&lt;a href="https://www.mongodb.com/blog/post/the-modern-application-stack-part-1-introducing-the-mean-stack" rel="noopener noreferrer"&gt;MERN&lt;/a&gt;).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-3.2$ git clone git@github.com:cefjoeii/mern-crud.git
Cloning into 'mern-crud'...
remote: Counting objects: 326, done.
remote: Total 326 (delta 0), reused 0 (delta 0), pack-reused 326
Receiving objects: 100% (326/326), 3.26 MiB | 2.40 MiB/s, done.
Resolving deltas: 100% (137/137), done.
bash-3.2$ cd mern-crud
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I will run &lt;code&gt;npm install&lt;/code&gt; to install all the npm packages our app requires:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; uws@9.14.0 install /Users/jaygordon/work/mern-crud/node_modules/uws
&amp;gt; node-gyp rebuild &amp;gt; build_log.txt 2&amp;gt;&amp;amp;1 || exit 0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Selecting your GCP Region for Atlas
&lt;/h4&gt;

&lt;p&gt;Each GCP region includes a set number of independent zones. Each zone has power, cooling, networking, and control planes that are isolated from other zones. For regions that have at least three zones (3Z), Atlas deploys clusters across three zones. For regions that only have two zones (2Z), Atlas deploys clusters across two zones.&lt;/p&gt;

&lt;p&gt;The Atlas &lt;a href="https://docs.atlas.mongodb.com/create-new-cluster/" rel="noopener noreferrer"&gt;Add New Cluster&lt;/a&gt; form marks regions that support 3Z clusters as &lt;strong&gt;Recommended&lt;/strong&gt; , as they provide higher availability. If your preferred region only has two zones, consider enabling cross-region replication and placing a replica set member in another region to increase the likelihood that your cluster will be available during partial region outages.&lt;/p&gt;

&lt;p&gt;The number of zones in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.&lt;/p&gt;
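&lt;p&gt;The three-node minimum follows from how replica-set elections work: electing a primary requires votes from a strict majority of members, floor(n/2) + 1, so a set of n nodes can tolerate n minus that majority failing. A quick sketch of the arithmetic:&lt;/p&gt;

```shell
# Majority needed for a replica-set election is floor(n/2) + 1;
# fault tolerance (members that can fail) is n - majority.
for n in 3 5 7; do
  majority=$(( n / 2 + 1 ))
  tolerance=$(( n - majority ))
  echo "nodes=${n} majority=${majority} tolerance=${tolerance}"
done
```

&lt;p&gt;With three nodes the majority is two, which is why a three-member replica set stays available through the loss of a single member.&lt;/p&gt;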

&lt;p&gt;For general information on GCP regions and zones, see the Google documentation on &lt;a href="https://cloud.google.com/compute/docs/regions-zones/regions-zones" rel="noopener noreferrer"&gt;regions and zones&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Create Cluster and Add a User
&lt;/h4&gt;

&lt;p&gt;In the provided image below you can see I have selected the Cloud Provider "Google Cloud Platform." Next, I selected an instance size, in this case an M10. Deployments using M10 instances are ideal for development. If I were to take this application to production immediately, I may want to consider using an M30 deployment. Since this is a demo, an M10 is sufficient for our application. For a full view of all of the cluster sizes, check out the &lt;a href="https://www.mongodb.com/cloud/atlas/pricing?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=kubernetes&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;Atlas pricing page&lt;/a&gt;. Once I’ve completed these steps, I can click the "Confirm &amp;amp; Deploy" button. Atlas will spin up my deployment automatically in a few minutes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fry52110xaw9my3y2cpqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fry52110xaw9my3y2cpqw.png" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s create a username and password for our database that our Kubernetes deployed application will use to access MongoDB.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click "Security" at the top of the page.&lt;/li&gt;
&lt;li&gt;Click "MongoDB Users"&lt;/li&gt;
&lt;li&gt;Click "Add New User" &lt;/li&gt;
&lt;li&gt;Click "Show Advanced Options"&lt;/li&gt;
&lt;li&gt;We'll then add a user "&lt;code&gt;mernuser&lt;/code&gt;" for our &lt;code&gt;mern-crud&lt;/code&gt; app that only has access to a database named "&lt;code&gt;mern-crud&lt;/code&gt;" and give it a complex password. We'll specify readWrite privileges for this user:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmygd3otq73b1li5ea7cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmygd3otq73b1li5ea7cw.png" width="800" height="739"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click "Add User"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kdu0z2162cyndnad97l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kdu0z2162cyndnad97l.png" width="800" height="48"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Your database is now created and your user is added. You still need your connection string and to whitelist access via the network.&lt;/p&gt;

&lt;h4&gt;
  
  
  Connection String
&lt;/h4&gt;

&lt;p&gt;Get your connection string by clicking "Clusters" and then clicking "CONNECT" next to your cluster details in your Atlas admin panel. After selecting connect, you are provided several options to use to connect to your cluster. Click "connect your application."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysnzq1w14m0hcruy8093.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fysnzq1w14m0hcruy8093.png" width="330" height="134"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Options are given for the 3.6 and 3.4 versions of the MongoDB driver. I built mine using the 3.4 driver, so I will select the connection string for that version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hwmmo6g07mwtr0ur7s8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hwmmo6g07mwtr0ur7s8.png" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will typically paste this into an editor and then modify the info to match my application credentials and my database name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsy4md0odhtsg51q8vlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsy4md0odhtsg51q8vlg.png" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will now add this to the app's database configuration file and save it.&lt;/p&gt;
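&lt;p&gt;This substitution can also be scripted. A minimal sketch, assuming a single-host 3.4-style connection string; the user and database names follow this tutorial, while the host and password are placeholders:&lt;/p&gt;

```shell
# Fill app credentials into a 3.4-style connection string template.
# Host, user, password and database below are placeholders, not real values.
TEMPLATE="mongodb://<USER>:<PASSWORD>@cluster0-shard-00-00.mongodb.net:27017/<DATABASE>?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin"

URI=$(echo "$TEMPLATE" \
  | sed -e 's/<USER>/mernuser/' \
        -e 's/<PASSWORD>/changeme/' \
        -e 's/<DATABASE>/mern-crud/')
echo "$URI"
```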

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2197qmrtbgno9iaflej.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn2197qmrtbgno9iaflej.gif" width="708" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, I will package this up into an image with Docker and ship it to Google Kubernetes Engine!&lt;/p&gt;

&lt;h4&gt;
  
  
  Docker and Google Kubernetes Engine
&lt;/h4&gt;

&lt;p&gt;Get started by creating an account at Google Cloud, then &lt;a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="noopener noreferrer"&gt;follow the quickstart to create a Google Kubernetes Project&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once your project is created, you can find it within the Google Cloud Platform control panel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh0nue8ve0ac0u4i9ohf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuh0nue8ve0ac0u4i9ohf.png" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's time to create a container on your local workstation:&lt;/p&gt;

&lt;p&gt;Set the &lt;code&gt;PROJECT_ID&lt;/code&gt; environment variable in your shell by retrieving the preconfigured project ID on gcloud with the commands below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, place a &lt;code&gt;Dockerfile&lt;/code&gt; in the root of your repository with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM node:boron
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To build the container image of this application and tag it for uploading, run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-3.2$ docker build -t gcr.io/${PROJECT_ID}/mern-crud:v1 .
Sending build context to Docker daemon  40.66MB
Successfully built b8c5be5def8f
Successfully tagged gcr.io/jgordon-gc/mern-crud:v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upload the container image to the Container Registry so we can deploy to it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Successfully tagged gcr.io/jaygordon-mongodb/mern-crud:v1
bash-3.2$ gcloud docker -- push gcr.io/${PROJECT_ID}/mern-crud:v1
The push refers to repository [gcr.io/jaygordon-mongodb/mern-crud]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, I will test it locally on my workstation to make sure the app loads:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --rm -p 3000:3000 gcr.io/${PROJECT_ID}/mern-crud:v1
&amp;gt; mern-crud@0.1.0 start /usr/src/app
&amp;gt; node server
Listening on port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3vtxaw1k3ivchvjiw0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3vtxaw1k3ivchvjiw0e.png" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Great: pointing my browser to &lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt; brings me to the site. Now it's time to create a Kubernetes cluster and deploy our application to it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Build Your Cluster With Google Kubernetes Engine
&lt;/h4&gt;

&lt;p&gt;I will be using the &lt;a href="https://cloud.google.com/shell/" rel="noopener noreferrer"&gt;Google Cloud Shell&lt;/a&gt; within the Google Cloud control panel to manage my deployment. The Cloud Shell comes with all the required tools preinstalled, so I can deploy the Docker image I uploaded to the image registry without installing any additional software on my local workstation.&lt;/p&gt;

&lt;p&gt;Now I will create the Kubernetes cluster to which the image will be deployed, bringing our application to production. I will include three nodes to ensure uptime of our app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Set up our environment first:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Launch the cluster&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;gcloud container clusters create mern-crud --num-nodes=3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After a few minutes, the console will respond with the following output, and you will have a three-node Kubernetes cluster visible in your control panel:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Creating cluster mern-crud...done.
Created [https://container.googleapis.com/v1/projects/jaygordon-mongodb/zones/us-central1-b/clusters/mern-crud].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-b/mern-crud?project=jaygordon-mongodb
kubeconfig entry generated for mern-crud.
NAME       LOCATION       MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
mern-crud  us-central1-b  1.8.7-gke.1     35.225.138.208  n1-standard-1  1.8.7-gke.1   3          RUNNING
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
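&lt;p&gt;In an automated pipeline you might wait for the cluster to report &lt;code&gt;RUNNING&lt;/code&gt; before deploying. A sketch of parsing the STATUS column, with a sample line standing in for the live &lt;code&gt;gcloud container clusters list&lt;/code&gt; output (column layout assumed from the output above):&lt;/p&gt;

```shell
# Parse cluster STATUS from 'gcloud container clusters list'-style output.
# SAMPLE stands in for the real command output; STATUS is the last column.
SAMPLE="mern-crud  us-central1-b  1.8.7-gke.1  35.225.138.208  n1-standard-1  1.8.7-gke.1  3  RUNNING"

STATUS=$(echo "$SAMPLE" | awk '$1 == "mern-crud" { print $NF }')
echo "$STATUS"   # a CI job could loop until this prints RUNNING
```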



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t5ogt8i44a920v3qp4t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2t5ogt8i44a920v3qp4t.png" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Just a few more steps left. Now we'll deploy our app with &lt;a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="noopener noreferrer"&gt;kubectl&lt;/a&gt; to our cluster from the Google Cloud Shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl run mern-crud --image=gcr.io/${PROJECT_ID}/mern-crud:v1 --port 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The output when completed should be:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jay_gordon@jaygordon-mongodb:~$ kubectl run mern-crud --image=gcr.io/${PROJECT_ID}/mern-crud:v1 --port 3000
deployment "mern-crud" created
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now review the application deployment status:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jay_gordon@jaygordon-mongodb:~$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
mern-crud-6b96b59dfd-4kqrr  1/1     Running   0          1m
jay_gordon@jaygordon-mongodb:~$
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;We'll create a load balancer in front of the three nodes in the cluster so our application can be served to the web:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jay_gordon@jaygordon-mongodb:~$ kubectl expose deployment mern-crud --type=LoadBalancer --port 80 --target-port 3000
service "mern-crud" exposed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Now get the IP of the load balancer so that, if needed, it can be bound to a DNS name and you can go live!&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;jay_gordon@jaygordon-mongodb:~$ kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.27.240.1     &amp;lt;none&amp;gt;         443/TCP        11m
mern-crud    LoadBalancer   10.27.243.208   35.226.15.67   80:30684/TCP   2m
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;A quick curl test shows me that my app is online!&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bash-3.2$ curl -v 35.226.15.67
* Rebuilt URL to: 35.226.15.67/
*   Trying 35.226.15.67...
* TCP_NODELAY set
* Connected to 35.226.15.67 (35.226.15.67) port 80 (#0)
&amp;gt; GET / HTTP/1.1
&amp;gt; Host: 35.226.15.67
&amp;gt; User-Agent: curl/7.54.0
&amp;gt; Accept: */*
&amp;gt;
&amp;lt; HTTP/1.1 200 OK
&amp;lt; X-Powered-By: Express
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm18xoa8v136psow7f6vn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm18xoa8v136psow7f6vn.png" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have added some test data, and as we can see, it's being served by my application deployed to GCP via Kubernetes, with persistent data stored in MongoDB Atlas.&lt;/p&gt;

&lt;p&gt;When I am done working with the Kubernetes cluster, I can destroy it easily:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;gcloud container clusters delete mern-crud&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  What's Next?
&lt;/h4&gt;

&lt;p&gt;You've now got all the tools in front of you to build something HUGE with MongoDB Atlas and Kubernetes.&lt;/p&gt;

&lt;p&gt;Check out the rest of the &lt;a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="noopener noreferrer"&gt;Google Kubernetes Engine's tutorials&lt;/a&gt; for more information on how to build applications with Kubernetes. For more information on MongoDB Atlas, &lt;a href="https://www.mongodb.com/cloud/atlas" rel="noopener noreferrer"&gt;click here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>cloud</category>
      <category>database</category>
    </item>
    <item>
      <title>Inclusion at MongoDB World</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Mon, 02 Apr 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/inclusion-at-mongodb-world-bcp</link>
      <guid>https://dev.to/mongodb/inclusion-at-mongodb-world-bcp</guid>
      <description>&lt;p&gt;In June, we’ll kick off the fifth edition of &lt;a href="https://www.mongodb.com/world18?jmp=blog" rel="noopener noreferrer"&gt;MongoDB World&lt;/a&gt;, our global user conference. We continue to strive to make this conference not just a fun and interactive learning experience, but also an event that is welcoming to all. Here’s how:&lt;/p&gt;

&lt;h4&gt;
  
  
  Diversity Scholarship
&lt;/h4&gt;

&lt;p&gt;MongoDB’s Scholarship program seeks to support members of groups who are underrepresented in the technology industry. This includes, but is not limited to, people who are Black, LatinX, women, low-income, or LGBTQ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSf_u0u_gdvsUDHwj5VyR2fhOYAxUrbYOoEtnpchsdbJeN7aPg/viewform" rel="noopener noreferrer"&gt;Diversity Scholarship recipients receive*&lt;/a&gt;:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complimentary admission to MongoDB World&lt;/li&gt;
&lt;li&gt;Complimentary admission to a pre-conference workshop&lt;/li&gt;
&lt;li&gt;Invitation to a lunch session with other Scholars&lt;/li&gt;
&lt;li&gt;Speed mentoring with MongoDB speakers at the event&lt;/li&gt;
&lt;li&gt;A MongoDB certification voucher applicable for both developer and DBA certification exams&lt;/li&gt;
&lt;li&gt;Six-month access to on-demand MongoDB University courses&lt;/li&gt;
&lt;li&gt;Lifelong membership in the online MongoDB Diversity Scholars community&lt;/li&gt;
&lt;li&gt;A feature in a blog post&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you’re a member of any of the above-mentioned groups, you qualify to apply for a scholarship. The application is &lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSf_u0u_gdvsUDHwj5VyR2fhOYAxUrbYOoEtnpchsdbJeN7aPg/viewform" rel="noopener noreferrer"&gt;here&lt;/a&gt; and the deadline to apply is May 4.&lt;/p&gt;

&lt;center&gt;&lt;a href="https://docs.google.com/forms/d/e/1FAIpQLSf_u0u_gdvsUDHwj5VyR2fhOYAxUrbYOoEtnpchsdbJeN7aPg/viewform" rel="noopener noreferrer"&gt;Apply for a Diversity Scholarship&lt;/a&gt;&lt;/center&gt;  

&lt;p&gt;&lt;strong&gt;If you cannot or do not want to apply, but would still like to contribute to scholarships for others&lt;/strong&gt; , make sure to &lt;a href="https://www.eventbrite.com/e/mongodb-world-2018-tickets-41087292197" rel="noopener noreferrer"&gt;sign up for MongoDB World&lt;/a&gt;. You have the option to donate towards the Diversity Scholarship and can still have an impact by giving someone the opportunity to attend.&lt;/p&gt;

&lt;h4&gt;
  
  
  Female Innovators
&lt;/h4&gt;

&lt;p&gt;Know a technologist who identifies as a woman who should attend MongoDB World? &lt;a href="https://www.mongodb.com/world18/female-innovators?jmp=blog" rel="noopener noreferrer"&gt;Nominate her&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Nominees get a complimentary conference pass* and an invitation to a guided discussion over lunch in the Women and Trans Coders Lounge. &lt;a href="https://www.mongodb.com/world18/female-innovators?jmp=blog" rel="noopener noreferrer"&gt;Self-nominations are also encouraged&lt;/a&gt;. The nomination deadline is April 27.&lt;/p&gt;

&lt;center&gt;&lt;a href="https://www.mongodb.com/world18/female-innovators?jmp=blog" rel="noopener noreferrer"&gt;Nominate a Female Innovator
&lt;/a&gt;&lt;/center&gt;  

&lt;h4&gt;
  
  
  Women and Trans Coders Lounge
&lt;/h4&gt;

&lt;p&gt;The Women and Trans Coders Lounge aims to amplify the voices of non-binary people, women, and trans people of all genders within our engineering community. It’s run in part by MongoDB’s Women and Trans Coders group. During MongoDB World, stop by to network, share your thoughts in a guided discussion, or sit in on a Make it Matter session.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmashite8mw4vfcb8fud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmashite8mw4vfcb8fud.png" width="715" height="477"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Affinity Group Happy Hours
&lt;/h4&gt;

&lt;p&gt;Join us for cocktails and mocktails, run by MongoDB’s employee resource groups, on June 26. You’ll be able to attend a happy hour hosted by the Black Network, the MongoDB Women’s Group, and the Queeries (MongoDB’s LGBTQ group). All happy hours will also serve mocktails and nonalcoholic beverages.&lt;/p&gt;

&lt;h4&gt;
  
  
  Black Network Happy Hour
&lt;/h4&gt;

&lt;p&gt;Meet members of the MongoDB Black Network and other conference attendees who identify as Black during the Black Network Happy Hour.&lt;/p&gt;

&lt;h4&gt;
  
  
  LGBTQ Happy Hour
&lt;/h4&gt;

&lt;p&gt;End the first day of MongoDB World by joining fellow MongoDB users who identify as members of the LGBTQ community for a drink at the bar.&lt;/p&gt;

&lt;h4&gt;
  
  
  Women in Tech Happy Hour
&lt;/h4&gt;

&lt;p&gt;You’re invited to connect with Female Innovators, conference speakers, and other women in tech at the Women in Tech Happy Hour.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv58eyzm911x3rnrcwj9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpv58eyzm911x3rnrcwj9.png" width="617" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Accessibility
&lt;/h4&gt;

&lt;p&gt;Our goal is to make MongoDB World accessible to all. Contact us if you need accessibility accommodations.&lt;/p&gt;

&lt;h4&gt;
  
  
  Code of Conduct
&lt;/h4&gt;

&lt;p&gt;All conference attendees are expected to agree with and abide by the code of conduct.&lt;/p&gt;

&lt;center&gt;&lt;a href="https://www.mongodb.com/world18?jmp=blog" rel="noopener noreferrer"&gt;Register Now
&lt;/a&gt;&lt;/center&gt;  

&lt;p&gt;We look forward to seeing you on June 26-27 in NYC. Over the course of two days, you’ll be able to engage with MongoDB users and industry experts from around the globe. And you’ll walk away with the tools that enable you to build your giant ideas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Event Details:&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Date: June 26-27, 2018&lt;br&gt;&lt;br&gt;
Location: New York Hilton Midtown, 1335 6th Ave, New York, NY 10019&lt;br&gt;&lt;br&gt;
&lt;a href="https://www.mongodb.com/world18/" rel="noopener noreferrer"&gt;mongodbworld.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54sg9p5p3tkvkd9m66uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F54sg9p5p3tkvkd9m66uz.png" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;* Please note that travel and lodging are not included in the Diversity Scholarship and Female Innovators awards. Recipients are responsible for their own accommodations.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Stack Overflow Research of 100,000 Developers Finds MongoDB is the Most Wanted Database</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Tue, 13 Mar 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/stack-overflow-research-of-100000-developers-finds-mongodb-is-the-most-wanted-database--m4h</link>
      <guid>https://dev.to/mongodb/stack-overflow-research-of-100000-developers-finds-mongodb-is-the-most-wanted-database--m4h</guid>
      <description>&lt;p&gt;It’s well established that developers want to work with a database that offers flexibility, versatility, and ease of use. Scalability and reliability certainly don’t hurt either. For an increasing number of developers, MongoDB is the desired solution to meet all these requirements. And for the second year in a row, this has been validated by the opinions of those who matter most — developers themselves.&lt;/p&gt;

&lt;p&gt;Today, we’re excited to announce that, for the second year in a row, MongoDB is the &lt;strong&gt;most wanted database&lt;/strong&gt; in the &lt;a href="https://insights.stackoverflow.com/survey/2018/"&gt;Stack Overflow Developer Survey 2018&lt;/a&gt;, the world’s largest developer survey with over 100,000 respondents.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--B7mucnbE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.38.03%2520PM-qxj2aahz6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--B7mucnbE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.38.03%2520PM-qxj2aahz6b.png" alt="" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Steady Increases in Popularity
&lt;/h3&gt;

&lt;p&gt;Since its debut, MongoDB’s popularity among developers has steadily increased. This popularity has been driven primarily by the platform’s ease of use and flexibility; more recently, the release of the &lt;a href="https://www.mongodb.com/cloud/atlas"&gt;fully managed database service MongoDB Atlas&lt;/a&gt; has made it easier than ever to run the database in any of the major cloud platforms.&lt;/p&gt;

&lt;p&gt;MongoDB continues to innovate, listening and reacting to the demands of developers from small startups to the largest of enterprises. New and exciting features to the platform include &lt;a href="https://www.mongodb.com/blog/post/an-introduction-to-change-streams"&gt;change streams&lt;/a&gt;, which push real-time data updates to downstream applications, &lt;a href="https://docs.mongodb.com/manual/core/retryable-writes/"&gt;retryable writes&lt;/a&gt;, which enhance reliability without increasing complexity, and the announcement of &lt;a href="https://www.mongodb.com/blog/post/multi-document-transactions"&gt;multi-document ACID transactions&lt;/a&gt;, coming in version 4.0 later this year.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why MongoDB?
&lt;/h3&gt;

&lt;p&gt;MongoDB’s document model offers developers massive increases in productivity by allowing them to store data in a way that is consistent with how they think and create applications. Additionally, MongoDB’s native replication and horizontal partitioning give developers the confidence and freedom to concentrate on differentiating code without having to be concerned about challenges associated with data locality, reliability, and scalability. Lastly, MongoDB offers the freedom to run consistently all the way from local development environments to the largest mainframe deployments. The database can be deployed on-premises, in a hybrid cloud, or in any of the public clouds.&lt;/p&gt;

&lt;h3&gt;
  
  
  How can you benefit from this news?
&lt;/h3&gt;

&lt;p&gt;Hiring managers seeking talented developers may do well to review some of the additional statistics included in today’s release. One interesting statistic can be found in the chart entitled “How Developers Assess Potential Jobs”. If you are a hiring manager and you’re looking to attract top talent, considering the most wanted database platform might be a way to enhance the attractiveness of your opportunities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EAKIWdJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.48.22%2520PM-9jlp7s8ju5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EAKIWdJy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.48.22%2520PM-9jlp7s8ju5.png" alt="" width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;JavaScript and Python developers reviewing the Stack Overflow report will be encouraged, as these development environments continue to grow in popularity. The popularity of web application stacks like MEAN and MERN has provided JavaScript developers with frameworks for building applications quickly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--GDMnjvnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.43.55%2520PM-tnah73u9sx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--GDMnjvnf--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Screen%2520Shot%25202018-03-13%2520at%25203.43.55%2520PM-tnah73u9sx.png" alt="" width="777" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not using JavaScript or Python? MongoDB offers full support for &lt;a href="https://docs.mongodb.com/manual/applications/drivers"&gt;these and all&lt;/a&gt; popular development languages. Want to use Golang with MongoDB? A fully supported MongoDB &lt;a href="https://github.com/mongodb/mongo-go-driver"&gt;Golang driver has reached the alpha stage&lt;/a&gt; of development, giving developers even more options to build with the latest and most popular languages.&lt;/p&gt;

&lt;h3&gt;
  
  
  In summary
&lt;/h3&gt;

&lt;p&gt;Developers are the new &lt;em&gt;&lt;a href="https://www.mongodb.com/press/developers-are-the-enterprise-kingmakers-but-monotony-coding-is-killing-innovation"&gt;Kingmakers&lt;/a&gt;&lt;/em&gt; and their opinions are vitally important. The results of the Stack Overflow survey are an exciting validation of MongoDB’s efforts to build a resilient, flexible, and easy-to-use platform wanted by developers the world over.&lt;/p&gt;




&lt;h4&gt;
  
  
  Are you looking to learn more?
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://university.mongodb.com/"&gt;MongoDB University&lt;/a&gt; offers free courses for &lt;a href="https://university.mongodb.com/courses/M001/about"&gt;beginners&lt;/a&gt;, &lt;a href="https://university.mongodb.com/courses/M101P/about"&gt;developers&lt;/a&gt;, &lt;a href="https://university.mongodb.com/courses/M102/about"&gt;database administrators&lt;/a&gt;, and &lt;a href="https://university.mongodb.com/courses/M103/about"&gt;operations personnel&lt;/a&gt;. Get online education about &lt;a href="https://university.mongodb.com/courses/M123/about"&gt;MongoDB Atlas&lt;/a&gt;, &lt;a href="https://www.mongodb.com/products/compass"&gt;Compass&lt;/a&gt; and other products to help developers build great modern applications.&lt;/p&gt;

&lt;p&gt;Want to build right away with MongoDB? Get started today with a 512 MB database managed by MongoDB Atlas &lt;a href="https://www.mongodb.com/cloud/atlas"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>data</category>
    </item>
    <item>
      <title>Optimizing for Fast, Responsive Reads with Cross-Region Replication in MongoDB Atlas</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Mon, 12 Mar 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/optimizing-for-fast-responsive-reads-with-cross-region-replication-in-mongodb-atlas-28d7</link>
      <guid>https://dev.to/mongodb/optimizing-for-fast-responsive-reads-with-cross-region-replication-in-mongodb-atlas-28d7</guid>
      <description>&lt;p&gt;MongoDB Atlas customers can enable cross-region replication for multi-region fault tolerance and fast, responsive reads.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Improved availability guarantees can be achieved by distributing replica set members across multiple regions. These secondaries will participate in the automated election and failover process should the primary (or the cloud region containing the primary) go offline. &lt;/li&gt;
&lt;li&gt;Read-only replica set members allow customers to optimize for local reads (minimizing read latency) across different geographic regions using a single MongoDB deployment. These replica set members do not participate in the election and failover process and can never be elected primary.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, we’ll dive a little deeper into optimizing for local reads using cross-region replication and walk you through the necessary configuration steps on an environment running on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Primer on read preference
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/core/read-preference/"&gt;Read preference&lt;/a&gt; determines how MongoDB clients route read operations to the members of a replica set. By default, an application directs its read operations to the replica set &lt;a href="https://docs.mongodb.com/manual/reference/glossary/#term-primary"&gt;primary&lt;/a&gt;. By specifying the read preference, users can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable local reads for geographically distributed users. Users from California, for example, can read data from a replica located locally for a more responsive experience&lt;/li&gt;
&lt;li&gt;Allow read-only access to the database during failover scenarios&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TVFhe4NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image5-vh6i7z8j0q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TVFhe4NO--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image5-vh6i7z8j0q.png" alt="" width="613" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A read replica is simply an instance of the database that serves data replicated from the oplog; clients do not write to a read replica.&lt;/p&gt;

&lt;p&gt;With MongoDB Atlas, we can easily distribute read replicas across multiple cloud regions, allowing us to expand our application's data beyond the region containing our replica set primary in just a few clicks.&lt;/p&gt;

&lt;p&gt;To enable local reads and increase the read throughput to our application, we simply need to modify the read preference via the MongoDB drivers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enabling read replicas in MongoDB Atlas
&lt;/h3&gt;

&lt;p&gt;We can enable read replicas for a new or existing paid MongoDB cluster in the Atlas UI. To begin, we can click on the cluster’s "Configuration" button and then find the link named &lt;strong&gt;"Enable cross-region configuration options."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When we click this, we’ll be presented with an option to select the type of cross-region replication we want. We'll choose &lt;strong&gt;deploy read-only replicas&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--RYBpB6I8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image1-q7z5p2iagp.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--RYBpB6I8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image1-q7z5p2iagp.gif" alt="" width="800" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see above, we have our preferred region (the region containing our replica set primary) set to AWS, us-east-1 (Virginia) with the default three nodes. We can add regions to our cluster configuration based on where we think other users of our application might be concentrated. In this case, we will add additional nodes in us-west-1 (Northern California) and eu-west-1 (Ireland), providing us with read replicas to serve local users.&lt;/p&gt;

&lt;p&gt;Note that all writes will still go to the primary in our preferred region, and reads from the secondaries in the regions we’ve added will be eventually consistent.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tV3d3t8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image4-l82awkjmnd.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tV3d3t8G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image4-l82awkjmnd.gif" alt="" width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We’ll click "Confirm and Deploy", which will deploy our multi-region cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jKajv4N6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image3-tkhye8w0g6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jKajv4N6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image3-tkhye8w0g6.png" alt="" width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our default connection string will now include these read replicas. We can go to the "Connect" button and find our full connection string to access our cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--a0ohBLIT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image2-fpoel33crq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--a0ohBLIT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image2-fpoel33crq.png" alt="" width="684" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the deployment of the cluster completes, we will be ready to distribute our application's data reads across multiple regions using the MongoDB drivers. We can specifically configure &lt;code&gt;readPreference&lt;/code&gt; within our connection string to route clients to the nearest replica. For example, the &lt;a href="https://mongodb.github.io/node-mongodb-native/index.html"&gt;Node native MongoDB Driver&lt;/a&gt; permits us to specify our preference:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;readPreference&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;Specifies the &lt;a href="https://docs.mongodb.com/manual/reference/glossary/#term-replica-set"&gt;replica set&lt;/a&gt; read preference for this connection.&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;| The read preference values are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/manual/reference/read-preference/#primary"&gt;primary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/manual/reference/read-preference/#primaryPreferred"&gt;primaryPreferred&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/manual/reference/read-preference/#secondary"&gt;secondary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/manual/reference/read-preference/#secondaryPreferred"&gt;secondaryPreferred&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.mongodb.com/manual/reference/read-preference/#nearest"&gt;nearest&lt;/a&gt;
|&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For our app, if we want to ensure the &lt;a href="https://docs.mongodb.com/manual/reference/connection-string/"&gt;read preference in our connection string&lt;/a&gt; is set to the nearest MongoDB replica, we would configure it as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongodb://admin:@cluster0-shard-00-00-bywqq.mongodb.net:27017,cluster0-shard-00-01-bywqq.mongodb.net:27017,cluster0-shard-00-02-bywqq.mongodb.net:27017,cluster0-shard-00-03-bywqq.mongodb.net:27017,cluster0-shard-00-04-bywqq.mongodb.net:27017/test?ssl=true&amp;amp;replicaSet=Cluster0-shard-0&amp;amp;authSource=admin&amp;amp;readPreference=nearest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
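&lt;p&gt;For intuition on what &lt;code&gt;nearest&lt;/code&gt; does, the member selection can be sketched in a few lines of plain JavaScript. This is an illustrative model only; the hosts and ping times below are invented, and the actual selection logic lives inside the driver:&lt;/p&gt;

```javascript
// Toy model of how a driver picks members under the "nearest" read
// preference. Hosts and ping times are invented for illustration; real
// drivers measure round-trip times to each replica set member themselves.
const members = [
  { host: "us-east-1a.example", type: "primary", pingMs: 5 },
  { host: "us-west-1a.example", type: "secondary", pingMs: 72 },
  { host: "eu-west-1a.example", type: "secondary", pingMs: 88 },
];

// Drivers keep members whose ping falls within a small latency window
// (the default localThresholdMS is 15 ms) of the fastest member.
function nearest(candidates, localThresholdMs = 15) {
  const fastest = candidates.reduce((a, b) => (a.pingMs > b.pingMs ? b : a));
  return candidates.filter(
    (m) => !(m.pingMs - fastest.pingMs > localThresholdMs)
  );
}

console.log(nearest(members).map((m) => m.host)); // [ 'us-east-1a.example' ]
```

&lt;p&gt;A client in California would measure very different ping times and therefore land on the us-west-1 read replica, which is exactly the local-read behavior we configured above.&lt;/p&gt;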



&lt;h3&gt;
  
  
  Security and Connectivity (on AWS)
&lt;/h3&gt;

&lt;p&gt;MongoDB Atlas allows us to peer our application server's VPC directly to our MongoDB Atlas VPC within the same region. This permits us to reduce the network exposure to the internet and allows us to use native AWS Security Groups or CIDR blocks. You can review how to configure &lt;a href="https://docs.atlas.mongodb.com/security-vpc-peering/"&gt;VPC Peering here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;A note on VPCs for cross-region nodes:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At this time, MongoDB Atlas &lt;strong&gt;does not&lt;/strong&gt; support VPC peering across regions. If you want to grant clients in one cloud region read or write access to database instances in another cloud region, you would need to permit the clients’ public IP addresses to access your database deployment via &lt;a href="https://docs.atlas.mongodb.com/security-whitelist/"&gt;IP whitelisting&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With cross-region replication and read-only replicas enabled, your application will now be capable of providing fast, responsive access to data from any number of regions.&lt;/p&gt;




&lt;p&gt;Get started today with a free 512 MB database managed by MongoDB Atlas &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=cross-reg&amp;amp;jmp=dev-ref"&gt;here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>cloud</category>
      <category>database</category>
    </item>
    <item>
      <title>Improving MongoDB Performance with Automatically Generated Index Suggestions</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Thu, 01 Feb 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/improving-mongodb-performance-with-automatically-generated-index-suggestions-3fe1</link>
      <guid>https://dev.to/mongodb/improving-mongodb-performance-with-automatically-generated-index-suggestions-3fe1</guid>
      <description>&lt;p&gt;Beyond good data modeling, there are a few processes that teams responsible for optimizing query performance can leverage: looking for &lt;a href="https://docs.mongodb.com/manual/reference/explain-results/#collection-scan-vs-index-use" rel="noopener noreferrer"&gt;COLLSCANS&lt;/a&gt; in logs, analyzing &lt;a href="https://docs.mongodb.com/manual/reference/explain-results/" rel="noopener noreferrer"&gt;explain results&lt;/a&gt;, or relying on third-party tools. While these efforts may help you resolve some of the problems you’re noticing, they often require digging into documentation, time, and money, all the while your application remains bogged down with issues.&lt;/p&gt;

&lt;p&gt;MongoDB Atlas, &lt;a href="https://www.mongodb.com/cloud/atlas" rel="noopener noreferrer"&gt;the fully managed database service&lt;/a&gt;, helps you resolve performance issues with a greater level of ease by providing you with tools to ensure that your data is accessed as efficiently as possible. This post will provide a basic overview of how to access the MongoDB Atlas Performance Advisor, a tool that reviews your queries for up to two weeks and provides recommended indexes where appropriate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting Started
&lt;/h3&gt;

&lt;p&gt;This short tutorial makes use of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A demo data set generated with &lt;code&gt;mgodatagen&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;A dedicated MongoDB Atlas cluster (the Performance Advisor is available for M10s or larger)&lt;/li&gt;
&lt;li&gt;MongoDB shell install (to create indexes)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My database has two million documents in two separate collections:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hx7b3c03e9p0vqdx2ye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2hx7b3c03e9p0vqdx2ye.png" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If an application tries to access these documents without the right indexes in place, a &lt;em&gt;collection&lt;/em&gt; scan will take place. The database will &lt;em&gt;scan&lt;/em&gt; the full collection to find the required documents, and any documents that are not in memory are read from disk. This can dramatically reduce performance and cause your application to respond slower than expected.&lt;/p&gt;
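To see why this hurts, here is a toy sketch (plain JavaScript, not MongoDB code) contrasting a full scan with an index-style lookup:

```javascript
// Toy illustration (not MongoDB code): a collection scan touches every
// document, while an index-style lookup goes straight to the match.
const docs = Array.from({ length: 5 }, (_, i) => ({ name: "user" + i }));

// Collection scan: examine all 5 documents to find one.
const scanned = docs.filter(d => d.name === "user3");

// Index lookup: a precomputed structure keyed on "name".
const byName = new Map(docs.map(d => [d.name, d]));
const indexed = byName.get("user3");

console.log(scanned.length, indexed.name); // 1 user3
```

An index plays the role of the Map here: instead of touching every document, the database walks a small structure straight to the matching entries.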

&lt;p&gt;Case in point, when I try to run an unindexed query against my collections, MongoDB Atlas will automatically create an alert indicating that the query is not well targeted.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07coexrt9hdjfxv7ucb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F07coexrt9hdjfxv7ucb7.png" width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;Reviewing Performance Advisor&lt;/h3&gt;

&lt;p&gt;The Performance Advisor monitors slow-running queries (anything that takes longer than 100 milliseconds to execute) and suggests new indexes to improve query performance.&lt;/p&gt;

&lt;p&gt;To access this tool, go to your Atlas control panel and click your cluster's name. You’ll then find "Performance Advisor" at the top.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg5cz74s6f7pyxnoga3t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhg5cz74s6f7pyxnoga3t.png" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click the link and you'll be taken to a page showing any relevant index recommendations, based on the time range selected at the top of the page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqppc4kz12amtecpn81yf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqppc4kz12amtecpn81yf.png" width="563" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this example, I will review the performance of my queries from the last 24 hours. The Performance Advisor provides me with some recommendations on how to improve the speed of my slow queries:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9bjgszz46nq164gutlh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9bjgszz46nq164gutlh.png" width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It looks like the &lt;em&gt;test&lt;/em&gt; collection with the field "name" could use an index. We can review the specific changes to be made by clicking the "More Info" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn0cr6r8vj5wsehwyru0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn0cr6r8vj5wsehwyru0.png" width="800" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can copy the contents of this recommendation and paste it into my MongoDB shell to create the recommended index. You’ll notice a special option, &lt;code&gt;{ background: true }&lt;/code&gt;, is passed with the &lt;code&gt;createIndex&lt;/code&gt; command. This option ensures that index creation does not block any operations. If you’re building new indexes on production systems, I highly recommend you read more about &lt;a href="https://docs.mongodb.com/manual/core/index-creation/" rel="noopener noreferrer"&gt;index build operations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faps7mv8h4mblia5t9oaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faps7mv8h4mblia5t9oaf.png" width="681" height="264"&gt;&lt;/a&gt;&lt;/p&gt;
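The pasted command boils down to the following shape (collection and field names taken from the example above; your own recommendation will differ):

```javascript
// Build the recommended index in the background so reads and writes
// are not blocked while it is created.
db.test.createIndex({ name: 1 }, { background: true })
```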

&lt;p&gt;Now that the recommended index is created, I can review my application's performance and see if it meets the requirements of my users. The Atlas alerts I received earlier have been resolved, which is a good sign:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnscdgzqby0qrstyjz5w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnscdgzqby0qrstyjz5w.png" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Noticeable slowdowns in performance from unindexed queries damage the user experience of your application, which may result in reduced engagement or customer attrition. The Performance Advisor in MongoDB Atlas gives you a simple and cost-efficient way to ensure that you’re getting the most out of the resources you’ve provisioned.&lt;/p&gt;

&lt;p&gt;To get started, &lt;a href="https://www.mongodb.com/cloud/atlas" rel="noopener noreferrer"&gt;sign up for MongoDB Atlas and deploy a cluster in minutes&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>database</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Training Machine Learning Models with MongoDB</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Thu, 18 Jan 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/training-machine-learning-models-with-mongodb-3b44</link>
      <guid>https://dev.to/mongodb/training-machine-learning-models-with-mongodb-3b44</guid>
      <description>&lt;p&gt;Over the last four months, I attended an immersive data science program at Galvanize in San Francisco. As a graduation requirement, the last three weeks of the program are reserved for a student-selected project that puts to use the skills learned throughout the course. The project that I chose to tackle utilized natural language processing in tandem with sentiment analysis to parse and classify news articles. With the controversy surrounding our nation’s media and the concept of “fake news” floated around every corner, I decided to take a pragmatic approach to address bias in the media.&lt;/p&gt;

&lt;p&gt;My resulting model identified three topics within an article and classified the sentiments towards each topic. Next, for each classified topic, the model returned a new article with the opposite sentiment, resulting in three articles provided to the user for each input article. With this model, I hoped to negate some of the inherent bias within an individual news article by providing counter arguments from other sources. The algorithms used were the following (in training order): &lt;a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html" rel="noopener noreferrer"&gt;TFIDF Vectorizer&lt;/a&gt; (text preprocessing), &lt;a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html#sklearn.decomposition.LatentDirichletAllocation.transform" rel="noopener noreferrer"&gt;Latent Dirichlet Allocation&lt;/a&gt; (topic extraction), &lt;a href="https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html#module-scipy.cluster.hierarchy" rel="noopener noreferrer"&gt;Scipy’s Implementation of Hierarchical Clustering&lt;/a&gt; (document similarity), and &lt;a href="http://scikit-learn.org/dev/modules/generated/sklearn.naive_bayes.MultinomialNB.html" rel="noopener noreferrer"&gt;Multinomial Naive Bayes&lt;/a&gt; (sentiment classifier).&lt;/p&gt;

&lt;p&gt;Initially, I was hesitant to use any database, let alone a non-relational one. However, as I progressed through the experiment, managing the plethora of CSV tables became more and more difficult. I needed the flexibility to add additional features to my data as the model engineered them. This is a major drawback of relational databases. Using SQL, there are two options: generate a new table for each new feature and use a multitude of JOINs to retrieve all the necessary data, or use ALTER TABLE to add a new column for each new feature. However, due to the varied algorithms I used, some features were generated one data point at a time, while others were returned as a single Python list. Neither option was well suited to my needs. As a result, I turned to MongoDB to resolve my data storage, processing, and analysis issues.&lt;/p&gt;

&lt;p&gt;To begin with, I used MongoDB to store the training data scraped from the web. I stored raw text data as individual documents on an AWS EC2 instance running a MongoDB database. Running a simple Python script on my EC2 instance, I generated a list of public news articles URLs to scrape and stored the scraped data (such as the article title and body) into my MongoDB database. I appreciated that, with MongoDB, I could employ indexes to ensure that duplicate URLs, and their associated text data, were not added to the database.&lt;/p&gt;
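The URL deduplication mentioned above can be expressed as a unique index. A sketch in shell terms (collection and field names are my assumptions; the post does not show its schema):

```javascript
// Assumed names: with a unique index on the scraped URL, a second
// insert of the same URL fails instead of creating a duplicate document.
db.articles.createIndex({ url: 1 }, { unique: true })
```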

&lt;p&gt;Next, the entire dataset needed to be parsed using NLP and passed in as training data for the &lt;a href="https://en.wikipedia.org/wiki/Tf%E2%80%93idf" rel="noopener noreferrer"&gt;TFIDF&lt;/a&gt; Vectorizer (in the &lt;a href="http://scikit-learn.org/" rel="noopener noreferrer"&gt;scikit-learn toolkit&lt;/a&gt;) and the &lt;a href="https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation" rel="noopener noreferrer"&gt;Latent Dirichlet Allocation&lt;/a&gt; (LDA) model. Since both TFIDF and LDA require training on the entire dataset (represented by a matrix of ~70k rows x ~250k columns), I needed to store a lot of information in memory. LDA requires training on non-reduced data in order to identify correlations between all features in their original space. scikit-learn’s implementations of TFIDF and LDA are trained iteratively, from the first data point to the last. I was able to reduce the total load on memory, and allocate more of it to actual training, by passing a Python &lt;a href="https://wiki.python.org/moin/Generators" rel="noopener noreferrer"&gt;generator function&lt;/a&gt; to the model that called my MongoDB database for each new data point. This also enabled me to use a smaller EC2 instance, thereby optimizing costs.&lt;/p&gt;
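The memory-saving trick here is lazy iteration: the model pulls one document per step instead of loading the whole matrix up front. The post used a Python generator with the MongoDB driver; this stripped-down sketch (no database involved) just shows the shape of the idea:

```javascript
// Sketch of lazy iteration: yield one training example per request
// instead of materializing the entire dataset in memory. fetchOne
// stands in for a round-trip to the database.
function* docStream(fetchOne, total) {
  for (let i = 0; i !== total; i += 1) {
    yield fetchOne(i);
  }
}

const fetchFake = i => ({ _id: i, text: "article " + i });
let seen = 0;
for (const doc of docStream(fetchFake, 3)) seen += 1;
console.log(seen); // 3
```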

&lt;p&gt;Once the vectorizer and LDA model were trained, I utilized the LDA model to extract 3 topics from each document, storing the top 50 words pertaining to each topic back in MongoDB. These top 50 words were used as the features to train my &lt;a href="https://docs.scipy.org/doc/scipy/reference/cluster.hierarchy.html#module-scipy.cluster.hierarchy" rel="noopener noreferrer"&gt;hierarchical clustering algorithm&lt;/a&gt;. The clustering algorithm functions much like a decision tree, and I generated pseudo-labels for each document by determining which leaf the document fell into. Since I could use dimensionally reduced data at this point, memory was not an issue, but all of these labels needed to be referenced later in other parts of the pipeline. Rather than assigning several variables and allowing the labels to remain indefinitely in memory, I inserted new key-value pairs for the top words associated with each topic, topic labels according to the clustering algorithm, and sentiment labels into each corresponding document in the collection. As each article was analyzed, the resulting labels and topic information were stored in the article’s document in MongoDB. As a result, there was no chance of data loss and any method could query the database for needed information regardless of whether other processes running in parallel were complete.&lt;/p&gt;
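Writing labels back into each article's document might look like this in shell terms (every name here is hypothetical; the post does not show its exact commands):

```javascript
// Hypothetical names: store the generated labels on the article itself
// so any later pipeline stage can query them from the database.
db.articles.updateOne(
  { _id: articleId },
  { $set: { top_words: topWords, topic_label: leafLabel, sentiment: sentimentLabel } }
)
```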

&lt;p&gt;Sentiment analysis was the most difficult part of the project. There is currently no valuable labeled data related to politics and news, so I initially tried to train the base models on a data set of Amazon product reviews. Unsurprisingly, this proved to be a poor choice of training data because the resulting models consistently graded sentences such as “The governor's speech reeked of subtle racism and blatant lack of political savvy” as having positive sentiment with a ~90% probability, which is questionable at best. As a result, I had to manually label ~100k data points, which was time-intensive, but resulted in a much more reliable training set. The model trained on manual labels significantly outperformed the base model, trained on the Amazon product reviews data. No changes were made to the sentiment analysis algorithm itself; the only difference was the training set. This highlights the importance of accurate and relevant data for training ML models – and the necessity, more often than not, of human intervention in machine learning. Finally, by code freeze, the model was successfully extracting topics from each article and clustering the topics based on similarity to the topics in other articles.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;In conclusion, MongoDB provides several capabilities, such as a flexible data model, indexing, and high-speed querying, that make training and using machine learning algorithms much easier than with traditional relational databases. Running MongoDB as the backend database to store and enrich ML training data allows for persistence and increased efficiency.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj8nozumhrbjb2vjkp8q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbj8nozumhrbjb2vjkp8q.png" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;A final look at the MongoDB pipeline used for this project&lt;/h3&gt;

&lt;p&gt;If you are interested in this project, feel free to take a look at the code on &lt;a href="https://github.com/nkpng2k/news_article_sentiment_analysis" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, or contact me via &lt;a href="https://www.linkedin.com/in/nick-png/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; or &lt;a href="mailto:nkpng2k@gmail.com"&gt;email&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;About the author - Nicholas Png&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Nicholas Png is a Data Scientist who recently graduated from the Data Science Immersive Program at Galvanize in San Francisco. He is a passionate practitioner of Machine Learning and Artificial Intelligence, focused on Natural Language Processing, Image Recognition, and Unsupervised Learning. He is familiar with several open source databases, including MongoDB, Redis, and HDFS. He has a Bachelor of Science in Mechanical Engineering as well as multiple years of experience in both software and business development.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>cloud</category>
      <category>database</category>
    </item>
    <item>
      <title>Migrating your data from DynamoDB to MongoDB Atlas</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Fri, 05 Jan 2018 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/migrating-your-data-from-dynamodb-to-mongodb-atlas-efj</link>
      <guid>https://dev.to/mongodb/migrating-your-data-from-dynamodb-to-mongodb-atlas-efj</guid>
      <description>&lt;p&gt;There may be a number of reasons you are looking to migrate from DynamoDB to &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas&lt;/a&gt;. While DynamoDB may be a good choice for a set of specific use cases, many developers prefer solutions that reduce the need for client-side code or additional technologies &lt;a href="https://www.mongodb.com/blog/post/q4-inc-relies-on-mongodb-atlas-to-boost-productivity-outpace-the-competition-and-lower-costs?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;as requirements become more sophisticated&lt;/a&gt;. They may also want to work with open source technologies or require some degree of deployment flexibility. In this post, we are going to explore a few reasons why you might consider using MongoDB Atlas over DynamoDB, and then look at how you would go about migrating a pre-existing workload.&lt;/p&gt;

&lt;h2&gt;Get building, faster...&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas&lt;/a&gt; is the best way to consume MongoDB and get access to a developer-friendly experience, delivered as a service on AWS. MongoDB’s query language is incredibly rich and allows you to execute complex queries natively while avoiding the overhead of moving data between operational and analytical engines. With Atlas, you can use MongoDB’s native query language to perform anything from searches on single keys or ranges, faceted searches, graph traversals, and geospatial queries through to complex aggregations, JOINs, and subqueries - without the need to use additional add-on services or integrations.&lt;/p&gt;

&lt;p&gt;For example, MongoDB’s aggregation pipeline is a powerful tool for performing analytics and statistical analysis in real-time and generating pre-aggregated reports for dashboarding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5XdwPEYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image4-bi4cgu3tlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5XdwPEYy--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image4-bi4cgu3tlk.png" alt="" width="800" height="563"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Additionally, MongoDB will give you a few extra things which I think are pretty cool:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document Size - MongoDB handles documents up to 16MB in size, natively. In DynamoDB, you’re limited to 400KB per item, including attribute names and any local secondary indexes. For anything bigger, AWS suggests that you split storage between DynamoDB and S3.&lt;/li&gt;
&lt;li&gt;Deployment flexibility - Using MongoDB Atlas, you are able to deploy fully managed MongoDB to AWS, Google Cloud Platform, or Microsoft Azure. If you decide that you no longer want to run your database in the cloud, MongoDB can be run in nearly any environment on any hardware so self-managing is also an option.&lt;/li&gt;
&lt;li&gt;MongoDB has an idiomatic driver set, providing native language access to the database in dozens of programming languages. &lt;/li&gt;
&lt;li&gt;MongoDB Atlas provides a queryable backup method for restoring your data at the document level, without requiring a full restoration of your database. &lt;/li&gt;
&lt;li&gt;MongoDB Atlas provides you with over 100 different instance metrics, for rich native alerting and monitoring. &lt;/li&gt;
&lt;li&gt;Atlas will assist you in finding the right indexes thanks to the &lt;a href="http://docs.atlas.mongodb.com/performance-advisor/"&gt;Performance Advisor&lt;/a&gt;. The Performance Advisor is always on, helping you ensure that your queries are efficient and fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;In this tutorial, we'll take a basic &lt;a href="http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Java.02.html"&gt;data set&lt;/a&gt; from an existing DynamoDB table and migrate it to MongoDB Atlas. We'll use a free, M0 cluster so you can do this as well at no cost while you evaluate the benefits of MongoDB Atlas.&lt;/p&gt;

&lt;p&gt;This blog post makes a couple of assumptions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You've &lt;a href="https://docs.mongodb.com/manual/installation/"&gt;installed MongoDB&lt;/a&gt; on the computer you'll be importing the data from (we need the &lt;code&gt;mongoimport&lt;/code&gt; tool, which is included with MongoDB)&lt;/li&gt;
&lt;li&gt;You've signed up for a &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas Account&lt;/a&gt; (the M0 instance is free and fine for this demonstration)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To begin, we'll review our table in AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rTEcEpQS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image6-yj19ur1tyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rTEcEpQS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image6-yj19ur1tyi.png" alt="" width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This table contains data on movies, including the year they were released and the title of each film, with other information about the film contained in a subdocument. &lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--DyjJ3qOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image3-3i0dc3i2mj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--DyjJ3qOF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image3-3i0dc3i2mj.png" alt="" width="590" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We want to take this basic set of data and bring it into MongoDB Atlas for a better method of querying, indexing, and managing our data long term.&lt;/p&gt;

&lt;p&gt;First, if you are in production, stop writes from your application to prevent new entries into your database. You'll likely want to create a temporary landing page and disable new connections to your DynamoDB table. Once you've done this, navigate to your table in your AWS panel.&lt;/p&gt;

&lt;p&gt;Click "Actions" at the top after you've selected your table. Find the "Export to .csv" option and click it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hSVzaHyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image5-6vozp33uqs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hSVzaHyC--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image5-6vozp33uqs.png" alt="" width="800" height="310"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you have a CSV export of your data from DynamoDB, let's take a quick look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ more ~/Downloads/Movies.csv
"year (N)","title (S)","info (M)"
"1944","Arsenic and Old Lace","{ ""actors"" : { ""L"" : [{ ""S"" : ""Cary Grant"" }, { ""S"" : ""Priscilla Lane"" }, { ""S"" : ""Raymond Massey"" }] }, ""directors"" : { ""L"" : [{ ""S"" : ""Frank Capra"" }] }, ""genres"" : { ""L"" : [{ ""S"" : ""Comedy"" }, { ""S"" : ""Crime"" }, { ""S"" : ""Romance"" }, { ""S"" : ""Thriller"" }] }, ""image_url"" : { ""S"" : ""http://ia.media-imdb.com/images/M/MV5BMTI3NTYyMDA0NV5BMl5BanBnXkFtZTcwMjEwMTMzMQ@@._V1_SX400_.jpg"" }, ""plot"" : { ""S"" : ""A drama critic learns on his wedding day that his beloved maiden aunts are homicidal maniacs, and that insanity runs in his family."" }, ""rank"" : { ""N"" : ""4025"" }, ""rating"" : { ""N"" : ""8"" }, ""release_date"" : { ""S"" : ""1944-09-01T00:00:00Z"" }, ""running_time_secs"" : { ""N"" : ""7080"" } }"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
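Note that DynamoDB's export wraps every value in a type descriptor ({ "S": ... } for strings, { "N": ... } for numbers, and so on). If you would rather store plain values in MongoDB than those wrappers, a small transform can unwrap them first; here is a hand-written sketch (not part of the original workflow, which imports the CSV as-is):

```javascript
// Unwrap DynamoDB type descriptors into plain JSON values.
function unwrap(v) {
  if (v.S !== undefined) return v.S;              // string
  if (v.N !== undefined) return Number(v.N);      // number (exported as string)
  if (v.L !== undefined) return v.L.map(unwrap);  // list
  if (v.M !== undefined) {                        // map / subdocument
    const out = {};
    for (const [key, inner] of Object.entries(v.M)) out[key] = unwrap(inner);
    return out;
  }
  return v; // pass through anything unrecognized
}

const info = { M: { rating: { N: "8" }, genres: { L: [{ S: "Comedy" }] } } };
console.log(JSON.stringify(unwrap(info))); // {"rating":8,"genres":["Comedy"]}
```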



&lt;p&gt;Looks good. Let's go ahead and use MongoDB's open source tools to import this into our Atlas cluster.&lt;/p&gt;

&lt;h2&gt;Import your data&lt;/h2&gt;

&lt;p&gt;Let's start moving our data into MongoDB Atlas. First, &lt;a href="https://www.linkedin.com/pulse/introducing-m0-free-tier-mongodb-atlas-jay-gordon/"&gt;launch a new free M0 cluster&lt;/a&gt; (M0s are great for demos but you’ll want to pick a different tier if you are going into production). Once you have a new M0 cluster, you can then &lt;a href="https://www.youtube.com/watch?v=leNNivaQbDY"&gt;whitelist your local IP address&lt;/a&gt; so that you may access your Atlas cluster.&lt;/p&gt;

&lt;p&gt;Next, you'll want to use the &lt;code&gt;mongoimport&lt;/code&gt; utility to upload the contents of Movies.csv to Atlas. I'll provide my connection string, which I can get right from my Atlas control panel so that &lt;code&gt;mongoimport&lt;/code&gt; can begin importing our data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Bhh5HLkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image7-9dn45d4wfr.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Bhh5HLkB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image7-9dn45d4wfr.gif" alt="" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I can enter this into my &lt;code&gt;mongoimport&lt;/code&gt; command along with some other important options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongoimport --uri "mongodb://admin:PASSWORD@demoabc-shard-00-00-a7nzr.mongodb.net:27017,demoabc-shard-00-01-a7nzr.mongodb.net:27017,demoabc-shard-00-02-a7nzr.mongodb.net:27017/test?ssl=true&amp;amp;replicaSet=demoabc-shard-0&amp;amp;authSource=admin" --collection movies --file ~/Downloads/Movies.csv --type csv --headerline
2017-11-20T14:02:45.612-0500 imported 100 documents
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now that our documents are uploaded, we can log into &lt;a href="https://www.mongodb.com/blog/post/your-mongodb-atlas-toolkit-logging-into-mongodb-atlas-with-compass?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;Atlas with MongoDB Compass&lt;/a&gt; and review our data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FM5tT_dB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image2-g9s1y87xqk.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FM5tT_dB--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image2-g9s1y87xqk.gif" alt="" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;center&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--wSaYTvWh--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/image1-x3vtgq7sr3.png" width="446" height="480"&gt;&lt;/center&gt;

&lt;p&gt;This is cool but I want to do something a little bit more advanced. Luckily, MongoDB’s aggregation pipeline will give us the power to do so.&lt;/p&gt;

&lt;p&gt;We'll need to connect to our shell here; I can press the "CONNECT" button within my Atlas cluster’s overview panel and find the connection instructions for the shell.&lt;/p&gt;

&lt;p&gt;Once I am logged in, I can start playing with some different aggregations; here is a basic one that tells us the total number of movies released in 1944 in our collection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.movies.aggregate(

    // Pipeline
    [
        // Stage 1
        {
            $match: { year : 1944 }
        },

        // Stage 2
        {
            $group: { _id: null, count: { $sum: 1 } }
        },
    ]
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
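If the two stages are unfamiliar, they amount to a filter followed by a count. The same logic over an in-memory array (illustrative data only):

```javascript
// $match keeps only documents where year is 1944; $group with
// { $sum: 1 } then counts what survived the match.
const movies = [
  { year: 1944, title: "Arsenic and Old Lace" },
  { year: 1944, title: "Laura" },
  { year: 1950, title: "Sunset Blvd." },
];
const matched = movies.filter(m => m.year === 1944); // $match stage
const count = matched.length;                        // $group + $sum: 1
console.log(count); // 2
```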



&lt;p&gt;With DynamoDB, we would have had to connect our database cluster to Amazon EMR, adding cost, complexity, and latency.&lt;/p&gt;

&lt;p&gt;You can configure backups, manage network security, and create additional user roles for your data, all from the MongoDB Atlas UI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=dynamodb&amp;amp;jmp=dev-ref"&gt;Sign up for MongoDB Atlas today and start building better apps faster&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>cloud</category>
      <category>database</category>
    </item>
    <item>
      <title>Building a voice-activated movie search app powered by Amazon Lex, Lambda, and MongoDB Atlas - Part 3</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Sun, 26 Nov 2017 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part--3-2lca</link>
      <guid>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part--3-2lca</guid>
      <description>&lt;h3&gt;Introduction&lt;/h3&gt;

&lt;p&gt;This is Part 3 of our Amazon Lex blog post series. This tutorial is divided into 3 parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-1?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex3&amp;amp;jmp=dev-ref"&gt;Part 1: Lex overview, demo scenario and data layer setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-2?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex3&amp;amp;jmp=dev-ref"&gt;Part 2: Set up and test an Amazon Lex bot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 3: Deploy a Lambda function as our Lex bot fulfillment (this blog post)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this last blog post, we will deploy our Lambda function using the AWS Command Line Interface and verify that the bot fully works as expected. We’ll then review the code that makes up our Lambda function and explain how it works.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s deploy our AWS Lambda function
&lt;/h3&gt;

&lt;p&gt;Please follow the deployment steps available in &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies#aws-lambda-function"&gt;this GitHub repository&lt;/a&gt;. I have chosen to use Amazon’s &lt;a href="https://github.com/awslabs/aws-sam-local"&gt;SAM Local&lt;/a&gt; tool to showcase how you can test your Lambda function locally using &lt;a href="https://www.docker.com/"&gt;Docker&lt;/a&gt;, as well as package it and deploy it to an AWS account in just a few commands. However, if you’d like to deploy it manually to the AWS Console, you can always use &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/zip.sh"&gt;this zip script&lt;/a&gt; to deploy it in pretty much the same way I did in this &lt;a href="https://www.mongodb.com/blog/post/serverless-development-with-nodejs-aws-lambda-mongodb-atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex3&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas with Lambda tutorial&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s test our Lex bot (end-to-end)
&lt;/h3&gt;

&lt;p&gt;Now that our Lambda fulfillment function has been deployed, let’s test our bot again in the Amazon Lex console and verify that we get the expected response. For instance, we might want to search for all the romance movies Jennifer Aniston starred in, a scenario we can test with the following bot conversation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--SaijyIlM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt1-s3mh080kx8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--SaijyIlM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt1-s3mh080kx8.png" alt="Amazon Lex Test Bot UI" width="728" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eJJ8wQYS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt2-dwb1k7ezzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eJJ8wQYS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt2-dwb1k7ezzg.png" alt="Amazon Lex Test Bot UI" width="740" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7UkgVgYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt3-wf8n3qlx9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7UkgVgYY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Prompt3-wf8n3qlx9x.png" alt="Amazon Lex Test Bot UI" width="718" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the screenshots above show, the Lex bot replies with the full list of Jennifer Aniston’s romance movies retrieved from our movies MongoDB database through our Lambda function. But how does our Lambda function process that request? We’ll dig deeper into our &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js"&gt;Lambda function code&lt;/a&gt; in the next section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let's dive into the Lambda function code
&lt;/h3&gt;

&lt;p&gt;Our Lambda function always receives a JSON payload with a structure compliant with Amazon Lex’ &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html#using-lambda-input-event-format"&gt;input event format&lt;/a&gt; (such as this &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/event.json"&gt;event.json&lt;/a&gt; file):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "messageVersion": "1.0",
  "invocationSource": "FulfillmentCodeHook",
  "userId": "user-1",
  "sessionAttributes": {},
  "bot": {
    "name": "SearchMoviesBot",
    "alias": "$LATEST",
    "version": "$LATEST"
  },
  "outputDialogMode": "Text",
  "currentIntent": {
    "name": "SearchMovies",
    "slots": {
      "castMember": "jennifer aniston",
      "year": "0",
      "genre": "Romance"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the request contains the bot’s name (&lt;em&gt;SearchMoviesBot&lt;/em&gt;) and the slot values representing the answers to the bot’s questions provided by the user.&lt;/p&gt;

&lt;p&gt;The Lambda function starts with the &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L13"&gt;exports.handler method&lt;/a&gt; which validates the bot’s name and performs some additional processing if the payload is received through Amazon API Gateway (this is only necessary if you want to test your Lambda function through Amazon API Gateway but is not relevant in an Amazon Lex context). It then calls the &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L33"&gt;dispatch()&lt;/a&gt; method, which takes care of connecting to our MongoDB Atlas database and passing on the bot’s intent to the &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L67"&gt;query()&lt;/a&gt; method, which we’ll explore in a second. Note that the dispatch() method uses the performance optimization technique I highlighted in &lt;a href="https://www.mongodb.com/blog/post/optimizing-aws-lambda-performance-with-mongodb-atlas-and-nodejs?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex3&amp;amp;jmp=dev-ref"&gt;Optimizing AWS Lambda performance with MongoDB Atlas and Node.js&lt;/a&gt;, namely not closing the database connection and using the &lt;em&gt;callbackWaitsForEmptyEventLoop&lt;/em&gt; Lambda context property. This allows our bot to be more responsive after the first query fulfilled by the Lambda function.&lt;/p&gt;
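&lt;p&gt;The connection-reuse technique described above can be sketched as follows. This is an illustrative pattern, not the actual lambda.js source: the names &lt;em&gt;getDb&lt;/em&gt; and &lt;em&gt;connectFn&lt;/em&gt; are ours, and the connector is injected so the caching logic is visible without a live Atlas cluster.&lt;/p&gt;

```javascript
// Sketch of the connection-reuse pattern (illustrative names, not from
// the original lambda.js source).
let cachedDb = null;

// connectFn stands in for MongoClient.connect; injecting it keeps the
// caching logic testable without a real database connection.
async function getDb(connectFn, uri) {
  if (cachedDb === null) {
    // Cold start: open the connection once and keep it for reuse.
    cachedDb = await connectFn(uri);
  }
  // Warm invocations skip the connection handshake entirely.
  return cachedDb;
}

// Inside the Lambda handler, the function also sets
// context.callbackWaitsForEmptyEventLoop = false so the callback can fire
// even though the database socket stays open.
```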

&lt;p&gt;Let’s now take a closer look at the query() method, which is the soul and heart of our Lambda function. First, that method retrieves the cast member, movie genre, and movie release year. Because these values all come as strings and the movie release year is stored as an integer in MongoDB, the function must &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L71"&gt;convert that value to an integer&lt;/a&gt;.&lt;/p&gt;
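&lt;p&gt;The slot extraction and year conversion can be illustrated with a small helper. The slot names match the event.json payload shown earlier, but the helper itself is a sketch, not a function from the original source:&lt;/p&gt;

```javascript
// Illustrative slot extraction (a sketch; not taken from lambda.js).
function extractSlots(event) {
  const slots = event.currentIntent.slots;
  return {
    castMember: slots.castMember,
    genre: slots.genre,
    // Lex delivers every slot value as a string, but Year is stored as an
    // integer in MongoDB, so the release year must be parsed before querying.
    year: parseInt(slots.year, 10)
  };
}
```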

&lt;p&gt;We then build the query we will run against MongoDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var castArray = [castMember];

var matchQuery = {
  Cast: { $in: castArray },
  Genres: { $not: { $in: ["Documentary", "News", ""] } },
  Type: "movie"
};

if (genre != undefined &amp;amp;&amp;amp; genre != allGenres) {
  matchQuery.Genres = { $in: [genre] };
  msgGenre = genre.toLowerCase();
}

if (year != undefined &amp;amp;&amp;amp; !isNaN(year) &amp;amp;&amp;amp; year &amp;gt; 1895) {
  matchQuery.Year = year;
  msgYear = year;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We first restrict the query to items that are indeed movies (since the database also stores TV series) and we exclude some irrelevant movie genres such as the documentary and news genres. We also make sure we only query movies in which the cast member starred. Note that the &lt;strong&gt;&lt;em&gt;&lt;a href="https://docs.mongodb.com/manual/reference/operator/query/in?jmo=adref"&gt;$in&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt; operator expects an array, which is why we have to wrap our single cast member into the &lt;strong&gt;&lt;em&gt;castArray&lt;/em&gt;&lt;/strong&gt; array. Since the cast member is the only mandatory query parameter, we add it first and then optionally add the &lt;code&gt;Genres&lt;/code&gt; and &lt;code&gt;Year&lt;/code&gt; parameters if the code determines that they were provided by the user (i.e. the user did not use the &lt;code&gt;All&lt;/code&gt; and/or &lt;code&gt;0&lt;/code&gt; escape values).&lt;/p&gt;
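&lt;p&gt;The optional-filter logic above can be condensed into a pure function. This is a sketch under our own naming (the helper name is ours; &lt;em&gt;allGenres&lt;/em&gt; mirrors the &lt;code&gt;All&lt;/code&gt; escape value the bot uses):&lt;/p&gt;

```javascript
// Sketch of the query-building logic as a pure function (illustrative
// helper name; allGenres mirrors the "All" escape value from the bot).
const allGenres = "All";

function buildMatchQuery(castMember, genre, year) {
  // $in expects an array, so the single cast member is wrapped in one.
  const matchQuery = {
    Cast: { $in: [castMember] },
    Genres: { $not: { $in: ["Documentary", "News", ""] } },
    Type: "movie"
  };
  // Only filter on genre when the user gave one (not the escape value).
  if (genre !== undefined) {
    if (genre !== allGenres) {
      matchQuery.Genres = { $in: [genre] };
    }
  }
  // Only filter on year for plausible values (0 is the "unknown" escape;
  // 1895 predates the earliest feature films in the dataset).
  if (Number.isInteger(year)) {
    if (year > 1895) {
      matchQuery.Year = year;
    }
  }
  return matchQuery;
}
```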

&lt;p&gt;The query() method then goes on to define the default response message based on the user-provided parameters. This default response message is used if the query doesn’t return any matching element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var resMessage = undefined;
  if (msgGenre == undefined &amp;amp;&amp;amp; msgYear == undefined) {
    resMessage = `Sorry, I couldn't find any movie for ${castMember}.`;
  }
  if (msgGenre != undefined &amp;amp;&amp;amp; msgYear == undefined) {
    resMessage = `Sorry, I couldn't find any ${msgGenre} movie for ${castMember}.`;
  }
  if (msgGenre == undefined &amp;amp;&amp;amp; msgYear != undefined) {
    resMessage = `Sorry, I couldn't find any movie for ${castMember} in ${msgYear}.`;
  }
  if (msgGenre != undefined &amp;amp;&amp;amp; msgYear != undefined) {
    resMessage = `Sorry, ${castMember} starred in no ${msgGenre} movie in ${msgYear}.`;
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The meat of the query() method happens next as the code performs the database query using two different methods: the classic &lt;a href="https://docs.mongodb.com/manual/reference/method/db.collection.find/"&gt;db.collection.find()&lt;/a&gt; method and the &lt;a href="https://docs.mongodb.com/manual/reference/method/db.collection.aggregate?jmp=adref"&gt;db.collection.aggregate()&lt;/a&gt; method. The default method used in this Lambda function is the aggregate one, but you can easily test the find() method by setting the &lt;em&gt;&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L112"&gt;aggregationFramework&lt;/a&gt;&lt;/em&gt; variable to &lt;em&gt;false&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;In our specific use case scenario (querying for one single cast member and returning a small number of documents), there likely won’t be any noticeable performance or programming logic impact. However, if we were to query for all the movies multiple cast members each starred in (i.e. the union of these movies, not the intersection), the aggregation framework query is a clear winner. Indeed, let’s take a closer look at the find() query the code runs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cursor = db.collection(moviesCollection)
      .find(matchQuery, { _id: 0, Title: 1, Year: 1 })
      .collation(collation)
      .sort({ Year: 1 });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It’s a fairly simple query that retrieves the movie’s title and year, sorted by year. Note that we also use the same { locale: "en", strength: 1 } collation we used to create the case-insensitive index on the Cast property in &lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-2?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex3&amp;amp;jmp=dev-ref"&gt;Part 2 of this blog post series&lt;/a&gt;. This is critical since the end user might not &lt;a href="https://en.wikipedia.org/wiki/Capitalization#Title_case"&gt;title case&lt;/a&gt; the cast member’s name (and Lex won’t do it for us either).&lt;/p&gt;
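&lt;p&gt;As an aside, JavaScript’s built-in &lt;code&gt;Intl.Collator&lt;/code&gt; with sensitivity &lt;code&gt;"base"&lt;/code&gt; applies a comparable case- and accent-insensitive rule, which illustrates why a lowercase input still matches the indexed names. This is only an analogy; the actual comparison runs server-side in MongoDB:&lt;/p&gt;

```javascript
// A strength-1 MongoDB collation ignores case and diacritics. Intl.Collator
// with sensitivity "base" follows a comparable rule, which shows why
// "jennifer aniston" matches the stored "Jennifer Aniston".
// (Analogy only: the real comparison happens inside MongoDB, not in Node.)
const baseCollator = new Intl.Collator("en", { sensitivity: "base" });

console.log(baseCollator.compare("jennifer aniston", "Jennifer Aniston")); // 0 (equal)
console.log(baseCollator.compare("jennifer aniston", "Brad Pitt") === 0);  // false
```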

&lt;p&gt;The simplicity of the query is in contrast to the relative complexity of the app logic we have to write to process the result set we get with the find() method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var maxYear, minYear;
for (var i = 0, len = results.length; i &amp;lt; len; i++) { 
    castMemberMovies += `${results[i].Title} (${results[i].Year}), `;
}

 //removing the last comma and space
castMemberMovies = castMemberMovies.substring(0, castMemberMovies.length - 2);

moviesCount = results.length;
var minYear, maxYear;
minYear = results[0].Year;
maxYear = results[results.length-1].Year;
yearSpan = maxYear - minYear;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First, we have to iterate over all the results to concatenate their &lt;code&gt;Title&lt;/code&gt; and &lt;code&gt;Year&lt;/code&gt; properties into a legible string. This might be fine for 20 items, but if we had to process hundreds of thousands or millions of records, the performance impact would be very noticeable. We further have to remove the trailing comma and space characters from the concatenated string since they’re in excess. We also have to manually retrieve the number of movies, as well as the low and high ends of the movie release years, in order to compute the time span it took the cast member to shoot all these movies. This might not be particularly difficult code to write, but it’s clutter code that affects app clarity. And, as I wrote above, it definitely doesn’t scale when processing millions of items.&lt;/p&gt;

&lt;p&gt;Contrast this app logic with the succinct code we have to write when using the aggregation framework method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (var i = 0, len = results.length; i &amp;lt; len; i++) { 
    castMemberMovies = results[i].allMovies;
    moviesCount = results[i].moviesCount;
    yearSpan = results[i].timeSpan;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code is not only much cleaner and more concise now, it’s also more generic, as it can handle the situation where we want to process movies for each of multiple cast members. You can actually test this use case by uncommenting the following line earlier in &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L78"&gt;the source code&lt;/a&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;castArray = [castMember, "Angelina Jolie"]&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;and by testing it using this &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/sam-invoke.sh"&gt;SAM script&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;With the aggregation framework, we get the correct raw and final results without changing a single line of code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--FyIceIuX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/MongoDB_AggregateResponse-x4u6m0bxt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--FyIceIuX--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/MongoDB_AggregateResponse-x4u6m0bxt6.png" alt="MongoDB Aggregation Framework Query Response" width="800" height="228"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, the find() method’s post-processing requires some significant effort to fix this incorrect output (the union of comedy movies in which Angelina Jolie or Brad Pitt starred in, all incorrectly attributed to Brad Pitt):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx1WYhKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/MongoDB_FindResponse-y8s3psxbpa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Sx1WYhKQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/MongoDB_FindResponse-y8s3psxbpa.png" alt="MongoDB Find Query Response" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We were able to achieve this code conciseness and correctness by moving most of the post-processing logic to the database layer using a MongoDB &lt;a href="https://docs.mongodb.com/manual/core/aggregation-pipeline?jmp=adref"&gt;aggregation pipeline&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cursor = db.collection(moviesCollection).aggregate(
      [
        { $match: matchQuery },
        { $sort: { Year: 1 } },
        unwindStage,
        castFilterStage,
        { $group: {
            _id: "$Cast",
            allMoviesArray: {$push: {$concat: ["$Title", " (", { $substr: ["$Year", 0, 4] }, ")"] } },
            moviesCount: { $sum: 1 },
            maxYear: { $last: "$Year" },
            minYear: { $first: "$Year" }
          }
        },
        {
          $project: {
            moviesCount: 1,
            timeSpan: { $subtract: ["$maxYear", "$minYear"] },
            allMovies: {
              $reduce: {
                input: "$allMoviesArray",
                initialValue: "",
                in: {
                  $concat: [
                    "$$value",
                    {
                      $cond: {
                        if: { $eq: ["$$value", ""] },
                        then: "",
                        else: ", "
                      }
                    },
                    "$$this"
                  ]
                }
              }
            }
          }
        }
      ],
      {collation: collation}

);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This aggregation pipeline is arguably more complex than the find() method discussed above, so let’s try to explain it one stage at a time (since an aggregation pipeline consists of stages that transform the documents as they pass through the pipeline):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L125"&gt;$match stage&lt;/a&gt;: performs a filter query to only return the documents we’re interested in (similarly to the find() query above).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L126"&gt;$sort stage&lt;/a&gt;: sorts the results by year ascending.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L113"&gt;$unwind stage&lt;/a&gt;: splits each movie document into multiple documents, one for each cast member in the original document. For each original document, this stage unwinds the Cast array of cast members and creates separate, unique documents with the same values as the original document, except for the Cast property which is now a string value (equal to each cast member) in each unwinded document. This stage is necessary to be able to group by only the cast members we’re interested in (especially if there are more than one). The output of this stage may contain documents with other cast members irrelevant to our query, so we must filter them out in the next stage.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L114"&gt;$match stage&lt;/a&gt;: filters the deconstructed documents from the $unwind stage by only the cast members we’re interested in. This stage essentially removes all the documents tagged with cast members irrelevant to our query.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L129"&gt;$group stage&lt;/a&gt;: groups movies by cast member (for instance, all movies with Brad Pitt and all movies with Angelina Jolie, separately). This stage also concatenates each movie title and release year into the &lt;em&gt;Title (Year)&lt;/em&gt; format and adds it to an array called &lt;em&gt;allMoviesArray&lt;/em&gt; (one such array for each cast member). This stage also computes a count of all movies for each cast member, as well as the earliest and latest year the cast member starred in a movie (of the requested movie genre, if any). This stage essentially performs most of the post-processing we previously had to do in our app code when using the find() method. Because that post-processing now runs at the database layer, it can take advantage of the database server’s computing power along with the distributed system nature of MongoDB (in case the collection is partitioned across multiple shards, each shard performs this stage independently of the other shards).&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies/blob/master/code/lambda.js#L138"&gt;$project stage&lt;/a&gt;: last but not least, this stage performs a &lt;a href="https://docs.mongodb.com/manual/reference/operator/aggregation/reduce?jmp=adref"&gt;$reduce operation&lt;/a&gt; (new in MongoDB 3.4) to concatenate our array of ‘&lt;em&gt;Title (Year)&lt;/em&gt;’ strings into one single string we can use as is in the response message sent back to the bot.&lt;/li&gt;
&lt;/ol&gt;
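&lt;p&gt;To make stage 3 concrete, here is an in-memory sketch of what $unwind does to a single movie document. This merely mimics the transformation in plain JavaScript; the real stage runs server-side inside the aggregation pipeline:&lt;/p&gt;

```javascript
// In-memory illustration of the $unwind stage: one output document per
// cast member, with the Cast array flattened to a string.
// (Illustration only; MongoDB performs this server-side in the pipeline.)
function unwindCast(movie) {
  return movie.Cast.map(member => ({ ...movie, Cast: member }));
}

const docs = unwindCast({
  Title: "By the Sea",
  Year: 2015,
  Cast: ["Brad Pitt", "Angelina Jolie"]
});
// docs[0] is { Title: "By the Sea", Year: 2015, Cast: "Brad Pitt" }
```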

&lt;p&gt;Once the matching movies have been retrieved from our MongoDB Atlas database, the code generates the proper response message and sends it back to the bot according to the expected &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html#using-lambda-response-format"&gt;Amazon Lex response format&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if (msgGenre != allGenres) {
                resMessage = `${toTitleCase(castMember)} starred in 
                the following ${moviesCount&amp;gt;1?moviesCount+" ":""}
                ${msgGenre.toLowerCase()} movie(s)${yearSpan&amp;gt;0?" over " 
                + yearSpan +" years":""}: ${castMemberMovies}`;
} else {
    resMessage = `${toTitleCase(castMember)} starred in the following 
    ${moviesCount&amp;gt;1?moviesCount+" ":""}movie(s)${yearSpan&amp;gt;0?" over " 
    + yearSpan +" years":""}: ${castMemberMovies}`;
}
if (msgYear != undefined) {
    resMessage = `In ${msgYear}, ` + resMessage;

callback(
    close(sessionAttributes, "Fulfilled", {
        contentType: "PlainText",
        content: resMessage
    })
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our Jennifer Aniston fan can now be wowed by the completeness of our bot's response!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--1hvn-oXK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Response-1x76of69xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--1hvn-oXK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://webassets.mongodb.com/_com_assets/cms/Amazon_Lex_Bot_Response-1x76of69xk.png" alt="Amazon Lex MongoDB response" width="716" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Wrap-up and next steps
&lt;/h3&gt;

&lt;p&gt;This completes our Lex blog post series and I hope you enjoyed reading it as much as I did writing it.&lt;/p&gt;

&lt;p&gt;In this final blog post, we tested and deployed a Lambda function to AWS using the &lt;a href="https://github.com/awslabs/aws-sam-local"&gt;SAM Local tool&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;We also learned:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How a Lambda function processes a Lex request and responds to it using &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/lambda-input-response-format.html"&gt;Amazon Lex’ input and output event formats&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;How to use a case-insensitive index in a find() or aggregate() query&lt;/li&gt;
&lt;li&gt;How to make the most of MongoDB’s aggregation framework to move complexity from the app layer to the database layer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As next steps, I suggest you now take a look at the AWS documentation to learn how to &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/fb-bot-association.html"&gt;deploy your bot to Facebook Messenger&lt;/a&gt;, &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/slack-bot-association.html"&gt;Slack&lt;/a&gt;, or &lt;a href="https://aws.amazon.com/blogs/ai/greetings-visitor-engage-your-web-users-with-amazon-lex/"&gt;your own web site&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy Lex-ing!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;About the Author - &lt;strong&gt;Raphael Londner&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Raphael Londner is a Principal Developer Advocate at MongoDB, focused on cloud technologies such as Amazon Web Services, Microsoft Azure and Google Cloud Engine. Previously he was a developer advocate at Okta as well as a startup entrepreneur in the identity management space. You can follow him on Twitter at &lt;em&gt;&lt;strong&gt;&lt;a href="https://www.twitter.com/rlondner"&gt;@rlondner&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>nosql</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Building a voice-activated movie search app powered by Amazon Lex, Lambda, and MongoDB Atlas - Part 2</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Mon, 13 Nov 2017 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part--2-150h</link>
      <guid>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part--2-150h</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;Before you read this article, take a look at &lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-1?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt; for a brief overview of Amazon Lex and instructions to set up our movie database with &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;MongoDB Atlas&lt;/a&gt;, our fully managed database service.&lt;/p&gt;

&lt;p&gt;As a reminder, this tutorial is divided into 3 parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-1?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;Part 1: Lex overview, demo scenario and data layer setup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Part 2: Set up and test an Amazon Lex bot (this post)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-3?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;Part 3: Deploy a Lambda function as our Lex bot fulfillment&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog post, we will set up our Lex bot in the AWS Console and verify that its basic flow works as expected. We’ll implement the business logic (which leverages MongoDB) in Part 3 of this post series.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Lex bot setup instructions
&lt;/h3&gt;

&lt;p&gt;In this section, we will go through the whole process of creating our &lt;em&gt;SearchMovies&lt;/em&gt; bot while explaining the architectural decisions I made.&lt;/p&gt;

&lt;p&gt;After signing in into the &lt;a href="https://aws.amazon.com/console/" rel="noopener noreferrer"&gt;AWS Console&lt;/a&gt;, select the Lex service (in the Artificial Intelligence section) and press the &lt;strong&gt;Create&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Select the &lt;strong&gt;Custom bot&lt;/strong&gt; option and fill out the form parameters as follows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bot name: &lt;strong&gt;SearchMoviesBot&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Output voice: &lt;strong&gt;None&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Session timeout: &lt;strong&gt;5&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;COPPA: &lt;strong&gt;No&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Press the &lt;strong&gt;Create&lt;/strong&gt; button at the bottom of the form.&lt;/p&gt;

&lt;p&gt;A new page appears, where you can create an intent. Press the &lt;strong&gt;Create Intent&lt;/strong&gt; button and in the &lt;strong&gt;Add intent&lt;/strong&gt; pop-up page, click the &lt;strong&gt;Create new intent&lt;/strong&gt; link and enter &lt;strong&gt;&lt;em&gt;SearchMovies&lt;/em&gt;&lt;/strong&gt; in the intent name field.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;&lt;em&gt;Slot types&lt;/em&gt;&lt;/strong&gt; section, add a new slot type with the following properties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slot type name: &lt;strong&gt;&lt;em&gt;MovieGenre&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Description: &lt;strong&gt;&lt;em&gt;Genre of the movie (Action, Comedy, Drama…)&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Slot Resolution: &lt;strong&gt;&lt;em&gt;Restrict to Slot values and Synonyms&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Values: &lt;strong&gt;&lt;em&gt;All, Action, Adventure, Biography, Comedy, Crime, Drama, Romance, Thriller&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_0-srs3a9ixjq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_0-srs3a9ixjq.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can add synonyms to all these terms (which strictly match the possible values for movie genres in our sample database), but the most important one for which you will want to configure synonyms is the &lt;strong&gt;&lt;em&gt;All&lt;/em&gt;&lt;/strong&gt; value. We will use it as a keyword to avoid filtering on movie genre in scenarios where the user cannot qualify the genre of the movie they’re looking for or wants to retrieve all the movies for a specific cast member. Of course, you can explore the movie database on your own to identify and add other movie genres I haven’t listed above. Once you’re done, press the &lt;strong&gt;Save slot type&lt;/strong&gt; button.&lt;/p&gt;

&lt;p&gt;Next, in the &lt;strong&gt;&lt;em&gt;Slots&lt;/em&gt;&lt;/strong&gt; section, add the following 3 slots:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;genre&lt;/em&gt;&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Type: &lt;strong&gt;&lt;em&gt;MovieGenre&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prompt: &lt;strong&gt;&lt;em&gt;I can help with that. What's the movie genre?&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Required: &lt;strong&gt;&lt;em&gt;Yes&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;castMember&lt;/em&gt;&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Type: &lt;strong&gt;&lt;em&gt;AMAZON.Actor&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prompt: &lt;strong&gt;&lt;em&gt;Do you know the name of an actor or actress in that movie?&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Required: &lt;strong&gt;&lt;em&gt;Yes&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;year&lt;/em&gt;&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;Type: &lt;strong&gt;&lt;em&gt;AMAZON.FOUR_DIGIT_NUMBER&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Prompt: &lt;strong&gt;&lt;em&gt;Do you know the year {castMember}'s movie was released? If not, just type 0&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Required: &lt;strong&gt;&lt;em&gt;Yes&lt;/em&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Press the &lt;strong&gt;Save Intent&lt;/strong&gt; button and verify you have the same setup as shown in the screenshot below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_1-k44ww1m57n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_1-k44ww1m57n.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The order of the slots is important here: once the user’s first utterance has been matched to a Lex intent, the Lex bot will (by default) try to collect the slot values from the user in the priority order specified above, using the Prompt text for each slot. Note that you can use previously collected slot values in subsequent slot prompts, as I demonstrate in the ‘&lt;em&gt;year&lt;/em&gt;’ slot. For instance, if the user answered &lt;em&gt;Angelina Jolie&lt;/em&gt; to the &lt;em&gt;castMember&lt;/em&gt; slot prompt, the &lt;em&gt;year&lt;/em&gt; slot prompt will be: &lt;em&gt;‘Do you know the year Angelina Jolie’s movie was released? If not, just type 0’&lt;/em&gt;.&lt;/p&gt;
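&lt;p&gt;This placeholder substitution can be mimicked with a tiny template helper. This is an illustration only, not the Lex API; the function name and logic are mine:&lt;/p&gt;

```javascript
// Illustration only (not a Lex API): how Lex substitutes already-collected
// slot values, such as {castMember}, into a subsequent slot prompt.
function renderPrompt(template, slots) {
  // Replace each {slotName} placeholder with its collected value, if any.
  return template.replace(/\{(\w+)\}/g, (match, name) =>
    slots[name] != null ? slots[name] : match
  );
}

const prompt = renderPrompt(
  "Do you know the year {castMember}'s movie was released? If not, just type 0",
  { castMember: "Angelina Jolie" }
);
console.log(prompt);
// -> Do you know the year Angelina Jolie's movie was released? If not, just type 0
```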

&lt;p&gt;Note that it’s important that all the slots are marked &lt;strong&gt;&lt;em&gt;Required&lt;/em&gt;&lt;/strong&gt;. Otherwise, the only opportunity for the user to specify them is to mention them in the original utterance. As you will see below, we will give Lex the ability to identify slots right from the start, but what if the user chooses to kick off the process without mentioning any of them? Slots that aren’t required are simply skipped by the Lex bot, so we need to mark them &lt;em&gt;Required&lt;/em&gt; to guarantee the user gets the chance to define them.&lt;/p&gt;

&lt;p&gt;But what if the user doesn’t know the answer to those prompts? We’ve handled this case as well by defining "default" values: &lt;strong&gt;&lt;em&gt;All&lt;/em&gt;&lt;/strong&gt; for the genre slot and &lt;strong&gt;&lt;em&gt;0&lt;/em&gt;&lt;/strong&gt; for the year slot. The only mandatory parameter the bot’s user must provide is the cast member’s name; the user can restrict the search further by providing the movie genre and release year.&lt;/p&gt;
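&lt;p&gt;In the business logic we’ll deploy in Part 3, these "default" values translate into a conditionally built query filter. As a rough sketch (the helper name and the filter shape are assumptions based on the sample database fields, not the final Part 3 code):&lt;/p&gt;

```javascript
// Sketch: turn collected slot values into a MongoDB filter, treating
// "All" (genre) and 0 (year) as "don't filter on this field".
function buildMovieFilter({ castMember, genre, year }) {
  const filter = { Cast: castMember };      // cast member is always provided
  if (genre && genre !== "All") filter.Genres = genre;
  if (year && year !== 0) filter.Year = year;
  return filter;
}

console.log(buildMovieFilter({ castMember: "Angelina Jolie", genre: "All", year: 0 }));
// -> { Cast: 'Angelina Jolie' }
console.log(buildMovieFilter({ castMember: "Angelina Jolie", genre: "Comedy", year: 2005 }));
// -> { Cast: 'Angelina Jolie', Genres: 'Comedy', Year: 2005 }
```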

&lt;p&gt;Last, let’s add the following sample utterances that match what we expect the user will type (or say) to launch the bot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I am looking for a movie&lt;/li&gt;
&lt;li&gt;I am looking for a {genre} movie&lt;/li&gt;
&lt;li&gt;I am looking for a movie released in {year}&lt;/li&gt;
&lt;li&gt;I am looking for a {genre} movie released in {year}&lt;/li&gt;
&lt;li&gt;In which movie did {castMember} play&lt;/li&gt;
&lt;li&gt;In which movie did {castMember} play in {year}&lt;/li&gt;
&lt;li&gt;In which {genre} movie did {castMember} play&lt;/li&gt;
&lt;li&gt;In which {genre} movie did {castMember} play in {year}&lt;/li&gt;
&lt;li&gt;I would like to find a movie&lt;/li&gt;
&lt;li&gt;I would like to find a movie with {castMember}&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the utterances are configured as per the screenshot below, press &lt;strong&gt;&lt;em&gt;Save Intent&lt;/em&gt;&lt;/strong&gt; at the bottom of the page and then &lt;strong&gt;&lt;em&gt;Build&lt;/em&gt;&lt;/strong&gt; at the top of the page. The process takes a few seconds, as AWS builds the deep learning model Lex will use to power our &lt;em&gt;SearchMovies&lt;/em&gt; bot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_2-x9mh5f359p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_2-x9mh5f359p.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It’s now time to test the bot we just built!&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing the bot
&lt;/h3&gt;

&lt;p&gt;Once the build process completes, the test window automatically shows up:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_3-aldti5086j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_3-aldti5086j.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Test the bot by typing (or saying) sentences that are close to the sample utterances we previously configured. For instance, you can type ‘&lt;em&gt;Can you help me find a movie with Angelina Jolie?&lt;/em&gt;’ and see the bot recognize the sentence as a valid kick-off utterance, along with the &lt;em&gt;{castMember}&lt;/em&gt; slot value (in this case, ‘&lt;em&gt;Angelina Jolie&lt;/em&gt;’). This can be verified by looking at the Inspect Response panel:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_4-vg1y388asj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_4-vg1y388asj.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, the movie genre hasn’t been specified yet, so Lex prompts for it (since it’s the first required slot). Once you answer that prompt, notice that Lex skips the second slot (&lt;em&gt;{castMember}&lt;/em&gt;) since it already has that information.&lt;/p&gt;

&lt;p&gt;Conversely, you can test that the &lt;em&gt;‘Can you help me find a comedy movie with angelina jolie?’&lt;/em&gt; utterance will immediately prompt the user to fill out the {year} slot since both the &lt;em&gt;{castMember}&lt;/em&gt; and &lt;em&gt;{genre}&lt;/em&gt; values were provided in the original utterance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_5-tci29y4wt9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwebassets.mongodb.com%2F_com_assets%2Fcms%2Fimage_5-tci29y4wt9.png" alt="image alt text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An important point to note here is that enumeration slot types (such as our &lt;em&gt;MovieGenre&lt;/em&gt; type) are not case-sensitive: both “comedy” and “coMeDy” resolve to “Comedy”. As a result, we will be able to use a regular index on the Genres property of our movies collection (as long as our enumeration values in Lex match the Genres case in our database).&lt;/p&gt;

&lt;p&gt;However, the &lt;em&gt;AMAZON.Actor&lt;/em&gt; type is case-sensitive: for instance, “&lt;em&gt;angelina jolie&lt;/em&gt;” and “&lt;em&gt;Angelina Jolie&lt;/em&gt;” are two distinct values for Lex. This means that we must define a &lt;a href="https://docs.mongodb.com/manual/core/index-case-insensitive?jmp=adref" rel="noopener noreferrer"&gt;case-insensitive index&lt;/a&gt; on the &lt;strong&gt;Cast&lt;/strong&gt; property (don’t worry, there is already such an index, called ‘Cast_1’, in our sample movie database). Note that in order for queries to use that case-insensitive index, our find() query must specify the same &lt;a href="https://docs.mongodb.com/manual/reference/collation?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;collation&lt;/a&gt; as the one used to create the index (locale=’en’ and strength=1). But don’t worry for now: I’ll make sure to point it out again in Part 3 when we review the code of our chat’s business logic (in the Lambda function we’ll deploy).&lt;/p&gt;
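&lt;p&gt;To give a sense of what that will look like, here is a hedged sketch (the exact Part 3 code may differ) of the collation the query must carry with the Node.js MongoDB driver:&lt;/p&gt;

```javascript
// Sketch only: the find() call must use the same collation as the
// case-insensitive 'Cast_1' index for MongoDB to pick that index.
const castMember = "angelina jolie"; // case differs from the stored "Angelina Jolie"
const filter = { Cast: castMember };
// strength: 1 compares base characters only (ignores case and diacritics).
const collation = { locale: "en", strength: 1 };

// With a connected `db` handle, the query would look like (not executed here):
// const movies = await db.collection("movies").find(filter).collation(collation).toArray();
console.log(filter, collation);
```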

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In this blog post, we created the SearchMovies Lex bot and tested its flow. More specifically, we:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Created a custom Lex slot type (MovieGenre)&lt;/li&gt;
&lt;li&gt;Configured intent slots&lt;/li&gt;
&lt;li&gt;Defined sample utterances (some of which use our predefined slots)&lt;/li&gt;
&lt;li&gt;Tested our utterances and the specific prompt flows each of them starts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We also identified the case sensitivity of a built-in Lex slot that adds a new index requirement on our database.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-3?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;Part 3&lt;/a&gt;, we’ll get to the meat of this Lex blog post series and deploy the Lambda function that will allow us to complete our bots’ intended action (called ‘fulfillment’ in the Lex terminology).&lt;/p&gt;

&lt;p&gt;Meanwhile, I suggest the following readings to further your knowledge of Lex and MongoDB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://developer.amazon.com/docs/custom-skills/slot-type-reference.html" rel="noopener noreferrer"&gt;Lex built-in slot types&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.mongodb.com/manual/core/index-case-insensitive?jmp=adref" rel="noopener noreferrer"&gt;Case-insensitive indexes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;About the Author - &lt;strong&gt;Raphael Londner&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Raphael Londner is a Principal Developer Advocate at MongoDB, focused on cloud technologies such as Amazon Web Services, Microsoft Azure and Google Cloud Engine. Previously he was a developer advocate at Okta as well as a startup entrepreneur in the identity management space. You can follow him on Twitter at &lt;em&gt;&lt;strong&gt;&lt;a href="https://www.twitter.com/rlondner" rel="noopener noreferrer"&gt;@rlondner&lt;/a&gt;&lt;/strong&gt;&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>nosql</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Building a voice-activated movie search app powered by Amazon Lex, Lambda, and MongoDB Atlas - Part 1</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Sun, 12 Nov 2017 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part-1-192d</link>
      <guid>https://dev.to/mongodb/building-a-voice-activated-movie-search-app-powered-by-amazon-lex-lambda-and-mongodb-atlas---part-1-192d</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;This tutorial is divided into 3 parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Part 1: Lex overview, demo scenario and data layer setup&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-2?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex1&amp;amp;jmp=dev-ref"&gt;Part 2: Set up and test an Amazon Lex bot&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-3?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex1&amp;amp;jmp=dev-ref"&gt;Part 3: Deploy a Lambda function as our bot fulfillment logic&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since this is Part 1 of our blog series, let’s dig right into it now.&lt;/p&gt;

&lt;h3&gt;
  
  
  What is Amazon Lex?
&lt;/h3&gt;

&lt;p&gt;Amazon Lex is a deep learning service provided by AWS to power conversational bots (more commonly known as "chatbots"), which can either be text- or voice-activated. It’s worth mentioning that Amazon Lex is the technology that powers Alexa, the popular voice service available with Amazon Echo products and mobile applications (hence the Lex name). Amazon Lex bots are built to perform actions (such as ordering a pizza), which in Amazon lingo is referred to as &lt;strong&gt;&lt;em&gt;intents&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Note that each bot may perform multiple intents (such as "booking a flight" and “booking a hotel”), which can each be kicked off by distinct phrases (called &lt;strong&gt;&lt;em&gt;utterances&lt;/em&gt;&lt;/strong&gt;). This is where the Natural Language Understanding (NLU) power of Lex bots shines — you define a few sample utterances and let the Lex AI engine infer all the possible variations of these utterances (another interesting aspect of Lex’ AI engine is its Automatic Speech Recognition technology, which allows users to interact with a bot by voice as well as by text).&lt;/p&gt;

&lt;p&gt;Let's illustrate this concept with a fictitious, movie search scenario. If you create a &lt;em&gt;SearchMovies&lt;/em&gt; intent, you may want to define a sample utterance as “&lt;em&gt;I would like to search for a movie&lt;/em&gt;”, since you expect it to be what the user will say to express their movie search intention. But as you may well know, human beings have a tendency to express the same intention in many different ways, depending on their mood, cultural background, language proficiency, etc... So if the user types (or says) “&lt;em&gt;I’d like to find a movie&lt;/em&gt;” or “&lt;em&gt;I’d like to see a movie&lt;/em&gt;”, what happens? Well, you’ll find that Lex is smart enough to figure out that those phrases have the same meaning as “&lt;em&gt;I would like to search for a movie&lt;/em&gt;” and consequently trigger the “SearchMovies” intent.&lt;/p&gt;

&lt;p&gt;However, as our ancestors the Romans would say, &lt;a href="https://en.wiktionary.org/wiki/dura_lex,_sed_lex"&gt;dura lex sed lex&lt;/a&gt; and if the user’s utterance veers too far away from the sample utterances you have defined, Lex would stop detecting the match. For instance, while "&lt;em&gt;I’d like to search for a motion picture&lt;/em&gt;" and “&lt;em&gt;I’d like to see a movie&lt;/em&gt;” are detected as matches of our sample utterance (&lt;em&gt;I would like to search for a movie&lt;/em&gt;), “&lt;em&gt;I’d like to see a motion picture”&lt;/em&gt; is not (at least in the tests I performed).&lt;/p&gt;

&lt;p&gt;The interim conclusion I drew from that small experiment is that Lex’ AI engine is not yet ready to power Blade Runner’s replicants or Westworld’s hosts, but it definitely can be useful in a variety of situations (and I’m sure the AWS researchers are hard at work to refine it).&lt;/p&gt;

&lt;p&gt;In order to fulfill the intent (such as providing the name of the movie the user is looking for), Amazon Lex would typically need some additional information, such as the name of a cast member, the movie genre and the movie release year. These additional parameters are called &lt;strong&gt;&lt;em&gt;slots&lt;/em&gt;&lt;/strong&gt; in the Lex terminology, and they are collected one at a time after a specific Lex prompt.&lt;/p&gt;

&lt;p&gt;For instance, after an utterance is detected to launch the &lt;em&gt;SearchMovies&lt;/em&gt; intent, Lex may ask the following questions to fill all the required slots:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What's the movie genre? (to fill the &lt;em&gt;genre&lt;/em&gt; slot)&lt;/li&gt;
&lt;li&gt;Do you know the name of an actor or actress with a role in that movie? (to fill the &lt;em&gt;castMember&lt;/em&gt; slot)&lt;/li&gt;
&lt;li&gt;When was the movie released? (to fill the &lt;em&gt;year&lt;/em&gt; slot)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once all the required slots have been filled, Lex tries to fulfill the intent by passing all the slot values to some business logic code that performs the necessary action — e.g., searching for matching movies in a movie database or booking a flight. As expected, AWS promotes its own technologies, so Lex has built-in support for Lambda functions, but you can also "return parameters to the client", which is the method you’ll want to use if you want to process the fulfillment in your application code (used in conjunction with the &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/API_Operations_Amazon_Lex_Runtime_Service.html"&gt;Amazon Lex Runtime Service API&lt;/a&gt;).&lt;/p&gt;
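&lt;p&gt;For a taste of what a Lambda fulfillment handler returns, here is a minimal sketch of a successful "Close" response. The field names follow the Lex (v1) Lambda response format as documented by AWS; verify them against the current docs before relying on this:&lt;/p&gt;

```javascript
// Sketch: minimal Lex (v1) fulfillment response that closes the dialog
// with a plain-text message for the user.
function buildCloseResponse(content, fulfilled) {
  return {
    dialogAction: {
      type: "Close",
      fulfillmentState: fulfilled ? "Fulfilled" : "Failed",
      message: { contentType: "PlainText", content: content }
    }
  };
}

const response = buildCloseResponse("I found 2 movies starring Angelina Jolie.", true);
console.log(JSON.stringify(response, null, 2));
```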

&lt;h3&gt;
  
  
  Demo bot scenario
&lt;/h3&gt;

&lt;p&gt;Guess what? This will be a short section since the scenario we will implement in this blog post series is exactly the "fictitious example" I described above (what a coincidence!).&lt;/p&gt;

&lt;p&gt;Indeed, we are going to build a bot allowing us to search for movies among those stored in a movie database. The data store we will use is a MongoDB database running in &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex1&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas&lt;/a&gt;, which is a good serverless fit for developers and DevOps folks who don’t want to set up and manage infrastructure.&lt;/p&gt;

&lt;p&gt;Speaking of databases, it’s time for us to deploy our movie database to &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex1&amp;amp;jmp=dev-ref"&gt;MongoDB Atlas&lt;/a&gt; before we start building our Lex bot.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data setup and exploration
&lt;/h3&gt;

&lt;p&gt;To set up the movie database, follow the instructions available in this &lt;a href="https://github.com/rlondner/mongodb-awslex-searchmovies#mongodb-atlas-setup"&gt;GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note that in order to keep the database dump file under GitHub's 100MB limit per file, the database I have included isn’t complete (for instance, it doesn’t include movies released prior to 1950 - sincere apologies to Charlie Chaplin fans).&lt;/p&gt;

&lt;p&gt;Now, let’s take a look at a typical document in this database (Mr. &amp;amp; Mrs. Smith released in 2005):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "_id" : ObjectId("573a13acf29313caabd287dd"),
    "ID" : 356910,
    "imdbID" : "tt0356910",
    "Title" : "Mr. &amp;amp; Mrs. Smith",
    "Year" : 2005,
    "Rating" : "PG-13",
    "Runtime" : "120 min",
    "Genre" : "Action, Comedy, Crime",
    "Released" : "2005-06-10",
    "Director" : "Doug Liman",
    "Writer" : "Simon Kinberg",
    "Cast" : [
        "Brad Pitt",
        "Angelina Jolie",
        "Vince Vaughn",
        "Adam Brody"
    ],
    "Metacritic" : 55,
    "imdbRating" : 6.5,
    "imdbVotes" : 311244,
    "Poster" : "http://ia.media-imdb.com/images/M/MV5BMTUxMzcxNzQzOF5BMl5BanBnXkFtZTcwMzQxNjUyMw@@._V1_SX300.jpg",
    "Plot" : "A bored married couple is surprised to learn that they are both assassins hired by competing agencies to kill each other.",
    "FullPlot" : "John and Jane Smith are a normal married couple, living a normal life in a normal suburb, working normal jobs...well, if you can call secretly being assassins \"normal\". But neither Jane nor John knows about their spouse's secret, until they are surprised to find each other as targets! But on their quest to kill each other, they learn a lot more about each other than they ever did in five (or six) years of marriage.",
    "Language" : "English, Spanish",
    "Country" : "USA",
    "Awards" : "9 wins &amp;amp; 17 nominations.",
    "lastUpdated" : "2015-09-04 00:02:26.443000000",
    "Type" : "movie",
    "Genres" : [
        "Action",
        "Comedy",
        "Crime"
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have highlighted the properties of interest to our use case. Each movie record typically includes the principal cast members (stored in a string array), a list of genres the movie can be categorized in (stored in a string array) and a release year (stored as a 4-digit integer).&lt;/p&gt;

&lt;p&gt;These are the 3 properties we will leverage in our Lex bot (which we will create in Part 2) and consequently in our Lambda function (which we will build in Part 3) responsible for querying our movies database.&lt;/p&gt;

&lt;p&gt;Storing these properties as string arrays is key to ensure that our bot is responsive: they allow us to build small, multikey indexes that will make our queries much faster compared to full collection scans (which regex queries would trigger).&lt;/p&gt;
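&lt;p&gt;To see why a multikey index helps, here is a toy in-memory version of the idea (purely an illustration with made-up documents, not MongoDB internals): each array element becomes its own index key pointing back to the containing documents, so an exact-value lookup never scans the whole collection.&lt;/p&gt;

```javascript
// Toy illustration of a multikey index: map each array element to the
// ids of the documents containing it, so exact-value lookups avoid a
// full collection scan.
function buildMultikeyIndex(docs, field) {
  const index = new Map();
  for (const doc of docs) {
    for (const value of doc[field]) {
      if (!index.has(value)) index.set(value, []);
      index.get(value).push(doc._id);
    }
  }
  return index;
}

// Hypothetical sample documents shaped like our movies collection.
const sampleMovies = [
  { _id: 1, Genres: ["Action", "Comedy", "Crime"] },
  { _id: 2, Genres: ["Action", "Adventure"] }
];
const genreIndex = buildMultikeyIndex(sampleMovies, "Genres");
console.log(genreIndex.get("Comedy")); // [ 1 ]
console.log(genreIndex.get("Action")); // [ 1, 2 ]
```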

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;In this blog post, we introduced the core concepts of Amazon Lex and described the scenario of the Lex bot we’ll create in Part 2. We then deployed a sample movie database to MongoDB Atlas, explored the structure of a typical movie document and identified the fields we’ll use in the Lambda function we’ll build in Part 3. We then reviewed the benefits of using secondary indexes on these fields to speed up our queries.&lt;/p&gt;

&lt;p&gt;I have only scratched the surface on all these topics, so here is some additional content for those of you who strive to learn more:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How &lt;a href="http://docs.aws.amazon.com/lex/latest/dg/how-it-works.html"&gt;Amazon Lex works&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MongoDB documentation on &lt;a href="https://docs.mongodb.com/manual/indexes?jmp=adref"&gt;indexes&lt;/a&gt; and &lt;a href="https://docs.mongodb.com/manual/core/index-multikey?jmp=adref"&gt;multikey indexes&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/presentations/webinar-index-tuning-and-evaluation-using-mongodb?jmp=adref"&gt;Index Tuning and Evaluation using MongoDB&lt;/a&gt; webinar by Daniel Farrell&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope this introduction to Lex has drawn enough interest for you to continue our journey with &lt;a href="https://www.mongodb.com/blog/post/aws-lex-lambda-mongodb-atlas-movie-search-app-part-2?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=lex1&amp;amp;jmp=dev-ref"&gt;Part 2&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;About the Author - Raphael Londner&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Raphael Londner is a Principal Developer Advocate at MongoDB, focused on cloud technologies such as Amazon Web Services, Microsoft Azure and Google Cloud Engine. Previously he was a developer advocate at Okta as well as a startup entrepreneur in the identity management space. You can follow him on Twitter at &lt;a href="https://www.twitter.com/rlondner"&gt;@rlondner&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>nosql</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Building a NodeJS App with MongoDB Atlas and AWS Elastic Container Service - Part 1</title>
      <dc:creator>MongoDB</dc:creator>
      <pubDate>Mon, 30 Oct 2017 00:00:00 +0000</pubDate>
      <link>https://dev.to/mongodb/building-a-nodejs-app-with-mongodb-atlas-and-aws-elastic-container-service---part-1-41fb</link>
      <guid>https://dev.to/mongodb/building-a-nodejs-app-with-mongodb-atlas-and-aws-elastic-container-service---part-1-41fb</guid>
      <description>&lt;p&gt;Building apps has never been faster, thanks in no small part to advancements in virtualization. Lately, I’ve been exploring better ways to build and deploy apps without having to worry about ongoing front-end work, such as patching Linux machines, replacing them when they fail, or even having to SSH into systems.&lt;/p&gt;

&lt;p&gt;In this two-part tutorial, I will share how to build a Node.js app with MongoDB Atlas and deploy it easily using Amazon EC2 Container Service (ECS).&lt;/p&gt;

&lt;p&gt;In part one, we'll cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using Docker for app deployment&lt;/li&gt;
&lt;li&gt;Building a MongoDB Atlas Cluster&lt;/li&gt;
&lt;li&gt;Connecting it to a Node.js-based app which allows full CRUD operations&lt;/li&gt;
&lt;li&gt;Launching it locally in a development environment.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.mongodb.com/blog/post/building-a-nodejs-app-with-mongodb-atlas-and-aws-elastic-container-service-part-2?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=container2&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;In part two&lt;/a&gt;, we'll get our app running in a Linux container on AWS by installing some tools, configuring a few environment variables, and launching a Docker cluster.&lt;/p&gt;

&lt;p&gt;By the end of this two-part tutorial, you'll see how to work with Node.js to start an application, how to build a MongoDB cluster for the app, then finally, how to use AWS to deploy that app in containers.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Quick Intro To Docker
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;(Already know about Containers &amp;amp; Docker? Skip ahead to the tutorial.)&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Docker makes it simple for developers to build, ship, and run distributed applications across different environments. It does this by helping you “containerize” your application.&lt;/p&gt;

&lt;p&gt;What's a container? The &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker website&lt;/a&gt; defines a container as "&lt;a href="https://www.docker.com/what-docker" rel="noopener noreferrer"&gt;a way to package software in a format that can run isolated on a shared operating system&lt;/a&gt;". More broadly, containers are a form of system virtualization that allows you to run an application and its dependencies in resource-isolated processes. Using containers, you can easily package application code, configurations, and dependencies into easy-to-use building blocks specifically built to run your code. This ensures that you can deploy quickly, reliably, and consistently regardless of the deployment environment, and allows you to focus more on building your app. Moreover, containers make it easier to manage microservices and support CI/CD workflows where incremental changes are isolated, tested, and released with little to no impact to production systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Getting started
&lt;/h3&gt;

&lt;p&gt;In the example we'll walk through in this post, we're going to build a simple records system using MongoDB, Express.js, React.js, and Node.js called &lt;a href="https://github.com/cefjoeii/mern-crud" rel="noopener noreferrer"&gt;mern-crud&lt;/a&gt;. We'll combine several common services on the AWS cloud platform to build a fully functional, containerized application with auto-scaling and load balancing. The core of our front end will be run on Amazon EC2 &lt;a href="https://aws.amazon.com/ecs/" rel="noopener noreferrer"&gt;Container Service (ECS)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In part one, I'll set up our environment and build the application on my local workstation. This will show you how to begin using MongoDB Atlas along with Node.js. In the second part we'll wrap everything up by deploying to ECS with coldbrew-cli, an easy ECS command line utility.&lt;/p&gt;

&lt;p&gt;Below you'll find the basics of building an application that will allow you to perform CRUD operations in MongoDB.&lt;/p&gt;

&lt;p&gt;Requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/" rel="noopener noreferrer"&gt;AWS Account&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html" rel="noopener noreferrer"&gt;IAM Access Key&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/coldbrewcloud/coldbrew-cli" rel="noopener noreferrer"&gt;coldbrew-cli&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=container1&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;MongoDB Atlas Cluster&lt;/a&gt; (M0, the free tier of Atlas, is fine)&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; (on local computer)&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/cefjoeii/mern-crud" rel="noopener noreferrer"&gt;mern-crud github repo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/git" rel="noopener noreferrer"&gt;git&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://nodejs.org" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt; installed locally with Boron LTS (v6.11.2)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon EC2 Container Service (ECS) allows you to deploy and manage your Docker instances using your AWS control panel. For a full breakdown of how it works, I recommend checking out &lt;a href="https://github.com/abby-fuller/ecs-demo" rel="noopener noreferrer"&gt;Abby Fuller's "Getting Started with ECS" demo&lt;/a&gt;, which includes details on the role of each part of the underlying Docker deployment and how they work together to provide you with easily repeatable and deployable applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Configure AWS and coldbrew-cli
&lt;/h3&gt;

&lt;h4&gt;
  
  
  AWS Environment Variables
&lt;/h4&gt;

&lt;p&gt;Get started by signing up for your AWS account and creating your Identity and Access Management (IAM) keys. You can review the required permissions &lt;a href="https://github.com/coldbrewcloud/coldbrew-cli/wiki/AWS-IAM-Policies" rel="noopener noreferrer"&gt;from the coldbrew-cli docs&lt;/a&gt;. The IAM keys will be required to authenticate against the AWS API; we can export these as environment variables in the command shell we’ll use to run coldbrew-cli commands:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID="PROVIDEDBYAWS"
export AWS_SECRET_ACCESS_KEY="PROVIDEDBYAWS"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This permits coldbrew-cli to use our user rights within AWS to create the appropriate underlying infrastructure. &lt;/p&gt;
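&lt;p&gt;Both coldbrew-cli and the AWS SDK read these credentials from the process environment. A quick pre-flight check in Node.js can confirm they're set before you deploy (this &lt;code&gt;check-aws-env.js&lt;/code&gt; helper is hypothetical, not part of the repo):&lt;/p&gt;

```javascript
// check-aws-env.js -- hypothetical pre-flight check, not part of mern-crud.
// Verifies that the AWS credentials coldbrew-cli expects are exported.
const required = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'];

function missingAwsVars(env = process.env) {
  // Return the names of any required variables that are unset or empty.
  return required.filter((name) => !env[name]);
}

const missing = missingAwsVars();
if (missing.length > 0) {
  console.error(`Missing AWS credentials: ${missing.join(', ')}`);
} else {
  console.log('AWS credentials found in environment.');
}
```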

&lt;h4&gt;
  Getting started with coldbrew-cli
&lt;/h4&gt;

&lt;p&gt;Next, we'll work with coldbrew-cli, an open source automation utility for deploying Docker containers on AWS. It removes many of the steps associated with creating our Docker images, creating the environment, and auto-scaling the AWS groups associated with our Node.js web servers. The best part of coldbrew-cli is that it's highly portable and configurable. The &lt;a href="https://github.com/coldbrewcloud/coldbrew-cli" rel="noopener noreferrer"&gt;coldbrew documentation&lt;/a&gt; states the following:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;coldbrew-cli operates on two simple concepts: applications (apps) and clusters. An app is the minimum deployment unit. One or more apps can run in a cluster, and they share the computing resources.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;We'll use this utility to simplify many of the common tasks one would have to do when creating a production ECS cluster. It will handle the following on our behalf:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creating a &lt;a href="https://aws.amazon.com/vpc/" rel="noopener noreferrer"&gt;VPC&lt;/a&gt; (you may specify an existing VPC if needed)&lt;/li&gt;
&lt;li&gt;Creating &lt;a href="http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html" rel="noopener noreferrer"&gt;Security Groups&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Setting up &lt;a href="https://aws.amazon.com/elasticloadbalancing/" rel="noopener noreferrer"&gt;Elastic Load Balancing&lt;/a&gt; &amp;amp; &lt;a href="http://docs.aws.amazon.com/autoscaling/latest/userguide/AutoScalingGroup.html" rel="noopener noreferrer"&gt;Auto Scaling Groups&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Building the &lt;a href="https://www.docker.com/what-docker" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; image&lt;/li&gt;
&lt;li&gt;Pushing the &lt;a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_Console_Repositories.html" rel="noopener noreferrer"&gt;Docker image to an ECR repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Deploying the image to containers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Done manually, all of the tasks listed above would take nontrivial effort. To install coldbrew-cli, &lt;a href="https://github.com/coldbrewcloud/coldbrew-cli/wiki/Downloads" rel="noopener noreferrer"&gt;download the CLI executable&lt;/a&gt; (coldbrew or coldbrew.exe) and put it in our &lt;code&gt;$PATH&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ which coldbrew
/usr/local/bin/coldbrew
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  Prepare our app and MongoDB Atlas
&lt;/h3&gt;

&lt;p&gt;ECS will allow us to run a stateless front end to our application. All of our data will be stored in &lt;a href="https://www.mongodb.com/cloud/atlas?utm_medium=dev-synd&amp;amp;utm_source=dev&amp;amp;utm_content=container1&amp;amp;jmp=dev-ref" rel="noopener noreferrer"&gt;MongoDB Atlas&lt;/a&gt;, the best way to run MongoDB on AWS. In this section, we’ll build our database cluster and connect our application. We can rely on coldbrew commands to manage deployments of new versions of our application going forward.&lt;/p&gt;

&lt;h4&gt;
  Your Atlas Cluster
&lt;/h4&gt;

&lt;p&gt;Building an Atlas cluster is easy and free. If you need help doing so, &lt;a href="https://www.youtube.com/watch?v=_d8CBOtadRA" rel="noopener noreferrer"&gt;check out this tutorial video&lt;/a&gt; I made that will walk you through the process. Once our free M0 cluster is built, it’s easy to &lt;a href="https://docs.atlas.mongodb.com/setup-cluster-security/" rel="noopener noreferrer"&gt;whitelist the IP addresses of our ECS containers&lt;/a&gt; so that we can store data in MongoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo6v7xbqjy3gfeynu49p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyo6v7xbqjy3gfeynu49p.png" width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I named my cluster &lt;code&gt;mern-demo&lt;/code&gt;.&lt;/p&gt;

&lt;h4&gt;
  Downloading and configuring the App
&lt;/h4&gt;

&lt;p&gt;Let's get started by downloading the required &lt;a href="https://github.com/cefjoeii/mern-crud" rel="noopener noreferrer"&gt;repository&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ git clone git@github.com:cefjoeii/mern-crud.git
Cloning into 'mern-crud'...
remote: Counting objects: 303, done.
remote: Total 303 (delta 0), reused 0 (delta 0), pack-reused 303
Receiving objects: 100% (303/303), 3.25 MiB | 0 bytes/s, done.
Resolving deltas: 100% (128/128), done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's go into the repository we cloned and have a look at the code:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc517xisuo0wgceoijiy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdc517xisuo0wgceoijiy.png" width="800" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;server.js&lt;/code&gt; app is reliant on the database configuration stored in the &lt;code&gt;config/db.js&lt;/code&gt; file. We'll use this to specify our M0 cluster in MongoDB Atlas, but we'll want to ensure we do so using user credentials that follow the &lt;a href="https://en.wikipedia.org/wiki/Principle_of_least_privilege" rel="noopener noreferrer"&gt;principle of least privilege&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  Securing our app
&lt;/h4&gt;

&lt;p&gt;Let's create a basic database user to ensure that we are not giving our application full rights to administer the database. All this basic user requires is read and write access.&lt;/p&gt;

&lt;p&gt;In the MongoDB Atlas UI, click the "Security" option at the top of the screen and then click "MongoDB Users".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F481ukhett59q0oya5pqu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F481ukhett59q0oya5pqu.png" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now click the "Add new user" button and create a user with read and write access to the namespace our data will be written to within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyyuwdhg1k41l01hdadz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmyyuwdhg1k41l01hdadz.png" width="751" height="635"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the user name (I'll use &lt;code&gt;mernuser&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Click the "Show Advanced Options" link&lt;/li&gt;
&lt;li&gt;Select the "readWrite" role, enter "merndb" in the "Database" field, and leave "Collection" blank&lt;/li&gt;
&lt;li&gt;Create a password and save it somewhere to use in your connection string&lt;/li&gt;
&lt;li&gt;Click "Add User"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We now have a user ready to access our application's MongoDB Atlas database cluster. It's time to get our connection string and start prepping the application.&lt;/p&gt;

&lt;h4&gt;
  Linking our Managed Cluster to our App
&lt;/h4&gt;

&lt;p&gt;Our connection string can be accessed by going to the "Clusters" section in Atlas, and clicking the "Connect" button.&lt;/p&gt;

&lt;p&gt;This section provides an interface to whitelist IP addresses and lists the methods available to connect to your cluster. Choose the "Connect Your Application" option, copy the connection string, and take a look at what we need to modify:&lt;/p&gt;

&lt;p&gt;The default connection string contains a few parts that we need to change:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mongodb://mernuser:demopass@mern-demo-shard-00-00-x8fks.mongodb.net:27017,mern-demo-shard-00-01-x8fks.mongodb.net:27017,mern-demo-shard-00-02-x8fks.mongodb.net:27017/merndb?ssl=true&amp;amp;replicaSet=mern-demo-shard-0&amp;amp;authSource=admin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The username and password should be replaced with the credentials we created earlier. We also need to specify the &lt;code&gt;merndb&lt;/code&gt; database our data will be stored in.&lt;/p&gt;
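&lt;p&gt;If you prefer to see the substitution in code, here's a small sketch that assembles the string from its parts (the &lt;code&gt;buildConnectionString&lt;/code&gt; helper and the values below are illustrative, not part of the repo); &lt;code&gt;encodeURIComponent&lt;/code&gt; guards against special characters in the password:&lt;/p&gt;

```javascript
// Sketch: splice our credentials and database name into the Atlas
// connection-string template. All values are placeholders for illustration;
// use your own cluster's hosts and credentials.
function buildConnectionString({ user, password, hosts, db, replicaSet }) {
  // encodeURIComponent protects against special characters in the password.
  return (
    `mongodb://${user}:${encodeURIComponent(password)}@${hosts.join(',')}/${db}` +
    `?ssl=true&replicaSet=${replicaSet}&authSource=admin`
  );
}

const uri = buildConnectionString({
  user: 'mernuser',
  password: 'demopass',
  hosts: [
    'mern-demo-shard-00-00-x8fks.mongodb.net:27017',
    'mern-demo-shard-00-01-x8fks.mongodb.net:27017',
    'mern-demo-shard-00-02-x8fks.mongodb.net:27017'
  ],
  db: 'merndb',
  replicaSet: 'mern-demo-shard-0'
});
console.log(uri);
```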

&lt;p&gt;Let's add our connection string to the &lt;code&gt;config/db.js&lt;/code&gt; file and then compile our application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7sfmnrp5n9jnur9vsgw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh7sfmnrp5n9jnur9vsgw.png" width="727" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Save the file, then import the application’s required dependencies from the root directory of the repository with the "&lt;code&gt;npm install&lt;/code&gt;" command:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej8uzr84jzri0nmr1ldu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fej8uzr84jzri0nmr1ldu.gif" width="563" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's whitelist our local IP address so we can start our app and confirm it's working locally before we deploy it to Docker. We can click "CONNECT" in the MongoDB Atlas UI and then select "ADD CURRENT IP ADDRESS" to whitelist our local workstation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4hmxxy59u0gvyyxf9qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm4hmxxy59u0gvyyxf9qm.png" width="664" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can now test the app and see that it starts as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ npm start
&amp;gt; mern-crud@0.1.0 start /Users/jaygordon/work/mern-crud
&amp;gt; node server

Listening on port 3000
Socket mYwXzo4aovoTJhbRAAAA connected.
Online: 1
Connected to the database.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's test &lt;em&gt;&lt;a href="http://localhost:3000" rel="noopener noreferrer"&gt;http://localhost:3000&lt;/a&gt;&lt;/em&gt; in a browser:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcgg82h907c2uf87zsdq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcgg82h907c2uf87zsdq.gif" width="800" height="644"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The hardest part is over; all that's left is deploying with &lt;code&gt;coldbrew-cli&lt;/code&gt;, which takes just a few commands. In &lt;a href="https://www.mongodb.com/blog/post/building-a-nodejs-app-with-mongodb-atlas-and-aws-elastic-container-service-part-2" rel="noopener noreferrer"&gt;part two&lt;/a&gt; of this series, we'll cover how to launch with coldbrew-cli and then destroy our app when we're finished.&lt;/p&gt;




&lt;p&gt;If you’re new to managed MongoDB services, we encourage you to start with our &lt;a href="https://www.mongodb.com/cloud/atlas" rel="noopener noreferrer"&gt;free tier&lt;/a&gt;. For existing customers of third party service providers, be sure to check out our migration offerings and learn about how you can get &lt;a href="https://www.mongodb.com/cloud/atlas/lp/compare/mlab" rel="noopener noreferrer"&gt;3 months of free service&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mongodb</category>
      <category>cloud</category>
      <category>database</category>
    </item>
  </channel>
</rss>
