DEV Community

Ruben Orduz

Originally published at Medium

Accelerating Application Development with Neurelo, Part II

Generated with DALL-E 3

Recap

In Part I of this series, we delved into the functionality and features that Neurelo offers at a high level. These capabilities empower developers and teams to accelerate their application development by allowing them to concentrate on code and engineering while delegating the management of their data layer and data-based API to Neurelo.

Objectives

In this installment of the blog series, we will explore those features in further detail and more concretely. Think of it as a mix of how-to and quick-start guide. By the end of this post, I hope you will be acquainted with Neurelo's features and how they work.

Summary

If you are reading this, we can assume that either you or your team is interested in adopting Database Abstraction-aaS. Perhaps the concept has piqued your interest, and you wish to learn more, or you are ready to give Neurelo a spin. Whatever the case may be, in this post, we’ll go through some of the features mentioned in part I and show you concretely how to set these up and make use of them in your app as quickly as possible. Alright, let’s get started.

Side Note: For transparency, it's worth noting that while this blog series is sponsored by Neurelo, they have explicitly encouraged me to provide my unfiltered opinions and expertise, without any specific directives on branding, tone, or voice. Their influence is limited to ensuring accuracy and providing editorial feedback.

First of all, you should create an account. As of this writing, Neurelo is onboarding new customers via its waitlist. Below is a walkthrough of its web application and how you can get your project onboarded quickly.

Dashboard

Whenever you log into the Neurelo web app, the dashboard will be your landing page. Here, at a glance, you will see all your projects and members of your organization.

Projects

In Neurelo, projects are the top-level entity that encompasses all the data and API aspects of your app, including schema definitions, API documentation, data sources, environments, and other important settings. More information about this will be provided later.

For now, though, let’s walk through setting up your first project.

  1. In the dashboard, click on Projects in the left navigation bar.
  2. Click on the Create Project button. This will open the following modal window:

Here, you can assign a name to your project, select the Database Engine, decide whether to enable migrations, specify the language your application is written in, and provide a free-form description. The most crucial field is Database Engine: as of this writing, it determines the type of database engine you can choose for this project's data sources.

  3. Click on Create.

After creation you will be presented with the following screen:

For the time being and for the sake of simplicity, let’s click on Empty Project — even if you have an existing data source.

Once you have created the project you should be able to click on it in the Dashboard. Since it’s a blank project, it will look like this:

Here are a few important things to highlight, as there’s a lot happening on this screen. First, there’s the top navigation bar (Definitions, Environments, Data Sources, and Settings). Within the Definitions tab, in the editor-looking widget below it, we see a left navigation bar containing Schema, Custom Queries, and API documentation. To the right, there’s a content display window, which is currently empty, along with several toggles and tools in the toolbar immediately above it (some of these will be used later). However, an empty project isn’t particularly interesting, so let’s add a data source to begin exploring some of Neurelo’s most useful features.

Data Sources

Data sources serve as the heart and soul of a project. In Neurelo, they are the primary representation and abstraction of your database(s). For this step, it’s crucial to know the following in advance: the backing database must be Internet-routable and accessible (you may need to add inbound firewall rules), and it must be of a supported type (fortunately, Neurelo supports most of the top-used database engines). Additionally, you will need the appropriate credentials.
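Since the backing database must be reachable over the Internet, it can save time to verify basic connectivity before filling in the form. Below is a minimal sketch in Python; the host and port are placeholders for your own database (5432 is PostgreSQL's default), and this only checks TCP reachability, not credentials or any firewall rules.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refusal, and timeout all land here
        return False

# Placeholder host/port: substitute your database's actual address.
print(is_reachable("db.example.com", 5432))
```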

  1. In the project's top navigation bar, click on Data Sources, as shown below.

  2. Upon clicking, a modal window will appear.

  3. Add the appropriate name, host, and credential information.

  4. Take note of the IP addresses given here, as you may need to include them in your firewall ingress rules.

  5. (Recommended) Click on Test Connection.

  6. If the connection test is successful, click on Submit.

Definitions

Once you have successfully added a data source, let’s return to the Definitions tab (in the project’s top navigation bar). Although we have just added a data source, this area should still be blank because Neurelo is still unaware of what is in your data source. So, let’s run an introspection — in this context, introspection means Neurelo will examine your data source and extract all relevant metadata, including schema, structure, relationships, data types, etc., but not the actual data contents. To do so:

  1. In the editor widget, on the top right side, next to the JSON/YAML toggle, click on the microscope icon in the toolbar (as shown below).

  2. A modal window will appear, prompting you to select which data source you wish to introspect. Select the one we just added in the previous step.

  3. Click on Start.

After a few brief moments, you will be presented with a screen as shown below. The reader may notice that it resembles a diff tool — and it is! Neurelo’s built-in revision system operates much like a code repository and version control system, such as Git. In any case, in this initial introspection, it’s safe to click on Continue.

Assuming everything has gone according to plan, you should see your Definitions screen populated with the schema and relationship representation in JSON. You may inspect it to ensure accuracy. If everything looks correct, in the lower right corner of the editor widget, click on Commit. You will be prompted to add a commit name and message.

API Documentation

After those few steps, we begin to reap the benefits of the Neurelo platform. In the Definitions section, in the editor widget's left navigation pane, click on API Documentation. Please note that it takes a few seconds to generate. Aside from the Introduction and Authentication sections, there are two important sections to pay particular attention to: Objects and Operations. Objects describe the entities in the database as API objects, and Operations, as the name suggests, are the auto-generated API operations available on those objects. It's worth noting that Neurelo also offers custom operations and API endpoints, but this topic will be covered in a later installment of this series.

You can click on the collapsible Operations menu to expand it. Here you will find all the purpose-built API operations Neurelo inferred from your data model: not just CRUD, but even advanced read/write operations such as joins and aggregations when such inference is possible. These operations are available for REST or GraphQL right out of the box, with no extra code or actions needed. You can select whichever you are using in your application in the top right corner of the editor widget. You may also download them as a Postman collection or the OpenAPI spec, but this is entirely optional.
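If you do download the OpenAPI spec, you can enumerate the generated operations programmatically. Here is a minimal sketch; the spec fragment below is made up for illustration, and the real paths and summaries come from your own schema.

```python
import json

# Made-up stand-in for a downloaded OpenAPI spec; the real file describes
# the operations Neurelo generated from your schema.
spec = json.loads("""
{
  "paths": {
    "/rest/actor":      {"get":  {"summary": "Find many actor"},
                         "post": {"summary": "Create one actor"}},
    "/rest/actor/{id}": {"get":  {"summary": "Find actor by id"}}
  }
}
""")

# Flatten the paths object into "METHOD path" operation strings.
ops = [
    f"{method.upper()} {path}"
    for path, methods in spec["paths"].items()
    for method in methods
]
print("\n".join(ops))
```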

Environments

So far, the definitions and APIs exist only within the confines of your project. To make use of the APIs, you need to deploy them. In Neurelo, environments are the means to deploy your data APIs so that they can be used in your application. You can think of them as a deployment unit consisting of a data source and an API/Definitions commit. Neurelo takes care of all the operational and management aspects. So, let’s create an environment for the API commit we created above.

  1. In the project overview, click on Environments in the upper navigation bar.
  2. Click on Create Environment, and a modal window like the one shown below will appear.

  3. Select the desired commit and data source you wish to deploy.

  4. Click on Create.

Once created you will see something like this

From this point, click on the Start Runners button and wait 30 seconds to a minute for the red dot to the left of the environment name to change to green. If it hasn't done so automatically in that time, you may refresh the page. After the environment is running, let's get an access token so that we can access the API.

  1. Click on the Access Tokens link as shown above. (alternatively, you can click on the environment, then click on Environment Settings in the top navigation)
  2. Click on Create Access Token
  3. Give it a name and click Create.
  4. This will generate a token. Make sure you note it down and store it somewhere safe. If you lose it, you will have to create a new one, as there is no way to recover the old one.
  5. Click on Close.

By this point you are ready to go and your data API is live.

Testing the API

To ensure that everything is working as expected, you should test the API. I'm choosing the REST interface, as it's the most straightforward. In this example, I'm using an API client called Insomnia, but you may use Postman or even good old GNU curl.

The first bit of information we need is the API URL, which can be found in the environment view. It looks as follows:

Note this will be the root URL for all of your API calls in this environment.

We also need to set the X-API-KEY header, with its value set to the access token you generated in a previous step.
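In code, that amounts to attaching the header to every request. Here is a sketch with Python's standard library; the host, token, and the /rest/actor endpoint are placeholders, and the request is only constructed here, not sent.

```python
import urllib.request

API_ROOT = "https://your-environment-host.neurelo.com"  # placeholder root URL
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # the token generated in the previous step

# Build a GET request carrying the X-API-KEY header; urlopen(req) would send it.
req = urllib.request.Request(
    f"{API_ROOT}/rest/actor",  # illustrative endpoint, not from Neurelo's docs
    headers={"X-API-KEY": ACCESS_TOKEN},
)
print(req.full_url)
```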

Then you can try any of the API operations, which you can find in the Definitions > API Documentation > Operations section. For the sake of simplicity, I'm choosing the simplest list operation in my definitions, using a query parameter to limit the result set to 25 records. It looks as follows:

Now let’s grab the id of one of the actors in this list and try to fetch its details. And the results are as shown below.
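The two requests above, the limited list and the fetch-by-id, can be sketched together with the standard library. The root URL, the object name (actor), and the take limit parameter are assumptions; adapt them to your own environment and generated API documentation.

```python
import urllib.request
from urllib.parse import urlencode

API_ROOT = "https://your-environment-host.neurelo.com"  # placeholder
HEADERS = {"X-API-KEY": "YOUR_ACCESS_TOKEN"}            # placeholder token

def build_request(path, params=None):
    """Construct (but do not send) a GET request against the environment."""
    url = API_ROOT + path
    if params:
        url += "?" + urlencode(params)
    return urllib.request.Request(url, headers=HEADERS)

# List operation, capped at 25 records (parameter name is illustrative):
list_req = build_request("/rest/actor", {"take": 25})

# Detail operation for a single record by id:
detail_req = build_request("/rest/actor/42")

# Sending one and decoding the JSON payload would look like:
#   import json
#   with urllib.request.urlopen(detail_req) as resp:
#       actor = json.load(resp)
print(list_req.full_url)
print(detail_req.full_url)
```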

Conclusion

In this installment, we saw how straightforward it is to get your data and API onboarded to Neurelo. We explored some of the magic sauce behind generating suitable APIs (both REST and GraphQL) and observed how quick it is to deploy and use them. In a future installment, we will explore how to integrate Neurelo into your app, with concrete examples and patterns to follow. In the next installment, we will see how to customize the auto-generated API if you need complex queries or data payloads that aren't inferred and generated automatically.

If you’re interested in learning more about Neurelo, hop on their Discord and join the conversation!
