Vinicius Fagundes
Setting Up Your Databricks Account (Free Trial + First Look at the UI)

Enough theory. Let's get you inside Databricks.

In this article we'll create your free account, take a tour of the UI, and run your very first notebook. By the end, you'll have a working Databricks environment and a feel for how everything is organized.

No credit card required.


Your Two Options: Community Edition vs Full Trial

Before we start, you need to know there are two ways to try Databricks for free:

| | Community Edition | 14-Day Free Trial |
|---|---|---|
| Cost | Free forever | Free for 14 days |
| Cloud provider | Databricks-managed | AWS, Azure, or GCP |
| Cluster size | Single-node (small) | Full cloud clusters |
| Best for | Learning and experimenting | Realistic production testing |
| Credit card needed | ❌ No | ✅ Yes |

💡 Recommendation for this series: Start with Community Edition. It's free, instant, and more than enough to follow every article in this series all the way to your first data warehouse.


Creating Your Community Edition Account

Step 1 — Go to community.cloud.databricks.com

Step 2 — Click Sign Up and fill in your details:

  • First and last name
  • Company (you can put anything here)
  • Email and password

Step 3 — On the next screen, when asked to choose a cloud provider, scroll down and look for the small link that says "Get started with Community Edition". Click that — not the cloud options.

⚠️ This step trips a lot of people up. Don't select AWS/Azure/GCP unless you want the 14-day trial. The Community Edition link is easy to miss.

Step 4 — Verify your email address. Check your inbox for the confirmation link.

Step 5 — Log in. You're in.


Choosing a Cloud Provider (For the Full Trial)

If you do go with the 14-day trial instead, here's how to pick your cloud:

| Cloud | Best if you... |
|---|---|
| AWS | Already use AWS at work, or have no preference |
| Azure | Work in a Microsoft-heavy environment |
| GCP | Are already on the Google ecosystem |

For learning purposes, it genuinely doesn't matter. The Databricks interface is nearly identical across all three.


Tour of the Databricks UI

Once you're logged in, you'll land on the Home screen. Let's walk through the main sections.

🏠 Workspace

Your personal file system inside Databricks. This is where you store notebooks, libraries, and files. Think of it like Google Drive — but for code and data.

You'll organize your work here in folders. By default you get a personal folder tied to your email.

⚡ Compute (Clusters)

This is where you create and manage clusters — the engines that run your code. No cluster = no execution.

In Community Edition you'll always use a single-node cluster. In full Databricks environments, this is where you configure worker nodes, autoscaling, and runtimes.

We'll cover clusters in depth in the next article.

🗄️ Data (Catalog)

The data explorer. This is where you browse databases, tables, and schemas. As you create Delta tables throughout this series, they'll appear here.

In full Databricks environments, this is powered by Unity Catalog — Databricks' governance layer for managing data access across teams.

🔄 Workflows

Databricks' built-in job scheduler. You define multi-step pipelines here — run notebook A, then notebook B, on a schedule or triggered by an event.

We'll use this in the later articles when we wire up our data warehouse pipeline.

🔍 SQL Editor

A dedicated SQL interface for running queries against your tables. If you come from a BI or analytics background, this will feel familiar — it behaves like any SQL client.


Key Menus at a Glance

| Menu | What you'll use it for |
|---|---|
| Workspace | Organizing notebooks and files |
| Compute | Creating and managing clusters |
| Data | Browsing tables and schemas |
| Workflows | Scheduling and running pipelines |
| SQL Editor | Writing and running SQL queries |

Your First Notebook in Under 5 Minutes

Let's make sure everything works. Here's how to create and run your first notebook:

Step 1 — Create a cluster

Go to Compute → click Create Cluster → give it a name (e.g. my-first-cluster) → click Create Cluster.

In Community Edition this takes about 2–3 minutes to start. The status will show as Pending, then Running.

Step 2 — Create a notebook

Go to Workspace → click the + icon → select Notebook.

Give it a name, choose Python as the default language, and attach it to the cluster you just created.

Step 3 — Run your first cell

In the first cell, type:

```python
print("Hello, Databricks!")

spark.version
```

Press Shift + Enter to run. You should see:

```
Hello, Databricks!
Out[1]: '3.x.x'  # your Spark version
```

If you see output — congratulations. Your cluster is running, your notebook is connected, and Spark is alive. You're ready.

Step 4 — Try a quick DataFrame

In the next cell, paste this:

```python
data = [("Alice", 30), ("Bob", 25), ("Carol", 35)]
columns = ["name", "age"]

df = spark.createDataFrame(data, columns)
df.show()
```

Output:

```
+-----+---+
| name|age|
+-----+---+
|Alice| 30|
|  Bob| 25|
|Carol| 35|
+-----+---+
```

You just created your first Spark DataFrame. It doesn't look like much yet — but this is the foundation of everything you'll build in this series.


Notebook Tips Before You Move On

A few things worth knowing early:

  • Cell types: Notebooks support Python, SQL, Scala, and R. You can mix them in the same notebook using magic commands like %sql or %scala at the top of a cell.
  • Shortcuts: Shift + Enter runs the current cell and moves to the next. Ctrl + Enter runs without moving.
  • Markdown cells: Start a cell with %md to write formatted documentation inside your notebook.
  • Auto-complete: Press Tab while typing to trigger suggestions.
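To see the magic commands from the tips above in context, a Python notebook might mix cells like this (a sketch — the "Cell" separators are just annotations, and these commands only work inside a notebook):

```
-- Cell 1 --
%md
## Notes
This cell renders as formatted Markdown documentation.

-- Cell 2 --
%sql
-- Runs as SQL, even though the notebook's default language is Python
SELECT current_date()
```

The magic command must be the first line of the cell; everything after it is interpreted in that language.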

Wrapping Up

Here's what you've done in this article:

  • Created a free Databricks Community Edition account
  • Toured the main sections of the UI: Workspace, Compute, Data, Workflows, SQL Editor
  • Created your first cluster and notebook
  • Ran your first Spark DataFrame

In the next article, we'll go deeper into clusters and notebooks — the two things you'll interact with every single day as a Databricks user.
