Introduction
One of the best features of cloud services is the management API. Let's imagine you need to implement some automated tasks. For example, you want your database instance, or multiple instances, to stop at certain times, scale up, or do something else like start an on-demand backup. With an in-house deployment, you have to program everything yourself from start to finish. And believe me, I've done it. It may sound easy, but it is not.
However, virtually every cloud service comes with an API to handle all those tasks. That API is documented, aligned with what the service can do, and in some cases it also has a client SDK. AlloyDB is no exception: it has a documented API that can be used for automation. In one of my previous blogs, I wrote about how to scale up primary instances using CPU monitoring. Here, I am going to show how you can automate some other tasks.
APIs
The AlloyDB API documentation is available on the main reference page. There, you can find the reference for versions v1 and v1beta, as well as shared types such as "Date" and others.
Expanding the API reference might seem overwhelming at first with its long list of resources and types.
In reality, it’s quite straightforward. To help visualize the structure, let’s create a graph of the main AlloyDB API resources.
At a high level, the AlloyDB resource hierarchy begins with a Project, which contains one or more Locations. Nested under a specific location are Clusters and Backups; a Cluster, in turn, contains resources like Instances and Users.
All changes to these resources are done through Operations. Any request that modifies a resource, such as creating an instance or a backup, triggers an operation that you can monitor.
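As a rough sketch, the resource names follow a pattern like this (the IDs are placeholders):
projects/PROJECT_ID
└── locations/REGION
    ├── clusters/CLUSTER_ID
    │   ├── instances/INSTANCE_ID
    │   └── users/USER_ID
    ├── backups/BACKUP_ID
    └── operations/OPERATION_ID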
While this is a simplified view, it covers the basics. For this post, we’ll focus on just a few of these key resources and show how to manage them using Go and the REST API.
Cluster
Within a project, a cluster name is unique per location: you cannot have two clusters with the same name in the same region. So, to define the cluster or clusters we want to modify, we have to specify a project and a location as the root resources. In the case of an AlloyDB cluster, the location is represented by a region. You can find the location resource definition here in the reference. Here is an example of how the location resource can be defined in the code.
// List of available locations
type Locations struct {
    Locations []Location `json:"locations"`
}

type Location struct {
    Name        string `json:"name"`
    LocationId  string `json:"locationId"`
    DisplayName string `json:"displayName"`
}
...
// Get the list of all locations available in the project
locationsURL := fmt.Sprintf("%s/projects/%s/locations", apiURL, project)
resp, err := client.Get(locationsURL)
if err != nil {
    return nil, fmt.Errorf("failed to get all locations for project %s: %v", project, err)
}
...
// Parse the list of locations from the response body
locations := Locations{}
err = json.Unmarshal(locationsListBody, &locations)
if err != nil {
    return nil, fmt.Errorf("failed to unmarshal all locations: %v", err)
}
The examples in this post use the Go language to make direct HTTP requests to the API. You can find the full source code for the Cloud Run function here.
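The client used in these snippets has to attach an OAuth token to every request. Here is a minimal sketch of how such a client could be built with Application Default Credentials; the apiURL constant and the newClient helper are illustrative names, not necessarily what the repository uses.
import (
    "context"
    "fmt"
    "net/http"

    "golang.org/x/oauth2/google"
)

// apiURL points at the AlloyDB REST endpoint used throughout the snippets.
const apiURL = "https://alloydb.googleapis.com/v1beta"

// newClient returns an HTTP client that injects an OAuth2 token from
// Application Default Credentials into every request it makes.
func newClient(ctx context.Context) (*http.Client, error) {
    client, err := google.DefaultClient(ctx, "https://www.googleapis.com/auth/cloud-platform")
    if err != nil {
        return nil, fmt.Errorf("failed to create authenticated client: %v", err)
    }
    return client, nil
}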
Once we know the location, we can define our cluster using the cluster name as a parameter. To target all clusters in the project, we use the alias ALL, which tells the function to substitute "-" in the request URL so the request applies to all clusters.
Our sample code focuses on instance management, so we don’t explicitly define a Cluster type (or struct) in Go. Instead, we simply use the cluster’s name directly in the request URL to target the correct resources.
However, if you were performing actions on the cluster itself, like creating a new one, you would need to define that Cluster type in your code to properly structure the API request. You can find the full description of the cluster API resource in the documentation.
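As a rough sketch, assuming you only need the fields for a basic create request, such a type could look like the one below; the field names follow the Cluster resource reference, but check the documentation for the full schema before relying on them.
// Cluster holds a minimal subset of the AlloyDB Cluster resource,
// enough to build the body of a create request.
type Cluster struct {
    Name          string         `json:"name,omitempty"`
    DisplayName   string         `json:"displayName,omitempty"`
    NetworkConfig *NetworkConfig `json:"networkConfig,omitempty"`
    InitialUser   *UserPassword  `json:"initialUser,omitempty"`
}

// NetworkConfig points the cluster at a VPC network.
type NetworkConfig struct {
    Network string `json:"network,omitempty"`
}

// UserPassword sets the initial database user credentials.
type UserPassword struct {
    User     string `json:"user,omitempty"`
    Password string `json:"password,omitempty"`
}
A create request would then POST this struct, JSON encoded, to the clusters collection URL with a clusterId query parameter.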
Instance
Each cluster can have one or more instances: one instance is the primary, and the rest belong to one of the read pools. Some operations, like backup, require the primary instance to be available. To build a proper request body for an instance, we define it as a type (struct) in the Go code.
Here is how we would define the instances in the code.
// List of AlloyDB instances from API response
type Instances struct {
    Instances []Instance `json:"instances"`
}

// A single instance from the list
type Instance struct {
    Name         string `json:"name"`
    DisplayName  string `json:"displayName"`
    Uid          string `json:"uid"`
    CreateTime   string `json:"createTime"`
    UpdateTime   string `json:"updateTime"`
    DeleteTime   string `json:"deleteTime"`
    State        string `json:"state"`
    InstanceType string `json:"instanceType"`
    Description  string `json:"description"`
    IpAddress    string `json:"ipAddress"`
    Reconciling  bool   `json:"reconciling"`
    Etag         string `json:"etag"`
}
We have two types: one for the instance itself and the other for a list of instances, where each member is a single instance. This helps when we want to perform an operation on all instances of a cluster, as the sketch below shows.
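To illustrate how these types work together with the "-" wildcard mentioned earlier, here is a minimal sketch of listing every instance in every cluster of a location. It reuses the apiURL constant and authenticated client from the earlier sketch and assumes the standard encoding/json, fmt, io, and net/http imports; the listInstances helper itself is illustrative, not the exact function from the sample repository.
// listInstances fetches every instance in every cluster of the given
// project and location by using "-" as the cluster name wildcard.
func listInstances(client *http.Client, project, location string) (*Instances, error) {
    url := fmt.Sprintf("%s/projects/%s/locations/%s/clusters/-/instances", apiURL, project, location)
    resp, err := client.Get(url)
    if err != nil {
        return nil, fmt.Errorf("failed to list instances: %v", err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read instances response: %v", err)
    }

    instances := &Instances{}
    if err := json.Unmarshal(body, instances); err != nil {
        return nil, fmt.Errorf("failed to unmarshal instances: %v", err)
    }
    return instances, nil
}
Now, let's talk about operations, or what we can do with an instance.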
Operations
What can we do with an instance? We can create, delete, change shape, and, more recently, start and stop them. You can read about starting and stopping instances in one of my previous blogs. For a complete list of all available methods, please refer to the reference documentation.
To illustrate, let’s delete an instance using curl. This is done by sending a DELETE request to the instance URL.
For this example, we’ll assume the following details:
- Project ID: test-project-123
- Region: us-central1
- Cluster Name: my-cluster
- Instance Name: my-instance
Given these parameters, the request would look like this:
curl -X DELETE -H "Authorization: Bearer $(gcloud auth print-access-token)" https://alloydb.googleapis.com/v1beta/projects/test-project-123/locations/us-central1/clusters/my-cluster/instances/my-instance
You may have noticed that I used an OAuth token to authenticate the request. That command works if you have the gcloud SDK on your machine and are authenticated to Google Cloud.
When you send such a request, it returns the ID of the operation triggered by that request. You can monitor the operation status with a GET request to the operations endpoint.
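Here is a hedged sketch of how that status check could look in Go, assuming the same client and imports as before; the Operation struct only captures the fields needed to know whether the operation has finished.
// Operation captures the minimal fields of a long-running operation
// returned by the AlloyDB API.
type Operation struct {
    Name string `json:"name"`
    Done bool   `json:"done"`
}

// getOperation fetches the current state of an operation by its full
// resource name (projects/.../locations/.../operations/...).
func getOperation(client *http.Client, opName string) (*Operation, error) {
    resp, err := client.Get(fmt.Sprintf("%s/%s", apiURL, opName))
    if err != nil {
        return nil, fmt.Errorf("failed to get operation %s: %v", opName, err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return nil, fmt.Errorf("failed to read operation response: %v", err)
    }

    op := &Operation{}
    if err := json.Unmarshal(body, op); err != nil {
        return nil, fmt.Errorf("failed to unmarshal operation: %v", err)
    }
    return op, nil
}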
It’s also worth mentioning the failover operation, which is useful for managing High Availability (HA) instances by allowing you to switch between zones.
There are other AlloyDB resources, such as backups and users, which you can include in your tool and automate, but for now I want to focus on how you initiate these operations.
Cloud Function
Technically you could use Google Cloud Scheduler to send HTTPS requests directly to AlloyDB API endpoints, but this approach has limitations. One such limitation is that you often don't know the exact name of a resource in advance.
For example, if you want to stop all AlloyDB instances in a project, you first need to query the API to get a list of those instances before you can send a ‘stop’ request for each one. The same is true for managing backups, where you must know a backup’s unique name to interact with it.
A more flexible solution is to use a serverless function (like Cloud Functions or Cloud Run) that is triggered by a Pub/Sub message. The message payload can dynamically specify the desired action, the target resources, and any other parameters you need.
Here is an example of what such a message payload might look like:
{
"project": "test-project-123",
"location": "us-central1",
"operation": "STOP",
"cluster": "my-cluster",
"instance": "my-instance",
"retention": 0
}
This message initiates a STOP operation on the my-instance instance of the my-cluster AlloyDB cluster in the us-central1 region.
The retention field here is for a future implementation of backup management, where you can specify a retention period for your manual backups.
In the code, the message would be represented by structs like the following:
type PubSubMessage struct {
Data []byte `json:"data"`
}
type Parameters struct {
Project string `json:"project"`
Location string `json:"location"`
Operation string `json:"operation"`
Cluster string `json:"cluster"`
Instance string `json:"instance"`
Retention int `json:"retention"`
}
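Because the Data field is declared as []byte, Go's encoding/json decodes the base64 payload of the Pub/Sub envelope automatically, and a second Unmarshal turns it into the Parameters struct. A simplified sketch of that step, with the surrounding event plumbing omitted, might look like this; the handleMessage name is illustrative.
// handleMessage extracts the automation parameters from a Pub/Sub message.
func handleMessage(msg PubSubMessage) (*Parameters, error) {
    params := &Parameters{}
    if err := json.Unmarshal(msg.Data, params); err != nil {
        return nil, fmt.Errorf("failed to unmarshal parameters: %v", err)
    }
    return params, nil
}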
When we create a function, we specify an Eventarc trigger that will invoke the function whenever a Pub/Sub message is published to a topic. The web console interface allows us to create the Pub/Sub topic at the same time we define the trigger for the function.
After defining all the metadata for the function, we can add the source code. As a reminder, the sample code is available for download from GitHub.
Now, whenever you publish a message to the alloydb-mgmt-topic using the JSON format discussed earlier, you can start, stop, scale, or delete a specific instance or all instances in a project. This can also be combined with monitoring to, for example, scale an instance up or down, as was described in one of the previous blogs.
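For a quick test, you can publish such a message from any machine that has access to the topic. Here is a small sketch using the Go Pub/Sub client; the topic name alloydb-mgmt-topic and the sample parameters come from the examples above, while everything else is illustrative.
package main

import (
    "context"
    "log"

    "cloud.google.com/go/pubsub"
)

func main() {
    ctx := context.Background()

    // Connect to Pub/Sub in the target project.
    client, err := pubsub.NewClient(ctx, "test-project-123")
    if err != nil {
        log.Fatalf("failed to create pubsub client: %v", err)
    }
    defer client.Close()

    // Publish the management command as a JSON payload.
    payload := []byte(`{"project":"test-project-123","location":"us-central1","operation":"STOP","cluster":"my-cluster","instance":"my-instance","retention":0}`)
    result := client.Topic("alloydb-mgmt-topic").Publish(ctx, &pubsub.Message{Data: payload})

    // Block until the server acknowledges the message.
    id, err := result.Get(ctx)
    if err != nil {
        log.Fatalf("failed to publish message: %v", err)
    }
    log.Printf("published message %s", id)
}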
Summary
The AlloyDB service in Google Cloud gives you a great start, with automated features that require minimal management. However, as your business grows and requires unique features, the AlloyDB API gives you the ability to manage and automate all kinds of tasks based on your requirements.
Try the sample function code with the REST API and HTTP requests, and also check the examples in the previously published blogs about start and stop automation and vertical autoscaling. Google also provides a Go SDK for AlloyDB, along with SDKs for other languages. Please try the code and let me know if you would like to see a version of the sample function based on the Go SDK.




