John Gakhokidze

Google Cloud Shell - Part 1: Configuring rclone

Google Cloud Shell (GCS) provides a weekly quota of 50 hours.
For additional limitations and restrictions of GCS, please check this article.

Besides using it for Firebase (GCP) projects and managing your cloud infrastructure, you can get much more out of it.

In Part 1:

Configuring rclone - to connect to any cloud storage of your choice (I will use Google Drive and AWS S3 as examples). For complete documentation on rclone, please check the rclone project website.

Rclone

  1. In Cloud Shell run

wget https://rclone.org/install.sh

Do not pipe it straight into bash: Cloud Shell is periodically re-provisioned and your rclone installation will be lost, so keep install.sh in your home directory and re-run it when that happens (see the sketch after this list).

  2. Run

sudo sh ./install.sh

In just a few seconds rclone is ready.
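
Since anything installed outside your home directory disappears when Cloud Shell is re-provisioned, it is worth a quick sanity check after each (re)install. A minimal sketch, assuming install.sh was kept in your home directory as described above:

```bash
# confirm rclone is on the PATH and see which version was installed
rclone version

# after Cloud Shell has been re-provisioned, re-run the saved installer
sudo sh ~/install.sh
```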

Configure Google Drive (Not Google Cloud Storage)

| Command | Comment |
| ------- | ------- |
| rclone config | |
| n) New remote | |
| name> google | |
| Storage> 13 | as of July 2020, 13 is Google Drive |
| client_id> | leave empty |
| client_secret> | leave empty |
| scope> | select the desired access level |
| root_folder_id> | leave empty |
| service_account_file> | leave empty |
| Edit advanced config? (y/n) | n, or you can configure advanced settings |
| Use auto config? | Important: select n, as Cloud Shell is headless |
| Please go to the following link: | you will be presented with a URL; open the link, copy the code, and paste it in the command line |
| Configure this as a team drive? | n |

You will be provided with a config similar to the one below:

--------------------
[google]
type = drive
scope = drive
token = SOME_TOKEN,SOME_REFRESH_TOKEN,SOME_DATE
--------------------

| Command | Comment |
| ------- | ------- |
| y) Yes this is OK (default) | hit Enter if OK |
| q) Quit config | q to quit, or n for the next configuration |

Verify your configuration:

| Command | Comment |
| ------- | ------- |
| rclone lsf google: | you should see your files on Google Drive |
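
With the remote in place you can move files between Cloud Shell and Drive right away. A small sketch of typical commands, assuming the remote is named google as above (the local file name and the backup/ folder are purely illustrative):

```bash
# list the top-level entries on the Drive remote
rclone lsf google:

# copy a local file from the Cloud Shell home directory to a Drive folder
rclone copy ~/notes.txt google:backup/

# show where rclone keeps the remote definitions
rclone config file
```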

Configure AWS S3

| Command | Comment |
| ------- | ------- |
| rclone config | |
| n) New remote | |
| name> aws | |
| Storage> 4 | as of July 2020, 4 is AWS S3 |
| provider> 1 | as of July 2020, 1 is AWS |
| env_auth> 1 | 1 / Enter AWS credentials in the next step |
| access_key_id> | your access key |
| secret_access_key> | your secret access key |
| region> | select your region |
| endpoint> | leave empty unless you want to use a specific endpoint; for more about AWS endpoints, check here |
| location_constraint> | must be the same as your selected region |
| acl> | select your preferred ACL |
| server_side_encryption> | select your preferred SSE |
| sse_kms_key_id> | KMS key ARN if KMS encryption is selected |
| storage_class> | select the desired storage class |
| Edit advanced config? (y/n) | n, or you can configure advanced settings |

You will be provided with a config similar to the one below:

--------------------
[aws]
type = s3
provider = AWS
env_auth = false
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = YOUR_REGION
location_constraint = YOUR_REGION
acl = private
storage_class = STANDARD
--------------------

| Command | Comment |
| ------- | ------- |
| y) Yes this is OK (default) | if OK, select y |

You will see your remote config:

Name    Type
====    ====
aws     s3
google  drive

| Command | Comment |
| ------- | ------- |
| q) Quit config | if everything is correct, press q |

Verify your config:

| Command | Comment |
| ------- | ------- |
| rclone ls aws:bucketname/path | you should see your files and folders |
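
With both remotes configured, Cloud Shell can shuttle data directly between Google Drive and S3 without ever downloading it to your own machine. A minimal sketch, assuming the remote names above; the my-bucket bucket and the backup/ folder are hypothetical:

```bash
# copy a Drive folder into an S3 bucket path, showing transfer progress
rclone copy google:backup aws:my-bucket/backup --progress

# preview a one-way sync first; --dry-run only reports what would change
# (a real sync also deletes destination files missing from the source)
rclone sync google:backup aws:my-bucket/backup --dry-run
```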

See more in Part 2
