ArgoCD has two types of users: local users, defined in the argocd-cm ConfigMap, and SSO users.
Below, we will cover local user management, and in the next post we will see how to integrate ArgoCD with Okta, since local users can't be organized into groups. See the documentation on the Local users/accounts page.
For any user, permissions are configured with roles, which have policies attached describing the objects the user is allowed to access and the operations they can perform.
With this, access can be configured globally for the whole instance or scoped to individual Projects.
Let's start by adding a simple local user, and then configure the rest step by step.
- Users and roles in ArgoCD
- Adding a local user
- Roles and RBAC
- ArgoCD Projects
- Projects and roles
- Global roles
- Project roles and authentication tokens
- ArgoCD groups
- Putting all together
Users and roles in ArgoCD
Adding a local user
To add a local user, edit the argocd-cm ConfigMap and add an accounts.USERNAME record:
apiVersion: v1
data:
  accounts.testuser: apiKey,login
...
With the apiKey here we've set that the user can generate JWT tokens for authentication (see Security), and with the login we allow this user to log in to the ArgoCD WebUI.
Save, and check the users' list:
$ argocd account list
NAME ENABLED CAPABILITIES
admin true login
testuser true apiKey, login
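By the way, with the apiKey capability in place, a token for a local account can later be generated with the argocd CLI, for example:
$ argocd account generate-token --account testuser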
The admin user was created during the ArgoCD instance setup, and it has no ability to use tokens. This can be enabled for it in the argocd-cm as well, although it's recommended to disable the admin user after adding all the necessary users.
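For example, once all the necessary users are added, the built-in admin can be disabled with the admin.enabled key in the same argocd-cm ConfigMap:
apiVersion: v1
data:
  # disable the built-in admin user
  admin.enabled: "false"
...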
In general, the idea is to have users for WebUI access, and project roles to issue tokens that can be used in CI/CD pipelines.
The testuser we've just added currently has no password set.
To create a password for it, you need the current admin password:
$ argocd account update-password --account testuser --new-password 1234 --current-password admin-p@ssw0rd
Password updated
Not the best solution, but it is what it is. See the Unable to change the user's password via argocd CLI discussion for details.
Now, log in with the testuser:
$ argocd login dev-1-18.argocd.example.com --username testuser --name testuser@dev-1-18.argocd.example.com
Password:
'testuser' logged in successfully
Context 'testuser@dev-1-18.argocd.example.com' updated
Check local ArgoCD CLI contexts:
$ argocd context
CURRENT NAME SERVER
admin@dev-1-18.argocd.example.com dev-1-18.argocd.example.com
* testuser@dev-1-18.argocd.example.com dev-1-18.argocd.example.com
Okay, now we are logged in as the testuser.
Roles and RBAC
By default, all new users use the policy.default from the argocd-rbac-cm ConfigMap:
$ kubectl -n dev-1-18-devops-argocd-ns get configmap argocd-rbac-cm -o yaml
apiVersion: v1
data:
  policy.default: role:readonly
...
ArgoCD has two built-in roles: role:readonly and role:admin. Also, you can set the policy.default to role:'' to disable access entirely.
At this moment, our user can view any resources:
$ argocd cluster list
SERVER NAME VERSION STATUS MESSAGE
https://kubernetes.default.svc in-cluster 1.18+ Successful
But it cannot create anything new; for example, try to add a new cluster:
$ argocd cluster add config-aws-china-eks-account@aws-china-eks-account --kubeconfig ~/.kube/config-aws-china-eks-account@aws-china-eks-account
INFO[0002] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0003] ClusterRole "argocd-manager-role" updated
INFO[0004] ClusterRoleBinding "argocd-manager-role-binding" updated
FATA[0006] rpc error: code = PermissionDenied desc = permission denied: clusters, create, https://21D***ECD.gr7.cn-northwest-1.eks.amazonaws.com.cn, sub: testuser, iat: 2021-05-12T14:03:12Z
"permission denied: clusters, create": aha, so it can't.
To allow the user to add clusters, edit the argocd-rbac-cm ConfigMap and add a new role:test-role with the clusters, create permission:
...
data:
  policy.default: role:readonly
  policy.csv: |
    p, role:test-role, clusters, create, *, allow
    g, testuser, role:test-role
...
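For reference, the p lines in the policy.csv grant permissions, and the g lines assign a user or group to a role, following this format:
p, <role/user/group>, <resource>, <action>, <object>, <allow/deny>
g, <user/group>, <role>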
Check it:
$ argocd account can-i create clusters '*'
yes
And add a cluster:
$ argocd cluster add config-aws-china-eks-account@aws-china-eks-account --kubeconfig ~/.kube/config-aws-china-eks-account@aws-china-eks-account
INFO[0001] ServiceAccount "argocd-manager" already exists in namespace "kube-system"
INFO[0002] ClusterRole "argocd-manager-role" updated
INFO[0003] ClusterRoleBinding "argocd-manager-role-binding" updated
Cluster 'https://21D***ECD.gr7.cn-northwest-1.eks.amazonaws.com.cn' added
But what to do if you want to limit access by namespaces? RBAC in the ConfigMap doesn't allow this.
For example, on my current project we have Web and Backend developers, each with their own applications deployed to different namespaces, and it's a good idea to separate their access so the Web team can't affect the Backend team's resources.
ArgoCD Projects
And here Projects come to the rescue!
Projects allow restricting access to namespaces, repositories, clusters, and so on. With them, we will be able to limit each developer team to their own namespaces only.
Let's see how this works.
During the installation, ArgoCD created the default project:
$ argocd proj list
NAME DESCRIPTION DESTINATIONS SOURCES CLUSTER-RESOURCE-WHITELIST NAMESPACE-RESOURCE-BLACKLIST SIGNATURE-KEYS ORPHANED-RESOURCES
default *,* * */* <none> <none> disabled
Projects are Kubernetes Custom Resource objects of the appproject type:
$ kubectl -n dev-1-18-devops-argocd-ns get appproject
NAME AGE
default 166d
They can be created with a regular Kubernetes manifest like this:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: example-project
  namespace: dev-1-18-devops-argocd-ns
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  destinations:
  - namespace: argo-test-ns
    server: https://kubernetes.default.svc
  orphanedResources:
    warn: false
  sourceRepos:
  - '*'
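Such a manifest is applied as any other Kubernetes resource (the file name here is just an example):
$ kubectl apply -f example-project.yaml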
Or with the ArgoCD CLI:
$ argocd proj create test-project -d https://kubernetes.default.svc,argo-test-ns -s https://github.com/argoproj/argocd-example-apps.git
Check it:
$ argocd proj list
NAME DESCRIPTION DESTINATIONS SOURCES CLUSTER-RESOURCE-WHITELIST NAMESPACE-RESOURCE-BLACKLIST SIGNATURE-KEYS ORPHANED-RESOURCES
default *,* * */* <none> <none> disabled
test-project https://kubernetes.default.svc,argo-test-ns https://github.com/argoproj/argocd-example-apps.git <none> <none> <none> disabled
Now, let's move an existing application, guestbook, deployed in the argo-test-ns namespace, to the new project:
$ argocd app set guestbook --project test-project
Check it:
$ argocd app get guestbook
Name: guestbook
Project: test-project
Server: https://kubernetes.default.svc
Namespace: argo-test-ns
URL: https://dev-1-18.argocd.example.com/applications/guestbook
Repo: https://github.com/argoproj/argocd-example-apps.git
…
Now, let’s check how the namespaces limit is working.
When we created the project above, we set its destination to https://kubernetes.default.svc,argo-test-ns.
Try to create a new application in this project, but set its namespace to argo-test-2-ns:
$ argocd app create guestbook-2 --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace argo-test-2-ns --project test-project
FATA[0001] rpc error: code = InvalidArgument desc = application spec is invalid: InvalidSpecError: application destination {https://kubernetes.default.svc argo-test-2-ns} is not permitted in project 'test-project'
"application destination {https://kubernetes.default.svc argo-test-2-ns} is not permitted in project 'test-project'": cool! With the testuser permissions in the test-project, we currently can't add an application in a new namespace, as the Project has a limit on its destinations.
Update the project and add a new destination: the same cluster, but another namespace, argo-test-2-ns:
$ argocd proj add-destination test-project https://kubernetes.default.svc argo-test-2-ns
Check the project now:
$ argocd proj get test-project
Name: test-project
Description:
Destinations: https://kubernetes.default.svc,argo-test-ns
https://kubernetes.default.svc,argo-test-2-ns
…
Try to create the app again:
$ argocd app create guestbook-2 --repo https://github.com/argoproj/argocd-example-apps.git --path guestbook --dest-server https://kubernetes.default.svc --dest-namespace argo-test-2-ns --project test-project
application 'guestbook-2' created
And now it works.
So, on our Dev Kubernetes cluster we can set the destination to '*', as namespaces there are created dynamically (developers deploy their branches to dedicated namespaces to get a multi-environment setup for testing), while on the Production cluster we can set a hard limit on the allowed namespaces.
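For the Dev cluster, such a wildcard destination could look like this in the AppProject spec (a sketch, with the in-cluster server assumed):
spec:
  destinations:
  # any namespace on the local cluster
  - namespace: '*'
    server: https://kubernetes.default.svc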
Projects and roles
Global roles
Access to projects can be set globally via the argocd-rbac-cm ConfigMap, or locally per project.
Let's go back to our global role:test-role, add a policy to allow access to applications from the test-project only, and disable the default read-only access by setting the policy.default to role:'':
...
data:
  policy.csv: |
    p, role:test-role, clusters, create, *, allow
    p, role:test-role, applications, *, test-project/*, allow
    g, testuser, role:test-role
  policy.default: role:''
...
Switch to the testuser:
$ argocd context testuser@dev-1-18.argocd.example.com
Switched to context 'testuser@dev-1-18.argocd.example.com'
Check available applications:
$ argocd app list
NAME CLUSTER NAMESPACE PROJECT STATUS HEALTH SYNCPOLICY CONDITIONS REPO PATH TARGET
guestbook https://kubernetes.default.svc argo-test-ns test-project OutOfSync Missing <none> <none> https://github.com/argoproj/argocd-example-apps.git helm-guestbook HEAD
And it's the only one listed, just as we set in the policy.
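We can also double-check that actions on applications in other projects are denied, for example on the default project (output assuming the policies above):
$ argocd account can-i sync applications 'default/*'
no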
Project roles and authentication tokens
Another way is to create a dedicated role inside a project. Then you'll be able to create a token for this role, just as for a regular user.
Create the test-role in the test-project Project:
$ argocd proj role create test-project test-role
Add a policy allowing any action on the guestbook application:
$ argocd proj role add-policy test-project test-role --action '*' --permission allow --object guestbook
The policy will be added to the RBAC rules in the project's Custom Resource:
$ kubectl -n dev-1-18-devops-argocd-ns get appproject test-project -o jsonpath='{.spec.roles[].policies}'
[p, proj:test-project:test-role, applications, *, test-project/guestbook, allow]
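The same role can also be described declaratively in the AppProject manifest, something like:
spec:
  roles:
  # a project-scoped role with its policies
  - name: test-role
    policies:
    - p, proj:test-project:test-role, applications, *, test-project/guestbook, allow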
Get a token:
$ argocd proj role create-token test-project test-role
eyJ***sCA
Or save it to a variable:
$ token=$(argocd proj role create-token test-project test-role)
And check permissions using the token, for example, whether we can sync an application:
$ argocd account can-i sync applications test-project/guestbook --auth-token $token
yes
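In a CI/CD pipeline, such a token can be passed to the argocd CLI in the same way, for example (a sketch, the server address is assumed):
$ argocd app sync guestbook --server dev-1-18.argocd.example.com --auth-token $token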
Okay, this works.
But you can't add a cluster using this role's token, as we didn't add such a permission to its policies:
$ argocd account can-i create clusters '*' --auth-token $token
no
Still, your testuser is able to do it, as it has the permission set in the argocd-rbac-cm via the test-role policy p, role:test-role, clusters, create, *, allow:
$ argocd account can-i create clusters '*'
yes
ArgoCD groups
With RBAC rules, you can map groups to roles (but not group local users), for example:
...
policy.csv: |
  g, argocd-admins, role:admin
...
Later, using SSO, we will map user groups to ArgoCD roles.
Putting all together
Okay, now that we are familiar with user management in ArgoCD, let's plan how we can use it.
What do we have?
Two teams: Web and Backend.
Every team has its own set of applications.
From the global users, we could create a root user for the DevOps team with full access, and admin and read-only users bound to a project.
In a Project, we could create a role whose token will be used in GitHub Actions/Jenkins pipelines (as sketched below).
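As a sketch, a per-team project could then look something like this (all names here are hypothetical):
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: backend-project
  namespace: dev-1-18-devops-argocd-ns
spec:
  destinations:
  # the Backend team can deploy only to its own namespaces
  - namespace: 'backend-*'
    server: https://kubernetes.default.svc
  sourceRepos:
  - '*'
  roles:
  # a role whose token will be used in CI/CD pipelines
  - name: backend-ci
    policies:
    - p, proj:backend-project:backend-ci, applications, sync, backend-project/*, allow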
Later, we will configure Okta and SSO, so users will log in to the ArgoCD WebUI with their Okta credentials, and Okta groups will be mapped to ArgoCD access: the Backend team in Okta will have access to the Backend Project in ArgoCD, and the Web team to the Web Project.
For now, that’s all.
In the following post, ArgoCD: Okta integration, and user groups, we will configure Okta with ArgoCD.
Originally published at RTFM: Linux, DevOps, and system administration.