Admin API - Group
Basic Administration using APIs
Setup CMC admin password
Note
Before we get started with the API labs, let's log in via the CMC and set the default CMC Administrator password.
Point your browser to https://cmc.studentX.cloudian.tech:8443/Cloudian
Where X is your student number.
Instructions
- Group Name: System Admin
- User ID: admin
- Password: public
Instructions
- Enter the current password (public)
- Enter the new password. Use @StudentX as the new password, where X is your student number.
- Click Save
Instructions
While we are logged in, let's also enable QOS so that we can set and test QOS limits later.
- Select Cluster Tab
- Select Cluster Config Sub-Tab
- Select Configuration Settings
- Select Quality of Service to expand.
- Click Edit on both options and enable them.
- Click Save
Info
You can run the following labs from any machine and network with reachability to the admin endpoint, provided curl and Python are installed. For our labs we will use the iDP server to make Admin API calls. First, however, we need to retrieve the sysadmin password for the cluster.
Instructions
Let's get the Admin API password and assign it to an environment variable in .bash_profile for ease of access. Take care not to expose the password in production, as that could allow unintended access.
- SSH into your Config / Installer node (Node 1) with the root account
- Get the admin API password using the hsctl command
hsctl config get admin.auth.password
Copy the output of the command shown above. This is your Admin API plain-text password.
Instructions
- Login to your iDP Server (IP and credentials in your student worksheet)
- Save the Admin API password as an environment variable. Ensure that you replace Admin Password in the command below with the password you copied in the previous instructions.
echo "export auth_pass=Admin Password" >> ~/.bash_profile && source ~/.bash_profile
- Check that the auth_pass variable has been set correctly
echo $auth_pass
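If you would rather not persist the plain-text password in .bash_profile, a minimal alternative (a sketch, not a required lab step) is to export it only for the current shell session:
# Prompt for the password without echoing it, and export it for this session only
read -s -p "Admin API password: " auth_pass && export auth_pass && echo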
Instructions
- Obtain your Admin API endpoint from your Student Assignment
- Save your admin endpoint in your bash profile, replacing Admin Endpoint in the command below with your own endpoint
echo "export admin_endpoint=Admin Endpoint" >> ~/.bash_profile && source ~/.bash_profile
- Check that the admin endpoint has been set
echo $admin_endpoint
The output should look like the following, where 'X' is your student number:
s3-admin.studentX.cloudian.tech
You are now ready to issue Admin API calls from the iDP server.
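As an optional sanity check (not a lab step), you can confirm that both variables work by requesting the group list, a call we will use again later in this lab, and checking for an HTTP 200 response:
# Should print 200 if the endpoint and credentials are correct
curl -s -o /dev/null -w "%{http_code}\n" -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/group/list"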
Create Group (shell)
Info
Most of these API calls require a JSON-formatted payload. Let's have a look at the payload required to create a new Group.
Instructions
- Make a new directory for the payload file
mkdir ~/admin_api
- Create a new JSON payload file:
echo '{ "active": "true", "groupId": "Engineering", "groupName": "Engineering Group", "ldapEnabled": false, "ldapGroup": "", "ldapMatchAttribute": "", "ldapSearch": "", "ldapSearchUserBase": "", "ldapServerURL": "", "ldapUserDNTemplate": "", "s3endpointshttp": ["ALL"], "s3endpointshttps": ["ALL"], "s3websiteendpoints": ["ALL"] } ' > ~/admin_api/create_group.txt
Info
For this lab, we are creating a new HyperStore group with Engineering as the groupId and Engineering Group as the groupName. Future labs will continue to reference the "Engineering" groupId for other exercises.
Instructions
- Make the admin API request
Any errors will be shown in the output; no output means success.
curl -X PUT -H "Content-Type: application/json" -k -u sysadmin:$auth_pass -d @/home/training/admin_api/create_group.txt https://$admin_endpoint:19443/group
- We can check that the group was created with the following command
curl -s -X GET -k -u sysadmin:$auth_pass https://$admin_endpoint:19443/group/list | python -mjson.tool | grep groupId
- Check that the "Engineering" group appears in the output.
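The Admin API can also return a single group's full profile. Assuming GET /group accepts a groupId query parameter (check your HyperStore Admin API reference for the exact form), the Engineering record could be fetched like this:
# Fetch the full profile of the Engineering group (groupId parameter assumed)
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/group?groupId=Engineering" | python -mjson.tool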
Create User (shell)
Instructions
- SSH into your iDP Server as the training user if you have not already
- Create the JSON payload
echo '{ "active": "true", "address1": "", "address2": "", "city": "", "country": "", "emailAddr": "", "fullName": "Engineering Student", "groupId": "Engineering", "ldapEnabled": false, "phone": "", "state": "", "userId": "engineer1", "userType": "User", "website": "", "zip": "" }' > ~/admin_api/create_user.txt
Instructions
- Create the new user from the payload using the Admin API.
curl -X PUT -H "Content-Type: application/json" -k -u sysadmin:$auth_pass -d @/home/training/admin_api/create_user.txt https://$admin_endpoint:19443/user
Info
This creates a user who is a member of the Engineering group; however, this user cannot log in to the CMC yet because the user password has not been set. This may be a good thing in environments where there is no requirement for the user account to access the CMC.
Instructions
- Let's get the S3 credentials for the newly created user.
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/user/credentials/list/active?userId=engineer1&groupId=Engineering" | python -mjson.tool
Important
You can see the S3 access key and secret key for this new user (take a copy of these for later)
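If you prefer not to copy the keys by hand, a short Python one-liner can pull them out of the response. This is a sketch that assumes the response is a JSON array whose entries carry accessKey and secretKey fields; adjust the field names to match your output:
# Print just the access key and secret key of the first active credential
# (accessKey / secretKey field names are an assumption - check your output)
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/user/credentials/list/active?userId=engineer1&groupId=Engineering" | python -c 'import json,sys; c=json.load(sys.stdin)[0]; print(c["accessKey"]); print(c["secretKey"])'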
Info
If you want this new user to have access to the CMC, you will need to create a password for the user.
Passwords must meet the following conditions by default (a quick check is sketched after this list):
- Minimum of nine characters, maximum of 64 characters
- At least one lower-case letter
- At least one upper-case letter
- At least one number
- At least one special character such as !, @, #, $, %, ^, etc.
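As a purely local illustration (not a HyperStore tool), the following bash check verifies that a candidate password satisfies these default rules; pw holds the example password used in the next step:
# Check a candidate password against the default CMC rules listed above
pw='@Engineer1'
if [[ ${#pw} -ge 9 && ${#pw} -le 64 && "$pw" =~ [a-z] && "$pw" =~ [A-Z] && "$pw" =~ [0-9] && "$pw" =~ [^a-zA-Z0-9] ]]; then
  echo "password meets the default rules"
else
  echo "password does not meet the default rules"
fi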
Instructions
- Set a CMC password for the engineer1 user as @Engineer1
curl -X POST -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/user/password?userId=engineer1&groupId=Engineering&password=@Engineer1"
Note
This command is also useful if you forget the admin password for the CMC, as you can change the admin password using the same method with userId=admin&groupId=0, as shown in the example below.
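For example, resetting the CMC admin password would look like the following, where NewPassword is only a placeholder; choose a password that meets the rules above:
# Reset the CMC admin password (NewPassword is a placeholder, not a value from this lab)
curl -X POST -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/user/password?userId=admin&groupId=0&password=NewPassword"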
We can now log in to the CMC using the new engineer1 credentials.
Info
- Ensure Group Name is set to Engineering
- Ensure User ID: is engineer1
- Ensure Password: is @Engineer1
Note
You will see that although you can log in as engineer1, there is a warning letting you know there are no Storage Policies yet.
Create Storage Policy (shell)
Instructions
- If you have not already, SSH to your iDP Server as the training user.
- Create the JSON payload for RF4 storage policy
echo '{ "compressionType": "NONE", "consistencyLevel": { "metadataR": [ "QUORUM" ], "metadataW": [ "QUORUM" ], "read": [ "QUORUM" ], "readEC": [ "QUORUM" ], "write": [ "QUORUM" ], "writeEC": [ "QUORUM" ], "writeNew": [] }, "defaultProtectionScheme": "HSFS", "ecscheme": null, "elasticsearchEnabled": false, "encryptionEnabled": false, "encryptionType": "NONE", "groups": [], "policyDescription": "4 Replicas", "policyId": "", "policyName": "RF4", "region": "", "replicationScheme": { "DC1": "4" }, "sizeThreshold": 0, "status": "ACTIVE", "systemDefault": true }' > ~/admin_api/create_policy.txt
Instructions
- Issue the Admin API call to create the policy from the JSON payload
curl -s -X PUT -H "Content-Type: application/json" -k -u sysadmin:$auth_pass -d @/home/training/admin_api/create_policy.txt https://$admin_endpoint:19443/bppolicy | python -mjson.tool
Info
The key elements of the payload are as follows (they may appear in a different order than above):
- consistencyLevel: You can set the consistency level (CL) of the policy here; in our example we are setting all to QUORUM.
- defaultProtectionScheme: "HSFS" denotes that this is a Replication Policy.
- policyName: "RF4" is the policy name. This is just a label and does not enforce any sort of data protection based on the name.
- replicationScheme: This is where you state the protection required. In this example, we are creating 4 copies in DC1 (a variant is sketched after this list).
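As an illustration only (not a lab step), a 3-replica variant of the same policy could be derived from the existing payload by changing just the name, description and replica count; systemDefault is also switched off here since the RF4 policy above already claims it:
# Hypothetical RF3 payload derived from the RF4 one - illustration only, not submitted in this lab
sed -e 's/"RF4"/"RF3"/' -e 's/"4 Replicas"/"3 Replicas"/' -e 's/"DC1": "4"/"DC1": "3"/' -e 's/"systemDefault": true/"systemDefault": false/' ~/admin_api/create_policy.txt > ~/admin_api/create_policy_rf3.txt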
Instructions
- Check that the RF4 Storage Policy has been created in the CMC and is in an ACTIVE state. You will need to log in to the CMC as the admin user to view the Storage Policies tab.
Create QOS for user (shell)
Instructions
- If you have not already, SSH to your iDP Server as the training user.
- First, let's ensure there are no QOS limits for engineer1
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering" | python -mjson.tool
- A value of -1 denotes that there is no QOS limit for that metric.
Instructions
- Set the QOS storage quota (storageQuotaKBytes) for engineer1 to 10240 kibibytes
curl -s -X POST -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering&storageQuotaKBytes=10240&storageQuotaCount=-1&wlRequestRate=-1&hlRequestRate=-1&wlDataKBytesIn=-1&hlDataKBytesIn=-1&wlDataKBytesOut=-1&hlDataKBytesOut=-1"
- We have to supply a value for ALL limits even if we are only changing one. We can check that the limit has been updated using the previous GET command
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering" | python -mjson.tool
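For context, 10240 KiB is exactly 10 MiB, so two 5 MiB objects will completely fill this quota; a quick bit of shell arithmetic makes the numbers in the next exercise easier to follow:
# 5 MiB expressed in KiB
echo $((5 * 1024))
# two 5 MiB objects together: 10240 KiB, which is not strictly below the 10240 KiB quota
echo $((2 * 5 * 1024))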
Test QOS (shell)
Instructions
- Upload a 5 MiB file to myrf4bucket
- Ensure that the upload fails. It fails because we already uploaded a 5 MiB object into engineerbucket. We have set a QOS limit of 10 MiB for the engineer1 user's buckets, which means the total amount of data stored for the whole user must be less than 10 MiB, and 5 MiB + 5 MiB is not less than 10 MiB. If we want to allow up to and including 10 MiB we must allow at least 1 KiB more in the QOS setting.
curl -s -X POST -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering&storageQuotaKBytes=10241&storageQuotaCount=-1&wlRequestRate=-1&hlRequestRate=-1&wlDataKBytesIn=-1&hlDataKBytesIn=-1&wlDataKBytesOut=-1&hlDataKBytesOut=-1"
- This increases the QOS by 1 kibibyte (10241 instead of 10240). You can now upload the second 5 MiB object
s3cmd put 5mb s3://myrf4bucket
- Let's set the QOS back to unlimited
curl -s -X POST -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering&storageQuotaKBytes=-1&storageQuotaCount=-1&wlRequestRate=-1&hlRequestRate=-1&wlDataKBytesIn=-1&hlDataKBytesIn=-1&wlDataKBytesOut=-1&hlDataKBytesOut=-1"
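To confirm that everything is back to unlimited, repeat the earlier GET and check that every limit shows -1 again:
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/qos/limits?userId=engineer1&groupId=Engineering" | python -mjson.tool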
Retrieving Usage Data by User
Note
The rest of these Admin API commands are largely pointless at this stage, as we have not uploaded any data. Feel free to log in to the CMC as the engineer1 user, create a bucket and upload some data to it; however, we will be doing this in the S3 API labs later. The commands below will work, but they will return no/nil values.
Instructions
- If you have not already, SSH to your iDP Server as the training user.
- Issue the following Admin API call to retrieve the Usage Data for the engineer1 user
curl -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/system/bytecount?groupId=Engineering&userId=engineer1" && echo
Note
It should show 0 bytes as we have not uploaded any data yet
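To report on every user in the Engineering group at once, the same call should accept userId=* (an assumption by analogy with the ALL-groups call in the next section; check the Admin API reference if it errors):
# Usage for all users in the Engineering group (userId=* assumed by analogy with the ALL-groups call below)
curl -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/system/bytecount?groupId=Engineering&userId=*" && echo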
Retrieving Usage Data for All Groups
Instructions
- Issue the following Admin API call to retrieve the Usage Data for ALL groups
curl -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/system/bytecount?groupId=ALL&userId=*" && echo
Note
It should show 0 bytes as we have not uploaded any data yet.
Important
The usage data returned by the previous two commands is the number of bytes BEFORE the storage policy is applied. For example, if you are using RF3, a 1 KiB object stored would return 1024 bytes, NOT 3072 bytes.
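Relating this to the RF4 policy created earlier in this lab, the on-disk footprint would be roughly four times the reported figure:
# Logical bytes reported by /system/bytecount for a 1 KiB object
logical=1024
# Approximate on-disk footprint under the 4-replica (RF4) policy: 4096 bytes
echo $((logical * 4))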
Retrieving a List of Buckets Owned by Group
Instructions
- Issue the following Admin API call to retrieve a list of buckets owned by the Engineering group
curl -s -X GET -k -u sysadmin:$auth_pass "https://$admin_endpoint:19443/system/bucketlist?groupId=Engineering" | python -mjson.tool