This doc provides a brief summary of how to request hosted S3 buckets within the Operate First managed deployments. We use the Rook operator.
Deploying the following resource within your application will grant you a bucket:

```yaml
---
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: CLAIM_NAME # Must be unique: you can't name it the same as any of your secrets or configmaps. More details below
spec:
  generateBucketName: CLAIM_NAME-
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    maxObjects: "1000"
    maxSize: "2G"
```
You can use the `bucketName` property instead of `generateBucketName` if you wish to set a fixed bucket name. Please be advised that a bucket name must remain unique within the whole cluster, therefore we would prefer if users either used unique prefixes for their bucket names or refrained from using the `bucketName` property (please use `generateBucketName` instead).
Once deployed and bound, the `ObjectBucketClaim` resource will be updated. Additionally, a new `Secret` and a `ConfigMap` are created in the same namespace. Both the secret and the configmap use the same name as the claim resource, so please make sure you don't overwrite any of your current secrets/configmaps.
The autocreated secret `CLAIM_NAME` contains 2 properties, which provide access credentials for this bucket:

- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
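For NooBaa-provisioned claims these are the standard `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` keys. A minimal Python sketch of reading them once the secret is exposed as environment variables (the `load_bucket_credentials` helper and the placeholder values are illustrative, not part of the platform):

```python
import os

def load_bucket_credentials(env=os.environ):
    """Return the access credentials injected by the autocreated secret."""
    return {
        "aws_access_key_id": env["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["AWS_SECRET_ACCESS_KEY"],
    }

# Placeholder values; in-cluster these come from the mounted secret.
creds = load_bucket_credentials({
    "AWS_ACCESS_KEY_ID": "EXAMPLEKEY",
    "AWS_SECRET_ACCESS_KEY": "examplesecret",
})
```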
The autocreated configmap `CLAIM_NAME` contains 4 additional properties, which specify means to access the bucket:

- `BUCKET_HOST`, which corresponds to an internal cluster route to the Rook operator deployment
- `BUCKET_NAME`, which holds the unique name (in the cluster) of the bucket, prefixed with what was specified in `generateBucketName`
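Combining the host with the port yields the in-cluster S3 endpoint URL. A small sketch, assuming the configmap also carries the standard `BUCKET_PORT` key (the helper name and the sample host are illustrative):

```python
import os

def bucket_endpoint(env=os.environ):
    """Build the in-cluster S3 endpoint URL from the configmap values."""
    host = env["BUCKET_HOST"]
    port = env.get("BUCKET_PORT", "443")  # assumption: TLS on 443 by default
    scheme = "https" if port == "443" else "http"
    return f"{scheme}://{host}:{port}"

# Sample values; in-cluster these come from the autocreated configmap.
endpoint = bucket_endpoint({"BUCKET_HOST": "s3.example.svc", "BUCKET_PORT": "443"})
```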
In order to use the bucket within your deployment, you can mount the `Secret` and the `ConfigMap` as environment variables:

```yaml
...
spec:
  containers:
    - name: mycontainer
      ...
      envFrom:
        - configMapRef:
            name: CLAIM_NAME
        - secretRef:
            name: CLAIM_NAME
```
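With this `envFrom` configuration, the variables from both the secret and the configmap are present in the container's environment, so an S3 client can be configured directly from them. A sketch assembling the client arguments (the `s3_client_kwargs` helper is hypothetical; boto3 is shown in comments as one possible client library):

```python
import os

def s3_client_kwargs(env=os.environ):
    """Assemble S3 client keyword arguments from the injected variables."""
    port = env.get("BUCKET_PORT", "443")  # assumption: TLS on 443 by default
    return {
        "endpoint_url": f"https://{env['BUCKET_HOST']}:{port}",
        "aws_access_key_id": env["AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["AWS_SECRET_ACCESS_KEY"],
    }

# With boto3 installed, a client could then be created like this:
#   s3 = boto3.client("s3", **s3_client_kwargs())
#   s3.list_objects_v2(Bucket=os.environ["BUCKET_NAME"])

# Placeholder values; in-cluster these come from the secret and configmap.
kwargs = s3_client_kwargs({
    "BUCKET_HOST": "s3.example.svc",
    "AWS_ACCESS_KEY_ID": "EXAMPLEKEY",
    "AWS_SECRET_ACCESS_KEY": "examplesecret",
})
```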
Accessing the S3 bucket from outside of the cluster is possible via