
Use rclone to migrate, PUT, and DELETE objects on StorageGRID


rclone is a free command-line tool and client for S3 operations. You can use rclone to migrate, copy, and delete object data on StorageGRID. rclone even includes the ability to delete a bucket that is not empty, using the purge function as shown in the example below.

Install and configure rclone

To install rclone on a workstation or server, download it from "rclone.org".
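
For example, on Linux rclone can be installed with the install script published at rclone.org (shown here as a sketch; a package manager such as apt, yum, or brew works as well):

    curl https://rclone.org/install.sh | sudo bash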

Initial configuration steps

  1. Create the rclone configuration file, either by running the config script or by manually creating the file.

  2. This example uses sgdemo as the name of the remote StorageGRID S3 endpoint in the rclone configuration.

    1. Create the configuration file ~/.config/rclone/rclone.conf

          [sgdemo]
          type = s3
          provider = Other
          access_key_id = ABCDEFGH123456789JKL
          secret_access_key = 123456789ABCDEFGHIJKLMN0123456789PQRST+V
          endpoint = sgdemo.netapp.com
    2. Run rclone config

      rclone config

      2023/04/13 14:22:45 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
      No remotes found - make a new one
      n) New remote
      s) Set configuration password
      q) Quit config
      n/s/q> n
      name> sgdemo
      Option Storage.
      Type of storage to configure.
      Enter a string value. Press Enter for the default ("").
      Choose a number from below, or type in your own value.
       1 / 1Fichier
         \ "fichier"
       2 / Alias for an existing remote
         \ "alias"
       3 / Amazon Drive
         \ "amazon cloud drive"
       4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS
         \ "s3"
       5 / Backblaze B2
         \ "b2"
       6 / Better checksums for other remotes
         \ "hasher"
       7 / Box
         \ "box"
       8 / Cache a remote
         \ "cache"
       9 / Citrix Sharefile
         \ "sharefile"
      10 / Compress a remote
         \ "compress"
      11 / Dropbox
         \ "dropbox"
      12 / Encrypt/Decrypt a remote
         \ "crypt"
      13 / Enterprise File Fabric
         \ "filefabric"
      14 / FTP Connection
         \ "ftp"
      15 / Google Cloud Storage (this is not Google Drive)
         \ "google cloud storage"
      16 / Google Drive
         \ "drive"
      17 / Google Photos
         \ "google photos"
      18 / Hadoop distributed file system
         \ "hdfs"
      19 / Hubic
         \ "hubic"
      20 / In memory object storage system.
         \ "memory"
      21 / Jottacloud
         \ "jottacloud"
      22 / Koofr
         \ "koofr"
      23 / Local Disk
         \ "local"
      24 / Mail.ru Cloud
         \ "mailru"
      25 / Mega
         \ "mega"
      26 / Microsoft Azure Blob Storage
         \ "azureblob"
      27 / Microsoft OneDrive
         \ "onedrive"
      28 / OpenDrive
         \ "opendrive"
      29 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
         \ "swift"
      30 / Pcloud
         \ "pcloud"
      31 / Put.io
         \ "putio"
      32 / QingCloud Object Storage
         \ "qingstor"
      33 / SSH/SFTP Connection
         \ "sftp"
      34 / Sia Decentralized Cloud
         \ "sia"
      35 / Sugarsync
         \ "sugarsync"
      36 / Tardigrade Decentralized Cloud Storage
         \ "tardigrade"
      37 / Transparently chunk/split large files
         \ "chunker"
      38 / Union merges the contents of several upstream fs
         \ "union"
      39 / Uptobox
         \ "uptobox"
      40 / Webdav
         \ "webdav"
      41 / Yandex Disk
         \ "yandex"
      42 / Zoho
         \ "zoho"
      43 / http Connection
         \ "http"
      44 / premiumize.me
         \ "premiumizeme"
      45 / seafile
         \ "seafile"
      Storage> 4
      Option provider.
      Choose your S3 provider.
      Enter a string value. Press Enter for the default ("").
      Choose a number from below, or type in your own value.
       1 / Amazon Web Services (AWS) S3
         \ "AWS"
       2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
         \ "Alibaba"
       3 / Ceph Object Storage
         \ "Ceph"
       4 / Digital Ocean Spaces
         \ "DigitalOcean"
       5 / Dreamhost DreamObjects
         \ "Dreamhost"
       6 / IBM COS S3
         \ "IBMCOS"
       7 / Minio Object Storage
         \ "Minio"
       8 / Netease Object Storage (NOS)
         \ "Netease"
       9 / Scaleway Object Storage
         \ "Scaleway"
      10 / SeaweedFS S3
         \ "SeaweedFS"
      11 / StackPath Object Storage
         \ "StackPath"
      12 / Tencent Cloud Object Storage (COS)
         \ "TencentCOS"
      13 / Wasabi Object Storage
         \ "Wasabi"
      14 / Any other S3 compatible provider
         \ "Other"
      provider> 14
      Option env_auth.
      Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
      Only applies if access_key_id and secret_access_key is blank.
      Enter a boolean value (true or false). Press Enter for the default ("false").
      Choose a number from below, or type in your own value.
       1 / Enter AWS credentials in the next step.
         \ "false"
       2 / Get AWS credentials from the environment (env vars or IAM).
         \ "true"
      env_auth> 1
      Option access_key_id.
      AWS Access Key ID.
      Leave blank for anonymous access or runtime credentials.
      Enter a string value. Press Enter for the default ("").
      access_key_id> ABCDEFGH123456789JKL
      Option secret_access_key.
      AWS Secret Access Key (password).
      Leave blank for anonymous access or runtime credentials.
      Enter a string value. Press Enter for the default ("").
      secret_access_key> 123456789ABCDEFGHIJKLMN0123456789PQRST+V
      Option region.
      Region to connect to.
      Leave blank if you are using an S3 clone and you don't have a region.
      Enter a string value. Press Enter for the default ("").
      Choose a number from below, or type in your own value.
         / Use this if unsure.
       1 | Will use v4 signatures and an empty region.
         \ ""
         / Use this only if v4 signatures don't work.
       2 | E.g. pre Jewel/v10 CEPH.
         \ "other-v2-signature"
      region> 1
      Option endpoint.
      Endpoint for S3 API.
      Required when using an S3 clone.
      Enter a string value. Press Enter for the default ("").
      endpoint> sgdemo.netapp.com
      Option location_constraint.
      Location constraint - must be set to match the Region.
      Leave blank if not sure. Used when creating buckets only.
      Enter a string value. Press Enter for the default ("").
      location_constraint>
      Option acl.
      Canned ACL used when creating buckets and storing or copying objects.
      This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
      For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
      Note that this ACL is applied when server-side copying objects as S3
      doesn't copy the ACL from the source but rather writes a fresh one.
      Enter a string value. Press Enter for the default ("").
      Choose a number from below, or type in your own value.
         / Owner gets FULL_CONTROL.
       1 | No one else has access rights (default).
         \ "private"
         / Owner gets FULL_CONTROL.
       2 | The AllUsers group gets READ access.
         \ "public-read"
         / Owner gets FULL_CONTROL.
       3 | The AllUsers group gets READ and WRITE access.
         | Granting this on a bucket is generally not recommended.
         \ "public-read-write"
         / Owner gets FULL_CONTROL.
       4 | The AuthenticatedUsers group gets READ access.
         \ "authenticated-read"
         / Object owner gets FULL_CONTROL.
       5 | Bucket owner gets READ access.
         | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
         \ "bucket-owner-read"
         / Both the object owner and the bucket owner get FULL_CONTROL over the object.
       6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
         \ "bucket-owner-full-control"
      acl>
      Edit advanced config?
      y) Yes
      n) No (default)
      y/n> n
      --------------------
      [sgdemo]
      type = s3
      provider = Other
      access_key_id = ABCDEFGH123456789JKL
      secret_access_key = 123456789ABCDEFGHIJKLMN0123456789PQRST+V
      endpoint = sgdemo.netapp.com:443
      --------------------
      y) Yes this is OK (default)
      e) Edit this remote
      d) Delete this remote
      y/e/d>
      Current remotes:
      Name                 Type
      ====                 ====
      sgdemo               s3
      e) Edit existing remote
      n) New remote
      d) Delete remote
      r) Rename remote
      c) Copy remote
      s) Set configuration password
      q) Quit config
      e/n/d/r/c/s/q> q
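
As a quick sanity check, rclone config show (a standard rclone command) prints the saved configuration; its output should match the sgdemo remote defined above.

      rclone config show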

Basic command examples

  • *Create a bucket:*

    rclone mkdir remote:bucket

    rclone mkdir sgdemo:test01

    Note: Use --no-check-certificate if you need to ignore SSL certificates (see the sketch after this list).
  • *List all buckets:*

    rclone lsd remote:

    rclone lsd sgdemo:

  • *List the objects of a specific bucket:*

    rclone ls remote:bucket

    rclone ls sgdemo:test01

        65536 TestObject.0
        65536 TestObject.1
        65536 TestObject.10
        65536 TestObject.12
        65536 TestObject.13
        65536 TestObject.14
        65536 TestObject.15
        65536 TestObject.16
        65536 TestObject.17
        65536 TestObject.18
        65536 TestObject.2
        65536 TestObject.3
        65536 TestObject.5
        65536 TestObject.6
        65536 TestObject.7
        65536 TestObject.8
        65536 TestObject.9
      33554432 bigobj
          102 key.json
           47 locked01.txt
    4294967296 sequential-read.0.0
           15 test.txt
          116 version.txt
  • *Delete a bucket:*

    rclone rmdir remote:bucket

    rclone rmdir sgdemo:test02

  • *PUT an object:*

    rclone copy filename remote:bucket

    rclone copy ~/test/testfile.txt sgdemo:test01

  • *GET an object:*

    rclone copy remote:bucket/objectname filename

    rclone copy sgdemo:test01/testfile.txt ~/test/testfileS3.txt

  • *Delete an object:*

    rclone delete remote:bucket/objectname

    rclone delete sgdemo:test01/testfile.txt

  • *Migrate objects between buckets:*

    rclone sync source:bucket destination:bucket --progress

    rclone sync source_directory destination:bucket --progress

    rclone sync sgdemo:test01 sgdemo:clone01 --progress

    Transferred:   	    4.032 GiB / 4.032 GiB, 100%, 95.484 KiB/s, ETA 0s
    Transferred:           22 / 22, 100%
    Elapsed time:       1m4.2s
    Note: Use --progress or -P to display the progress of the operation; otherwise there is no output. A --dry-run variant of this sync is sketched after this list.
  • *Delete a bucket and all of its object contents:*

    rclone purge remote:bucket --progress

    rclone purge sgdemo:test01 --progress

    Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
    Checks:                46 / 46, 100%
    Deleted:               23 (files), 1 (dirs)
    Elapsed time:        10.2s

    rclone ls sgdemo:test01

    2023/04/14 09:40:51 Failed to ls: directory not found
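
The notes above mention two useful global flags. As a minimal sketch that reuses the sgdemo remote and the example bucket names from this page, --dry-run previews a migration without changing anything, and --no-check-certificate skips SSL certificate verification for endpoints with untrusted certificates:

    rclone sync sgdemo:test01 sgdemo:clone01 --dry-run --progress
    rclone sync sgdemo:test01 sgdemo:clone01 --no-check-certificate --progress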

By Siegfried Hepp and Aaron Klein