Use rclone to migrate, move, and delete objects on StorageGRID
By Siegfried Hepp and Aron Klein
Rclone is a free command-line tool and client for S3 operations. You can use rclone to migrate, copy, and delete object data on StorageGRID. Rclone can delete buckets even when they are not empty; its purge function is shown in an example below.
Install and configure rclone
To install rclone on a workstation or server, download it from "rclone.org".
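On a Linux host, one convenient route (documented on rclone.org) is the install script; package managers or a direct binary download work as well:

curl https://rclone.org/install.sh | sudo bash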
Initial configuration steps
- Create the rclone configuration file either by running the configuration script or by creating the file manually.
- This example uses sgdemo as the name of the remote StorageGRID S3 endpoint in the rclone configuration.
- Create the configuration file ~/.config/rclone/rclone.conf:
[sgdemo]
type = s3
provider = Other
access_key_id = ABCDEFGH123456789JKL
secret_access_key = 123456789ABCDEFGHIJKLMN0123456789PQRST+V
endpoint = sgdemo.netapp.com
- Or run rclone config:
# rclone config
2023/04/13 14:22:45 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> sgdemo
Option Storage.
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value.
 1 / 1Fichier \ "fichier"
 2 / Alias for an existing remote \ "alias"
 3 / Amazon Drive \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, SeaweedFS, and Tencent COS \ "s3"
 5 / Backblaze B2 \ "b2"
 6 / Better checksums for other remotes \ "hasher"
 7 / Box \ "box"
 8 / Cache a remote \ "cache"
 9 / Citrix Sharefile \ "sharefile"
10 / Compress a remote \ "compress"
11 / Dropbox \ "dropbox"
12 / Encrypt/Decrypt a remote \ "crypt"
13 / Enterprise File Fabric \ "filefabric"
14 / FTP Connection \ "ftp"
15 / Google Cloud Storage (this is not Google Drive) \ "google cloud storage"
16 / Google Drive \ "drive"
17 / Google Photos \ "google photos"
18 / Hadoop distributed file system \ "hdfs"
19 / Hubic \ "hubic"
20 / In memory object storage system. \ "memory"
21 / Jottacloud \ "jottacloud"
22 / Koofr \ "koofr"
23 / Local Disk \ "local"
24 / Mail.ru Cloud \ "mailru"
25 / Mega \ "mega"
26 / Microsoft Azure Blob Storage \ "azureblob"
27 / Microsoft OneDrive \ "onedrive"
28 / OpenDrive \ "opendrive"
29 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH) \ "swift"
30 / Pcloud \ "pcloud"
31 / Put.io \ "putio"
32 / QingCloud Object Storage \ "qingstor"
33 / SSH/SFTP Connection \ "sftp"
34 / Sia Decentralized Cloud \ "sia"
35 / Sugarsync \ "sugarsync"
36 / Tardigrade Decentralized Cloud Storage \ "tardigrade"
37 / Transparently chunk/split large files \ "chunker"
38 / Union merges the contents of several upstream fs \ "union"
39 / Uptobox \ "uptobox"
40 / Webdav \ "webdav"
41 / Yandex Disk \ "yandex"
42 / Zoho \ "zoho"
43 / http Connection \ "http"
44 / premiumize.me \ "premiumizeme"
45 / seafile \ "seafile"
Storage> 4
Option provider.
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value.
 1 / Amazon Web Services (AWS) S3 \ "AWS"
 2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun \ "Alibaba"
 3 / Ceph Object Storage \ "Ceph"
 4 / Digital Ocean Spaces \ "DigitalOcean"
 5 / Dreamhost DreamObjects \ "Dreamhost"
 6 / IBM COS S3 \ "IBMCOS"
 7 / Minio Object Storage \ "Minio"
 8 / Netease Object Storage (NOS) \ "Netease"
 9 / Scaleway Object Storage \ "Scaleway"
10 / SeaweedFS S3 \ "SeaweedFS"
11 / StackPath Object Storage \ "StackPath"
12 / Tencent Cloud Object Storage (COS) \ "TencentCOS"
13 / Wasabi Object Storage \ "Wasabi"
14 / Any other S3 compatible provider \ "Other"
provider> 14
Option env_auth.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value.
 1 / Enter AWS credentials in the next step. \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM). \ "true"
env_auth> 1
Option access_key_id.
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ABCDEFGH123456789JKL
Option secret_access_key.
AWS Secret Access Key (password).
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> 123456789ABCDEFGHIJKLMN0123456789PQRST+V
Option region.
Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value.
   / Use this if unsure.
 1 | Will use v4 signatures and an empty region. \ ""
   / Use this only if v4 signatures don't work.
 2 | E.g. pre Jewel/v10 CEPH. \ "other-v2-signature"
region> 1
Option endpoint.
Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
endpoint> sgdemo.netapp.com
Option location_constraint.
Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint>
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server-side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value.
   / Owner gets FULL_CONTROL.
 1 | No one else has access rights (default). \ "private"
   / Owner gets FULL_CONTROL.
 2 | The AllUsers group gets READ access. \ "public-read"
   / Owner gets FULL_CONTROL.
 3 | The AllUsers group gets READ and WRITE access.
   | Granting this on a bucket is generally not recommended. \ "public-read-write"
   / Owner gets FULL_CONTROL.
 4 | The AuthenticatedUsers group gets READ access. \ "authenticated-read"
   / Object owner gets FULL_CONTROL.
 5 | Bucket owner gets READ access.
   | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. \ "bucket-owner-full-control"
acl>
Edit advanced config?
y) Yes
n) No (default)
y/n> n
--------------------
[sgdemo]
type = s3
provider = Other
access_key_id = ABCDEFGH123456789JKL
secret_access_key = 123456789ABCDEFGHIJKLMN0123456789PQRST+V
endpoint = sgdemo.netapp.com:443
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
Current remotes:
Name                 Type
====                 ====
sgdemo               s3
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
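Once the remote is saved, it can be worth sanity-checking the configuration before running any data operations. Both commands below are standard rclone subcommands: listremotes should print sgdemo:, and config show dumps the stored settings for the remote:

# rclone listremotes
# rclone config show sgdemo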
Basic command examples
- Create a bucket:
rclone mkdir remote:bucket
# rclone mkdir sgdemo:test01
Use --no-check-certificate if you need to ignore SSL certificates.
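For example, to create the bucket on a grid that presents a self-signed certificate:

# rclone mkdir sgdemo:test01 --no-check-certificate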
- List all buckets:
rclone lsd remote:
# rclone lsd sgdemo:
- List the objects in a specific bucket:
rclone ls remote:bucket
# rclone ls sgdemo:test01
     65536 TestObject.0
     65536 TestObject.1
     65536 TestObject.10
     65536 TestObject.12
     65536 TestObject.13
     65536 TestObject.14
     65536 TestObject.15
     65536 TestObject.16
     65536 TestObject.17
     65536 TestObject.18
     65536 TestObject.2
     65536 TestObject.3
     65536 TestObject.5
     65536 TestObject.6
     65536 TestObject.7
     65536 TestObject.8
     65536 TestObject.9
  33554432 bigobj
       102 key.json
        47 locked01.txt
4294967296 sequential-read.0.0
        15 test.txt
       116 version.txt
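If you only need totals rather than a full listing, rclone's size command reports the object count and combined size of a bucket:

rclone size remote:bucket

# rclone size sgdemo:test01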
- Delete a bucket:
rclone rmdir remote:bucket
# rclone rmdir sgdemo:test02
- Put an object:
rclone copy filename remote:bucket
# rclone copy ~/Test/testfile.txt sgdemo:test01
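rclone copy stores the object under its source file name. To upload under a different object key, rclone's copyto command names the destination explicitly (renamed.txt below is just an illustrative key):

# rclone copyto ~/Test/testfile.txt sgdemo:test01/renamed.txt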
- Get an object:
rclone copy remote:bucket/objectname filename
# rclone copy sgdemo:test01/testfile.txt ~/Test/testfileS3.txt
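Note that rclone copy interprets the destination as a directory, so the example above creates a local directory named testfileS3.txt containing testfile.txt. To download to an exact local file name, copyto can be used here as well:

# rclone copyto sgdemo:test01/testfile.txt ~/Test/testfileS3.txt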
- Delete an object:
rclone delete remote:bucket/objectname
# rclone delete sgdemo:test01/testfile.txt
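Object deletes are immediate (unless the bucket is versioned), so previewing what a command will remove can be prudent. rclone's global --dry-run flag lists the matching objects without deleting anything:

# rclone delete sgdemo:test01 --dry-run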
- Migrate objects into a bucket:
rclone sync source:bucket destination:bucket --progress
rclone sync source_directory destination:bucket --progress
# rclone sync sgdemo:test01 sgdemo:clone01 --progress
Transferred:        4.032 GiB / 4.032 GiB, 100%, 95.484 KiB/s, ETA 0s
Transferred:           22 / 22, 100%
Elapsed time:      1m4.2s
Use --progress or -P to display the progress of the task; otherwise, there is no output.
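For larger migrations, throughput is often limited by rclone's default parallelism. The standard --transfers and --checkers flags raise the number of concurrent operations; the values below are illustrative starting points, not tuned recommendations:

# rclone sync sgdemo:test01 sgdemo:clone01 --progress --transfers 16 --checkers 32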
- Delete a bucket and all of its object contents:
rclone purge remote:bucket --progress
# rclone purge sgdemo:test01 --progress
Transferred:             0 B / 0 B, -, 0 B/s, ETA -
Checks:                 46 / 46, 100%
Deleted:                23 (files), 1 (dirs)
Elapsed time:         10.2s
# rclone ls sgdemo:test01
2023/04/14 09:40:51 Failed to ls: directory not found
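Because purge removes the bucket and everything in it in one shot, a cautious pattern is to combine it with --dry-run first, which reports what would be deleted without touching anything:

# rclone purge sgdemo:test01 --dry-run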