Migrate to Rabata.io
Save 70% on Storage Costs

Drop-in S3-compatible replacement with straightforward pricing, zero hidden fees, and superior performance. Migrate in minutes with rclone or AWS CLI.

Calculate Your Migration Savings

Enter your current usage to see how much you'll save by migrating to Rabata.io. Our calculator includes all cost factors: storage, requests, and data transfer.

[Interactive savings calculator: sliders for current storage volume (100 GB – 100 TB), monthly request counts (up to 100M and 10M), data transfer (0 GB – 10 TB), and data retention period (1 day – 2 years), plus a provider selector with custom provider pricing. Retention is how long data typically stays stored before deletion; it affects providers with minimum retention requirements.]

Why Migrate to Rabata.io?

Straightforward Pricing

Pay only for storage: $0.01/GB/month for hot storage and $49 per 10 TB per month for backup storage. That's it.

No Hidden Fees

Zero charges for requests, data transfer, or API calls. No surprise bills at the end of the month.

Simple to Start

30-day free trial with no credit card required. Full S3 API compatibility means your existing tools work out of the box.

No Strings Attached

Cancel anytime, export your data freely. We make it easy to migrate in and just as easy to leave if needed.

Superior Performance

Rabata consistently ranks in the top 3 providers for speed across upload, download, and mixed operations in independent benchmarks.

GDPR Compliant

EU Datacenter (eu-west-2) with strict data protection compliance. Your data stays where you want it.

Migration Guide

Choose your current provider below for step-by-step migration instructions using rclone. All guides include configuration examples and verification steps.

Getting Started with Rclone

Rclone is a powerful command-line tool for managing files on cloud storage. It supports Rabata.io natively as an S3-compatible provider.

Step 1: Install Rclone

Choose your platform:

# Linux/Mac (recommended)
curl https://rclone.org/install.sh | sudo bash
# macOS with Homebrew
brew install rclone
# Ubuntu/Debian
sudo apt install rclone
# Fedora
sudo dnf install rclone

Windows: Download from rclone.org/downloads
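
Whichever method you use, confirm the installation:

rclone version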

Step 2: Configure Rabata.io in Rclone

First, create your Rabata.io account and generate access keys:

  1. Sign up at Rabata.io (30-day free trial, no credit card)
  2. Go to Dashboard → Access Keys
  3. Click "Create Access Key" and save your credentials
  4. Create a bucket in your preferred region (eu-west-2 or us-east-1)

Now configure rclone:

rclone config

Follow the interactive prompts:

  • n - New remote
  • Name: rabata
  • Storage: s3 (Amazon S3 Compliant Storage Providers)
  • Provider: Other (Any other S3 compatible provider)
  • env_auth: false (Enter credentials manually)
  • access_key_id: [Your Rabata Access Key ID]
  • secret_access_key: [Your Rabata Secret Access Key]
  • region: eu-west-2 (or your chosen region)
  • endpoint: https://s3.eu-west-2.rabata.io (adjust for your region)
  • location_constraint: eu-west-2 (must match region)
  • acl: private (or public-read if needed)
  • Accept defaults for remaining options
  • Confirm and quit
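
If you prefer a non-interactive setup, recent rclone versions (1.57+) accept key=value pairs on the command line. A minimal sketch; substitute your real credentials:

rclone config create rabata s3 \
  provider=Other \
  access_key_id=YOUR_RABATA_KEY \
  secret_access_key=YOUR_RABATA_SECRET \
  region=eu-west-2 \
  endpoint=https://s3.eu-west-2.rabata.io \
  location_constraint=eu-west-2 \
  acl=private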

Step 3: Verify Connection

List your buckets to confirm the connection:

rclone lsd rabata:

Test upload:

echo "test" > test.txt
rclone copy test.txt rabata:your-bucket-name/

Verify:

rclone ls rabata:your-bucket-name/

✓ If you see your test file, rclone is configured correctly!
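
Optionally remove the test file again (rclone deletefile removes a single object):

rclone deletefile rabata:your-bucket-name/test.txt
rm test.txt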

Migrating from AWS S3

AWS S3 to Rabata.io migration is straightforward since both use the same S3 API.

Step 1: Configure Both Providers

Configure AWS (if not already done):

rclone config
  • Name: aws
  • Storage: s3
  • Provider: AWS
  • Enter your AWS credentials and region

Configure Rabata.io following the "Getting Started" tab instructions.

Step 2: List Your AWS Buckets

See what needs to be migrated:

rclone lsd aws:

Check bucket size:

rclone size aws:your-bucket-name

Step 3: Sync Data to Rabata.io

Dry run first (see what would be copied):

rclone sync aws:source-bucket rabata:destination-bucket --dry-run -P

Actual sync with progress:

rclone sync aws:source-bucket rabata:destination-bucket -P --checksum

For faster transfers with large files, use multi-threading:

rclone sync aws:source-bucket rabata:destination-bucket -P --checksum --transfers=16 --checkers=32

For very large migrations, use --fast-list to reduce API calls:

rclone sync aws:source-bucket rabata:destination-bucket -P --checksum --fast-list --transfers=16

Step 4: Verify Migration

Check file counts match:

rclone ls aws:source-bucket | wc -l
rclone ls rabata:destination-bucket | wc -l
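
For a stricter comparison than raw counts, diff the sorted listings; any output indicates a name or size mismatch:

diff <(rclone ls aws:source-bucket | sort) <(rclone ls rabata:destination-bucket | sort)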

Verify with checksums:

rclone check aws:source-bucket rabata:destination-bucket --checksum

Step 5: Update Your Application

Update your S3 client configuration to point to Rabata.io.

AWS CLI (the --endpoint-url flag works with any command; recent CLI versions can also set a per-profile endpoint_url):

aws s3 ls s3://your-bucket --endpoint-url https://s3.eu-west-2.rabata.io

Python (boto3):

import boto3

s3_client = boto3.client('s3',
    endpoint_url='https://s3.eu-west-2.rabata.io',
    aws_access_key_id='your-rabata-key',
    aws_secret_access_key='your-rabata-secret',
    region_name='eu-west-2'
)

Node.js (AWS SDK):

const AWS = require('aws-sdk');  // AWS SDK for JavaScript v2

const s3 = new AWS.S3({
    endpoint: 'https://s3.eu-west-2.rabata.io',
    accessKeyId: 'your-rabata-key',
    secretAccessKey: 'your-rabata-secret',
    region: 'eu-west-2',
    s3ForcePathStyle: true
});

💡 Pro tip: Keep AWS as backup during transition. Once verified, delete from AWS to stop charges.

Migrating from Azure Blob Storage

Azure Blob Storage uses a different API, but rclone handles the translation seamlessly.

Step 1: Configure Azure and Rabata

Configure Azure:

rclone config
  • Name: azure
  • Storage: azureblob
  • Enter your Azure Storage Account name and key

Configure Rabata.io following the "Getting Started" tab instructions.

Step 2: List Azure Containers

List containers:

rclone lsd azure:

Check container size:

rclone size azure:your-container-name

Step 3: Migrate to Rabata.io

Dry run:

rclone sync azure:source-container rabata:destination-bucket --dry-run -P

Full migration with progress:

rclone sync azure:source-container rabata:destination-bucket -P --checksum --transfers=16 --checkers=32

Azure's blob tiers (hot, cool, archive) all map to hot storage on Rabata. Note that archive-tier blobs must be rehydrated to hot or cool before they can be read; see the sketch after the following command. Use --fast-list for large containers:

rclone sync azure:source-container rabata:destination-bucket -P --fast-list --checksum --transfers=16
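
If some blobs are in the archive tier, rehydrate them first. A sketch using the Azure CLI; the account, container, and blob names are placeholders, and rehydration can take several hours:

az storage blob set-tier \
  --account-name youraccount \
  --container-name source-container \
  --name path/to/blob \
  --tier Hot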

Step 4: Verify

Compare file counts:

rclone ls azure:source-container | wc -l
rclone ls rabata:destination-bucket | wc -l

Checksum verification:

rclone check azure:source-container rabata:destination-bucket
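
If the two providers don't share a common hash for some objects, rclone can fall back to downloading and comparing content byte by byte (slower but definitive):

rclone check azure:source-container rabata:destination-bucket --download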

Step 5: Update Application Code

Replace Azure SDK calls with S3-compatible SDK.

Before (Azure SDK):

from azure.storage.blob import BlobServiceClient
blob_service = BlobServiceClient.from_connection_string("connection_string")

After (S3 SDK):

import boto3
s3_client = boto3.client('s3',
    endpoint_url='https://s3.eu-west-2.rabata.io',
    aws_access_key_id='your-rabata-key',
    aws_secret_access_key='your-rabata-secret',
    region_name='eu-west-2'
)

⚠️ Note: Azure blob metadata keys may need adjustment for S3 compatibility. Test thoroughly.
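
One quick spot-check: recent rclone versions (1.59+) can include object metadata in listings, so you can eyeball a few objects and confirm the keys you expect survived the migration:

rclone lsjson --metadata rabata:destination-bucket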

Migrating from Google Cloud Storage

Google Cloud Storage is S3-compatible via its interoperability API, making migration straightforward.

Step 1: Configure GCP and Rabata

Configure Google Cloud Storage:

rclone config
  • Name: gcs
  • Storage: s3
  • Provider: GCS (Google Cloud Storage)
  • Enter your GCS HMAC credentials
  • Region: leave blank for default
  • Endpoint: storage.googleapis.com

Configure Rabata.io following the "Getting Started" tab instructions.

Step 2: List GCS Buckets

List buckets:

rclone lsd gcs:

Check bucket size:

rclone size gcs:your-bucket-name

Step 3: Migrate Data

Test with dry run:

rclone sync gcs:source-bucket rabata:destination-bucket --dry-run -P

Full migration:

rclone sync gcs:source-bucket rabata:destination-bucket -P --checksum --transfers=16 --checkers=32 --fast-list

For multi-regional buckets, consider migrating one region at a time to minimize egress charges. A larger buffer size can also improve throughput on large objects:

rclone sync gcs:source-bucket rabata:destination-bucket -P --checksum --fast-list --transfers=16 --buffer-size=128M

Step 4: Verify Migration

File count comparison:

rclone ls gcs:source-bucket | wc -l
rclone ls rabata:destination-bucket | wc -l

Full verification:

rclone check gcs:source-bucket rabata:destination-bucket --checksum

Step 5: Update Your Code

Before (Google Cloud Client):

from google.cloud import storage
client = storage.Client()
bucket = client.bucket('bucket-name')

After (S3-compatible):

import boto3
s3 = boto3.client('s3',
    endpoint_url='https://s3.eu-west-2.rabata.io',
    aws_access_key_id='your-rabata-key',
    aws_secret_access_key='your-rabata-secret',
    region_name='eu-west-2'
)

💡 Cost tip: GCP charges for egress. Consider starting your migration during off-peak hours or in batches.
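
One way to do that: rclone's --bwlimit flag accepts a timetable, so you can throttle during business hours and open up overnight. The times and rates below are placeholders:

rclone sync gcs:source-bucket rabata:destination-bucket -P --checksum --fast-list --bwlimit "09:00,1M 23:00,off"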

Migrating from Other S3-Compatible Providers

Most S3-compatible providers (DigitalOcean Spaces, Backblaze B2, Wasabi, etc.) work the same way.

Step 1: Configure Source Provider

rclone config
  • Name: source
  • Storage: s3
  • Provider: Other
  • Enter your source provider's Access Key ID, Secret Access Key, Region, Endpoint URL, and Location constraint

Common endpoint examples:

  • DigitalOcean: nyc3.digitaloceanspaces.com
  • Backblaze B2: s3.us-west-002.backblazeb2.com
  • Wasabi: s3.wasabisys.com
  • Cloudflare R2: [account-id].r2.cloudflarestorage.com

Step 2: Configure Rabata.io

See the "Getting Started" tab for Rabata configuration.

Step 3: Migrate Your Data

Always test first:

rclone sync source:bucket-name rabata:bucket-name --dry-run -P

Actual migration:

rclone sync source:bucket-name rabata:bucket-name -P --checksum --transfers=16 --checkers=32 --fast-list

For incremental sync (only new/changed files):

rclone sync source:bucket-name rabata:bucket-name -P --checksum --update --use-server-modtime
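
To keep source and destination in sync during a transition window, you might schedule the incremental sync. A sketch as a crontab entry; the rclone and log file paths are assumptions for your system:

# Run the incremental sync nightly at 02:00
0 2 * * * /usr/bin/rclone sync source:bucket-name rabata:bucket-name --checksum --update --use-server-modtime --log-file /var/log/rclone-sync.log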

Common Rclone Flags Explained

  • -P - Show progress during transfer
  • --checksum - Verify files using checksums instead of size/time
  • --transfers=16 - Transfer 16 files in parallel (adjust based on bandwidth)
  • --checkers=32 - Check 32 files in parallel for comparison
  • --fast-list - List recursively using fewer API calls (faster for large buckets, uses more memory)
  • --dry-run - Test without making changes
  • --update - Skip files that are newer on destination
  • --size-only - Skip based on size only (faster but less safe)

Step 4: Verify and Update

Verify migration:

rclone check source:bucket-name rabata:bucket-name --checksum

Update your application endpoint - simply change the endpoint URL to:

https://s3.[region].rabata.io

Keep the same S3 SDK and API calls - no code changes needed, just configuration.
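
For many stacks the cutover is a single environment variable: recent AWS CLI and SDK releases honor AWS_ENDPOINT_URL (older versions need the explicit endpoint parameter shown in the guides above):

export AWS_ENDPOINT_URL=https://s3.eu-west-2.rabata.io
export AWS_ACCESS_KEY_ID=your-rabata-key
export AWS_SECRET_ACCESS_KEY=your-rabata-secret
aws s3 ls   # now talks to Rabata.io with no code changes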

Migration Checklist

  1. Sign Up for Rabata.io: Create your account and start your 30-day free trial. No credit card required.
  2. Generate Access Keys: In your dashboard, create access keys for API authentication and save them securely.
  3. Create Buckets: Create buckets in your preferred region, matching your source bucket structure.
  4. Install and Configure Rclone: Follow the guide above to configure both your source provider and Rabata.io in rclone.
  5. Test Migration: Run a dry-run migration on a small bucket to verify your configuration.
  6. Perform Full Migration: Migrate your data using rclone sync with flags appropriate for your use case.
  7. Verify Data Integrity: Use rclone check with checksums to ensure all data transferred correctly.
  8. Update Application Configuration: Point your S3 clients to Rabata.io endpoints. No code changes needed, just configuration.
  9. Test Application: Verify all read/write operations work correctly against the new endpoint.
  10. Clean Up Old Provider: Once confident, delete data from your old provider to stop charges. Keep backups as needed.

Ready to Migrate?

Start your 30-day free trial now. No credit card required, no strings attached.

Need help with your migration? Our team is here to assist with large-scale migrations and custom requirements.