In an earlier article on setting up KDE Neon I mentioned that I was going to use KIO-Gdrive to access my Google Drive through Dolphin. That was a total failure due to a bug in the way KIO-Gdrive handles the token needed for access – it seems to forget the token every few operations. Digging around, there don’t seem to be any other options that will integrate tightly with KDE. The next best option seems to be Rclone. Don’t get me wrong, I’m not having a go at Rclone, it just feels like a solution that is going to be clunky compared to the KDE option. With that sort of positive attitude in place, let’s get going.
Note: this article started out in early 2023 (it’s now mid-2024) with me trying to get everything working on Neon, before I gave up on Neon and switched to plain Debian. I liked that Neon tracked the latest KDE releases, but it clearly isn’t ready to be a daily driver and possibly never will be.
Installing Rclone
You’ve got two reasonable choices when it comes to installing Rclone: either go with the Debian package, which is version 1.60.1 (Nov 2022), or download and install directly from the Rclone website, which gives you version 1.68.0 (Sep 2024). Considering it’s Sep 2024 at the time of writing, I think the only realistic option is a manual install.
To install, it’s just a matter of running the following command. This is covered in more detail in the official documentation, which also covers a million other ways to install Rclone. Note that Debian doesn’t come with curl by default, so you might need to run sudo apt install curl first.
sudo -v ; curl https://rclone.org/install.sh | sudo bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4734  100  4734    0     0  28778      0 --:--:-- --:--:-- --:--:-- 28865
Archive:  rclone-current-linux-amd64.zip
   creating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/
  inflating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/README.txt  [text]
  inflating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/README.html  [text]
  inflating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/rclone.1  [text]
  inflating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/rclone  [binary]
  inflating: tmp_unzip_dir_for_rclone/rclone-v1.68.0-linux-amd64/git-log.txt  [text]
Purging old database entries in /usr/share/man...
Processing manual pages under /usr/share/man...
Checking for stray cats under /usr/share/man...
... snip ...
Checking for stray cats under /usr/local/man...
Checking for stray cats under /var/cache/man/oldlocal...
1 man subdirectory contained newer manual pages.
1 manual page was added.
0 stray cats were added.
0 old database entries were purged.

rclone v1.68.0 has successfully installed.
Now run "rclone config" for setup. Check https://rclone.org/docs/ for more details.
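As a quick sanity check before moving on, you can ask Rclone to report its version; it should match the version the install script just downloaded.

rclone version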
Configuring Rclone
Configuring Rclone is a huge topic and I’d mostly end up copying the official documentation if I tried to write a comprehensive guide. The issue is simply that there are many, many cloud storage providers and they all configure things slightly differently, which means there’s really no option but to follow the official guide for your provider. Fortunately, Rclone provides a relatively pain-free configuration system for most providers: you simply run rclone config and follow the prompts. I did find that the official documentation wasn’t always as up to date as it could be. For example, I tried configuring Dropbox as a test and the prompts I got were somewhat different to the ones shown on the configuration page.
Dropbox Example
A basic Dropbox configuration is quite simple and is shown below. You’ll first be asked what you want to do, so select n to create a new remote. Give it a name and press enter. Now you need to select the type of remote you want. Dropbox is currently type 12, but this could change as the list is alphabetical and there are around 60 different types at the time of writing. The next two questions ask you for client_id and client_secret; both should be left blank for now, the official instructions provide details on how to get your own id. For now, say no to advanced configuration. Rclone will next ask if it’s ok to open a browser, this is done to get an access token. If you are already logged into Dropbox in your browser this is a very quick process. Confirm that the configuration is ok and you’re good to go.
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> Dropbox

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
... SNIP ...
12 / Dropbox
   \ (dropbox)
... SNIP ...
Storage> 12

Option client_id.
OAuth Client Id.
Leave blank normally.
Enter a value. Press Enter to leave empty.
client_id>

Option client_secret.
OAuth Client Secret.
Leave blank normally.
Enter a value. Press Enter to leave empty.
client_secret>

Edit advanced config?
y) Yes
n) No (default)
y/n>

Use web browser to automatically authenticate rclone with remote?
 * Say Y if the machine running rclone has a web browser you can use
 * Say N if running rclone on a (remote) machine without web browser access
If not sure try Y. If Y failed, try N.
y) Yes (default)
n) No
y/n>

2024/09/10 10:38:13 NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=... SNIP ...
2024/09/10 10:38:13 NOTICE: Log in and authorize rclone for access
2024/09/10 10:38:13 NOTICE: Waiting for code...
2024/09/10 10:38:57 NOTICE: Got code

Configuration complete.
Options:
- type: dropbox
- token: ... SNIP ...
Keep this "Dropbox" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>

Current remotes:

Name                 Type
====                 ====
Dropbox              dropbox

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>
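If you want to double-check what actually got saved (this isn’t part of the walkthrough above, just a couple of standard Rclone commands), listremotes prints the names of all configured remotes and config show dumps the stored options for one of them. Be aware that config show includes the OAuth token, so don’t paste the output anywhere public.

rclone listremotes
rclone config show Dropbox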
To list all the files in your Dropbox use the following command. The ending colon is important: it’s how Rclone differentiates between local folders and remote configurations. As you can see, I have but a single file in my Dropbox account.
rclone ls Dropbox:
       43 test.txt
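ls only shows files and sizes. If you want a bit more detail there are sibling commands worth knowing about: lsl adds modification times, lsd lists directories, and tree prints the whole hierarchy. Something along these lines:

rclone lsl Dropbox:
rclone lsd Dropbox:
rclone tree Dropbox: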
Copying a local directory to a remote directory can be carried out as shown below. This copies the local directory /home/doozer/sync/dropbox up to the root of the remote account Dropbox. To copy a remote folder to a local folder just swap the two paths over. Note that by default there is no output if the command succeeds.
rclone copy /home/doozer/sync/dropbox/ Dropbox:
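To go the other way, just swap the paths. If you’re not sure what a copy is going to do, adding -v and --dry-run makes Rclone report what it would transfer without actually changing anything; a sketch rather than part of the walkthrough:

rclone copy -v --dry-run Dropbox: /home/doozer/sync/dropbox/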
Copy vs Sync
Here’s where things get a bit more complex. If you run the command rclone copy Dropbox: /home/doozer/sync/dropbox/ it will pull everything from the remote directory and overwrite the local versions, regardless of whether it was the local or the remote copy that changed. If there are files in the local directory that aren’t in the remote they won’t be touched. Essentially, it works like the cp command. There are certainly situations where you want this behaviour, but it’s probably not what you want most of the time when working with cloud storage. What you probably want is a sync.
A basic sync operation looks like this. In my Dropbox account I have just two files, test.txt and test2.txt. I modified test.txt remotely (there is limited support for modifying files on the Dropbox website). When I ran the sync command it detected that test.txt had changed and overwrote the local copy I’d already got from running the copy command above. Since I had the --interactive flag specified, Rclone asked me if I wanted to overwrite the local file. When starting out, the --interactive flag is a must.
rclone sync --interactive Dropbox: /home/doozer/sync/dropbox/
rclone: copy "test.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all copy operations with no more questions
!) Do all copy operations with no more questions
q) Exit rclone now.
y/n/s/!/q>
2024/09/10 11:22:51 NOTICE:
Transferred:             55 B / 55 B, 100%, 0 B/s, ETA -
Checks:                 2 / 2, 100%
Transferred:            1 / 1, 100%
Elapsed time:        20.0s
Next I created a new file locally called test3.txt and re-ran the sync command. Unlike the copy command, which wouldn’t have cared about the test3.txt file, the sync command makes the destination (path2) look like the source (path1), even if it needs to delete files.
rclone sync --interactive Dropbox: /home/doozer/sync/dropbox/
rclone: delete "test3.txt"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all delete operations with no more questions
!) Do all delete operations with no more questions
q) Exit rclone now.
y/n/s/!/q>
2024/09/10 11:27:16 NOTICE:
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 3 / 3, 100%
Deleted:                1 (files), 0 (dirs), 21 B (freed)
Elapsed time:         8.1s
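Syncing in the other direction, making the remote look like the local folder, is again just a matter of swapping the paths. Since sync will delete remote files that aren’t present locally, I’d run it with --dry-run first. This is a sketch, not something I ran for this article:

rclone sync --interactive --dry-run /home/doozer/sync/dropbox/ Dropbox:
rclone sync --interactive /home/doozer/sync/dropbox/ Dropbox: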
Bisync
Bisync is similar to how the client software for most cloud storage services works (think the Google Drive client). It performs a two-way synchronization, so anything new on the local machine gets uploaded to the remote and anything new on the remote gets downloaded locally. The issue here is that it’s a complicated process, and it means giving up some control to the software to make decisions about what to do with your files.
Due to the way the bisync command works it needs to store some local data about the files it’s syncing. The first run of bisync will therefore be slightly different to subsequent runs, as it has to build its cache of managed files. The suggested first-run command is shown below, with an explanation to follow.
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
Bisync doesn’t have source and destination paths, but the order of the paths does matter as it can be used by the conflict resolution algorithm, for example when telling bisync which side should win a conflict. By default neither side wins: conflicting files are kept as two renamed copies, as we’ll see later.
The first flag is --create-empty-src-dirs, which tells Rclone to create and remove empty directories as it finds them. This is what most cloud storage clients do, so I’ll be using it; the default behaviour is to not propagate empty directories. Next up is --compare size,modtime,checksum, which tells Rclone what to compare when checking for changed files. The default is size and modtime only. Following that is --slow-hash-sync-only, which is used to speed up checking on syncs that involve a large number of files where most of them don’t change, e.g. typical cloud storage usage. There is a slight risk to using this: if a file is modified but neither its size nor its modtime changes, it won’t get synced. The --resilient flag is the last bisync-specific flag. It just allows bisync to automatically recover from less serious failures; a more serious failure will require a --resync.
Most of the rest of the flags are fairly generic Rclone flags. Looking at -MvP, the M means copy metadata such as ownership, v means verbose, and P means show progress. The --drive-skip-gdocs flag is fairly self-explanatory: it tells Rclone to ignore Google Docs. This is important because Google Docs aren’t like normal files (they are just pointers to magic) and having them anywhere other than on Google Drive is an accident waiting to happen. I once lost an important file (under Windows) because I forgot it was a Google Sheet and moved it out of a locally synced folder. The client interpreted that as a delete and immediately propagated it, leaving me with a completely useless pointer file. The --fix-case flag is used to correct any case mismatches when dealing with paths that don’t support case sensitivity (why do they even exist?). --dry-run just tells Rclone to report what would happen without actually doing anything.
Finally, the --resync flag tells bisync that it needs to follow a special sync procedure. Essentially it copies from path2 to path1 and then from path1 to path2. If path1 is local and path2 is remote, then everything from the remote will be copied locally and then local will be copied to the remote. If the local directory is empty then this is effectively just the same as a remote-to-local copy.
The command I used with Dropbox is shown below. Notice that it tried to copy the two test files from the remote and it detected slow hashing on my local machine (I believe this is because it has to hash files as it encounters them, whereas cloud storage maintains hashes). I then re-ran the command without the --dry-run flag, and then again without the --resync flag. On the final run, without resyncing, the system detected no changes.
rclone bisync /home/doozer/sync/dropbox/ Dropbox: --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
2024/09/10 15:34:41 NOTICE: bisync is IN BETA. Don't use in production!
2024/09/10 15:34:41 INFO  : Slow hash detected on Path1. Will ignore checksum due to slow-hash settings
2024/09/10 15:34:41 NOTICE: Dropbox root '': will use dropbox for same-side diffs on Path2 only
2024/09/10 15:34:41 NOTICE: Ignoring checksums during --resync as --slow-hash-sync-only is set.
2024/09/10 15:34:41 INFO  : Bisyncing with Comparison Settings:
{
	"Modtime": true,
	"Size": true,
	"Checksum": true,
	"HashType1": 0,
	"HashType2": 32,
	"NoSlowHash": false,
	"SlowHashSyncOnly": true,
	"SlowHashDetected": true,
	"DownloadHash": false
}
2024/09/10 15:34:41 INFO  : Synching Path1 "/home/doozer/sync/dropbox/" with Path2 "Dropbox:/"
2024/09/10 15:34:41 INFO  : Copying Path2 files to Path1
2024/09/10 15:34:41 NOTICE: - Path2    Resync is copying files to    - Path1
2024/09/10 15:34:42 NOTICE: test.txt: Skipped copy as --dry-run is set (size 55)
2024/09/10 15:34:42 NOTICE: test2.txt: Skipped copy as --dry-run is set (size 62)
2024/09/10 15:34:42 NOTICE: - Path1    Resync is copying files to    - Path2
2024/09/10 15:34:42 INFO  : Resync updating listings
2024/09/10 15:34:42 INFO  : Bisync successful
Transferred:            117 B / 117 B, 100%, 0 B/s, ETA -
Transferred:            2 / 2, 100%
Elapsed time:         1.0s
2024/09/10 15:34:42 NOTICE:
Transferred:            117 B / 117 B, 100%, 0 B/s, ETA -
Transferred:            2 / 2, 100%
Elapsed time:         1.0s
2024/09/10 15:34:42 INFO  : Dropbox root '': Committing uploads - please wait...
Next I created a test3.txt file on my local machine and re-ran the command (without --resync). The output is shown below.
rclone bisync /home/doozer/sync/dropbox/ Dropbox: --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case
2024/09/10 15:40:02 NOTICE: bisync is IN BETA. Don't use in production!
2024/09/10 15:40:02 INFO  : Slow hash detected on Path1. Will ignore checksum due to slow-hash settings
2024/09/10 15:40:02 NOTICE: Dropbox root '': will use dropbox for same-side diffs on Path2 only
2024/09/10 15:40:02 INFO  : Bisyncing with Comparison Settings:
{
	"Modtime": true,
	"Size": true,
	"Checksum": true,
	"HashType1": 0,
	"HashType2": 32,
	"NoSlowHash": false,
	"SlowHashSyncOnly": true,
	"SlowHashDetected": true,
	"DownloadHash": false
}
2024/09/10 15:40:02 INFO  : Synching Path1 "/home/doozer/sync/dropbox/" with Path2 "Dropbox:/"
2024/09/10 15:40:02 INFO  : Building Path1 and Path2 listings
2024/09/10 15:40:02 INFO  : Path1 checking for diffs
2024/09/10 15:40:02 INFO  : - Path1    File is new    - test3.txt
2024/09/10 15:40:02 INFO  : Path1:    1 changes:    1 new,    0 modified,    0 deleted
2024/09/10 15:40:02 INFO  : Path2 checking for diffs
2024/09/10 15:40:02 INFO  : Applying changes
2024/09/10 15:40:02 INFO  : - Path1    Queue copy to Path2    - Dropbox:/test3.txt
2024/09/10 15:40:02 INFO  : - Path1    Do queued copies to    - Path2
2024/09/10 15:40:05 INFO  : test3.txt: Copied (new)
2024/09/10 15:40:05 INFO  : Updating listings
2024/09/10 15:40:05 INFO  : Validating listings for Path1 "/home/doozer/sync/dropbox/" vs Path2 "Dropbox:/"
2024/09/10 15:40:05 INFO  : Bisync successful
Transferred:             15 B / 15 B, 100%, 7 B/s, ETA 0s
Checks:                 5 / 5, 100%
Transferred:            1 / 1, 100%
Elapsed time:         3.1s
2024/09/10 15:40:05 INFO  :
Transferred:             15 B / 15 B, 100%, 7 B/s, ETA 0s
Checks:                 5 / 5, 100%
Transferred:            1 / 1, 100%
Elapsed time:         3.1s
2024/09/10 15:40:05 INFO  : Dropbox root '': Committing uploads - please wait...
As expected, it correctly uploaded the test3.txt file. A more interesting example is what happens when test3.txt is modified both locally and remotely. The results look like this:
rclone bisync /home/doozer/sync/dropbox/ Dropbox: --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case
2024/09/10 15:44:40 NOTICE: bisync is IN BETA. Don't use in production!
2024/09/10 15:44:40 INFO  : Slow hash detected on Path1. Will ignore checksum due to slow-hash settings
2024/09/10 15:44:40 NOTICE: Dropbox root '': will use dropbox for same-side diffs on Path2 only
2024/09/10 15:44:40 INFO  : Bisyncing with Comparison Settings:
{
	"Modtime": true,
	"Size": true,
	"Checksum": true,
	"HashType1": 0,
	"HashType2": 32,
	"NoSlowHash": false,
	"SlowHashSyncOnly": true,
	"SlowHashDetected": true,
	"DownloadHash": false
}
2024/09/10 15:44:40 INFO  : Synching Path1 "/home/doozer/sync/dropbox/" with Path2 "Dropbox:/"
2024/09/10 15:44:40 INFO  : Building Path1 and Path2 listings
2024/09/10 15:44:40 INFO  : Path1 checking for diffs
2024/09/10 15:44:40 INFO  : - Path1    File changed: size (larger), time (newer)    - test3.txt
2024/09/10 15:44:40 INFO  : Path1:    1 changes:    0 new,    1 modified,    0 deleted
2024/09/10 15:44:40 INFO  : (Modified:    1 newer,    0 older,    1 larger,    0 smaller)
2024/09/10 15:44:40 INFO  : Path2 checking for diffs
2024/09/10 15:44:40 NOTICE: WARNING: hash unexpectedly blank despite Fs support (, 96841ef7c8e82f49bd2c819505ac26fdee7d92f90966aa6172948ba6430309bd) (you may need to --resync!)
2024/09/10 15:44:40 INFO  : - Path2    File changed: size (larger), time (newer)    - test3.txt
2024/09/10 15:44:40 INFO  : Path2:    1 changes:    0 new,    1 modified,    0 deleted
2024/09/10 15:44:40 INFO  : (Modified:    1 newer,    0 older,    1 larger,    0 smaller)
2024/09/10 15:44:40 INFO  : Applying changes
2024/09/10 15:44:40 NOTICE: - WARNING    New or changed in both paths    - test3.txt
2024/09/10 15:44:40 NOTICE: - Path1    Renaming Path1 copy    - /home/doozer/sync/dropbox/test3.txt.conflict1
2024/09/10 15:44:40 INFO  : test3.txt: Moved (server-side) to: test3.txt.conflict1
2024/09/10 15:44:40 NOTICE: - Path1    Queue copy to Path2    - Dropbox:/test3.txt.conflict1
2024/09/10 15:44:40 NOTICE: - Path2    Renaming Path2 copy    - Dropbox:/test3.txt.conflict2
2024/09/10 15:44:42 INFO  : test3.txt: Moved (server-side) to: test3.txt.conflict2
2024/09/10 15:44:42 NOTICE: - Path2    Queue copy to Path1    - /home/doozer/sync/dropbox/test3.txt.conflict2
2024/09/10 15:44:42 INFO  : - Path2    Do queued copies to    - Path1
2024/09/10 15:44:42 INFO  : test3.txt.conflict2: Copied (new)
2024/09/10 15:44:42 INFO  : - Path1    Do queued copies to    - Path2
2024/09/10 15:44:45 INFO  : test3.txt.conflict1: Copied (new)
2024/09/10 15:44:45 INFO  : Updating listings
2024/09/10 15:44:45 INFO  : Validating listings for Path1 "/home/doozer/sync/dropbox/" vs Path2 "Dropbox:/"
2024/09/10 15:44:45 INFO  : Bisync successful
Transferred:             86 B / 86 B, 100%, 10 B/s, ETA 0s
Checks:                 6 / 6, 100%
Renamed:                2
Transferred:            4 / 4, 100%
Server Side Moves:      2 @ 43 B
Elapsed time:         5.2s
2024/09/10 15:44:45 INFO  :
Transferred:             86 B / 86 B, 100%, 10 B/s, ETA 0s
Checks:                 6 / 6, 100%
Renamed:                2
Transferred:            4 / 4, 100%
Server Side Moves:      2 @ 43 B
Elapsed time:         5.2s
2024/09/10 15:44:45 INFO  : Dropbox root '': Committing uploads - please wait...
What bisync has done is take the file in conflict and make two copies of it, one for the local version and one for the remote version. It’s then up to you to figure out which one you want to keep or how to merge them. I really like this solution to the problem. The whole conflict resolution algorithm is described here. Rerunning the bisync with the files still in conflict works smoothly because new “conflict” copies have been created, so the conflict has essentially vanished.
Notice that it gave a warning about the hash being blank despite Dropbox supporting hashing. I suspect this was caused by me editing the file on the website and then immediately requesting a sync; Dropbox probably hadn’t hashed the file yet. Rerunning the bisync later didn’t show this warning.
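If you’d rather bisync picked a winner instead of leaving two conflict copies around, recent versions have a --conflict-resolve flag that can be set to values like path1, path2 or newer. I haven’t tested this myself, so treat the following as a sketch and check the bisync documentation for your version before relying on it:

rclone bisync /home/doozer/sync/dropbox/ Dropbox: --conflict-resolve newer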
There’s much more to bisync than this, but this is a good start. There are literally pages of command options to go through to get a full understanding. The silly thing is I just wanted to update a single file on my Google Drive.
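And on that note, pushing one file doesn’t actually need bisync at all; a plain copyto will do it. A sketch, assuming a Google Drive remote has already been configured under the (made-up) name GDrive and using a made-up file name:

rclone copyto /home/doozer/notes.txt GDrive:notes.txt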