Using rsync for backups is somewhat outdated; you should probably use a versioning tool instead. If you’ve never used a VCS, learning one won’t be a waste of time: it’s an increasingly important skill for programmers.
I think git is the best fit for your particular use case: each commit is a full snapshot of your working directory (most other VCSes store commits as a chain of deltas, which makes deleting an individual commit impossible). Git also packs successive snapshots with delta compression, making it more space-efficient than plain rsync copies.
With that said, if you’re dead set on using rsync, the parabolic function you suggest can be implemented like this (I’m only demonstrating the algorithm here, not the rsync stuff):
#!perl -w
use v5.16;
use List::MoreUtils qw(uniq);

my @snapshots;
my $capacity = 100.5;    # fractional on purpose; derived indices are truncated when used as array subscripts
my $ratio    = .70;

# Parabolic keep list: kept snapshots get denser toward the most
# recent one (y = x**2, scaled to the capacity).
my @keep_me = reverse uniq
    map { $capacity - int $_**2 / $ratio**2 / $capacity }
    1 .. $ratio * $capacity;

for (1..1000) {
    push @snapshots, $_;
    if (@snapshots > $capacity) {
        @snapshots = @snapshots[@keep_me];
    }
}
say "@snapshots";
This example assumes that you take a total of 1000 snapshots but only have enough disk space to store 100. It lists the snapshots that are kept: 760 803 825 834 (…) 998 999 1000. Rather than blindly keeping the 100 latest snapshots (901..1000), it keeps snapshots from much further back, getting increasingly sparse the further back in time you go. You could also try functions other than y=x^2; I’d suggest an exponential.
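For instance, here is a sketch of the exponential variant, with the same loop structure as above. The growth factor $base is a made-up tuning knob, and I’ve inlined uniq with a hash so the snippet only needs core Perl:

```perl
#!perl -w
use v5.16;

my @snapshots;
my $capacity = 100.5;
my $base     = 1.05;    # hypothetical growth factor; tune to taste

# Exponential keep list: the gap between kept snapshots grows
# geometrically the further back in time you go.
my %seen;
my @keep_me = reverse grep { !$seen{$_}++ }
    map { $capacity - int($base**$_ - 1) }
    0 .. log($capacity + 1) / log($base);

for (1..1000) {
    push @snapshots, $_;
    @snapshots = @snapshots[@keep_me] if @snapshots > $capacity;
}
say "@snapshots";
```

With $base closer to 1 the thinning is gentler (more old snapshots kept); larger values thin the tail out faster.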