Jellyfin Forum
Automatic run backup of your Jellyfin Instance - Printable Version




Automatic run backup of your Jellyfin Instance - cesar_bianchi - 2023-10-13

Hi Guys,

Every day I find one or more forum topics like "How to implement a backup solution for Jellyfin instances" or "Requesting a native backup feature from the Jellyfin developers". Based on that, I decided to share my solution here with the community, to help other users with the same question.
 
So, first of all, it's important to say: below I'll present my solution, OK? There are many different ways to implement a backup solution for files and folders, and my goal here is just to share one of them: the approach I adopted here, which is working fine.
 
Second, and no less important: this solution applies only to Linux environments, but with a few small adjustments you can extend it to other kinds of environments too, like Windows, Docker, etc. Of course, if you adapt it and your "new solution" works fine, please don't forget to share it in this thread too, to help our community!
 
Third and final: I'm using an AWS account to store my backup files, so you'll need a valid AWS subscription to apply the same solution. The "Free Tier" options provided by AWS won't work well for this case. Yes, I can imagine how you feel after reading that, but you need to know: when you are looking for a safe backup solution, you'll probably need to spend some money on it. I can tell you this solution is not cost free, but it's cheap. Totally low cost.
 
Before we start, let's talk about the solution and its architecture.
 
In summary, this solution stores the Jellyfin files and folders related to instance configuration, properties, users, images and metadata in a public cloud, using safe methods, and runs automatically every day, week or month. It assumes basic Linux knowledge, such as "sudo", "chmod", "nano" and other popular commands, and some familiarity with the Linux filesystem and paths. In the end, you will have a safe place outside your physical environment, holding your Jellyfin instance files, ready to be restored in the future in case of disaster or accidental deletion.

There is an important disclaimer here: we won't cover your media content files as part of this backup scheme, but after reading this guide you will be able to extend the same solution to cover your media content files too.

All the sample files (scripts) that you will need are attached to this post, at the end of the page, as "sample_files.zip".
 
So, let’s start! 
 
1 - First step: How to create an AWS Subscription
If you already have an AWS subscription, please skip to "Step 2"; if not, you'll need to create one. This is a simple step and you can find all the instructions here: https://portal.aws.amazon.com/billing/signup
 
To keep this tutorial short, I won't describe "How to create an AWS subscription" in detail, because everything is covered on the official AWS home page. I suggest you read it directly from the official source!
 
If you don't know anything about AWS, you can read and learn more here: https://aws.amazon.com/pt/what-is-aws/
 
2 - Creating the cloud resources and services to store your backup files in AWS.
To simplify this tutorial, I'll use an AWS service called "AWS CloudFormation" to create the necessary services quickly, but if you are an AWS power user you can create each one manually and apply your own preferences.
 
Basically, through CloudFormation we'll create a new S3 bucket, a new IAM policy and a new IAM user, and then assign the new policy to the new user. To do that, I wrote a CloudFormation template (attached to this post). Please download it, extract the zip file and store "CloudFormTemplate.json" in a local folder.
 
Pay attention here: after downloading the template file, you'll need to apply some adjustments before running it in the CloudFormation web console.
  • Define the name of your new S3 bucket. This is required because AWS only accepts unique names for S3 buckets, so I couldn't define a generic name in the template. To do it, open "CloudFormTemplate.json", search for "DEFINE_HERE_YOUR_UNIQUE_BUCKET_NAME" and replace that string with your preferred name. It needs to be lowercase, without special characters, like "myjellyfinbucket7856". Don't forget to save the file changes with the same file name. (If you prefer the terminal, a one-line sketch is shown after the screenshot below.)
    [Image: attachment.php?aid=647]
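
If you prefer doing that replacement from the terminal, here is a quick one-line sketch (assuming GNU sed on Linux, the template in your current directory, and "myjellyfinbucket7856" only as an example name):

    # Replace the placeholder bucket name inside the CloudFormation template
    sed -i 's/DEFINE_HERE_YOUR_UNIQUE_BUCKET_NAME/myjellyfinbucket7856/' CloudFormTemplate.json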
 

3 - Running the CloudFormation template in the AWS Console.
  • 3.1 - Log in to your AWS Console (https://aws.amazon.com/pt/console/) and then, at the top of the screen, use the search field to look for "CloudFormation".
  • 3.2 - On the AWS CloudFormation home page, click the "Create Stack" button.
  • 3.3 – Select the "Template is ready" and "Upload a template file" options. Then use the "Choose File" button to select the CloudFormTemplate.json file saved on your local system. Press the "Next" button.
  • 3.4 – Specify a stack name, like "jellyfinbackupresources", and press the "Next" button on all the following steps. Finally, on the last step, check the option "I acknowledge that AWS CloudFormation might create IAM resources with custom names" and then click the "Submit" button.
  • The stack creation process will start. Wait a few minutes. Use the "refresh" button to update the process status.
  • 3.5 – If everything went OK, you'll see the screen below, with several records and their status details. (A command-line alternative to these console steps is sketched after the screenshot.)
    [Image: attachment.php?aid=642]
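
Optionally, if you already have the AWS CLI installed and configured with credentials that are allowed to create IAM resources (we only set up the CLI in Step 6 below), the same stack can be created from the terminal. A minimal sketch, using the stack name from step 3.4:

    # Create the stack from the CLI (optional alternative to the console steps above)
    aws cloudformation create-stack \
        --stack-name jellyfinbackupresources \
        --template-body file://CloudFormTemplate.json \
        --capabilities CAPABILITY_NAMED_IAM

    # Check the creation status
    aws cloudformation describe-stacks --stack-name jellyfinbackupresources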

4 – Getting the access key and secret key to access the cloud storage.
After running the CloudFormation script and provisioning all the necessary cloud services, we need to collect the user's access key and secret to use later, when we configure the local backup script on Linux. To do that, open your AWS Management Console and, at the top of the screen, use the search field to look for "IAM".
  • 4.1 – On the AWS IAM page, on the left side, click the "Users" option.
  • 4.2 – Click the user called "JellyfinUserBackup". This user was created by the CloudFormation script.
  • 4.3 – On the "JellyfinUserBackup" user details page, click "Security Credentials".
  • 4.4 – Find the group of options called "Access keys" and then click the "Create access key" button.
  • 4.5 – In the "Use case" options, select "Command Line Interface (CLI)".
  • 4.6 – At the bottom of the page, check the option "I understand the above recommendation and want to proceed to create an access key.", then click the "Next" button and "Create access key".
  • 4.7 – On the next page, you will see two important fields, called "Access key" and "Secret access key". Copy and paste their contents into a new text file on your machine and then save it in a safe place. Use the "show" option on the AWS page to reveal the secret access key before copying and pasting.
  • Important note: never share these key details! They are the credentials to log in to your cloud environment. Be careful.
 
5 – Getting the S3 Bucket URI
An S3 URI is the unique resource identifier within the context of the S3 protocol. It follows this naming convention: s3://bucket-name/key-name
For example, if you have a bucket named mybucket with a file called puppy.jpg inside it, the S3 URI would appear as s3://mybucket/puppy.jpg
In our case, we will use the S3 URI as a parameter in our shell script, as the "destination path" of our backup folder. So we need to collect the S3 URI for later use.
 
To get your S3 URI, follow these steps:
  • 5.1 – Open your AWS Management Console and, at the top of the page, use the search field to look for "S3". The S3 home page will be shown and all buckets previously created in your account will be listed. Find the bucket with the same name you defined in the CloudFormation template (Step 2).
  • 5.2 – Click on your bucket and then click the "Create folder" button.
  • 5.3 – In the "Folder name" field, type "jellyfin_backup" and then click the "Create folder" button.
  • 5.4 – After creating the "jellyfin_backup" folder, select it using the check option (left side) and then click the "Copy S3 URI" button. Paste the value into a new text file and save it in a safe place on your machine.
    [Image: attachment.php?aid=646]
 
6 – Set up your Linux environment with the AWS CLI
After finishing all the cloud configuration, we'll now create and configure a local bash script that runs automatically and uploads the Jellyfin files to the AWS cloud. For that, we will use a cron job and the AWS CLI. The first step is to install and configure the AWS CLI in your Linux environment.
  • 6.1 – To install the AWS CLI in your Linux environment, follow all the steps described here: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
  • 6.2 – After installing the AWS CLI, we'll set it up with your user credentials. To do that, run the following command:
    sudo aws configure
  • 6.3 – When the system asks for your "AWS Access Key ID", enter the value collected in step 4.7 (consult your text file to get your access key). After typing it in the terminal, press <Enter>.
  • 6.4 – When the system asks for your "AWS Secret Access Key", enter the value collected in step 4.7 (consult your text file to get your secret access key). After typing it in the terminal, press <Enter>.
  • 6.5 – When the system asks for your "Default region name", press <Enter> (leave it empty).
  • 6.6 – When the system asks for your "Default output format", type "text" and press <Enter>.
 
Now your AWS CLI is ready to use.
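
To confirm that the credentials were saved correctly, you can optionally run a quick check. We use "sudo" here because "aws configure" above was also run with "sudo", so the credentials live in root's home directory; the second command assumes the IAM policy created in Step 3 allows listing the bucket:

    # Should print your account ID and the JellyfinUserBackup user ARN
    sudo aws sts get-caller-identity

    # Should list the backup folder (replace with the S3 URI collected in step 5.4)
    sudo aws s3 ls <YOUR_S3_URI_PATH>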
 
 
7 – Create a shell script to run a local backup and sync it to AWS
After installing and configuring the AWS CLI in your Linux environment, we need to create a shell script file to perform the backup. I'll provide a simple sample in this post (attachments section), but you can extend this file with your own preferences.
 
To do that, run the commands below in your Linux terminal.
  1. sudo mkdir /etc/scripts/
  2. sudo chmod 777 /etc/scripts/
  3. sudo nano /etc/scripts/backup_jellyfin_on_aws_script.sh
 
Copy and paste the sample script (attached to this post as "backup_jellyfin_on_aws_script.sh") into your terminal, paying attention to these important notes:
  • Important note 1: this sample uses the "origin" paths of my own instance. Before using it, check whether the "origin" paths match your instance. Basically, we are covering the "Resources" and "Data" paths.
  • Important note 2: either way, you will need to replace the "destination path" with the S3 URI collected in step 5.4. It is marked in the script as "<YOUR_S3_URI_PATH>".
  • Important note 3: here I protected two Jellyfin instance folders, "Resources" and "Data". You can extend the protected directories using the same logic, and you can also protect your media content directories: just add more "aws sync" commands to the script file. (A minimal sketch of such a script is shown after the screenshot below.)
    [Image: attachment.php?aid=644]
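
For reference, here is a minimal sketch of what such a script can look like. The origin paths below are only assumptions based on a typical Linux package install of Jellyfin; the attached sample and your own instance may use different paths, so adjust them (and the placeholder S3 URI) accordingly:

    #!/bin/bash
    # Minimal sketch of a Jellyfin backup script (adjust paths to your instance).
    # "aws s3 sync" only uploads new or changed files, so repeated runs stay cheap.

    DESTINATION="<YOUR_S3_URI_PATH>"   # paste the S3 URI collected in step 5.4

    # Origin paths are assumptions (typical Debian/Ubuntu package install):
    aws s3 sync /etc/jellyfin "$DESTINATION/config"
    aws s3 sync /var/lib/jellyfin "$DESTINATION/data"

    # To also protect your media files, add more "aws s3 sync" lines, for example:
    # aws s3 sync /path/to/your/media "$DESTINATION/media"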


Finally, after replacing the "origin" and "destination" paths, save the script file in your Linux environment. We suggest you do not change the path where the file is saved ("/etc/scripts/") and do not change the file name either ("backup_jellyfin_on_aws_script.sh").
 
  • Apply read/write/execute permissions to the script file, using this command in your Linux terminal:
            sudo chmod 777 /etc/scripts/backup_jellyfin_on_aws_script.sh
 
 
8 – Last step: configure crontab to run your local script every day.
 
  • To set up crontab to run your script every day, type the following command in your Linux terminal:
    sudo crontab -e
 
The crontab configuration will open for editing. Use the sample attached here as "crontab sample.txt" (copy and paste) to schedule the script from Step 7 of this tutorial to run automatically. In this case, the script is configured to run every day at 05:00 AM, but you can choose your preferred time. To change it, edit the first parameters of the line (minute, hour, day of week, etc.). A sketch of the line is shown after the screenshot below.

[Image: attachment.php?aid=643]
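
For reference, the scheduled line in "crontab sample.txt" looks roughly like this (a sketch; the log redirection is only a suggestion):

    # minute hour day-of-month month day-of-week  command
    0 5 * * * /bin/bash /etc/scripts/backup_jellyfin_on_aws_script.sh >> /var/log/jellyfin_backup.log 2>&1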

After copying and pasting the crontab example into your terminal, save the file and restart your machine.
To check whether the backups are running correctly, log in to your AWS Management Console, open the S3 service, open your bucket and navigate through the folders. You can also download any file or folder: check (select) the folders you want to download and use the download buttons.
 
  • Important note: by default, the S3 bucket created in "Step 3" has a lifecycle policy to save money. In summary, this policy moves all files and folders older than 5 days to the "Glacier" storage tier.
 
This tier is recommended for backup files with infrequent access. So, if one day you need to download your files, you will first need to change the file's "storage tier" (restore it). You can learn how by reading this article: https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html
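
Just as an example (not the exact commands from the attachment), restoring a single archived object back to a readable state can be done with the AWS CLI roughly like this; the bucket name, object key and retention days below are placeholders:

    # Request a temporary restore of one archived object (kept available for 7 days)
    aws s3api restore-object \
        --bucket myjellyfinbucket7856 \
        --key jellyfin_backup/data/somefile.db \
        --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

    # Check whether the restore has finished
    aws s3api head-object --bucket myjellyfinbucket7856 --key jellyfin_backup/data/somefile.db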

Let me know if this solution works for you!