Store Your Files on S3 Using the Ruby Shrine Gem (Part 1 of our series) details how to set up the Shrine gem and configure it for uploading files to S3. Our goal for the series is to upload files as a background process, and to check the files for correctness as a security measure. Follow our step-by-step instructions right here.
It’s time to tackle smart Ruby file uploads to S3 with our blog series: Store Your Files on S3 Using the Ruby Shrine Gem. This post, Part 1, covers your initial setup and configuration; Part 2, Direct File Uploads, outlines the configuration for direct uploads to S3; and Part 3, Uploading Files from a Remote URL, covers uploads using a regular remote URL.
Shrine is a Ruby gem for handling file uploads, inspired by both the Refile and CarrierWave gems. Shrine’s biggest advantage over Refile and CarrierWave is that it covers many file-uploading needs its competitors handle only partially. These include background file processing (moving files between temporary and permanent storage, as well as deleting from both), Amazon S3 integration with Ruby, direct uploads, fetching files from a remote URL, validation by MIME type, and many, many more. All of these features are organized into plugins: to apply a specific feature, you need only enable the appropriate plugin and add a simple configuration.
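As a quick illustration of the plugin system, here is a minimal sketch of an uploader (the `ImageUploader` class, its plugin choices and the size limit are hypothetical, purely to show the pattern; we build the real uploader for this series later in this post):

class ImageUploader < Shrine
  plugin :validation_helpers                      # adds validation macros such as validate_max_size
  plugin :remote_url, max_size: 20 * 1024 * 1024  # allows fetching files from a remote URL, capped at 20 MB
end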
Note: We want to use Amazon S3 (with Shrine) for file storage as it is one of the most popular cloud storage solutions in the current market, used by businesses large and small around the world.
For the purposes of this article, we want to achieve the following behaviors using the Shrine gem:
1. Files should be uploaded to AWS S3 for staging and production.
2. Files should be uploaded directly to S3 from the client application (after receiving a special pre-signed URL from the API).
3. Files should be uploaded to S3 after receiving a regular remote URL for the file from the client app.
4. Uploads from remote URLs should run as a background process, and only after validation rules pass (i.e., the URL is correct and the file is available).
Setting up Shrine for our Ruby application - uploading files to Amazon S3
Installing the Shrine gem
Let’s start with the gem installation by adding the Shrine gem, along with the AWS SDK for S3, to our Gemfile:
gem 'shrine'
gem 'aws-sdk-s3'
After we run bundle install, we will be ready to use the power of Shrine.
Shrine configuration: storage & plugins
We configure Shrine in an initializer. Here we can select the proper storage types provided by Shrine (the places where files will be stored) and enable our chosen Shrine plugins globally.
Our goal is uploading files to S3 for our production and staging environments. However, this raises the question: which storage types should we use for development and testing? For these environments, storing files on S3 may not always be desirable.
Shrine ships with storage implementations for the local file system and S3. By using an additional gem named `shrine-memory`, we can extend the range of possible storage types with in-memory storage. The most appropriate choice (and the answer to our question) is to use this in-memory storage for our testing environment, and the local file system for development.
We need the `shrine-memory` gem only for our testing environment, so let's add the following to our Gemfile to include it:
group :test do
  gem 'shrine-memory'
end
The configuration for the above setup should look similar to the following:
# config/initializers/shrine.rb
require 'shrine'

if Rails.env.development?
  require 'shrine/storage/file_system'

  Shrine.storages = {
    cache: Shrine::Storage::FileSystem.new('public', prefix: 'uploads/cache'),
    store: Shrine::Storage::FileSystem.new('public', prefix: 'uploads/store')
  }
elsif Rails.env.test?
  require 'shrine/storage/memory'

  Shrine.storages = {
    cache: Shrine::Storage::Memory.new,
    store: Shrine::Storage::Memory.new
  }
else
  require 'shrine/storage/s3'

  s3_options = {
    access_key_id: Rails.application.secrets.s3_access_key_id,
    secret_access_key: Rails.application.secrets.s3_secret_access_key,
    region: Rails.application.secrets.s3_region,
    bucket: Rails.application.secrets.s3_bucket
  }

  Shrine.storages = {
    cache: Shrine::Storage::S3.new(prefix: 'cache', **s3_options),
    store: Shrine::Storage::S3.new(prefix: 'store', **s3_options)
  }
end

Shrine.plugin :activerecord
Shrine supports both Sequel and ActiveRecord, so enabling one of these Object Relational Mapping (ORM) frameworks is required; in our case it's ActiveRecord. In the storages hash, the object under the :cache key is the temporary storage used for newly uploaded files, while the :store key holds the permanent storage.
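To make the cache/store distinction concrete, here is a minimal sketch of the attachment lifecycle. It assumes the Attachment model with a file attachment that we define later in this post, and the exact return values may differ slightly between Shrine versions:

attachment = Attachment.new
attachment.file = File.open('report.pdf')  # the file is immediately uploaded to the :cache storage
attachment.file.storage_key                #=> "cache"
attachment.save                            # on save, Shrine promotes the file to :store
attachment.file.storage_key                #=> "store"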
Linking the uploader
Information about our file locations and metadata needs to be stored in our database, so we will need to perform a database migration (to add the new column) to be ready for use:
class AddFileDataToAttachments < ActiveRecord::Migration[5.1] # match your app's Rails version
  def change
    add_column :attachments, :file_data, :text
  end
end
The next thing we will need to add is the uploader class:
# app/uploaders/attachment_uploader.rb
class AttachmentUploader < Shrine
  plugin :determine_mime_type
end
The `determine_mime_type` plugin determines the MIME type by analyzing the file’s actual content (rather than trusting the file extension) and stores it in the file’s metadata. Storing the MIME type is necessary for validating the type of uploaded files.
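For example, once the MIME type is stored, the uploader can validate it. Here is a minimal sketch using Shrine's validation_helpers plugin; the whitelist of allowed types is just an example, and note that the macro is named validate_mime_type_inclusion in Shrine 2 (renamed to validate_mime_type in Shrine 3):

class AttachmentUploader < Shrine
  plugin :determine_mime_type
  plugin :validation_helpers

  Attacher.validate do
    # reject anything that is not a JPEG, PNG or PDF (example whitelist)
    validate_mime_type_inclusion %w[image/jpeg image/png application/pdf]
  end
end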
Finally, with the file data column created and the uploader class added, the next step is linking the uploader with the actual model class.
# app/models/attachment.rb
class Attachment < ApplicationRecord
  include AttachmentUploader.attachment(:file)
end
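With the model wired up, an attachment can be used like this (a quick sketch; photo.jpg is a hypothetical file):

attachment = Attachment.create(file: File.open('photo.jpg'))
attachment.file_url                    # URL of the stored file (a local path or an S3 URL)
attachment.file.metadata['mime_type']  #=> "image/jpeg", thanks to determine_mime_type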
Stay tuned for the next part of our Store Your Files on S3 Using the Ruby Shrine Gem series, with Part 2: Direct File Uploads.
iRonin can help with all your Ruby and Amazon S3 development needs, whether it’s configuring your uploads to be more efficient, allowing your web app to handle more concurrent connections without overloading, or developing a whole new application. Get in touch with us today to find out how we can help your business with web application development using Amazon S3.