DEV Community

Twilight

your media files have an expiration date

A photo uploaded to your app today gets views. The same photo from two years ago sits in storage, loaded maybe once when someone scrolls back through an old profile. You pay the same rate for both.

I have seen this pattern in every media-heavy application I have worked on. The hot data is a thin slice. The cold data grows without stopping. If you treat all objects the same, your storage bill reflects the worst case: premium pricing for data nobody touches.

Tigris gives you two mechanisms to deal with this. You can transition old objects to cheaper storage tiers, or you can expire them outright. Both happen on a schedule you define. This post covers when and how to use each one.

how media access decays

Think about a social media feed. A user uploads a photo. For the first week, that photo appears in followers' feeds. It loads fast because your CDN caches it. After a month, the photo surfaces only when someone visits the user's profile. After a year, it loads during an occasional deep scroll or a search result.

Log files follow a similar curve. You need today's logs for debugging. Last month's logs exist for compliance. Last year's logs exist because nobody deleted them.

The access pattern is predictable: frequency drops off a cliff after a short window. Your storage costs, if you do nothing, stay flat.

the four storage tiers

Tigris stores your data globally across multiple regions by default. Within that global infrastructure, you get four tiers to choose from.

Standard is the default. High durability, low latency, what you expect from object storage. Use it for anything that loads in response to a user action.

Infrequent Access (STANDARD_IA) costs less to store but charges for retrieval. The data remains available with the same low latency. Use it when you need the data to load fast but you do not expect frequent requests: profile photos older than 30 days, generated reports, backup snapshots.

Archive (GLACIER) has the lowest storage cost but requires restoration before access. Restoration takes about an hour. Use it for data you must retain but almost never read: old backups, compliance logs, completed project archives.

Archive with Instant Retrieval (GLACIER_IR) sits between Infrequent Access and Archive. You get archive-level pricing with immediate retrieval. Use it when you want cheap storage but cannot wait an hour for restoration: quarterly reports, seasonal content that resurfaces, audit records that regulators might request on short notice.

Here is the tradeoff in plain terms: cheaper tiers cost less per GB stored but more per GB retrieved. If you transition data to a cheap tier and then read it every day, you lose money. The tiers pay off when your access pattern matches the pricing.
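To see roughly where the break-even sits, here is a back-of-the-envelope sketch. The prices are placeholder numbers I made up for illustration, not Tigris's actual rates; plug in the current pricing before trusting the result.

```python
# Break-even sketch for Standard vs Infrequent Access.
# All prices below are HYPOTHETICAL placeholders.
STANDARD_PER_GB = 0.020       # monthly storage, $/GB (assumed)
IA_PER_GB = 0.010             # monthly storage, $/GB (assumed)
IA_RETRIEVAL_PER_GB = 0.010   # charge per GB read back from IA (assumed)

def monthly_cost_standard(stored_gb, read_gb):
    # Standard: storage only, no retrieval surcharge.
    return stored_gb * STANDARD_PER_GB

def monthly_cost_ia(stored_gb, read_gb):
    # Infrequent Access: cheaper storage plus a per-read charge.
    return stored_gb * IA_PER_GB + read_gb * IA_RETRIEVAL_PER_GB

# 100 GB, half of it read once a month: IA wins.
assert monthly_cost_ia(100, 50) < monthly_cost_standard(100, 50)

# The same 100 GB read in full every day: IA loses badly.
assert monthly_cost_ia(100, 100 * 30) > monthly_cost_standard(100, 100 * 30)
```

The shape of the curve is what matters: IA's cost grows with reads while Standard's stays flat, so there is always a read volume past which the cheap tier stops being cheap.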

transitioning objects between tiers

You configure lifecycle rules at the bucket level. A rule tells Tigris: after X days, move objects from their current tier to a specified tier.

You can trigger the transition by age (a number of days after the object's last modified date) or on a specific calendar date.

transitioning after 30 days

This is the common pattern. Objects start in Standard. After 30 days of no modification, they move to Infrequent Access.

Create a file called lifecycle.json:

```json
{
  "Rules": [
    {
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        }
      ]
    }
  ]
}
```

Apply it:

```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-media-bucket \
  --lifecycle-configuration file://lifecycle.json
```

Objects in this bucket will transition to Infrequent Access after 30 days. The transition happens at UTC midnight.

transitioning to archive at a fixed date

Some data has a known shelf life. Annual reports from 2024 should move to archive storage after the year ends.

```json
{
  "Rules": [
    {
      "Status": "Enabled",
      "Transitions": [
        {
          "Date": "2025-12-31T00:00:00Z",
          "StorageClass": "GLACIER"
        }
      ]
    }
  ]
}
```

Same apply command. Every object in the bucket moves to Archive on December 31, 2025.

combining transition and expiration

A single lifecycle rule can include both transition and expiration. The one-rule-per-bucket limit refers to rule objects, not actions. You can transition objects at 30 days and expire them at 365 days in the same rule.

```json
{
  "Rules": [
    {
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        }
      ],
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
```

Objects in this bucket move to Infrequent Access after 30 days and get deleted after 365 days.

expiring objects with TTL

Some data has no business existing after a certain point. Temporary upload tokens, generated thumbnails you regenerate on demand, session recordings older than 90 days. Expiration rules delete objects on a schedule so you do not have to.

The configuration mirrors transitions:

```json
{
  "Rules": [
    {
      "Status": "Enabled",
      "Expiration": {
        "Days": 30
      }
    }
  ]
}
```

Every object in this bucket gets deleted 30 days after its last modified date. The expiration time rounds to UTC midnight.

You can also set a fixed date:

```json
{
  "Rules": [
    {
      "Status": "Enabled",
      "Expiration": {
        "Date": "2025-12-31T00:00:00Z"
      }
    }
  ]
}
```

This deletes all objects in the bucket at the end of 2025. Useful for event-specific buckets: create a conference photo bucket, set a fixed expiration date, and the uploads disappear after the event wraps.

setting the tier on upload

You do not have to wait for lifecycle rules. You can set the storage tier at upload time using the x-amz-storage-class header or the --storage-class flag.

With the AWS CLI:

```shell
aws s3api put-object \
  --bucket my-bucket \
  --key quarterly-report-q3.pdf \
  --body report.pdf \
  --storage-class GLACIER_IR
```

With the REST API:

```http
PUT /quarterly-report-q3.pdf HTTP/1.1
Host: my-bucket.storage.fly.tigris.dev
x-amz-storage-class: GLACIER_IR
```

This puts the object directly into Archive with Instant Retrieval, skipping Standard. Use this when you know at upload time that the data will not need frequent access.

You can also set a default tier at bucket creation time. Every object uploaded to that bucket inherits the default unless overridden by the header.

restoring objects from archive

Objects in the Archive tier are not available for GET requests. A GET on an archived object returns a 403 error. You must restore it first.

```shell
aws s3api restore-object \
  --bucket my-bucket \
  --key old-backup.tar.gz \
  --restore-request Days=3
```

The Days parameter controls how long the object stays restored. After three days in this example, the object moves back to Archive.

Check restoration status with:

```shell
aws s3api head-object \
  --bucket my-bucket \
  --key old-backup.tar.gz
```

The response includes a Restore header. During restoration, you see ongoing-request="true". When complete, the header shows the expiry date.
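If you poll restoration status from code, the Restore header takes a little parsing. Here is a minimal sketch; `parse_restore_header` is a hypothetical helper I wrote for illustration, and the header formats follow the S3 convention described above:

```python
def parse_restore_header(header):
    """Parse an S3-style Restore header from a head-object response.

    Returns (in_progress, expiry), where expiry is the raw date string
    from expiry-date="..." or None. A missing header means the object
    has no restore in flight and no active restore window.
    """
    if not header:
        return False, None
    in_progress = 'ongoing-request="true"' in header
    expiry = None
    marker = 'expiry-date="'
    start = header.find(marker)
    if start != -1:
        start += len(marker)
        expiry = header[start:header.index('"', start)]
    return in_progress, expiry

# Mid-restore: nothing to read yet.
print(parse_restore_header('ongoing-request="true"'))
# Restore complete: readable until the expiry date.
print(parse_restore_header(
    'ongoing-request="false", expiry-date="Fri, 21 Dec 2025 00:00:00 GMT"'))
```

A loop that calls head-object, feeds the header through this, and sleeps while `in_progress` is true covers most restore workflows.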

This restore step is the main reason Archive costs less. Tigris does not keep the data in a retrieval-ready state. If you need data available without a restore step, use Archive with Instant Retrieval instead.

a practical setup for a media app

Consider a photo-sharing app. Here is a tiering strategy that matches how people use the product.

Objects uploaded in the last 7 days: Standard tier. These photos appear in feeds and search results. They load often and need to load fast.

Objects between 7 and 90 days old: Infrequent Access. These photos load when someone visits a profile or clicks a shared link. The access is occasional but still needs low latency.

Objects older than 90 days: Archive with Instant Retrieval. Deep archive browsing is rare. When it happens, the user expects the photo to load without waiting. GLACIER_IR covers this case.

The current Tigris lifecycle configuration supports transitioning from Standard to one target tier per rule. To build a multi-step transition chain (Standard to IA to GLACIER_IR), you need application-level logic or multiple buckets. Most teams I have worked with pick one transition point and stop there. Moving from Standard to Infrequent Access at 30 days covers the bulk of the savings.
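If you do go the application-level route, the selection logic itself is small. A sketch using the 7- and 90-day thresholds from this section (`tier_for` is a hypothetical helper; actually moving an object would take a copy with the new storage class):

```python
from datetime import datetime, timedelta, timezone

def tier_for(last_modified, now):
    """Pick the intended storage tier from an object's age."""
    age = now - last_modified
    if age < timedelta(days=7):
        return "STANDARD"       # feed and search traffic
    if age < timedelta(days=90):
        return "STANDARD_IA"    # profile visits, shared links
    return "GLACIER_IR"         # deep scrolls, still instant

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert tier_for(now - timedelta(days=2), now) == "STANDARD"
assert tier_for(now - timedelta(days=30), now) == "STANDARD_IA"
assert tier_for(now - timedelta(days=400), now) == "GLACIER_IR"
```

You would run this periodically over your object index and re-tier anything whose current class no longer matches.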

For expiration, create a separate bucket for temporary data. Generated thumbnails, video transcodes, upload staging files. Set a 30-day TTL on that bucket and let the objects delete themselves.

the cost argument

Storage costs compound. An app that uploads 10 GB of media per day accumulates 3.65 TB per year. If all of that sits in Standard storage, you pay Standard rates on every byte.

Moving objects to Infrequent Access after 30 days means you pay Standard rates on 300 GB (30 days of uploads) and Infrequent Access rates on the remaining 3.35 TB. The savings depend on the specific pricing, but the principle holds: cheaper storage on the data you touch least.
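The arithmetic from the paragraph above, spelled out:

```python
daily_upload_gb = 10
transition_after_days = 30
days_per_year = 365

total_gb = daily_upload_gb * days_per_year             # 3650 GB, i.e. 3.65 TB
standard_gb = daily_upload_gb * transition_after_days  # newest 30 days of uploads
ia_gb = total_gb - standard_gb                         # everything older

print(standard_gb, ia_gb)  # 300 3350
```

Only the most recent month's uploads ever sit at Standard rates; the steady state is that over 90 percent of your bytes live in the cheaper tier.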

The mistake I see teams make is waiting too long to set up lifecycle rules. They treat it as an optimization to do later. Later, their storage bill is large enough that the savings from tiering would have paid for the engineering time several times over.

Set up your lifecycle rules when you create the bucket. It takes five minutes. The rule runs in the background from day one.

a few constraints to know

Tigris rounds all transition and expiration times to UTC midnight. If you set a 30-day transition, the object moves at midnight UTC on the 30th day after its last modified date.

One lifecycle rule per bucket. A rule can include both transitions and expiration, but you cannot stack multiple rules with different schedules on the same bucket.

When applied through the AWS CLI, the lifecycle configuration JSON accepts only the fields shown in the examples above; extra fields cause errors.

getting started

Create a bucket in the Tigris Dashboard at console.storage.dev. Apply a lifecycle rule. Watch your objects transition on schedule. The whole process takes less than ten minutes.

If you are already using S3-compatible tooling, the migration path is direct. The AWS CLI and SDK work with Tigris without modification. Point your endpoint at Tigris, set your credentials, and run the same commands you already know.
