A lightweight, high-performance AWS S3 client library for Go that implements the standard fs.FS
interface, allowing you to work with S3 buckets as if they were local filesystems.
Attribution: This library is extracted from Sneller's lightweight S3 client. Most of the credit goes to the Sneller team for the original implementation and design.
- Standard `fs.FS` Interface: Compatible with any Go code that accepts an `fs.FS`
- Lightweight: Minimal dependencies, focused on performance
- Range Reads: Efficient partial file reading with HTTP range requests
- Multi-part Uploads: Support for large file uploads
- Pattern Matching: Built-in glob pattern support for file listing
- Context Support: Full context cancellation support
- Lazy Loading: Optional HEAD-only requests until actual read
- Multiple Auth Methods: Environment variables, IAM roles, manual keys
Use When:
- ✅ Building applications that need to treat S3 as a filesystem (compatible with `fs.FS`)
- ✅ Requiring lightweight, minimal-dependency S3 operations
- ✅ Working with large files that benefit from range reads and multipart uploads
Not For:
- ❌ Applications requiring the full AWS SDK feature set (SQS, DynamoDB, etc.)
- ❌ Requiring advanced S3 features (bucket policies, lifecycle, object locking, versioning, etc.)
- ❌ Projects that need official AWS support and enterprise features
```go
package main

import (
	"context"
	"fmt"
	"io"

	"github.com/kelindar/s3"
	"github.com/kelindar/s3/aws"
)

func main() {
	// Create a signing key from ambient credentials
	key, err := aws.AmbientKey("s3", s3.DeriveForBucket("my-bucket"))
	if err != nil {
		panic(err)
	}

	// Create a Bucket instance
	bucket := s3.NewBucket(key, "my-bucket")

	// Upload a file
	etag, err := bucket.Write(context.Background(), "hello.txt", []byte("Hello, World!"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("Uploaded with ETag: %s\n", etag)

	// Read the file back
	file, err := bucket.Open("hello.txt")
	if err != nil {
		panic(err)
	}
	defer file.Close()

	content, err := io.ReadAll(file)
	if err != nil {
		panic(err)
	}
	fmt.Printf("Content: %s\n", content)
}
```
This is the recommended way to use the library, as it automatically discovers credentials from the environment, IAM roles, and other sources. It supports the following sources:
- Environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`)
- IAM roles (EC2, ECS, Lambda)
- AWS credentials file (`~/.aws/credentials`)
- Web identity tokens
```go
key, err := aws.AmbientKey("s3", s3.DeriveForBucket("my-bucket"))
```
If you prefer to manage credentials manually, you can derive a signing key directly:
```go
key := aws.DeriveKey(
	"",                // baseURI (empty for AWS S3)
	"your-access-key", // AWS Access Key ID
	"your-secret-key", // AWS Secret Key
	"us-east-1",       // AWS Region
	"s3",              // Service
)
```
You can customize the behavior of the bucket by setting options:
```go
bucket := s3.NewBucket(key, "my-bucket")
bucket.Client = httpClient // Optional: custom HTTP client
bucket.Lazy = true         // Optional: use HEAD instead of GET for Open()
```
If you need to work with files, the library provides standard `fs.FS` operations. Here's an example of uploading, reading, and checking for file existence:
```go
// Upload a file
etag, err := bucket.Write(context.Background(), "path/to/file.txt", []byte("content"))

// Read a file
file, err := bucket.Open("path/to/file.txt")
if err != nil {
	panic(err)
}
defer file.Close()
content, err := io.ReadAll(file)

// Check if a file exists
_, err = bucket.Open("path/to/file.txt")
if errors.Is(err, fs.ErrNotExist) {
	fmt.Println("File does not exist")
}
```
If you need to work with directories, the library provides standard `fs.ReadDirFS` operations. Here's an example of listing directory contents and walking the directory tree:
```go
// List directory contents
entries, err := fs.ReadDir(bucket, "path/to/directory")
for _, entry := range entries {
	fmt.Printf("%s (dir: %t)\n", entry.Name(), entry.IsDir())
}

// Walk the directory tree
err = fs.WalkDir(bucket, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	fmt.Printf("Found: %s\n", path)
	return nil
})
```
The library supports pattern matching via the `fsutil.WalkGlob` function. Here's an example of finding all `.txt` files:
```go
import (
	"github.com/kelindar/s3/fsutil"
)

// Find all .txt files
err := fsutil.WalkGlob(bucket, "", "*.txt", func(path string, f fs.File, err error) error {
	if err != nil {
		return err
	}
	defer f.Close()
	fmt.Printf("Text file: %s\n", path)
	return nil
})
```
If you need to read a specific range of bytes from a file, you can use the `OpenRange` method. In the following example, we read the first 1KB of a file:
```go
// Read the first 1KB of a file
reader, err := bucket.OpenRange("large-file.dat", "", 0, 1024)
if err != nil {
	panic(err)
}
defer reader.Close()
data, err := io.ReadAll(reader)
```
For large files, use the `WriteFrom` method, which handles multipart uploads automatically so you don't have to manage upload parts yourself:
```go
// Open a large file
file, err := os.Open("large-file.dat")
if err != nil {
	panic(err)
}
defer file.Close()

// Get the file size
stat, err := file.Stat()
if err != nil {
	panic(err)
}

// Upload using a multipart upload (used automatically for files > 5MB)
err = bucket.WriteFrom(context.Background(), "large-file.dat", file, stat.Size())
if err != nil {
	panic(err)
}
```
The `WriteFrom` method automatically:
- Determines optimal part size based on file size
- Uploads parts in parallel for better performance
- Handles multipart upload initialization and completion
- Respects context cancellation for upload control
You can work with subdirectories by creating a sub-filesystem using the `Sub` method. In the following example, we create a sub-filesystem for the `data/2023/` prefix and list all files within it:
```go
import "io/fs"

// Create a sub-filesystem for a specific prefix
subFS, err := bucket.Sub("data/2023/")
if err != nil {
	panic(err)
}

// Now work within that prefix
files, err := fs.ReadDir(subFS, ".")
```
The library uses the standard Go `fs` package errors. You can check for specific conditions with `errors.Is`:
```go
import (
	"errors"
	"fmt"
	"io/fs"
)

file, err := bucket.Open("nonexistent.txt")
if errors.Is(err, fs.ErrNotExist) {
	fmt.Println("File not found")
} else if errors.Is(err, fs.ErrPermission) {
	fmt.Println("Access denied")
}
```
Set environment variables for integration tests:
```shell
export AWS_TEST_BUCKET=your-test-bucket
go test ./...
```
Licensed under the Apache License, Version 2.0. See LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request.