Description
Having a common interface for reading and writing blob data across all cloud providers is very useful; the addition of the io/fs package in Go 1.16 means we can integrate even further with the standard library.
Possible Solution
A couple of changes would be needed for *Bucket to satisfy fs.FS:

```go
type Bucket struct {
	// ctx is used by (*Bucket).Open, which has no context parameter of its own.
	ctx context.Context
	// ...
}

func applyPrefixParam(ctx context.Context, opener BucketURLOpener, u *url.URL) (*Bucket, error) {
	// ...
	// Store the context before returning. This doesn't seem like the most
	// elegant approach, but I can't think of an alternative, so it would
	// have to be improved. It is only used by (*Bucket).Open.
	bucket.ctx = ctx
	return bucket, nil
}

func (b *Bucket) Open(name string) (fs.File, error) {
	// See above for how the context gets passed in.
	// How the *ReaderOptions can be passed in is a bit harder.
	return b.NewReader(b.ctx, name, nil)
}

func (r *Reader) Stat() (fs.FileInfo, error) {
	// A FileInfo implementation is completely missing and would have to be added.
}

// (*Reader).Read and (*Reader).Close are already implemented.
```
The biggest change would be adding an fs.FileInfo implementation for (*Reader).Stat, as well as passing in the context and *ReaderOptions.
Alternatives and References
- bishtawi/s3fs, jszwec/s3fs, and other similar projects provide an fs.FS wrapper for S3-compatible backends.
- golang.org/x/perf/storage/fs/gcs provides fs.FS functionality for Google Cloud Storage.
Additional context
At the moment the fs.FS interface is read-only, which means nothing can be done for writing files through it.