Labels: kind/feature (A new feature), need/triage (Needs initial labeling and prioritization)
Checklist
- My issue is specific & actionable.
- I am not suggesting a protocol enhancement.
- I have searched on the issue tracker for my issue.
Description
Right now, moving to a new repo is tedious; the user needs to manually (a rough sketch with existing commands follows the list):
- move pins
- move mfs
- preload local blocks to the other node
- (optionally) move identity, config
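For illustration, the manual version of this today looks roughly like the sketch below. It assumes the old node is on the default RPC port and the new node is reachable on /ip4/127.0.0.1/tcp/5002; that address, the /restored-mfs destination, and driving the second node via the global --api flag are just assumptions for this example, not part of the proposal.
# 1. Move recursive pins: export each pinned DAG as a CAR and import it on the
#    new node (ipfs dag import pins the roots by default). Pin names are lost.
$ ipfs pin ls --type=recursive --quiet | while read cid; do ipfs dag export "$cid" | ipfs --api /ip4/127.0.0.1/tcp/5002 dag import; done
# 2. Move MFS: export the DAG under the MFS root and re-attach it on the new
#    node (this force-preloads everything; lazy MFS entries whose blocks are
#    not local will make the export fail).
$ root=$(ipfs files stat --hash /)
$ ipfs dag export "$root" | ipfs --api /ip4/127.0.0.1/tcp/5002 dag import --pin-roots=false
$ ipfs --api /ip4/127.0.0.1/tcp/5002 files cp /ipfs/"$root" /restored-mfs
# 3. (optionally) config and identity: stop both daemons and copy the relevant
#    files out of the old $IPFS_PATH by hand.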
Examples:
- user is running the legacy badgerv1 datastore and would like to move all their data to a fresh node
- user is running ipfs-cluster without GC and would like to manually do a spring cleaning and drop unpinned blocks by moving only the currently pinned data to the new repo
Solution
It would be good to have a CLI command for this that takes a target repo path or RPC address and performs export/import between two Kubo instances.
Copy to remote RPC (online, over HTTP)
$ ipfs repo copy /ip4/127.0.0.1/tcp/5002
Copy to local repo (offline, direct filesystem access)
$ ipfs repo copy /path/to/new/repo
And the ability to specify what to include/exclude (by default we don't migrate config):
$ ipfs repo copy --pins=true --mfs=true --config=false --identity=false --all-blocks=true
- pins: migrates ipfs pin ls --names -t recursive (migrate pins and their name/type + preload)
- mfs: migrates ipfs files ls / -l (lazy, no preload, or walk and only preload things that were already preloaded on the old node)
- all-blocks: migrates (preloads) all blocks from the blockstore (ipfs refs local)
- ipns: (false by default) IPNS keys and last versions of published records
- config: (false by default) config TBD (all of it? only peering/routing/remote pinning services?)
- identity: (false by default) warn that it's dangerous
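With flags like these, the two scenarios from the description could look something like the following (hypothetical invocations of the proposed command, with default values written out explicitly):
# badgerv1 repo -> fresh local repo: move everything except config/identity
$ ipfs repo copy /path/to/new/repo --pins=true --mfs=true --all-blocks=true
# ipfs-cluster spring cleaning: copy only currently pinned data to the new
# node over RPC, leaving unpinned blocks behind
$ ipfs repo copy /ip4/127.0.0.1/tcp/5002 --pins=true --mfs=true --all-blocks=false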