BP5 strategies for large I/O #3679

@liangwang0734

Description

Hi ADIOS team,

I'd appreciate some suggestions on properly configuring BP5 for our application, especially to reduce memory consumption:

  • We use ADIOS2 for file I/O only, so no streaming, interaction, or code coupling is needed.
  • It's an MPI program running on 20k cores or more.
  • Every N (say, 5000) steps we write about ten 1D arrays, each of ~10^11 elements or larger, to a file.
  • Every, say, 1000 steps we write a 3D array of shape (20, 1e5, 1e5) or even bigger to a file.
  • Right now, for all I/O we set the substreams parameter to 0, hoping to get one data file per node (see the sketch after this list).
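
For reference, here is a minimal sketch of how our setup looks with the ADIOS2 C++ API. The IO name, variable name, and sizes are placeholders, and I am assuming BP5's `NumAggregators` parameter is what "substreams" maps to here:

```cpp
#include <adios2.h>
#include <mpi.h>

#include <cstddef>
#include <vector>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, nproc = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    adios2::ADIOS adios(MPI_COMM_WORLD);
    adios2::IO io = adios.DeclareIO("FieldOutput"); // placeholder IO name
    io.SetEngine("BP5");
    // 0 = one aggregator (and hence roughly one data file) per compute node
    io.SetParameters({{"NumAggregators", "0"}});

    // Global 1D array of ~1e11 elements, block-decomposed across ranks;
    // remainder handling is omitted for brevity.
    const std::size_t nGlobal = 100000000000ULL;
    const std::size_t nLocal = nGlobal / static_cast<std::size_t>(nproc);
    const std::size_t offset = static_cast<std::size_t>(rank) * nLocal;
    auto var = io.DefineVariable<double>("field1d", {nGlobal}, {offset}, {nLocal});

    std::vector<double> data(nLocal, 1.0); // stand-in for the real field

    adios2::Engine writer = io.Open("field1d.bp", adios2::Mode::Write);
    writer.BeginStep();
    writer.Put(var, data.data()); // deferred: buffered internally until EndStep
    writer.EndStep();
    writer.Close();

    MPI_Finalize();
    return 0;
}
```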

We are happy with the I/O speed so far, but in some extreme cases the first (1D-array) write fails with out-of-memory errors.

Could you advise on strategies for choosing the number of substreams, aggregators, buffer size, shared memory size, etc., based on the machine's memory and the core count per node?
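
For concreteness, this is the kind of parameter block I have in mind, extending the sketch above. The names are taken from the BP5 documentation as I understand it, and the values are placeholders rather than anything we have validated:

```cpp
// Sketch only: the SetParameters call above might grow into something like
// this (placeholder values, not recommendations).
io.SetParameters({
    {"AggregationType", "TwoLevelShm"}, // node-level shared-memory aggregation
    {"NumAggregators",  "0"},           // 0 = one aggregator per compute node
    {"NumSubFiles",     "0"},           // 0 = one data file per aggregator
    {"BufferChunkSize", "134217728"},   // buffer in 128 MiB chunks
    {"MaxShmSize",      "4294967296"}   // 4 GiB shared-memory cap per node
});
```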

I understand my description might be vague right now, but I can add more details to help with your advice.

Thank you again for making this great piece of software.
