feat: add load format 'prefetch_auto' for parallel mmap prefetching #7209
Conversation
Introduce a new load format 'prefetch_auto' that performs concurrent mmap with MAP_POPULATE to prefetch safetensors files into the page cache. This helps maximize storage bandwidth and improve model loading performance, especially on systems with high disk I/O capacity.

Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
Summary of Changes
Hello @BraveY, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request aims to significantly improve the cold start performance of the inference engine by optimizing the model loading process. It introduces a new `prefetch_auto` load format that leverages concurrent `mmap` with `MAP_POPULATE` to prefetch model weight files into the page cache, thereby maximizing disk bandwidth utilization during initialization. This change is expected to reduce model loading times, especially on systems with high-speed storage like NVMe drives, leading to faster Pod scaling and deployment.
Highlights
- New Load Format: Introduced a new `load_format` option called `prefetch_auto`.
- Parallel Prefetching: Implemented parallel file prefetching using `mmap` with the `MAP_POPULATE` flag to load weight files into the page cache concurrently.
- Performance Improvement: Integrated the new prefetching logic into the model loading process to accelerate cold start performance, particularly for large models on fast storage.
Code Review
This pull request introduces a `prefetch_auto` load format to accelerate model loading using `mmap` with `MAP_POPULATE`. Key areas for consideration include the platform-specific nature of `MAP_POPULATE` and handling potential zero-size weight files.
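On the platform-specificity point: `MAP_POPULATE` exists only on Linux. One possible portable fallback (a sketch of an alternative, not something this PR implements) is `os.posix_fadvise` with `POSIX_FADV_WILLNEED`, which hints the kernel to read a file ahead without mapping it at all:

```python
import os


def prefetch_with_fadvise(path: str) -> bool:
    """Ask the kernel to read the whole file ahead without mapping it.

    Returns False on platforms without posix_fadvise (e.g. Windows, macOS),
    where the caller would need a different strategy.
    """
    if not hasattr(os, "posix_fadvise"):
        return False
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, 0, os.path.getsize(path), os.POSIX_FADV_WILLNEED)
        return True
    finally:
        os.close(fd)
```

Unlike `MAP_POPULATE`, `POSIX_FADV_WILLNEED` is asynchronous: the call returns immediately and the kernel reads ahead in the background, so a caller that needs the data resident before proceeding would still have to wait or touch the pages.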
```python
with open(st_file, "rb") as f:
    file_size = os.path.getsize(st_file)
    mm = mmap.mmap(
        fileno=f.fileno(),
        length=file_size,
        prot=mmap.PROT_READ,
        flags=mmap.MAP_SHARED | mmap.MAP_POPULATE,
    )
    mm.close()
```
Consider handling the edge case where `file_size` is 0 to prevent potential errors with `mmap.mmap`. An empty file check could be added.
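The reviewer's concern is concrete: CPython's `mmap` refuses to map a zero-length file and raises `ValueError("cannot mmap an empty file")`. A quick self-contained demonstration (the `can_mmap` helper is illustrative, not part of the PR):

```python
import mmap
import os
import tempfile


def can_mmap(path: str) -> bool:
    """True if the file can be memory-mapped; empty files raise ValueError."""
    with open(path, "rb") as f:
        try:
            mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ).close()
            return True
        except ValueError:  # "cannot mmap an empty file"
            return False


fd, path = tempfile.mkstemp()
os.close(fd)
print(can_mmap(path))  # -> False: freshly created file is empty
with open(path, "wb") as f:
    f.write(b"x")
print(can_mmap(path))  # -> True: any non-zero length maps fine
```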
Suggested change:

```python
def _mmap_single_file(st_file: str) -> None:
    file_size = os.path.getsize(st_file)
    if file_size == 0:
        logger.info(f"Skipping mmap for empty file: {st_file}")
        return
    with open(st_file, "rb") as f:
        mm = mmap.mmap(
            fileno=f.fileno(),
            length=file_size,
            prot=mmap.PROT_READ,
            flags=mmap.MAP_SHARED | mmap.MAP_POPULATE,
        )
        mm.close()
```
…ight Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
I am curious whether this optimized time includes moving the model from CPU memory to GPU memory, or just the time from disk to CPU memory?
Update on the test results comparing this PR with @xianzhiT's PR:

- Multi-thread (@xianzhiT's PR): loading time 1m24s (15:19:24 -> 15:20:48).
- This PR with barrier (commit 4e31c43): loading time 1m35s (15:28:34 -> 15:30:09).
- This PR without barrier (commit 1ae9d55): loading time 1m25s (16:25:03 -> 16:26:28).

We obtained the same performance boost. Excellent work!
Yes, it includes the time to move the model from CPU memory to GPU memory.
Good idea. I've also noticed in practice that if the model weights are in the page cache, the startup is much faster. This PR seems more user-friendly than #7277. |
```python
def prefetch_weight_files(hf_weights_files: List[str]) -> None:
    """Prefetch and mmap weight files in parallel for the current distributed rank."""
    world_size = 1
    rank = 0
    if torch.distributed.is_initialized():
        world_size = torch.distributed.get_world_size()
        rank = torch.distributed.get_rank()
    local_files = hf_weights_files[rank::world_size]
    mmap_files_concurrently(local_files)
```
@BraveY I've tested this in my environment and it turns out to be very useful on a single node. But when I launched distributed serving, it did not work as well. So in distributed serving cases, I think it should prefetch all weights on each node?
Confirmed, I've identified this issue as well. We need to prefetch all weights on each node. I'll implement the fix shortly.
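The issue discussed above is that sharding by global rank spreads each file to only one node, so the other nodes never get it into their page cache. One way the fix could look (a hypothetical sketch; the helper name `shard_files_for_node` and the `local_world_size` parameter are assumptions, not the PR's actual API) is to shard only among the ranks of a single node, so that every node's ranks collectively prefetch every file:

```python
from typing import List


def shard_files_for_node(files: List[str], global_rank: int,
                         local_world_size: int) -> List[str]:
    """Split the weight files among the ranks of ONE node.

    Each node's local ranks together cover every file, so after
    prefetching, every node has all weights in its page cache.
    """
    local_rank = global_rank % local_world_size
    return files[local_rank::local_world_size]
```

With 8 files, 2 nodes, and 4 GPUs per node, ranks 0-3 (node 0) and ranks 4-7 (node 1) each cover all 8 files, instead of each file being prefetched on only one of the two nodes.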
Signed-off-by: Yang Kaiyong <yangkaiyong.yky@antgroup.com>
Motivation
The purpose of this PR is to optimize the cold start performance of the inference engine when model weights are already stored on the local disk. The current model loading approach fails to fully utilize available disk bandwidth during initial startup, resulting in suboptimal loading speeds. By implementing a disk bandwidth-optimized loading strategy for cold start scenarios, we can significantly accelerate the engine's initialization process. This improvement will directly enhance Pod scaling efficiency and deployment speed in production environments, enabling faster resource provisioning and workload handling capabilities.
Modifications
Introduce a new load format 'prefetch_auto' that performs concurrent mmap with MAP_POPULATE to prefetch safetensors files into the page cache. This helps maximize storage bandwidth and improve model loading performance, especially on systems with high disk I/O capacity.
Checklist
Test Plan
Our test env:
lsblk result:
The loaded model is DeepSeek-R1, the weight file size: 642GB. All the weight file is mounted in /data. Three 3.5 TB NVMe Samsung MZQL23T8HCLS-00B7C drives are configured in RAID 0 with LVM, mounted to the /data directory, providing a total storage capacity of 10 TB.
We set tp=8 in the start command.

Test Result
Time Reduction: the prefetch loader reduces loading time from 384s to 96s, a 75% reduction (4x speedup).