https://github.com/sgl-project/sglang/blob/7d0edf3caed4e10b9e2b4217f34a1a6700d32b74/python/sglang/srt/lora/lora_manager.py#L139

We should have a more robust check here. @jcbjcbjc could we use a `null` LoRA slot? For example:

- Reserve a slot in `LoraMemPool`, and if `lora_path` is `None` in the loop, assign `weight_indices[i]` for that request to the reserved `null` slot index. Then set the parameters for that slot (e.g. rank 0) so that applying LoRA has absolutely no effect. But the backend kernel would need to handle this case as well, right?
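
A minimal sketch of what the batch-preparation side of this could look like. All names here (`NULL_SLOT_IDX`, `LoraMemPoolSketch`, `Req`, `get_or_load_slot`) are made up for illustration and do not correspond to the actual sglang code:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

NULL_SLOT_IDX = 0  # hypothetical: slot 0 is permanently reserved for "no LoRA"


@dataclass
class Req:
    # Stand-in for the real request object; only the field we need here.
    lora_path: Optional[str]


class LoraMemPoolSketch:
    """Toy stand-in for the LoRA memory pool: maps adapter paths to slots."""

    def __init__(self, num_slots: int):
        # Slot 0 is the reserved null slot; real adapters start at slot 1.
        self._path_to_slot: Dict[str, int] = {}
        self._next_free = 1
        self._num_slots = num_slots

    def get_or_load_slot(self, path: str) -> int:
        # Reuse the slot already holding this adapter, or claim the next free one.
        if path not in self._path_to_slot:
            if self._next_free >= self._num_slots:
                raise RuntimeError("no free LoRA slots")
            self._path_to_slot[path] = self._next_free
            self._next_free += 1
        return self._path_to_slot[path]


def assign_weight_indices(reqs: List[Req], pool: LoraMemPoolSketch) -> List[int]:
    """Assign a LoRA slot index to every request in the batch.

    Requests whose lora_path is None map to the reserved null slot, whose
    weights would be rank 0 (or zero-scaled) so applying LoRA is a no-op.
    """
    weight_indices = [0] * len(reqs)
    for i, req in enumerate(reqs):
        if req.lora_path is None:
            weight_indices[i] = NULL_SLOT_IDX
        else:
            weight_indices[i] = pool.get_or_load_slot(req.lora_path)
    return weight_indices


if __name__ == "__main__":
    pool = LoraMemPoolSketch(num_slots=4)
    reqs = [Req("adapters/a"), Req(None), Req("adapters/b"), Req("adapters/a")]
    print(assign_weight_indices(reqs, pool))  # -> [1, 0, 2, 1]
```

The remaining piece is on the kernel side: the backend would have to recognize the reserved slot (e.g. via rank 0 or a zero scaling factor) and skip the LoRA contribution entirely, rather than assuming every index in `weight_indices` points at valid adapter weights.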