Feat/simplified relearning steps logic with updated FSRS training API #3867
Conversation
rslib/src/scheduler/fsrs/params.rs (outdated)
```diff
  progress_thread.join().ok();
  if let Ok(fsrs) = FSRS::new(Some(current_params)) {
      let current_rmse = fsrs.evaluate(items.clone(), |_| true)?.rmse_bins;
      let optimized_fsrs = FSRS::new(Some(&params))?;
      let optimized_rmse = optimized_fsrs.evaluate(items.clone(), |_| true)?.rmse_bins;
-     if current_rmse <= optimized_rmse {
+     if num_of_relearning_steps <= 1 && current_rmse <= optimized_rmse {
```
I realize that you have added this so that a user who already has problematic parameters can save the new parameters produced after this PR.
However, this also means that after they have obtained correct parameters and optimize again after a few days/weeks, they can get worse parameters, because the RMSE comparison is effectively disabled for them.
So, by adding `num_of_relearning_steps <= 1 &&` here, you are replacing a one-time issue with a recurrent issue. IMO, a better solution is to revert this line and advise a user who complains about S increasing after a failure to clear their FSRS parameters field and optimize again.
Considering that the issue is difficult to detect, manual intervention (as I suggested above) is likely to be difficult. So, here's another solution. (Please check whether the `let current_fsrs = …` part is correct or not.)
```rust
if current_rmse <= optimized_rmse {
    if num_of_relearning_steps <= 1 {
        params = current_params.to_vec();
    } else {
        let current_fsrs = FSRS::new(Some(&current_params))?;
        let memory_state = MemoryState {
            stability: 1.0,
            difficulty: 1.0,
        };
        let s_fail = current_fsrs
            .next_states(Some(memory_state), 0.9, 2)?
            .again;
        let mut s_short_term = s_fail.memory;
        for _ in 0..num_of_relearning_steps {
            s_short_term = current_fsrs
                .next_states(Some(s_short_term), 0.9, 0)?
                .good
                .memory;
        }
        if s_short_term.stability < memory_state.stability {
            params = current_params.to_vec();
        }
    }
}
```
The idea is that if the current parameters have better RMSE and
- R steps ≤ 1, then we simply keep the current parameters;
- R steps > 1, then we check whether the weird behaviour occurs with the current parameters. If it doesn't, we keep the current parameters. If it does, we keep the new parameters (even though they have worse RMSE). A numeric sketch of the quantity this check probes follows below.
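For intuition, here is a minimal, self-contained sketch of that quantity, assuming the FSRS-5 short-term stability formula S′ = S · e^(w17·(G−3+w18)), under which each Good relearning step (G = 3) multiplies stability by e^(w17·w18). All numbers and the helper name are made up for illustration, not taken from the PR:

```rust
// Hypothetical illustration (not the PR's code): assuming S' = S * exp(w17 * (G - 3 + w18)),
// each "Good" (G = 3) relearning step multiplies stability by exp(w17 * w18).
fn stability_after_relearning(s_fail: f64, w17: f64, w18: f64, steps: u32) -> f64 {
    s_fail * (w17 * w18 * steps as f64).exp()
}

fn main() {
    let pre_lapse = 10.0; // pre-lapse stability in days (made-up value)
    let s_fail = 4.0; // post-lapse stability right after "Again" (made-up value)
    let (w17, w18) = (0.6, 0.8); // made-up parameter values

    for steps in 1..=3 {
        let s = stability_after_relearning(s_fail, w17, w18, steps);
        let verdict = if s > pre_lapse {
            "exceeds pre-lapse S (the weird behaviour)"
        } else {
            "ok"
        };
        println!("{steps} relearning step(s): S = {s:.2} -> {verdict}");
    }
}
```

With these made-up values, one step is harmless, but two or more steps already lift stability past the pre-lapse value, which is exactly the case the suggested check catches.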
Wait a minute...
With the updated FSRS, the weird behaviour won't occur even if the user is still using the old parameters, because the `parameter_clipper` will clamp the parameters during scheduling, right? So we don't need this check and can simply revert to the old one, right?
```rust
if current_rmse <= optimized_rmse {
    params = current_params.to_vec();
}
```
The extra clipper is only applied when `num_relearning_steps > 1`.
You can check the details here: https://github.com/open-spaced-repetition/fsrs-rs/pull/297/files
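(A hedged sketch of the shape this takes on the training side; `clip_for_relearning` is a hypothetical name, not fsrs-rs's actual API — see the linked PR for the real code.)

```rust
// Hypothetical shape of the optimizer-side gating (not fsrs-rs's real code):
// the extra clamp runs only when the deck has two or more relearning steps,
// information that only the optimizer sees.
if num_relearning_steps > 1 {
    params = clip_for_relearning(params, num_relearning_steps); // hypothetical helper
}
```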
Yes, but you didn't answer my question. Why do we need to change this line?
Your previous guess is correct. If `num_relearning_steps` is greater than one, the PR's design will always replace the old params with the new params, to avoid sticking with problematic parameters.
But the old parameters won't cause the problem after the FSRS update, because the scheduler will clamp the PLS (post-lapse stability), right? (Assuming that the clamping works in the scheduler too, not just the optimizer.)
The model doesn't know the `num_relearning_steps`; only the optimizer knows it. So the extra clipper only works during optimization.
Ok then, what do you think about my suggestion in #3867 (comment)
LGTM, I updated the PR.
LGTM except for one issue I described above.
I'm getting a bit lost, can you guys explain how FSRS will behave once this and open-spaced-repetition/fsrs-rs#297 are merged?
If you have two or more relearning steps, the FSRS optimizer will clamp `w[17]` and `w[18]` to ensure that the post-lapse stability after relearning will not exceed the pre-lapse stability.
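In symbols (a sketch under the same FSRS-5 assumption as above, not the actual fsrs-rs clamp, whose details live in the linked PR): if each Good relearning step multiplies stability by e^(w17·w18), then keeping the post-lapse stability under the pre-lapse value after n steps requires s_fail · e^(n·w17·w18) ≤ s_pre, i.e. w17 · w18 ≤ ln(s_pre / s_fail) / n. A hypothetical clamp along those lines:

```rust
// Hypothetical clamp (NOT fsrs-rs's actual implementation): cap w18 so that
// n Good relearning steps cannot lift stability above the pre-lapse value.
// A real clamp would need a worst-case s_pre / s_fail ratio, since that ratio
// differs per card; a single fixed ratio is used here purely for illustration.
fn clamp_w18(w17: f64, w18: f64, n: u32, worst_ratio: f64) -> f64 {
    let max_product = worst_ratio.ln() / n as f64; // bound on w17 * w18
    w18.min(max_product / w17)
}
```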
@dae, I get this error when I try to format the code:

Maybe this channel is too old for `cargo/format/rust-toolchain.toml` (lines 1 to 4 in d8c83ac).

This nightly channel works for me:
Thanks, please feel free to make that change as part of your PR.
🚀
related PR: