
Conversation

L-M-Sherlock (Contributor), diff under discussion:

progress_thread.join().ok();
if let Ok(fsrs) = FSRS::new(Some(current_params)) {
    let current_rmse = fsrs.evaluate(items.clone(), |_| true)?.rmse_bins;
    let optimized_fsrs = FSRS::new(Some(&params))?;
    let optimized_rmse = optimized_fsrs.evaluate(items.clone(), |_| true)?.rmse_bins;
-   if current_rmse <= optimized_rmse {
+   if num_of_relearning_steps <= 1 && current_rmse <= optimized_rmse {
user1823 (Contributor):

I realize that you have added this so that a user who already has problematic parameters can save the new parameters produced after this PR.

However, this also means that after they have obtained correct parameters and optimize again a few days or weeks later, they can end up with worse parameters, because the RMSE comparison is effectively disabled for them.

So, by adding num_of_relearning_steps <= 1 && here, you are replacing a one-time issue with a recurrent one. IMO, a better solution is to revert this line and advise any user who complains about stability (S) increasing after a failure to clear their FSRS parameters field and optimize again.

user1823 (Contributor) commented on Mar 16, 2025:

Considering that the issue is difficult to detect, manual intervention (as I suggested above) is likely to be impractical. So, here's another solution. (Please check whether the let current_fsrs = … part is correct.)

if current_rmse <= optimized_rmse {
    if num_of_relearning_steps <= 1 {
        params = current_params.to_vec();
    } else {
        // Probe the current parameters for the weird behaviour:
        // post-lapse stability ending up above pre-lapse stability
        // after the configured same-day relearning steps.
        let current_fsrs = FSRS::new(Some(&current_params))?;
        let memory_state = MemoryState {
            stability: 1.0,
            difficulty: 1.0,
        };

        // Simulate a lapse: rate "Again" at desired retention 0.9
        // after 2 elapsed days.
        let s_fail = current_fsrs
            .next_states(Some(memory_state), 0.9, 2)?
            .again;
        let mut s_short_term = s_fail.memory;

        // Run the relearning steps, all rated "Good", on the same day.
        for _ in 0..num_of_relearning_steps {
            s_short_term = current_fsrs
                .next_states(Some(s_short_term), 0.9, 0)?
                .good
                .memory;
        }

        // Keep the current parameters only if they behave sanely.
        if s_short_term.stability < memory_state.stability {
            params = current_params.to_vec();
        }
    }
}

The idea is that, if the current parameters have a better RMSE, then:

  • if the number of relearning steps is ≤ 1, we simply keep the current parameters;
  • if it is > 1, we check whether the weird behaviour occurs with the current parameters (see the sketch below). If it doesn't, we keep the current parameters; if it does, we keep the new parameters (even though they have a worse RMSE).
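
For intuition on why this probe can trigger at all: under FSRS-5's same-day (short-term) update, each relearning step multiplies stability by exp(w17 * (G - 3 + w18)), so several "Good" steps compound the multiplier. A minimal standalone sketch; the parameter values and stabilities are hypothetical, chosen only to show the compounding:

fn main() {
    // Assumes FSRS-5's same-day update S' = S * exp(w17 * (G - 3 + w18)).
    // All numbers below are made up for illustration.
    let (w17, w18) = (0.8_f64, 1.0_f64); // deliberately large
    let pre_lapse_stability = 10.0_f64;
    let mut s = 2.0_f64; // hypothetical stability right after "Again"
    for step in 1..=3 {
        s *= (w17 * (3.0 - 3.0 + w18)).exp(); // one "Good" step (G = 3)
        println!("after relearning step {step}: S = {s:.2}");
    }
    // After 3 steps, S ≈ 22.0 > 10.0: the lapse has increased stability.
    assert!(s > pre_lapse_stability);
}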

user1823 (Contributor) commented on Mar 16, 2025:

Wait a minute...
With the updated FSRS, the weird behaviour won't occur even if the user is still using the old parameters because the parameter_clipper will clamp the parameters during scheduling, right? So, we don't need this check and can simply revert to the old one, right?

if current_rmse <= optimized_rmse {
    params = current_params.to_vec();
}

L-M-Sherlock (Contributor, Author):

The extra clipper is only applied when num_relearning_steps > 1.

You can check the details here: https://github.com/open-spaced-repetition/fsrs-rs/pull/297/files

user1823 (Contributor):

Yes, but you didn't answer my question: why do we need to change this line?

L-M-Sherlock (Contributor, Author):

Your previous guess is correct. If num_relearning_steps is greater than one, the PR's design will always replace the old parameters with the new ones, to avoid sticking with problematic parameters.

user1823 (Contributor):

But the old parameters won't cause the problem after the FSRS update, because the scheduler will clamp the PLS (post-lapse stability), right? (Assuming that the clamping works in the scheduler too, not just the optimizer.)

L-M-Sherlock (Contributor, Author):

The model doesn't know num_relearning_steps; only the optimizer does. So the extra clipper only takes effect during optimization.
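
A hedged sketch of the division of responsibility being described here; the function names and bounds are hypothetical illustrations, not the actual fsrs-rs API:

// Hypothetical illustration, not real fsrs-rs code: scheduling sees only
// the parameter vector, while training also receives num_relearning_steps,
// so a relearning-aware clamp can only run at optimization time.
fn schedule(params: &[f32]) -> f32 {
    params[0] // placeholder; no num_relearning_steps available here
}

fn train(mut params: Vec<f32>, num_relearning_steps: usize) -> Vec<f32> {
    if num_relearning_steps > 1 {
        // illustrative bounds only; the real clipper is in fsrs-rs#297
        params[17] = params[17].min(0.5);
        params[18] = params[18].min(0.5);
    }
    params
}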

user1823 (Contributor):

OK then, what do you think about my suggestion in #3867 (comment)?

L-M-Sherlock (Contributor, Author):

LGTM, I updated the PR.

user1823 (Contributor) left a review comment:

LGTM except for one issue I described above.

@Expertium (Contributor):

I'm getting a bit lost. Can you guys explain how FSRS will behave once this and open-spaced-repetition/fsrs-rs#297 are merged?

@L-M-Sherlock (Contributor, Author):

If you have two or more relearning steps, the FSRS optimizer will clamp w[17] and w[18] to ensure that the post-lapse stability after relearning will not exceed the pre-lapse stability.
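
Expressed as a condition, this is the invariant the clamp targets (a sketch assuming FSRS-5's same-day update, not the actual clipper code from fsrs-rs#297):

// Under FSRS-5, n "Good" (G = 3) same-day relearning steps multiply
// post-lapse stability by exp(n * w17 * w18). A sketch of the invariant;
// not the fsrs-rs#297 implementation.
fn invariant_holds(pls: f64, pre_lapse: f64, w17: f64, w18: f64, n: u32) -> bool {
    pls * (n as f64 * w17 * w18).exp() <= pre_lapse
}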

@L-M-Sherlock (Contributor, Author):

@dae, I get this error when I try to format the code:

$ ./ninja format
    Finished `release` profile [optimized] target(s) in 0.15s
[2/2; 1 active; 0.087s] format:rust
FAILED: /Users/jarrettye/Codes/anki/out/tests/cargo_format.fmt 
/Users/jarrettye/Codes/anki/out/rust/release/runner run --stamp=/Users/jarrettye/Codes/anki/out/tests/cargo_format.fmt --cwd=cargo/format cargo fmt  --all
`cargo metadata` exited with an error: error: failed to load manifest for workspace member `/Users/jarrettye/Codes/anki/pylib/rsbridge`

Caused by:
  failed to load manifest for dependency `anki`

Caused by:
  failed to load manifest for dependency `fsrs`

Caused by:
  failed to parse manifest at `/Users/jarrettye/Codes/open-spaced-repetition/fsrs-rs/Cargo.toml`

Caused by:
  failed to parse the `edition` key

Caused by:
  this version of Cargo is older than the `2024` edition, and only supports `2015`, `2018`, and `2021` editions.

This utility formats all bin and lib files of the current crate using rustfmt.

Usage: cargo fmt [OPTIONS] [-- <rustfmt_options>...]

Arguments:
  [rustfmt_options]...  Options passed to rustfmt

Options:
  -q, --quiet
          No output printed to stdout
  -v, --verbose
          Use verbose output
      --version
          Print rustfmt version and exit
  -p, --package <package>...
          Specify package to format
      --manifest-path <manifest-path>
          Specify path to Cargo.toml
      --message-format <message-format>
          Specify message-format: short|json|human
      --all
          Format all packages, and also their local path-based dependencies
      --check
          Run rustfmt in check mode
  -h, --help
          Print help
Failed with code Some(1): cargo fmt --all
ninja: build stopped: subcommand failed.

Maybe this channel is too old for edition2024:

[toolchain]
channel = "nightly-2023-09-02"
profile = "minimal"
components = ["rustfmt"]

This nightly channel works for me:

channel = "nightly-2025-03-20"
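
(So presumably the full rust-toolchain.toml becomes the following, with only the channel line changed:)

[toolchain]
channel = "nightly-2025-03-20"
profile = "minimal"
components = ["rustfmt"]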

@dae (Member) commented on Mar 20, 2025:

Thanks, please feel free to make that change as part of your PR.

L-M-Sherlock marked this pull request as ready for review March 20, 2025 06:11
@dae (Member) commented on Mar 20, 2025:

🚀

dae merged commit d52889f into ankitects:main on Mar 20, 2025
1 check passed
L-M-Sherlock deleted the Feat/simplified-relearning-steps-logic-with-updated-FSRS-training-API branch March 20, 2025 07:07