This is a regression that happened between versions 1.0 and 2.0.
Try this minimal example with several tqdm versions:
from tqdm import tqdm
from time import sleep

for i in tqdm(range(10000)):
    if i > 3000:
        sleep(1)
As you’ll see, it works great with tqdm 1.0, but it takes almost forever to update with versions 2.0 and 3.0. In fact, it takes about 3000 seconds to update: the first refresh came after roughly 3000 iterations in one second, so tqdm assumes it’s fine to wait for 3000 more iterations before refreshing again. But that’s only an acceptable behaviour for loops with a fairly constant iteration speed. The smoothing argument was developed for such cases (see #48, great work on this by the way!), but the current miniters/mininterval behaviour contradicts it.
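To illustrate, here is a simplified model of what seems to be happening (my reading of the behaviour, not tqdm’s actual code; the function name is made up):

import time

# Simplified model of the dynamic miniters behaviour described above:
# after each refresh, miniters is bumped to the number of iterations
# seen since the previous refresh, so one burst of fast iterations
# makes tqdm wait that many iterations before refreshing again.
def simulate(total=10000, mininterval=0.1):
    miniters, last_print_n, last_print_t = 1, 0, time.time()
    for n in range(1, total + 1):
        if n > 3000:
            time.sleep(1)  # the slow phase of the example above
        if n - last_print_n >= miniters:
            now = time.time()
            if now - last_print_t >= mininterval:
                print(f"refresh at iteration {n}")
                # ~3000 fast iterations before the first refresh means
                # miniters jumps to ~3000, so the next refresh only
                # comes ~3000 slow iterations (i.e. seconds) later.
                miniters = n - last_print_n
                last_print_n, last_print_t = n, now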
Of course, specifying miniters=1 gives the expected behaviour, so I’m wondering: why not make that the default?
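For reference, the workaround applied to the example above:

from tqdm import tqdm
from time import sleep

# Forcing miniters=1 restores the expected behaviour: the display is
# reconsidered at every iteration, and mininterval alone throttles it.
for i in tqdm(range(10000), miniters=1):
    if i > 3000:
        sleep(1)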
In tqdm 1.0, miniters was set to 1 and mininterval was set to 0.5, which meant: "we update the display at every iteration if the time taken by the iteration is longer than 0.5 seconds; otherwise we wait several iterations, enough to make at least a 0.5 second interval".
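In other words, the 1.0 rule boils down to a check like this (a sketch of the rule as described, not the original source):

import time

def should_refresh_v1(n, last_print_n, last_print_t,
                      miniters=1, mininterval=0.5):
    # tqdm 1.0 rule as described above: check the clock whenever at
    # least miniters iterations have passed, and refresh only if at
    # least mininterval seconds have elapsed since the last refresh.
    if n - last_print_n < miniters:
        return False
    return time.time() - last_print_t >= mininterval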
Since tqdm 2.0, miniters is set to None and mininterval is set to 0.1. From what I understand, this means: "we update the display after waiting for several iterations, enough to make at least a 0.1 second interval".
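Taken literally, that would reduce to a pure time check, something like (again my reading, not the actual implementation):

import time

def should_refresh_v2(last_print_t, mininterval=0.1):
    # What the tqdm 2.0+ defaults should mean if taken literally:
    # refresh whenever at least mininterval seconds have elapsed,
    # regardless of how many iterations that took.
    return time.time() - last_print_t >= mininterval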
Unfortunately, as the example above shows, tqdm doesn’t respect this rule, since we don’t get an update every 0.1 seconds. The behaviour is more complex now: instead of simply counting iterations and checking the clock to decide when to refresh its display, as tqdm 1.0 did, it tries to predict how long it should wait.