Infinite scroll: clamp backfill batch to page_size

The infinite-scroll backfill loop in _on_reached_bottom accumulates
results from up to 9 follow-up API pages, breaking once
len(collected) >= limit. Because that check runs only after a whole
batch has been appended, the final batch can push collected past the
configured page_size. The non-infinite search path in _do_search
already slices collected[:limit] before emitting search_done at line
805; the infinite path was emitting the unclamped list. Result: a
single backfill round could append more than page_size posts,
producing irregular batch sizes visible to the user.
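A minimal sketch of the overshoot (the function name, the page
generator, and the batch sizes here are illustrative stand-ins, not
the app's actual code):

```python
def backfill(pages, limit):
    """Accumulate posts until at least `limit` are collected.

    Because the length check runs only after a whole page has been
    appended, the final batch can push `collected` past `limit`.
    """
    collected = []
    for page in pages:
        collected.extend(page)
        if len(collected) >= limit:  # checked only between whole batches
            break
    return collected

# Four batches of 15 posts with limit=40: the third batch lands at 45.
batches = [[f"post{i}" for i in range(n, n + 15)] for n in range(0, 60, 15)]
result = backfill(batches, limit=40)
print(len(result))  # 45 — one over-full round, not 40
```

This is exactly the boundary case the fix targets: the emit site
receives 45 items, and without the slice the UI appends all of them.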

Fix: a one-line change at the search_append.emit call site to mirror
the non-infinite path's slice.

Why collected[:limit] over the alternative break-early-with-clamp:
  1. Consistency — the non-infinite path in _do_search already does
     the same slice before emit. One pattern, both branches.
  2. Trivially fewer lines than restructuring the loop break.
  3. The slight wasted download work (the over-fetched final batch is
     already on disk by the time we slice) is acceptable. It's at most
     one extra page's worth, only happens at the boundary, only on
     infinite scroll, and the next backfill round picks up from where
     the visible slice ends — nothing is *lost*, just briefly redundant.
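The "nothing is lost" claim in point 3 can be sketched concretely.
The names below (fetch, all_posts) are hypothetical stand-ins for the
API; the point is only that the next round's offset comes from the
visible count, so the sliced-off tail is simply fetched again:

```python
# Stand-in for the remote API: 100 posts, fetched by offset and count.
all_posts = [f"post{i}" for i in range(100)]

def fetch(offset, count):
    return all_posts[offset:offset + count]

limit = 40
collected = fetch(0, 45)                 # final batch overshot by 5
visible = collected[:limit]              # clamped emit: exactly page_size
next_round = fetch(len(visible), limit)  # resumes where the UI left off

print(visible[-1], next_round[0])  # post39 post40 — the tail reappears
```

The five over-fetched posts are downloaded twice, but they are never
skipped: the next round starts at post40 either way.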

Verified manually on a high-volume tag with infinite scroll enabled
and page_size=40: pre-fix appended >40 posts in one round, post-fix
appended exactly 40.
pax 2026-04-08 16:05:11 -05:00
parent db774fc33e
commit dbc530bb3c

@@ -673,7 +673,7 @@ class BooruApp(QMainWindow):
             finally:
                 self._search.infinite_last_page = last_page
                 self._search.infinite_api_exhausted = api_exhausted
-                self._signals.search_append.emit(collected)
+                self._signals.search_append.emit(collected[:limit])
                 await client.close()
         self._run_async(_search)