Compare commits


No commits in common. "main" and "v0.2.7" have entirely different histories.
main ... v0.2.7

27 changed files with 331 additions and 709 deletions

View File

@ -1,37 +1,5 @@
# Changelog
## [Unreleased]
### Added
- Settings → Cache: **Clear Tag Cache** button — wipes the per-site `tag_types` rows (including the `__batch_api_probe__` sentinel) so Gelbooru/Moebooru backends re-probe and re-populate tag categories from scratch. Useful when a stale cache from an earlier build leaves some category types mis-labelled or missing
### Changed
- Thumbnail drag-start threshold raised from 10px to 30px to match the rubber band's gate — small mouse wobbles on a thumb no longer trigger a file drag
- Settings → Cache layout: Clear Tag Cache moved into row 1 alongside Clear Thumbnails and Clear Image Cache as a 3-wide non-destructive row; destructive Clear Everything + Evict stay in row 2
### Fixed
- Grid blanked out after splitter drag or tile/float toggle until the next scroll — `ThumbnailGrid.resizeEvent` now re-runs `_recycle_offscreen` against the new geometry so thumbs whose pixmap was evicted by a column-count shift get refreshed into view. **Behavior change:** no more blank grid after resize
- Status bar overwrote the per-post info set by `_on_post_selected` with `"N results — Loaded"` the moment the image finished downloading, hiding tag counts / post ID until the user re-clicked; `on_image_done` now preserves the incoming `info` string
- `category_fetcher._do_ensure` no longer permanently flips `_batch_api_works` to False when a transient network error drops a tag-API request mid-call; the unprobed path now routes through `_probe_batch_api`, which distinguishes clean 200-with-zero-matches (structurally broken, flip) from timeout/HTTP-error (transient, retry next call)
- Bookmark→library save and bookmark Save As now plumb the active site's `CategoryFetcher` through to the filename template, so `%artist%`/`%character%` tokens render correctly instead of silently dropping out when saving a post that wasn't previewed first
- Info panel no longer silently drops tags that failed to land in a cached category — any tag from `post.tag_list` not rendered under a known category section now appears in an "Other" bucket, so partial cache coverage can't make individual tags invisible
- `BooruClient._request` retries now cover `httpx.RemoteProtocolError` and `httpx.ReadError` in addition to the existing timeout/connect/network set — an overloaded booru that drops the TCP connection mid-response no longer fails the whole search on the first try
- VRAM retained when no video is playing — `stop()` now frees the GL render context (textures + FBOs) instead of just dropping the hwdec surface pool. Context is recreated lazily on next `play_file()` via `ensure_gl_init()` (~5ms, invisible behind network fetch)
### Refactored
- `category_fetcher` batch tag-API params are now built by a shared `_build_tag_api_params` helper instead of duplicated across `fetch_via_tag_api` and `_probe_batch_api`
- `detect.detect_site_type` — removed the leftover `if True:` indent marker; no behavior change
- `core.http.make_client` — single constructor for the three `httpx.AsyncClient` instances (cache download pool, API pool, detect probe). Each call site still keeps its own singleton and connection pool; only the construction is shared
- Silent `except: pass` sites in `popout/window`, `video_player`, and `window_state` now carry one-line comments naming the absorbed failure and the graceful fallback (or were downgraded to `log.debug(..., exc_info=True)`). No behavior change
- Popout docstrings purged of in-flight-refactor commit markers (`skeleton`, `14a`, `14b`, `future commit`) that referred to now-landed state-machine extraction; load-bearing commit 14b reference kept in `_dispatch_and_apply` as it still protects against reintroducing the bug
- `core/cache.py` tempfile cleanup: `BaseException` catch now documents why it's intentionally broader than `Exception`
- `api/e621` and `api/moebooru` JSON parse guards narrowed from bare `except` to `ValueError`
- `gui/media/video_player.py`: `import time` hoisted to module top
- `gui/post_actions.is_in_library` — dead `try/except` stripped
### Removed
- Unused `Favorite` alias in `core/db.py` — callers migrated to `Bookmark` in 0.2.5, nothing referenced the fallback anymore
## v0.2.7
### Fixed

View File

@ -1,7 +1,16 @@
# booru-viewer
A Qt6 booru client for people who keep what they save and rice what they run. Browse, search, and archive Danbooru, e621, Gelbooru, and Moebooru on Linux and Windows. Fully themeable.
<img src="screenshots/linux.png" alt="Linux — System Qt6 theme" width="700">
[![tests](https://github.com/pxlwh/booru-viewer/actions/workflows/tests.yml/badge.svg)](https://github.com/pxlwh/booru-viewer/actions/workflows/tests.yml)
A booru client for people who keep what they save and rice what they run.
Qt6 desktop app for Linux and Windows. Browse, search, and archive Danbooru, e621, Gelbooru, and Moebooru. Fully themeable.
## Screenshot
**Linux — Styled via system Qt6 theme**
<picture><img src="screenshots/linux.png" alt="Linux — System Qt6 theme" width="700"></picture>
Supports custom styling via `custom.qss` — see [Theming](#theming).

View File

@ -10,9 +10,9 @@ from dataclasses import dataclass, field
import httpx
from ..config import DEFAULT_PAGE_SIZE
from ..config import USER_AGENT, DEFAULT_PAGE_SIZE
from ..cache import log_connection
from ._safety import redact_url
from ._safety import redact_url, validate_public_request
log = logging.getLogger("booru")
@ -100,11 +100,21 @@ class BooruClient(ABC):
return c
# Slow path: build it. Lock so two coroutines on the same loop don't
# both construct + leak.
from ..http import make_client
with BooruClient._shared_client_lock:
c = BooruClient._shared_client
if c is None or c.is_closed:
c = make_client(extra_request_hooks=[self._log_request])
c = httpx.AsyncClient(
headers={"User-Agent": USER_AGENT},
follow_redirects=True,
timeout=20.0,
event_hooks={
"request": [
validate_public_request,
self._log_request,
],
},
limits=httpx.Limits(max_connections=10, max_keepalive_connections=5),
)
BooruClient._shared_client = c
return c
@ -152,18 +162,9 @@ class BooruClient(ABC):
wait = 2.0
log.info(f"Retrying {url} after {resp.status_code} (wait {wait}s)")
await asyncio.sleep(wait)
except (
httpx.TimeoutException,
httpx.ConnectError,
httpx.NetworkError,
httpx.RemoteProtocolError,
httpx.ReadError,
) as e:
# Retry on transient DNS/TCP/timeout failures plus
# mid-response drops — RemoteProtocolError and ReadError
# are common when an overloaded booru closes the TCP
# connection between headers and body. Without them a
# single dropped response blows up the whole search.
except (httpx.TimeoutException, httpx.ConnectError, httpx.NetworkError) as e:
# Retry on transient DNS/TCP/timeout failures. Without this,
# a single DNS hiccup or RST blows up the whole search.
if attempt == 1:
raise
log.info(f"Retrying {url} after {type(e).__name__}: {e}")

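For orientation, the retry behaviour above boils down to a small wrapper. A minimal sketch of main's wider transient-exception set with a simplified back-off; the real `_request` also handles rate-limit waits, status checks, and logging:

```python
import asyncio

import httpx

# Transient failures worth one retry: DNS/TCP/timeout problems plus a server
# dropping the connection between headers and body (RemoteProtocolError /
# ReadError), mirroring the tuple in main's BooruClient._request.
TRANSIENT_ERRORS = (
    httpx.TimeoutException,
    httpx.ConnectError,
    httpx.NetworkError,
    httpx.RemoteProtocolError,
    httpx.ReadError,
)


async def get_with_retry(client: httpx.AsyncClient, url: str, attempts: int = 2) -> httpx.Response:
    """GET with one retry on a transient failure; re-raise on the last attempt."""
    for attempt in range(attempts, 0, -1):
        try:
            return await client.get(url)
        except TRANSIENT_ERRORS:
            if attempt == 1:
                raise
            await asyncio.sleep(1.0)  # brief pause before retrying
    raise AssertionError("unreachable")
```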
View File

@ -213,31 +213,6 @@ class CategoryFetcher:
and bool(self._client.api_user)
)
def _build_tag_api_params(self, chunk: list[str]) -> dict:
"""Params dict for a tag-DAPI batch request.
The ``lstrip("&")`` and ``startswith("api_key=")`` guards
accommodate users who paste their credentials with a leading
``&`` or as ``api_key=VALUE``; either form gets normalised
to a clean name → value mapping.
"""
params: dict = {
"page": "dapi",
"s": "tag",
"q": "index",
"json": "1",
"names": " ".join(chunk),
"limit": len(chunk),
}
if self._client.api_key and self._client.api_user:
key = self._client.api_key.strip().lstrip("&")
user = self._client.api_user.strip().lstrip("&")
if key and not key.startswith("api_key="):
params["api_key"] = key
if user and not user.startswith("user_id="):
params["user_id"] = user
return params
async def fetch_via_tag_api(self, posts: list["Post"]) -> int:
"""Batch-fetch tag types via the booru's tag DAPI.
@ -269,7 +244,21 @@ class CategoryFetcher:
BATCH = 500
for i in range(0, len(missing), BATCH):
chunk = missing[i:i + BATCH]
params = self._build_tag_api_params(chunk)
params: dict = {
"page": "dapi",
"s": "tag",
"q": "index",
"json": "1",
"names": " ".join(chunk),
"limit": len(chunk),
}
if self._client.api_key and self._client.api_user:
key = self._client.api_key.strip().lstrip("&")
user = self._client.api_user.strip().lstrip("&")
if key and not key.startswith("api_key="):
params["api_key"] = key
if user and not user.startswith("user_id="):
params["user_id"] = user
try:
resp = await self._client._request("GET", tag_api_url, params=params)
resp.raise_for_status()
@ -357,41 +346,29 @@ class CategoryFetcher:
async def _do_ensure(self, post: "Post") -> None:
"""Inner dispatch for ensure_categories.
Dispatch:
- ``_batch_api_works is True``: call ``fetch_via_tag_api``
directly. If it populates categories we're done; a
transient failure leaves them empty and we fall through
to the HTML scrape.
- ``_batch_api_works is None``: route through
``_probe_batch_api``, which only flips the flag to
True/False on a clean HTTP response. Transient errors
leave it ``None`` so the next call retries the probe.
Previously this path called ``fetch_via_tag_api`` and
inferred the result from empty ``tag_categories``, but
``fetch_via_tag_api`` swallows per-chunk failures with
``continue``, so a mid-call network drop poisoned
``_batch_api_works = False`` for the site permanently.
- ``_batch_api_works is False`` or unavailable: straight
to HTML scrape.
Tries the batch API when it's known to work (True) OR not yet
probed (None). The result doubles as an inline probe: if the
batch produced categories, it works (save True); if it
returned nothing useful, it's broken (save False). Falls
through to HTML scrape as the universal fallback.
"""
if self._batch_api_works is True and self._batch_api_available():
if self._batch_api_works is not False and self._batch_api_available():
try:
await self.fetch_via_tag_api([post])
except Exception as e:
log.debug("Batch API ensure failed (transient): %s", e)
if post.tag_categories:
return
elif self._batch_api_works is None and self._batch_api_available():
try:
result = await self._probe_batch_api([post])
except Exception as e:
log.info("Batch API probe error (will retry next call): %s: %s",
type(e).__name__, e)
result = None
if result is True:
# Probe succeeded — results cached and post composed.
return
# result is False (broken API) or None (transient) — fall through
# Leave _batch_api_works at None → retry next call
else:
if post.tag_categories:
if self._batch_api_works is None:
self._batch_api_works = True
self._save_probe_result(True)
return
# Batch returned nothing → broken API (Rule34) or
# the specific post has only unknown tags (very rare).
if self._batch_api_works is None:
self._batch_api_works = False
self._save_probe_result(False)
# HTML scrape fallback (works on Rule34/Safebooru.org/Moebooru,
# returns empty on Gelbooru proper which is fine because the
# batch path above covers Gelbooru)
@ -503,7 +480,21 @@ class CategoryFetcher:
# Send one batch request
chunk = missing[:500]
params = self._build_tag_api_params(chunk)
params: dict = {
"page": "dapi",
"s": "tag",
"q": "index",
"json": "1",
"names": " ".join(chunk),
"limit": len(chunk),
}
if self._client.api_key and self._client.api_user:
key = self._client.api_key.strip().lstrip("&")
user = self._client.api_user.strip().lstrip("&")
if key and not key.startswith("api_key="):
params["api_key"] = key
if user and not user.startswith("user_id="):
params["user_id"] = user
try:
resp = await self._client._request("GET", tag_api_url, params=params)

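The dispatch rules in `_do_ensure`'s docstring (and the regression tests at the bottom of this diff) reduce to one decision about the tri-state `_batch_api_works` flag. A hedged sketch with a hypothetical helper name:

```python
def interpret_probe(status_code: int | None, matched_any: bool) -> bool | None:
    """Map one batch tag-API attempt onto the tri-state probe flag.

    None  -> transient (network error or non-200): leave the flag unset so
             the next call re-probes instead of poisoning the site.
    True  -> clean 200 that matched at least one requested tag name.
    False -> clean 200 with zero matches: structurally broken endpoint.
    """
    if status_code is None or status_code != 200:
        return None
    return matched_any
```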
View File

@ -4,7 +4,10 @@ from __future__ import annotations
import logging
from ..http import make_client
import httpx
from ..config import USER_AGENT
from ._safety import validate_public_request
from .danbooru import DanbooruClient
from .gelbooru import GelbooruClient
from .moebooru import MoebooruClient
@ -26,83 +29,95 @@ async def detect_site_type(
url = url.rstrip("/")
from .base import BooruClient as _BC
# Reuse shared client for site detection. Event hooks mirror
# Reuse shared client for site detection. event_hooks mirrors
# BooruClient.client so detection requests get the same SSRF
# validation and connection logging as regular API calls.
if _BC._shared_client is None or _BC._shared_client.is_closed:
_BC._shared_client = make_client(extra_request_hooks=[_BC._log_request])
_BC._shared_client = httpx.AsyncClient(
headers={"User-Agent": USER_AGENT},
follow_redirects=True,
timeout=20.0,
event_hooks={
"request": [
validate_public_request,
_BC._log_request,
],
},
limits=httpx.Limits(max_connections=10, max_keepalive_connections=5),
)
client = _BC._shared_client
# Try Danbooru / e621 first — /posts.json is a definitive endpoint
try:
params: dict = {"limit": 1}
if api_key and api_user:
params["login"] = api_user
params["api_key"] = api_key
resp = await client.get(f"{url}/posts.json", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, dict) and "posts" in data:
# e621/e926 wraps in {"posts": [...]}, with nested file/tags dicts
posts = data["posts"]
if isinstance(posts, list) and posts:
p = posts[0]
if isinstance(p.get("file"), dict) and isinstance(p.get("tags"), dict):
return "e621"
return "danbooru"
elif isinstance(data, list) and data:
# Danbooru returns a flat list of post objects
if isinstance(data[0], dict) and any(
k in data[0] for k in ("tag_string", "image_width", "large_file_url")
):
if True: # keep indent level
# Try Danbooru / e621 first — /posts.json is a definitive endpoint
try:
params: dict = {"limit": 1}
if api_key and api_user:
params["login"] = api_user
params["api_key"] = api_key
resp = await client.get(f"{url}/posts.json", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, dict) and "posts" in data:
# e621/e926 wraps in {"posts": [...]}, with nested file/tags dicts
posts = data["posts"]
if isinstance(posts, list) and posts:
p = posts[0]
if isinstance(p.get("file"), dict) and isinstance(p.get("tags"), dict):
return "e621"
return "danbooru"
elif resp.status_code in (401, 403):
if "e621" in url or "e926" in url:
return "e621"
return "danbooru"
except Exception as e:
log.warning("Danbooru/e621 probe failed for %s: %s: %s",
url, type(e).__name__, e)
elif isinstance(data, list) and data:
# Danbooru returns a flat list of post objects
if isinstance(data[0], dict) and any(
k in data[0] for k in ("tag_string", "image_width", "large_file_url")
):
return "danbooru"
elif resp.status_code in (401, 403):
if "e621" in url or "e926" in url:
return "e621"
return "danbooru"
except Exception as e:
log.warning("Danbooru/e621 probe failed for %s: %s: %s",
url, type(e).__name__, e)
# Try Gelbooru — /index.php?page=dapi
try:
params = {
"page": "dapi", "s": "post", "q": "index", "json": "1", "limit": 1,
}
if api_key and api_user:
params["api_key"] = api_key
params["user_id"] = api_user
resp = await client.get(f"{url}/index.php", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list) and data and isinstance(data[0], dict):
if any(k in data[0] for k in ("file_url", "preview_url", "directory")):
# Try Gelbooru — /index.php?page=dapi
try:
params = {
"page": "dapi", "s": "post", "q": "index", "json": "1", "limit": 1,
}
if api_key and api_user:
params["api_key"] = api_key
params["user_id"] = api_user
resp = await client.get(f"{url}/index.php", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list) and data and isinstance(data[0], dict):
if any(k in data[0] for k in ("file_url", "preview_url", "directory")):
return "gelbooru"
elif isinstance(data, dict):
if "post" in data or "@attributes" in data:
return "gelbooru"
elif resp.status_code in (401, 403):
if "gelbooru" in url or "safebooru.org" in url or "rule34" in url:
return "gelbooru"
elif isinstance(data, dict):
if "post" in data or "@attributes" in data:
return "gelbooru"
elif resp.status_code in (401, 403):
if "gelbooru" in url or "safebooru.org" in url or "rule34" in url:
return "gelbooru"
except Exception as e:
log.warning("Gelbooru probe failed for %s: %s: %s",
url, type(e).__name__, e)
except Exception as e:
log.warning("Gelbooru probe failed for %s: %s: %s",
url, type(e).__name__, e)
# Try Moebooru — /post.json (singular)
try:
params = {"limit": 1}
if api_key and api_user:
params["login"] = api_user
params["password_hash"] = api_key
resp = await client.get(f"{url}/post.json", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list) or (isinstance(data, dict) and "posts" in data):
# Try Moebooru — /post.json (singular)
try:
params = {"limit": 1}
if api_key and api_user:
params["login"] = api_user
params["password_hash"] = api_key
resp = await client.get(f"{url}/post.json", params=params)
if resp.status_code == 200:
data = resp.json()
if isinstance(data, list) or (isinstance(data, dict) and "posts" in data):
return "moebooru"
elif resp.status_code in (401, 403):
return "moebooru"
elif resp.status_code in (401, 403):
return "moebooru"
except Exception as e:
log.warning("Moebooru probe failed for %s: %s: %s",
url, type(e).__name__, e)
except Exception as e:
log.warning("Moebooru probe failed for %s: %s: %s",
url, type(e).__name__, e)
return None

View File

@ -92,7 +92,7 @@ class E621Client(BooruClient):
resp.raise_for_status()
try:
data = resp.json()
except ValueError as e:
except Exception as e:
log.warning("e621 search JSON parse failed: %s: %s — body: %s",
type(e).__name__, e, resp.text[:200])
return []

View File

@ -28,7 +28,7 @@ class MoebooruClient(BooruClient):
resp.raise_for_status()
try:
data = resp.json()
except ValueError as e:
except Exception as e:
log.warning("Moebooru search JSON parse failed: %s: %s — body: %s",
type(e).__name__, e, resp.text[:200])
return []

View File

@ -17,7 +17,7 @@ from urllib.parse import urlparse
import httpx
from PIL import Image
from .config import cache_dir, thumbnails_dir
from .config import cache_dir, thumbnails_dir, USER_AGENT
log = logging.getLogger("booru")
@ -77,14 +77,23 @@ def _get_shared_client(referer: str = "") -> httpx.AsyncClient:
c = _shared_client
if c is not None and not c.is_closed:
return c
# Lazy import: core.http imports from core.api._safety, which
# lives inside the api package that imports this module, so a
# top-level import would circular through cache.py's load.
from .http import make_client
# Lazy import: core.api.base imports log_connection from this
# module, so a top-level `from .api._safety import ...` would
# circular-import through api/__init__.py during cache.py load.
from .api._safety import validate_public_request
with _shared_client_lock:
c = _shared_client
if c is None or c.is_closed:
c = make_client(timeout=60.0, accept="image/*,video/*,*/*")
c = httpx.AsyncClient(
headers={
"User-Agent": USER_AGENT,
"Accept": "image/*,video/*,*/*",
},
follow_redirects=True,
timeout=60.0,
event_hooks={"request": [validate_public_request]},
limits=httpx.Limits(max_connections=10, max_keepalive_connections=5),
)
_shared_client = c
return c
@ -487,8 +496,6 @@ async def _do_download(
progress_callback(downloaded, total)
os.replace(tmp_path, local)
except BaseException:
# BaseException on purpose: also clean up the .part file on
# Ctrl-C / task cancellation, not just on Exception.
try:
tmp_path.unlink(missing_ok=True)
except OSError:

View File

@ -185,6 +185,10 @@ class Bookmark:
tag_categories: dict = field(default_factory=dict)
# Back-compat alias — will be removed in a future version.
Favorite = Bookmark
class Database:
def __init__(self, path: Path | None = None) -> None:
self._path = path or db_path()

View File

@ -1,73 +0,0 @@
"""Shared httpx.AsyncClient constructor.
Three call sites build near-identical clients: the cache module's
download pool, ``BooruClient``'s shared API pool, and
``detect.detect_site_type``'s reach into that same pool. Centralising
the construction in one place means a future change (new SSRF hook,
new connection limit, different default UA) doesn't have to be made
three times and kept in sync.
The module does NOT manage the singletons themselves; each call site
keeps its own ``_shared_client`` and its own lock, so the cache
pool's long-lived large transfers don't compete with short JSON
requests from the API layer. ``make_client`` is a pure constructor.
"""
from __future__ import annotations
from typing import Callable, Iterable
import httpx
from .config import USER_AGENT
from .api._safety import validate_public_request
# Connection pool limits are identical across all three call sites.
# Keeping the default here centralises any future tuning.
_DEFAULT_LIMITS = httpx.Limits(max_connections=10, max_keepalive_connections=5)
def make_client(
*,
timeout: float = 20.0,
accept: str | None = None,
extra_request_hooks: Iterable[Callable] | None = None,
) -> httpx.AsyncClient:
"""Return a fresh ``httpx.AsyncClient`` with the project's defaults.
Defaults applied unconditionally:
- ``User-Agent`` header from ``core.config.USER_AGENT``
- ``follow_redirects=True``
- ``validate_public_request`` SSRF hook (always first on the
request-hook chain; extras run after it)
- Connection limits: 10 max, 5 keepalive
Parameters:
timeout: per-request timeout in seconds. Cache downloads pass
60s for large videos; the API pool uses 20s.
accept: optional ``Accept`` header value. The cache pool sets
``image/*,video/*,*/*``; the API pool leaves it unset so
httpx's ``*/*`` default takes effect.
extra_request_hooks: optional extra callables to run after
``validate_public_request``. The API clients pass their
connection-logging hook here; detect passes the same.
Call sites are responsible for their own singleton caching;
``make_client`` always returns a fresh instance.
"""
headers: dict[str, str] = {"User-Agent": USER_AGENT}
if accept is not None:
headers["Accept"] = accept
hooks: list[Callable] = [validate_public_request]
if extra_request_hooks:
hooks.extend(extra_request_hooks)
return httpx.AsyncClient(
headers=headers,
follow_redirects=True,
timeout=timeout,
event_hooks={"request": hooks},
limits=_DEFAULT_LIMITS,
)

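The module above is a pure constructor; a hedged sketch of the per-call-site singleton pattern its docstring describes (import path assumed), in the shape the cache pool uses:

```python
import threading

import httpx

from booru_viewer.core.http import make_client  # package path assumed

_shared_client: httpx.AsyncClient | None = None
_shared_client_lock = threading.Lock()


def get_download_client() -> httpx.AsyncClient:
    """Return the cached download client, rebuilding it if absent or closed."""
    global _shared_client
    c = _shared_client
    if c is not None and not c.is_closed:
        return c  # fast path, no lock
    with _shared_client_lock:
        c = _shared_client
        if c is None or c.is_closed:
            c = make_client(timeout=60.0, accept="image/*,video/*,*/*")
            _shared_client = c
    return c
```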
View File

@ -24,7 +24,6 @@ from .db import Database
if TYPE_CHECKING:
from .api.base import Post
from .api.category_fetcher import CategoryFetcher
_CATEGORY_TOKENS = {"%artist%", "%character%", "%copyright%", "%general%", "%meta%", "%species%"}
@ -37,8 +36,7 @@ async def save_post_file(
db: Database,
in_flight: set[str] | None = None,
explicit_name: str | None = None,
*,
category_fetcher: "CategoryFetcher | None",
category_fetcher=None,
) -> Path:
"""Copy a Post's already-cached media file into `dest_dir`.
@ -91,13 +89,6 @@ async def save_post_file(
explicit_name: optional override. When set, the template is
bypassed and this basename (already including extension)
is used as the starting point for collision resolution.
category_fetcher: keyword-only, required. The CategoryFetcher
for the post's site, or None when the site categorises tags
inline (Danbooru, e621) so ``post.tag_categories`` is always
pre-populated. Pass ``None`` explicitly rather than omitting
the argument; the ``=None`` default was removed so saves
can't silently render templates with empty category tokens
just because a caller forgot to plumb the fetcher through.
Returns:
The actual `Path` the file landed at after collision

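A minimal sketch of the token expansion this contract protects, using a hypothetical `render_filename` helper (the real template logic lives elsewhere in `core/library_save.py`):

```python
def render_filename(template: str, post_id: int,
                    tag_categories: dict[str, list[str]], suffix: str) -> str:
    """Expand %id%/%artist%/%character% tokens; a missing category renders empty."""
    out = template.replace("%id%", str(post_id))
    for token, category in (("%artist%", "Artist"), ("%character%", "Character")):
        out = out.replace(token, "_".join(tag_categories.get(category, [])))
    return out + suffix


# With the fetcher plumbed through, tag_categories is populated before rendering:
assert render_filename("%artist%_%id%", 12345, {"Artist": ["greatartist"]}, ".jpg") \
    == "greatartist_12345.jpg"
# Without it, %artist% renders empty; that silent drop is what the tests pin down.
```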
View File

@ -4,7 +4,6 @@ from __future__ import annotations
import logging
from pathlib import Path
from typing import Callable, TYPE_CHECKING
from PySide6.QtCore import Qt, Signal, QObject, QTimer
from PySide6.QtGui import QPixmap
@ -28,9 +27,6 @@ from ..core.cache import download_thumbnail
from ..core.concurrency import run_on_app_loop
from .grid import ThumbnailGrid
if TYPE_CHECKING:
from ..core.api.category_fetcher import CategoryFetcher
log = logging.getLogger("booru")
@ -47,19 +43,9 @@ class BookmarksView(QWidget):
bookmarks_changed = Signal() # emitted after bookmark add/remove/unsave
open_in_browser_requested = Signal(int, int) # (site_id, post_id)
def __init__(
self,
db: Database,
category_fetcher_factory: Callable[[], "CategoryFetcher | None"],
parent: QWidget | None = None,
) -> None:
def __init__(self, db: Database, parent: QWidget | None = None) -> None:
super().__init__(parent)
self._db = db
# Factory returns the fetcher for the currently-active site, or
# None when the site categorises tags inline (Danbooru, e621).
# Called at save time so a site switch between BookmarksView
# construction and a save picks up the new site's fetcher.
self._category_fetcher_factory = category_fetcher_factory
self._bookmarks: list[Bookmark] = []
self._signals = BookmarkThumbSignals()
self._signals.thumb_ready.connect(self._on_thumb_ready, Qt.ConnectionType.QueuedConnection)
@ -310,14 +296,9 @@ class BookmarksView(QWidget):
src = Path(fav.cached_path)
post = self._bookmark_to_post(fav)
fetcher = self._category_fetcher_factory()
async def _do():
try:
await save_post_file(
src, post, dest_dir, self._db,
category_fetcher=fetcher,
)
await save_post_file(src, post, dest_dir, self._db)
self._signals.save_done.emit(fav.post_id)
except Exception as e:
log.warning(f"Bookmark→library save #{fav.post_id} failed: {e}")
@ -431,14 +412,12 @@ class BookmarksView(QWidget):
dest = save_file(self, "Save Image", default_name, f"Images (*{src.suffix})")
if dest:
dest_path = Path(dest)
fetcher = self._category_fetcher_factory()
async def _do_save_as():
try:
await save_post_file(
src, post, dest_path.parent, self._db,
explicit_name=dest_path.name,
category_fetcher=fetcher,
)
except Exception as e:
log.warning(f"Bookmark Save As #{fav.post_id} failed: {e}")

View File

@ -302,7 +302,7 @@ class ThumbnailWidget(QWidget):
self.setCursor(Qt.CursorShape.PointingHandCursor if over else Qt.CursorShape.ArrowCursor)
self.update()
if (self._drag_start and self._cached_path
and (event.position().toPoint() - self._drag_start).manhattanLength() > 30):
and (event.position().toPoint() - self._drag_start).manhattanLength() > 10):
drag = QDrag(self)
mime = QMimeData()
mime.setUrls([QUrl.fromLocalFile(self._cached_path)])
@ -868,10 +868,8 @@ class ThumbnailGrid(QScrollArea):
super().resizeEvent(event)
if self._flow:
self._flow.resize(self.viewport().size().width(), self._flow.minimumHeight())
# Column count can change on resize (splitter drag, tile/float
# toggle). Thumbs that were outside the keep zone had their
# pixmap freed by _recycle_offscreen and will paint as empty
# cells if the row shift moves them into view without a scroll
# event to refresh them. Re-run the recycle pass against the
# new geometry so newly-visible thumbs get their pixmap back.
self._recycle_offscreen()
# Qt Wayland buffer goes stale after compositor-driven resize
# (Hyprland tiled geometry change). Thumbs reflow but paint
# skips until a scroll/click invalidates the viewport. Force
# repaint so the grid stays visible through tile resizes.
self.viewport().update()

View File

@ -136,7 +136,6 @@ class InfoPanel(QWidget):
# Display tags grouped by category. Colors come from the
# tag*Color Qt Properties so a custom.qss can override any of
# them via `InfoPanel { qproperty-tagCharacterColor: ...; }`.
rendered: set[str] = set()
for category, tags in post.tag_categories.items():
color = self._category_color(category)
header = QLabel(f"{category}:")
@ -146,7 +145,6 @@ class InfoPanel(QWidget):
)
self._tags_flow.addWidget(header)
for tag in tags:
rendered.add(tag)
btn = QPushButton(tag)
btn.setFlat(True)
btn.setCursor(Qt.CursorShape.PointingHandCursor)
@ -157,27 +155,6 @@ class InfoPanel(QWidget):
btn.setStyleSheet(style)
btn.clicked.connect(lambda checked, t=tag: self.tag_clicked.emit(t))
self._tags_flow.addWidget(btn)
# Safety net: any tag in post.tag_list that didn't land in
# a cached category (batch tag API returned partial results,
# HTML scrape fell short, cache stale, etc.) is still shown
# under an "Other" bucket so tags can't silently disappear
# from the info panel.
leftover = [t for t in post.tag_list if t and t not in rendered]
if leftover:
header = QLabel("Other:")
header.setStyleSheet(
"font-weight: bold; margin-top: 6px; margin-bottom: 2px;"
)
self._tags_flow.addWidget(header)
for tag in leftover:
btn = QPushButton(tag)
btn.setFlat(True)
btn.setCursor(Qt.CursorShape.PointingHandCursor)
btn.setStyleSheet(
"QPushButton { text-align: left; padding: 1px 4px; border: none; }"
)
btn.clicked.connect(lambda checked, t=tag: self.tag_clicked.emit(t))
self._tags_flow.addWidget(btn)
elif not self._categories_pending:
# Flat tag fallback — only when no category fetch is
# in-flight. When a fetch IS pending, leaving the tags

View File

@ -315,9 +315,7 @@ class BooruApp(QMainWindow):
self._grid.nav_before_start.connect(self._search_ctrl.on_nav_before_start)
self._stack.addWidget(self._grid)
self._bookmarks_view = BookmarksView(
self._db, self._get_category_fetcher,
)
self._bookmarks_view = BookmarksView(self._db)
self._bookmarks_view.bookmark_selected.connect(self._on_bookmark_selected)
self._bookmarks_view.bookmark_activated.connect(self._on_bookmark_activated)
self._bookmarks_view.bookmarks_changed.connect(self._post_actions.refresh_browse_saved_dots)

View File

@ -111,20 +111,7 @@ class _MpvGLWidget(QWidget):
self._gl.makeCurrent()
self._init_gl()
def release_render_context(self) -> None:
"""Free the GL render context without terminating mpv.
Releases all GPU-side textures and FBOs that the render context
holds. The next ``ensure_gl_init()`` call (from ``play_file``)
recreates the context cheaply (~5ms). This is the difference
between "mpv is idle but holding VRAM" and "mpv is idle and
clean."
Safe to call when mpv has no active file (after
``mpv.command('stop')``). After this, ``_paint_gl`` is a no-op
(``_ctx is None`` guard) and mpv won't fire frame-ready
callbacks because there's no render context to trigger them.
"""
def cleanup(self) -> None:
if self._ctx:
# GL context must be current so mpv can release its textures
# and FBOs on the correct context. Without this, drivers that
@ -136,10 +123,6 @@ class _MpvGLWidget(QWidget):
finally:
self._gl.doneCurrent()
self._ctx = None
self._gl_inited = False
def cleanup(self) -> None:
self.release_render_context()
if self._mpv:
self._mpv.terminate()
self._mpv = None

View File

@ -3,7 +3,6 @@
from __future__ import annotations
import logging
import time
from PySide6.QtCore import Qt, QTimer, Signal, Property, QPoint
from PySide6.QtGui import QColor, QIcon, QPixmap, QPainter, QPen, QPolygon, QPainterPath, QFont
@ -159,9 +158,6 @@ class VideoPlayer(QWidget):
self._mpv['background'] = 'color'
self._mpv['background-color'] = self._letterbox_color.name()
except Exception:
# mpv not fully initialized or torn down; letterbox color
# is a cosmetic fallback so a property-write refusal just
# leaves the default black until next set.
pass
def __init__(self, parent: QWidget | None = None, embed_controls: bool = True) -> None:
@ -444,9 +440,6 @@ class VideoPlayer(QWidget):
try:
m['hwdec'] = 'auto'
except Exception:
# If hwdec re-arm is refused, mpv falls back to software
# decode silently — playback still works, just at higher
# CPU cost on this file.
pass
self._current_file = path
self._media_ready_fired = False
@ -457,7 +450,8 @@ class VideoPlayer(QWidget):
# treated as belonging to the previous file's stop and
# ignored — see the long comment at __init__'s
# `_eof_ignore_until` definition for the race trace.
self._eof_ignore_until = time.monotonic() + self._eof_ignore_window_secs
import time as _time
self._eof_ignore_until = _time.monotonic() + self._eof_ignore_window_secs
self._last_video_size = None # reset dedupe so new file fires a fit
self._apply_loop_to_mpv()
@ -487,15 +481,7 @@ class VideoPlayer(QWidget):
try:
self._mpv['hwdec'] = 'no'
except Exception:
# Best-effort VRAM release on stop; if mpv is mid-
# teardown and rejects the write, GL context destruction
# still drops the surface pool eventually.
pass
# Free the GL render context so its internal textures and FBOs
# release VRAM while no video is playing. The next play_file()
# call recreates the context via ensure_gl_init() (~5ms cost,
# swamped by the network fetch for uncached videos).
self._gl_widget.release_render_context()
self._time_label.setText("0:00")
self._duration_label.setText("0:00")
self._seek_slider.setRange(0, 0)
@ -541,9 +527,6 @@ class VideoPlayer(QWidget):
if pos is not None and dur is not None and dur > 0 and pos >= dur - 0.5:
self._mpv.command('seek', 0, 'absolute+exact')
except Exception:
# Replay-on-end is a UX nicety; if mpv refuses the
# seek (stream not ready, state mid-transition) just
# toggle pause without rewinding.
pass
self._mpv.pause = not self._mpv.pause
self._play_btn.setIcon(self._play_icon if self._mpv.pause else self._pause_icon)
@ -617,7 +600,8 @@ class VideoPlayer(QWidget):
reset and trigger a spurious play_next auto-advance.
"""
if value is True:
if time.monotonic() < self._eof_ignore_until:
import time as _time
if _time.monotonic() < self._eof_ignore_until:
# Stale eof from a previous file's stop. Drop it.
return
self._eof_pending = True

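The stale-EOF race that `_eof_ignore_until` guards reduces to a deadline gate. A simplified sketch, assuming a 0.5 s window (the real window value and the full race trace live in `__init__`):

```python
import time


class EofGate:
    """Drop EOF notifications that arrive too soon after a new file loads."""

    def __init__(self, window_secs: float = 0.5) -> None:
        self._window = window_secs
        self._ignore_until = 0.0

    def arm(self) -> None:
        # Called at load time: any EOF firing before the deadline is treated
        # as belonging to the previous file's stop and dropped.
        self._ignore_until = time.monotonic() + self._window

    def should_handle(self) -> bool:
        # True only for EOFs arriving after the ignore window has elapsed.
        return time.monotonic() >= self._ignore_until
```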
View File

@ -166,7 +166,9 @@ class MediaController:
cn = self._app._search_ctrl._cached_names
if cn is not None:
cn.add(Path(path).name)
self._app._status.showMessage(info)
self._app._status.showMessage(
f"{len(self._app._posts)} results — Loaded"
)
self.auto_evict_cache()
return
if self._app._popout_ctrl.window and self._app._popout_ctrl.window.isVisible():
@ -174,7 +176,7 @@ class MediaController:
self._app._preview._current_path = path
else:
self.set_preview_media(path, info)
self._app._status.showMessage(info)
self._app._status.showMessage(f"{len(self._app._posts)} results — Loaded")
idx = self._app._grid.selected_index
if 0 <= idx < len(self._app._grid._thumbs):
self._app._grid._thumbs[idx]._cached_path = path

View File

@ -114,7 +114,7 @@ class FitWindowToContent:
"""Compute the new window rect for the given content aspect using
`state.viewport` and dispatch it to Hyprland (or `setGeometry()`
on non-Hyprland). The adapter delegates the rect math + dispatch
to the helpers in `popout/hyprland.py`.
to `popout/hyprland.py`'s helper, which lands in commit 13.
"""
content_w: int

View File

@ -11,11 +11,11 @@ behind the same `HYPRLAND_INSTANCE_SIGNATURE` env var check the
legacy code used. Off-Hyprland systems no-op or return None at every
entry point.
The popout adapter calls these helpers directly; there are no
`FullscreenPreview._hyprctl_*` shims anymore. Every env-var gate
for opt-out (`BOORU_VIEWER_NO_HYPR_RULES`, popout-specific aspect
lock) is implemented inside these functions so every call site
gets the same behavior.
The legacy `FullscreenPreview._hyprctl_*` methods become 1-line
shims that call into this module; see commit 13's changes to
`popout/window.py`. The shims preserve byte-for-byte call-site
compatibility for the existing window.py code; commit 14's adapter
rewrite drops them in favor of direct calls.
"""
from __future__ import annotations

View File

@ -16,6 +16,12 @@ becomes the forcing function that keeps this module pure.
The architecture, state diagram, invariant → transition mapping, and
event/effect lists are documented in `docs/POPOUT_ARCHITECTURE.md`.
This module's job is to be the executable form of that document.
This is the **commit 2 skeleton**: every state, every event type, every
effect type, and the `StateMachine` class with all fields initialized.
The `dispatch` method routes events to per-event handlers that all
currently return empty effect lists. Real transitions land in
commits 4-11 of `docs/POPOUT_REFACTOR_PLAN.md`.
"""
from __future__ import annotations
@ -417,6 +423,10 @@ class StateMachine:
The state machine never imports Qt or mpv. It never calls into the
adapter. The communication is one-directional: events in, effects
out.
**This is the commit 2 skeleton**: all state fields are initialized,
`dispatch` is wired but every transition handler is a stub that
returns an empty effect list. Real transitions land in commits 4-11.
"""
def __init__(self) -> None:
@ -501,7 +511,14 @@ class StateMachine:
# and reads back the returned effects + the post-dispatch state.
def dispatch(self, event: Event) -> list[Effect]:
"""Process one event and return the effect list."""
"""Process one event and return the effect list.
**Skeleton (commit 2):** every event handler currently returns
an empty effect list. Real transitions land in commits 4-11.
Tests written in commit 3 will document what each transition
is supposed to do; they fail at this point and progressively
pass as the transitions land.
"""
# Closing is terminal — drop everything once we're done.
if self.state == State.CLOSING:
return []
@ -560,13 +577,13 @@ class StateMachine:
case CloseRequested():
return self._on_close_requested(event)
case _:
# Unknown event type — defensive fall-through. The
# legality check above is the real gate; in release
# mode illegal events log and drop, strict mode raises.
# Unknown event type. Returning [] keeps the skeleton
# safe; the illegal-transition handler in commit 11
# will replace this with the env-gated raise.
return []
# ------------------------------------------------------------------
# Per-event handlers
# Per-event stub handlers (commit 2 — all return [])
# ------------------------------------------------------------------
def _on_open(self, event: Open) -> list[Effect]:
@ -577,7 +594,8 @@ class StateMachine:
on the state machine instance for the first ContentArrived
handler to consume. After Open the machine is still in
AwaitingContent; the actual viewport seeding from saved_geo
happens inside the first ContentArrived.
happens inside the first ContentArrived (commit 8 wires the
actual viewport math; this commit just stashes the inputs).
No effects: the popout window is already constructed and
showing. The first content load triggers the first fit.
@ -592,11 +610,12 @@ class StateMachine:
Snapshot the content into `current_*` fields regardless of
kind so the rest of the state machine can read them. Then
transition to LoadingVideo (video) or DisplayingImage (image)
and emit the appropriate load + fit effects.
transition to LoadingVideo (video) or DisplayingImage (image,
commit 10) and emit the appropriate load + fit effects.
The first-content-load one-shot consumes `saved_geo` to seed
the viewport before the first fit. Every ContentArrived flips
the viewport before the first fit (commit 8 wires the actual
seeding). After this commit, every ContentArrived flips
`is_first_content_load` to False; the saved_geo path runs at
most once per popout open.
"""

View File

@ -68,8 +68,9 @@ from .viewport import Viewport, _DRIFT_TOLERANCE, anchor_point
# the dispatch trace to the Ctrl+L log panel — useful but invisible
# from the shell. We additionally attach a stderr StreamHandler to
# the adapter logger so `python -m booru_viewer.main_gui 2>&1 |
# grep POPOUT_FSM` works from the terminal. The handler is tagged
# with a sentinel attribute so re-imports don't stack duplicates.
# grep POPOUT_FSM` works during the commit-14a verification gate.
# The handler is tagged with a sentinel attribute so re-imports
# don't stack duplicates.
import sys as _sys
_fsm_log = logging.getLogger("booru.popout.adapter")
_fsm_log.setLevel(logging.DEBUG)
@ -145,20 +146,25 @@ class FullscreenPreview(QMainWindow):
self._stack.addWidget(self._viewer)
self._video = VideoPlayer()
# Two legacy VideoPlayer forwarding connections were removed
# during the state machine extraction — don't reintroduce:
# Note: two legacy VideoPlayer signal connections removed in
# commits 14b and 16:
#
# - `self._video.play_next.connect(self.play_next_requested)`:
# the EmitPlayNextRequested effect emits play_next_requested
# via the state machine dispatch path. Keeping the forward
# would double-emit on every video EOF in Loop=Next mode.
# - `self._video.play_next.connect(self.play_next_requested)`
# (removed in 14b): the EmitPlayNextRequested effect now
# emits play_next_requested via the state machine dispatch
# path. Keeping the forwarding would double-emit the signal
# and cause main_window to navigate twice on every video
# EOF in Loop=Next mode.
#
# - `self._video.video_size.connect(self._on_video_size)`:
# the dispatch path's VideoSizeKnown handler produces
# FitWindowToContent which the apply path delegates to
# _fit_to_content. The direct forwarding was a parallel
# duplicate that same-rect-skip in _fit_to_content masked
# but that muddied the dispatch trace.
# - `self._video.video_size.connect(self._on_video_size)`
# (removed in 16): the dispatch path's VideoSizeKnown
# handler emits FitWindowToContent which the apply path
# delegates to _fit_to_content. The legacy direct call to
# _on_video_size → _fit_to_content was a parallel duplicate
# that the same-rect skip in _fit_to_content made harmless,
# but it muddied the trace. The dispatch lambda below is
# wired in the same __init__ block (post state machine
# construction) and is now the sole path.
self._stack.addWidget(self._video)
self.setCentralWidget(central)
@ -368,15 +374,17 @@ class FullscreenPreview(QMainWindow):
else:
self.showFullScreen()
# ---- State machine adapter wiring ----
# ---- State machine adapter wiring (commit 14a) ----
# Construct the pure-Python state machine and dispatch the
# initial Open event with the cross-popout-session class state
# the legacy code stashed above. Every Qt event handler, mpv
# signal, and button click below dispatches a state machine
# event via `_dispatch_and_apply`, which applies the returned
# effects to widgets. The state machine is the authority for
# "what to do next"; the imperative helpers below are the
# implementation the apply path delegates into.
# the legacy code stashed above. The state machine runs in
# PARALLEL with the legacy imperative code: every Qt event
# handler / mpv signal / button click below dispatches a state
# machine event AND continues to run the existing imperative
# action. The state machine's returned effects are LOGGED at
# DEBUG, not applied to widgets. The legacy path stays
# authoritative through commit 14a; commit 14b switches the
# authority to the dispatch path.
#
# The grid_cols field is used by the keyboard nav handlers
# for the Up/Down ±cols stride.
@ -395,17 +403,20 @@ class FullscreenPreview(QMainWindow):
monitor=monitor,
))
# Wire VideoPlayer's playback_restart Signal to the adapter's
# dispatch routing. mpv emits playback-restart once after each
# loadfile and once after each completed seek; the adapter
# distinguishes by checking the state machine's current state
# at dispatch time.
# Wire VideoPlayer's playback_restart Signal (added in commit 1)
# to the adapter's dispatch routing. mpv emits playback-restart
# once after each loadfile and once after each completed seek;
# the adapter distinguishes by checking the state machine's
# current state at dispatch time.
self._video.playback_restart.connect(self._on_video_playback_restart)
# Wire VideoPlayer signals to dispatch+apply via the
# _dispatch_and_apply helper. Every lambda below MUST call
# _dispatch_and_apply, not _fsm_dispatch directly — see the
# docstring on _dispatch_and_apply for the historical bug that
# explains the distinction.
# _dispatch_and_apply helper. NOTE: every lambda below MUST
# call _dispatch_and_apply, not _fsm_dispatch directly. Calling
# _fsm_dispatch alone produces effects that never reach
# widgets — the bug that landed in commit 14b and broke
# video auto-fit (FitWindowToContent never applied) and
# Loop=Next play_next (EmitPlayNextRequested never applied)
# until the lambdas were fixed in this commit.
self._video.play_next.connect(
lambda: self._dispatch_and_apply(VideoEofReached())
)
@ -454,8 +465,8 @@ class FullscreenPreview(QMainWindow):
Adapter-internal helper. Centralizes the dispatch + log path
so every wire-point is one line. Returns the effect list for
callers that want to inspect it; prefer `_dispatch_and_apply`
at wire-points so the apply step can't be forgotten.
callers that want to inspect it (commit 14a doesn't use the
return value; commit 14b will pattern-match and apply).
The hasattr guard handles edge cases where Qt events might
fire during __init__ (e.g. resizeEvent on the first show())
@ -477,10 +488,10 @@ class FullscreenPreview(QMainWindow):
return effects
def _on_video_playback_restart(self) -> None:
"""mpv `playback-restart` event arrived via VideoPlayer's
playback_restart Signal. Distinguish VideoStarted (after load)
from SeekCompleted (after seek) by the state machine's current
state.
"""mpv `playback-restart` event arrived (via VideoPlayer's
playback_restart Signal added in commit 1). Distinguish
VideoStarted (after load) from SeekCompleted (after seek) by
the state machine's current state.
This is the ONE place the adapter peeks at state to choose an
event type; it's a read, not a write, and it's the price of
@ -497,35 +508,42 @@ class FullscreenPreview(QMainWindow):
# round trip.
# ------------------------------------------------------------------
# Effect application
# Commit 14b — effect application
# ------------------------------------------------------------------
#
# The state machine's dispatch returns a list of Effect descriptors
# describing what the adapter should do. `_apply_effects` is the
# single dispatch point: `_dispatch_and_apply` dispatches then calls
# this. The pattern-match by type is the architectural choke point
# — a new Effect type in state.py triggers the TypeError branch at
# runtime instead of silently dropping the effect.
# single dispatch point: every wire-point that calls `_fsm_dispatch`
# follows it with `_apply_effects(effects)`. The pattern-match by
# type is the architectural choke point — if a new effect type is
# added in state.py, the type-check below catches the missing
# handler at runtime instead of silently dropping.
#
# A few apply handlers are intentional no-ops:
# Several apply handlers are deliberate no-ops in commit 14b:
#
# - ApplyMute / ApplyVolume / ApplyLoopMode: the legacy slot
# connections on the popout's VideoPlayer handle the user-facing
# toggles directly. The state machine tracks these values as the
# source of truth for sync with the embedded preview; pushing
# them back here would create a double-write hazard.
# connections on the popout's VideoPlayer are still active and
# handle the user-facing toggles directly. The state machine
# tracks these values for the upcoming SyncFromEmbedded path
# (future commit) but doesn't push them to widgets — pushing
# would create a sync hazard with the embedded preview's mute
# state, which main_window pushes via direct attribute writes.
#
# - SeekVideoTo: `_ClickSeekSlider.clicked_position → _seek` on the
# VideoPlayer handles both the mpv.seek call and the legacy
# 500ms pin window. The state machine's SeekingVideo state
# tracks the seek; the slider rendering and the seek call itself
# live on VideoPlayer.
# - SeekVideoTo: the legacy `_ClickSeekSlider.clicked_position →
# VideoPlayer._seek` connection still handles both the mpv.seek
# call and the legacy 500ms `_seek_pending_until` pin window.
# The state machine's SeekingVideo state tracks the seek for
# future authority, but the slider rendering and the seek call
# itself stay legacy. Replacing this requires either modifying
# VideoPlayer's _poll loop (forbidden by the no-touch rule) or
# building a custom poll loop in the adapter.
#
# Every other effect (LoadImage, LoadVideo, StopMedia,
# The other effect types (LoadImage, LoadVideo, StopMedia,
# FitWindowToContent, EnterFullscreen, ExitFullscreen,
# EmitNavigate, EmitPlayNextRequested, EmitClosed, TogglePlay)
# delegates to a private helper in this file. The state machine
# is the entry point; the helpers are the implementation.
# delegate to existing private helpers in this file. The state
# machine becomes the official entry point for these operations;
# the helpers stay in place as the implementation.
def _apply_effects(self, effects: list) -> None:
"""Apply a list of Effect descriptors returned by dispatch.
@ -542,19 +560,18 @@ class FullscreenPreview(QMainWindow):
elif isinstance(e, StopMedia):
self._apply_stop_media()
elif isinstance(e, ApplyMute):
# No-op — VideoPlayer's legacy slot owns widget update;
# the state machine keeps state.mute as the sync source
# for the embedded-preview path.
# No-op in 14b — legacy slot handles widget update.
# State machine tracks state.mute for future authority.
pass
elif isinstance(e, ApplyVolume):
pass # same — widget update handled by VideoPlayer
pass # same — no-op in 14b
elif isinstance(e, ApplyLoopMode):
pass # same — widget update handled by VideoPlayer
pass # same — no-op in 14b
elif isinstance(e, SeekVideoTo):
# No-op — `_seek` slot on VideoPlayer handles both
# mpv.seek and the pin window. The state's SeekingVideo
# fields exist so the slider's read-path still returns
# the clicked position during the seek.
# No-op in 14b; legacy `_seek` slot handles both
# mpv.seek (now exact) and the pin window. Replacing
# this requires touching VideoPlayer._poll which is
# out of scope.
pass
elif isinstance(e, TogglePlay):
self._video._toggle_play()
@ -670,14 +687,14 @@ class FullscreenPreview(QMainWindow):
self._save_btn.setToolTip("Unsave from library" if saved else "Save to library (S)")
# ------------------------------------------------------------------
# Public method interface
# Public method interface (commit 15)
# ------------------------------------------------------------------
#
# The methods below are the only entry points main_window.py uses
# to drive the popout. They wrap the private fields so main_window
# doesn't have to know about VideoPlayer / ImageViewer /
# QStackedWidget internals. The private fields stay in place; these
# are clean public wrappers, not a re-architecture.
# The methods below replace direct underscore access from
# main_window.py. They wrap the existing private fields so
# main_window doesn't have to know about VideoPlayer / ImageViewer
# / QStackedWidget internals. The legacy private fields stay in
# place — these are clean public wrappers, not a re-architecture.
def is_video_active(self) -> bool:
"""True if the popout is currently showing a video (vs image).
@ -814,9 +831,6 @@ class FullscreenPreview(QMainWindow):
try:
self._video._mpv.pause = True
except Exception:
# mpv was torn down or is mid-transition between
# files; pause is best-effort so a stale instance
# rejecting the property write isn't a real failure.
pass
def stop_media(self) -> None:
@ -1054,9 +1068,7 @@ class FullscreenPreview(QMainWindow):
from ...core.cache import _referer_for
referer = _referer_for(urlparse(path))
except Exception:
_fsm_log.debug(
"referer derivation failed for %s", path, exc_info=True,
)
pass
# Dispatch + apply. The state machine produces:
# - LoadVideo or LoadImage (loads the media)
@ -1477,11 +1489,11 @@ class FullscreenPreview(QMainWindow):
return True
elif key == Qt.Key.Key_Period and self._stack.currentIndex() == 1:
# +/- keys are seek-relative, NOT slider-pin seeks. The
# state machine's SeekRequested models slider-driven
# seeks (target_ms known up front); relative seeks go
# straight to mpv. If we ever want the dispatch path to
# own them, compute target_ms from current position and
# route through SeekRequested.
# state machine's SeekRequested is for slider-driven
# seeks. The +/- keys go straight to mpv via the
# legacy path; the dispatch path doesn't see them in
# 14a (commit 14b will route them through SeekRequested
# with a target_ms computed from current position).
self._video._seek_relative(1800)
return True
elif key == Qt.Key.Key_Comma and self._stack.currentIndex() == 1:
@ -1614,9 +1626,6 @@ class FullscreenPreview(QMainWindow):
if vp and vp.get('w') and vp.get('h'):
content_w, content_h = vp['w'], vp['h']
except Exception:
# mpv is mid-shutdown or between files; leave
# content_w/h at 0 so the caller falls back to the
# saved viewport rather than a bogus fit rect.
pass
else:
pix = self._viewer._pixmap
@ -1794,7 +1803,5 @@ class FullscreenPreview(QMainWindow):
try:
self._video._gl_widget.cleanup()
except Exception:
# Close path — a cleanup failure can't be recovered from
# here. Swallowing beats letting Qt abort mid-teardown.
pass
super().closeEvent(event)

View File

@ -21,7 +21,11 @@ def is_batch_message(msg: str) -> bool:
return "/" in msg and any(c.isdigit() for c in msg.split("/")[0][-2:])
def is_in_library(path: Path, saved_root: Path) -> bool:
return path.is_relative_to(saved_root)
"""Check if path is inside the library root."""
try:
return path.is_relative_to(saved_root)
except (TypeError, ValueError):
return False
class PostActionsController:

View File

@ -313,15 +313,6 @@ class SettingsDialog(QDialog):
clear_cache_btn.clicked.connect(self._clear_image_cache)
btn_row1.addWidget(clear_cache_btn)
clear_tags_btn = QPushButton("Clear Tag Cache")
clear_tags_btn.setToolTip(
"Wipe the per-site tag-type cache (Gelbooru/Moebooru sites). "
"Use this if category colors stop appearing correctly — the "
"app will re-fetch tag types on the next post view."
)
clear_tags_btn.clicked.connect(self._clear_tag_cache)
btn_row1.addWidget(clear_tags_btn)
actions_layout.addLayout(btn_row1)
btn_row2 = QHBoxLayout()
@ -708,18 +699,6 @@ class SettingsDialog(QDialog):
QMessageBox.information(self, "Done", f"Evicted {count} files.")
self._refresh_stats()
def _clear_tag_cache(self) -> None:
reply = QMessageBox.question(
self, "Confirm",
"Wipe the tag category cache for every site? This also clears "
"the per-site batch-API probe result, so the app will re-probe "
"Gelbooru/Moebooru backends on next use.",
QMessageBox.StandardButton.Yes | QMessageBox.StandardButton.No,
)
if reply == QMessageBox.StandardButton.Yes:
count = self._db.clear_tag_cache()
QMessageBox.information(self, "Done", f"Deleted {count} tag-type rows.")
def _bl_export(self) -> None:
from .dialogs import save_file
path = save_file(self, "Export Blacklist", "blacklist.txt", "Text (*.txt)")

View File

@ -160,10 +160,6 @@ class WindowStateController:
continue
return c
except Exception:
# hyprctl unavailable (non-Hyprland session), timed out,
# or produced invalid JSON. Caller treats None as
# "no Hyprland-visible main window" and falls back to
# Qt's own geometry tracking.
pass
return None
@ -211,9 +207,6 @@ class WindowStateController:
# When tiled, intentionally do NOT touch floating_geometry --
# preserve the last good floating dimensions.
except Exception:
# Geometry persistence is best-effort; swallowing here
# beats crashing closeEvent over a hyprctl timeout or a
# setting-write race. Next save attempt will retry.
pass
def restore_main_window_state(self) -> None:

View File

@ -454,89 +454,3 @@ class TestMaps:
assert _GELBOORU_TYPE_MAP[4] == "Character"
assert _GELBOORU_TYPE_MAP[5] == "Meta"
assert 2 not in _GELBOORU_TYPE_MAP # Deprecated intentionally omitted
# ---------------------------------------------------------------------------
# _do_ensure dispatch — regression cover for transient-error poisoning
# ---------------------------------------------------------------------------
class TestDoEnsureProbeRouting:
"""When _batch_api_works is None, _do_ensure must route through
_probe_batch_api so transient errors stay transient. The prior
implementation called fetch_via_tag_api directly and inferred
False from empty tag_categories but fetch_via_tag_api swallows
per-chunk exceptions, so a network drop silently poisoned the
probe flag to False for the whole site."""
def test_transient_error_leaves_flag_none(self, tmp_db):
"""All chunks fail → _batch_api_works must stay None,
not flip to False."""
client = FakeClient(
tag_api_url="http://example.com/tags",
api_key="k",
api_user="u",
)
async def raising_request(method, url, params=None):
raise RuntimeError("network down")
client._request = raising_request
fetcher = CategoryFetcher(client, tmp_db, site_id=1)
assert fetcher._batch_api_works is None
post = FakePost(tags="miku 1girl")
asyncio.new_event_loop().run_until_complete(fetcher._do_ensure(post))
assert fetcher._batch_api_works is None, (
"Transient error must not poison the probe flag"
)
# Persistence side: nothing was saved
reloaded = CategoryFetcher(FakeClient(), tmp_db, site_id=1)
assert reloaded._batch_api_works is None
def test_clean_200_zero_matches_flips_to_false(self, tmp_db):
"""Clean HTTP 200 + no names matching the request → flips
the flag to False (structurally broken endpoint)."""
client = FakeClient(
tag_api_url="http://example.com/tags",
api_key="k",
api_user="u",
)
async def empty_ok_request(method, url, params=None):
# 200 with a valid but empty tag list
return FakeResponse(
json.dumps({"@attributes": {"count": 0}, "tag": []}),
status_code=200,
)
client._request = empty_ok_request
fetcher = CategoryFetcher(client, tmp_db, site_id=1)
post = FakePost(tags="definitely_not_a_real_tag")
asyncio.new_event_loop().run_until_complete(fetcher._do_ensure(post))
assert fetcher._batch_api_works is False, (
"Clean 200 with zero matches must flip flag to False"
)
reloaded = CategoryFetcher(FakeClient(), tmp_db, site_id=1)
assert reloaded._batch_api_works is False
def test_non_200_leaves_flag_none(self, tmp_db):
"""500-family responses are transient, must not poison."""
client = FakeClient(
tag_api_url="http://example.com/tags",
api_key="k",
api_user="u",
)
async def five_hundred(method, url, params=None):
return FakeResponse("", status_code=503)
client._request = five_hundred
fetcher = CategoryFetcher(client, tmp_db, site_id=1)
post = FakePost(tags="miku")
asyncio.new_event_loop().run_until_complete(fetcher._do_ensure(post))
assert fetcher._batch_api_works is None

View File

@ -1,128 +0,0 @@
"""Tests for save_post_file.
Pins the contract that category_fetcher is a *required* keyword arg
(no silent default) so a forgotten plumb can't result in a save that
drops category tokens from the filename template.
"""
from __future__ import annotations
import asyncio
import inspect
from dataclasses import dataclass, field
from pathlib import Path
import pytest
from booru_viewer.core.library_save import save_post_file
@dataclass
class FakePost:
id: int = 12345
tags: str = "1girl greatartist"
tag_categories: dict = field(default_factory=dict)
score: int = 0
rating: str = ""
source: str = ""
file_url: str = ""
class PopulatingFetcher:
"""ensure_categories fills in the artist category from scratch,
emulating the HTML-scrape/batch-API happy path."""
def __init__(self, categories: dict[str, list[str]]):
self._categories = categories
self.calls = 0
async def ensure_categories(self, post) -> None:
self.calls += 1
post.tag_categories = dict(self._categories)
def _run(coro):
return asyncio.new_event_loop().run_until_complete(coro)
def test_category_fetcher_is_keyword_only_and_required():
"""Signature check: category_fetcher must be explicit at every
call site; no ``= None`` default that callers can forget."""
sig = inspect.signature(save_post_file)
param = sig.parameters["category_fetcher"]
assert param.kind == inspect.Parameter.KEYWORD_ONLY, (
"category_fetcher should be keyword-only"
)
assert param.default is inspect.Parameter.empty, (
"category_fetcher must not have a default — forcing every caller "
"to pass it (even as None) is the whole point of this contract"
)
def test_template_category_populated_via_fetcher(tmp_path, tmp_db):
"""Post with empty tag_categories + a template using %artist% +
a working fetcher → saved filename includes the fetched artist
instead of falling back to the bare id."""
src = tmp_path / "src.jpg"
src.write_bytes(b"fake-image-bytes")
dest_dir = tmp_path / "dest"
tmp_db.set_setting("library_filename_template", "%artist%_%id%")
post = FakePost(id=12345, tag_categories={})
fetcher = PopulatingFetcher({"Artist": ["greatartist"]})
result = _run(save_post_file(
src, post, dest_dir, tmp_db,
category_fetcher=fetcher,
))
assert fetcher.calls == 1, "fetcher should be invoked exactly once"
assert result.name == "greatartist_12345.jpg", (
f"expected templated filename, got {result.name!r}"
)
assert result.exists()
def test_none_fetcher_accepted_when_categories_prepopulated(tmp_path, tmp_db):
"""Pass-None contract: sites like Danbooru/e621 return ``None``
from ``_get_category_fetcher`` because Post already arrives with
tag_categories populated. ``save_post_file`` must accept None
explicitly; the change is about forcing callers to think, not
about forbidding None."""
src = tmp_path / "src.jpg"
src.write_bytes(b"x")
dest_dir = tmp_path / "dest"
tmp_db.set_setting("library_filename_template", "%artist%_%id%")
post = FakePost(id=999, tag_categories={"Artist": ["inlineartist"]})
result = _run(save_post_file(
src, post, dest_dir, tmp_db,
category_fetcher=None,
))
assert result.name == "inlineartist_999.jpg"
assert result.exists()
def test_fetcher_not_called_when_template_has_no_category_tokens(tmp_path, tmp_db):
"""Purely-id template → fetcher ``ensure_categories`` never
invoked, even when categories are empty (the fetch is expensive
and would be wasted)."""
src = tmp_path / "src.jpg"
src.write_bytes(b"x")
dest_dir = tmp_path / "dest"
tmp_db.set_setting("library_filename_template", "%id%")
post = FakePost(id=42, tag_categories={})
fetcher = PopulatingFetcher({"Artist": ["unused"]})
_run(save_post_file(
src, post, dest_dir, tmp_db,
category_fetcher=fetcher,
))
assert fetcher.calls == 0