fix: address top recurring forum complaints (Tier A bundle) #2778
vpetersson merged 14 commits into master
Conversation
Bundles seven small fixes mapped from a survey of the most-discussed threads on https://forums.screenly.io/. Each is independently small; shipped together so a single user-visible release closes the loop on the most-asked complaints at once.
- viewer: `setBackgroundColor(Qt::black)` on QWebEnginePage so dark URL assets don't flash white between load start and first paint (forums.screenly.io t/6610)
- viewer: switch QWebEngineProfile to MemoryHttpCache + clearHttpCache on startup so URL assets refresh on each loop instead of lingering stale for days in /data/.cache/ScreenlyWebview (forums.screenly.io t/983 — most-viewed bug, 2.5k views)
- ui: support multi-file selection and drag-and-drop in the asset add modal; sequential upload with an "X of N" progress label (forums.screenly.io t/6579)
- github: fix the inverted cache check in get_latest_docker_hub_hash that caused the OR clause clearing the "Update Available" banner to never fire — the function returned None on a cache miss instead of fetching (forums.screenly.io t/6079, t/6144, t/6537)
- celery: cleanup() now uses settings['assetdir'], adds a 1h mtime guard so in-flight uploads survive the sweep, and removes orphaned asset files that no live Asset row references (GH #2657) (forums.screenly.io t/6636)
- deps: bump yt-dlp 2026.2.21 → 2026.3.17 to pull in the YouTube extractor changes users have been hitting since February (forums.screenly.io t/6556)
- splash: read the splash logo URL from the anthias.conf [main] splash_logo_url key so users don't have to fork and rebuild to swap it (forums.screenly.io t/6672)
Pull request overview
This PR bundles several fixes driven by recurring forum complaints, targeting webview rendering/caching behavior, update-banner correctness, asset file housekeeping, multi-file uploads, YouTube URL ingestion, and splash-page customization.
Changes:
- WebView: set a black page background and switch QtWebEngine HTTP caching to in-memory with cache cleared on startup.
- Web UI: enable multi-file upload (file picker + drag/drop) with sequential uploads and per-file status updates.
- Backend/system: fix Docker Hub hash caching behavior for update checks, add orphan asset-file cleanup, add configurable splash logo URL, and bump
yt-dlp.
Reviewed changes
Copilot reviewed 9 out of 10 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| webview/src/view.cpp | Sets QWebEngine page background color; switches HTTP cache to memory and clears cache. |
| uv.lock | Updates locked yt-dlp version to 2026.3.17. |
| pyproject.toml | Bumps yt-dlp dependency version to 2026.3.17. |
| templates/splash-page.html | Uses a template-provided splash_logo_url instead of a hardcoded static path. |
| anthias_app/views.py | Passes splash_logo_url from settings into the splash template context. |
| settings.py | Adds a default splash_logo_url setting. |
| static/src/components/add-asset-modal/file-upload-tab.tsx | Enables multiple file selection in the file input. |
| static/src/components/add-asset-modal/use-file-upload.ts | Implements sequential multi-file uploads with “Uploading X of N” messaging; makes upload return success/failure. |
| lib/github.py | Fixes Docker Hub hash lookup so cache misses fetch tags and cache the result. |
| celery_tasks.py | Enhances cleanup to delete stale .tmp files with an mtime guard and remove orphaned asset files unreferenced by DB rows. |
Anthias publishes images to ghcr.io now (Docker Hub stays as a parallel mirror during the migration window — see .github/workflows/docker-build.yaml). The previous Docker-Hub-tag-list path was about to be retired out from under the function, so the OR clause that clears the "Update Available" banner once a new image is published would silently start failing on new installs.

Replaces the tag-list scan with a two-HEAD digest comparison against GHCR's Docker Registry v2 API:

    HEAD /v2/screenly/anthias-server/manifests/latest-<device_type>
    HEAD /v2/screenly/anthias-server/manifests/<short_hash>-<device_type>

If the digests match, the running image is the latest published one. Both calls go through GHCR's anonymous token endpoint, so no credentials are needed for public packages.

The result is cached for 10 minutes (same TTL as before) under a new Redis key so existing devices don't stick to a stale Docker-Hub-derived value after upgrade. Renames `get_latest_docker_hub_hash` → `is_running_latest_published_image` (the function now returns a bool) and updates the parametrized test accordingly, including a new "GHCR lookup failed" case that exercises the fail-open-to-banner path.

Forum: https://forums.screenly.io/t/update-abailable-always-in-the-menu-bar/6079
QtWebEngine renders web content at 1 CSS px = 1 physical px by default, which makes a typical web page look ~half-size on a 4K TV (forum t/6538). This is the standard kiosk-mode complaint, and the standard fix is the same one Chromium uses on desktop: scale the layout viewport so the page renders as if the screen were 1920×1080 and is then upscaled to the panel's native resolution.

In bin/start_viewer.sh, before launching the viewer process:
- read the active mode from /sys/class/drm/card*-*/modes for any connected output (the first line is the preferred mode, which Qt fullscreen will end up at)
- bucket the screen width into integer scale factors (3840 → 2x, 5120 → 3x, 7680 → 4x; ≤2560 stays 1x)
- export QT_SCALE_FACTOR before sudo'ing into the viewer user

Forwarded through sudo via --preserve-env, since Debian's default sudoers env_keep doesn't include QT_*. Skipped entirely if the user has already set QT_SCALE_FACTOR explicitly, so manual overrides win.

Forum: https://forums.screenly.io/t/default-zoom-on-image-and-webpage/6538
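The mode parsing and width-to-scale bucketing the shell script performs can be expressed compactly; a Python sketch of the same mapping (function names here are illustrative, not from bin/start_viewer.sh):

```python
def parse_mode_width(mode_line: str) -> int:
    """Extract the width from a DRM mode line such as "3840x2160".

    /sys/class/drm/card*-*/modes lists one mode per line; the first line
    is the preferred mode, which Qt fullscreen ends up using.
    """
    return int(mode_line.split("x", 1)[0])


def qt_scale_factor(width_px: int) -> int:
    """Bucket a panel's native width into an integer QT_SCALE_FACTOR.

    Mirrors the mapping above: <=2560 stays 1x, 3840 -> 2x, 5120 -> 3x,
    7680 -> 4x, i.e. the layout viewport behaves like a ~1920px screen.
    """
    if width_px >= 7680:
        return 4
    if width_px >= 5120:
        return 3
    if width_px >= 3840:
        return 2
    return 1
```

Integer factors are deliberate: fractional QT_SCALE_FACTOR values tend to produce blurry text in QtWebEngine, while integer upscaling stays crisp.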
- ruff format: lib/github.py picked up stricter trailing-comma layout
in the GHCR migration constants (CI run-python-linter check).
- mypy: cleanup() comprehension narrows uri to non-None via `if uri and
uri.startswith(...)` so path.basename type-checks; GHCR token reader
isinstance-narrows resp.json().get('token') to str; signature of
is_running_latest_published_image accepts str | None for short_hash
to match get_git_short_hash()'s return type (already guarded inside).
- celery cleanup: drop the redundant HOME guard. AnthiasSettings already
raises if HOME is unset, and settings['assetdir'] is stored as an
absolute path now, so the comment about relative paths no longer
applies (Copilot review).
- view.cpp: hoist HTTP cache config out of configureWebView() into the
View constructor. Both web views share the default QWebEngineProfile,
so setHttpCacheType / clearHttpCache should run once per process, not
once per QWebEngineView (Copilot review).
- use-file-upload.ts: track the post-upload status-clear timeout via
useRef and only schedule it after the full batch completes. Previously
each successful file scheduled its own 5s timeout, which could fire
mid-batch and wipe the in-flight "Uploading X of N" label (Copilot
review). Multi-file batches now show "Uploaded N files." once the
loop finishes.
Copilot reviewed 11 out of 13 changed files in this pull request and generated 2 comments.
Two follow-ups from Copilot's second-round review of #2778:
- The cache key was a global 'latest-published-image-match' that wasn't scoped to device_type or short_hash. Because Redis persists across restarts, a verdict cached for the previous build could be served for up to PUBLISHED_IMAGE_MATCH_TTL (10m) after an upgrade/downgrade, leaving the update banner wrong until the TTL expired. Bake device_type and short_hash into the key so a new image triggers a fresh lookup immediately.
- _get_ghcr_manifest_digest() only reads Docker-Content-Digest from the response headers but issued a full GET, downloading the manifest body for nothing. Switch to HEAD; GHCR honors HEAD on the manifests endpoint with the same Accept negotiation.
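The scoped-key fix amounts to baking the identifying inputs into the cache key. A minimal sketch, assuming a colon-delimited layout (the actual key format in lib/github.py may differ):

```python
PUBLISHED_IMAGE_MATCH_TTL = 10 * 60  # seconds, same TTL as before


def published_image_match_key(device_type: str, short_hash: str) -> str:
    # Scoping the cached verdict to the exact build means an upgrade or
    # downgrade changes the key itself, so the stale verdict is simply
    # never read again — no need to wait out the TTL.
    return f"latest-published-image-match:{device_type}:{short_hash}"
```

The old global key relied on the TTL alone for invalidation; the scoped key makes invalidation implicit in the lookup.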
Copilot reviewed 11 out of 13 changed files in this pull request and generated 3 comments.
Round-3 Copilot review on #2778. Three issues:
1. Multi-file upload status got clobbered every iteration. The slice's uploadFile.pending cleared statusMessage, and uploadFile.fulfilled / saveAsset.fulfilled rewrote it to "Upload completed." / "Asset saved successfully.", so the "Uploading X of N: <name>" label set by the hook never actually rendered between files. Add a `silent` flag to UploadFileParams / SaveAssetParams. When set, the slice skips statusMessage and isSubmitting writes in pending / fulfilled, leaving the hook in charge of the full lifecycle. The batch hook now passes silent=true and drives setIsSubmitting(true) at the start, resetForm() once at the end (after the loop), and the "Uploaded N files." / "Upload completed." status only after the whole batch finishes.
2. dispatch(resetForm()) was firing inside handleFileUpload after every successful file, which cleared isSubmitting between files in a batch and could briefly re-enable the file input / dropzone. Move it out of the per-file path; the post-batch path handles it once.
3. GHCR token / manifest lookup failures returned None without caching anything. Because is_up_to_date() runs on most UI/API requests, a transient ghcr.io outage triggered token + manifest HEADs on every page load. Cache a "ghcr-api-error" sentinel for ERROR_BACKOFF_TTL (5 min), mirroring the existing github-api-error pattern, so subsequent requests short-circuit until the backoff expires. Missing `<short_hash>-<device_type>` manifests (commit not yet published) are intentionally not error-backed-off — that's an expected case.
Copilot reviewed 14 out of 16 changed files in this pull request and generated 2 comments.
…ar selector

Round-4 Copilot review on #2778.
1. is_running_latest_published_image only triggered _set_ghcr_error_backoff() on token-fetch and latest-<device_type> HEAD failures; the per-commit HEAD fell through on None without any backoff. A 5xx/429/timeout on that second HEAD would then be retried on every is_up_to_date() call. Move the backoff policy into _get_ghcr_anonymous_token() and _get_ghcr_manifest_digest() themselves so all three call sites are covered uniformly. The manifest helper now distinguishes a 404 (a clean "tag missing" miss — expected for unpublished commits, no backoff) from any other failure (transient infra — backoff). The orchestrator short-circuits on None, and the comments explain why the per-commit None path is the only legitimate fall-through.
2. The catch block in handleFileUpload reset the progress bar via document.querySelector('.progress .bar'), but the rendered markup uses .progress-bar (file-upload-tab.tsx:77), so the selector never matched. The DOM write was a no-op; the visible width is already driven by the uploadProgress slice state. Drop the dead querySelector.
Copilot reviewed 14 out of 16 changed files in this pull request and generated 1 comment.
Round-5 Copilot review on #2778. The 5s setTimeout that clears the "Upload completed." / "Uploaded N files." status held a reference to the store dispatch, not to the hook instance. If the user closed the Add Asset modal mid-timeout (unmounting the hook) and reopened it within 5s to start another upload, the old timeout would still fire and wipe the new in-flight "Uploading X of N" label. Add a useEffect cleanup that cancels any pending timeout on unmount. The existing in-component cancel at the top of handleFileUploads still covers the same-mount restart case.
Copilot reviewed 14 out of 16 changed files in this pull request and generated 2 comments.
…-dlp sidecars

YouTube downloads were writing to $HOME/anthias_assets while cleanup() sweeps settings['assetdir']. With a custom assetdir, the two diverge and orphans accumulate outside the swept tree. Route YouTube downloads through settings['assetdir'] so the orphan sweep sees them.

Also skip yt-dlp's active sidecar suffixes (.part, .ytdl, .info.json) in the orphan sweep so a slow YouTube fetch isn't reaped mid-transfer once it crosses the 1h mtime guard.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Copilot reviewed 15 out of 17 changed files in this pull request and generated 2 comments.
…mment

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Copilot reviewed 15 out of 17 changed files in this pull request and generated 5 comments.
…r shape

- cleanup: resolve Asset URIs via realpath so legacy ~/screenly_assets rows (now symlinked to ~/anthias_assets) are recognized as referenced
- cleanup: drop the suffix-based skip for yt-dlp sidecars; the 1h freshness guard alone protects in-flight downloads and lets stale sidecars get swept
- github: cache the "unknown" verdict (per-commit tag 404) for 60s so is_up_to_date() doesn't redo token + 2x HEAD on every poll
- use-file-upload: handle string/unknown reject payloads so the user sees the actual error instead of "Upload failed: undefined"
- tests: cover fresh/stale .tmp + sidecar retention, orphan removal, and referenced-file preservation including legacy symlinked URIs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
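The final shape of the orphan sweep (realpath-resolved references plus the 1h freshness guard, no sidecar suffix list) can be sketched as below. This is an illustration of the described behavior, not the actual celery_tasks.cleanup(); the function name, signature, and return value are assumptions:

```python
import os
import time

FRESHNESS_GUARD_SECONDS = 60 * 60  # files younger than 1h are never swept


def sweep_orphans(assetdir: str, referenced_uris: list[str]) -> list[str]:
    """Remove files in assetdir that no live Asset row references.

    URIs are resolved via os.path.realpath so legacy ~/screenly_assets
    rows, now symlinked into the new asset directory, still count as
    referenced. Returns the basenames of removed files.
    """
    referenced = {
        os.path.realpath(uri)
        for uri in referenced_uris
        if uri  # skip empty/None-ish URIs (e.g. web assets)
    }
    removed = []
    now = time.time()
    for name in os.listdir(assetdir):
        path = os.path.join(assetdir, name)
        if not os.path.isfile(path):
            continue
        if now - os.path.getmtime(path) < FRESHNESS_GUARD_SECONDS:
            continue  # in-flight upload/download: leave it alone
        if os.path.realpath(path) in referenced:
            continue  # still referenced by a DB row
        os.remove(path)
        removed.append(name)
    return removed
```

Note how the freshness guard alone protects in-flight yt-dlp sidecars: a `.part` file being actively written has a recent mtime and is skipped, while a sidecar abandoned for over an hour is fair game.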
Copilot reviewed 16 out of 18 changed files in this pull request and generated 2 comments.
…dant TestCleanup

- github: a 200 response with no Docker-Content-Digest header is now treated like other non-404 failures (log + ghcr-api-error backoff) instead of falling through as a clean miss
- tests: remove TestCleanup; it wrote a fresh image.tmp into the real $HOME/anthias_assets in tearDown(), which leaks artifacts onto dev machines and is order-dependent under the new 1h mtime guard. The tmp-sweep behaviour is already covered by TestCleanupOrphanSweep.test_fresh_tmp_is_retained and test_stale_tmp_is_removed (TemporaryDirectory + backdated mtime)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Conflicts:
#	pyproject.toml
#	uv.lock
yt-dlp's YouTube extractor breaks frequently — pinning to ==2026.3.17 means the next site change leaves us stuck until somebody bumps it by hand. Switching to >=2026.3.17 lets `uv lock --upgrade-package yt-dlp` (or a routine `uv lock --upgrade`) sweep newer releases without a pyproject.toml edit, and keeps dependabot from opening a PR for every yt-dlp release. Reproducibility is unaffected: uv.lock still pins the exact resolved version. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Copilot reviewed 16 out of 18 changed files in this pull request and generated no new comments.
Summary
Triaged the most-discussed threads on https://forums.screenly.io/ and bundled the small/high-signal fixes that map cleanly to existing code paths. Each item below is independently small; shipped together so one release closes the loop on the most-asked complaints.
Forum threads addressed
- viewer: `QWebEnginePage::setBackgroundColor(Qt::black)` so dark URL assets don't flash white between load start and first paint.
- viewer: switch the `QWebEngineProfile` to `MemoryHttpCache` and call `clearHttpCache()` on startup so URL assets refresh each loop instead of lingering stale for days in `/data/.cache/ScreenlyWebview`.
- ui: `multiple` on the file `<input>` + sequential per-file upload with "X of N" status; the backend already accepts one file per request, so no API change.
- github: fix the inverted cache check in `get_latest_docker_hub_hash` — the function returned `None` on a cache miss instead of fetching, so the OR clause that should clear the banner never evaluated true.
- github: compare the `latest-<device_type>` digest to `<short_hash>-<device_type>` on GHCR. Anonymous public token; no credentials. Function renamed `get_latest_docker_hub_hash` → `is_running_latest_published_image`.
- viewer: `bin/start_viewer.sh` now reads the active mode from `/sys/class/drm/` and buckets it to an integer `QT_SCALE_FACTOR` (3840 → 2, 5120 → 3, 7680 → 4) before launching the viewer. Skipped if the user has already set `QT_SCALE_FACTOR`, so a manual override wins.
- celery: `screenly_assets/` housekeeping — `cleanup()` now uses `settings['assetdir']`, adds a 1h `mtime` guard so in-flight uploads aren't killed by the `.tmp` sweep, and removes asset files that no live `Asset` row references.
- deps: bump `yt-dlp` 2026.2.21 → 2026.3.17 to pull in the YouTube extractor changes users have been hitting since February.
- splash: new `[main] splash_logo_url` key in `anthias.conf`; defaults to the existing static path. Users can override without forking + rebuilding.
- (Skipped from this bundle: browser tab title — already shipped since t/6436, when the maintainer patched it.)
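The splash-logo override above amounts to a config read with a fallback. A minimal sketch assuming `configparser` and a hypothetical default path (the real default and lookup in settings.py may differ):

```python
import configparser

# Hypothetical default — the actual bundled static path may differ.
DEFAULT_SPLASH_LOGO_URL = "/static/img/logo.svg"


def get_splash_logo_url(conf_path: str) -> str:
    """Read [main] splash_logo_url from anthias.conf.

    Falls back to the bundled static asset when the key, the [main]
    section, or the whole file is absent, so existing installs keep
    their current splash page untouched.
    """
    parser = configparser.ConfigParser()
    parser.read(conf_path)  # a missing file yields an empty config
    return parser.get("main", "splash_logo_url", fallback=DEFAULT_SPLASH_LOGO_URL)
```

The view would pass this value into the splash template context, replacing the previously hardcoded static path.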
Test plan
- `cd webview && docker compose --profile pi5 up -d && docker compose --profile pi5 exec builder-pi5 /scripts/build_webview.sh` (already verified locally on x86 — Qt6 build links clean)
- Run the dev server (`./bin/start_development_server.sh`), open the Add Asset modal, select 3+ files, confirm 3+ assets land with the "Uploading X of N" status visible
- Confirm URL assets no longer serve stale content from `/data/.cache/ScreenlyWebview`
- On a device whose `git_short_hash` matches the current `latest-<device_type>` manifest digest on `ghcr.io/screenly/anthias-server`, confirm the banner clears within ≤10 minutes (cache TTL). On a device whose tag is older, confirm the banner shows.
- Confirm `QT_SCALE_FACTOR=2` appears in the start_viewer log line and that web URL assets render at the same apparent size as on a 1080p panel
- With `QT_SCALE_FACTOR=1` set in the compose env, confirm the auto-detect path is skipped
- Drop a `.tmp` file (older than 1h) into `~/.anthias/anthias_assets/` with a name not matching any DB asset, wait for celery beat (or trigger `cleanup.delay()`), confirm it is removed; verify a referenced file is not removed
- `.tmp` guard: drop a `<uuid>.tmp` file with `mtime` < 1h ago, run cleanup, confirm it stays
- Set `~/.anthias/anthias.conf` `[main] splash_logo_url` to a different path, restart the server container, hit `/splash_page`, confirm the new logo renders
- `uv run ruff check .` — clean
- `bun run lint:check && bun run format:check` — clean
- `./manage.py test --exclude-tag=integration` — `test_updates.py` cases updated to mock `is_running_latest_published_image` instead, plus a new "lookup failed" case