/hydrus/ - Hydrus Network

Bug reports, feature requests, and other discussion for the hydrus network.


https://youtube.com/watch?v=6rboksqjPy4
windows
zip: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Extract.only.zip
exe: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Windows.-.Installer.exe
macOS
app: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.macOS.-.App.dmg
linux
tar.gz: https://github.com/hydrusnetwork/hydrus/releases/download/v489/Hydrus.Network.489.-.Linux.-.Executable.tar.gz

I had a good week getting back into the swing of things. I fixed some important bugs and improved some UI.

highlights

All the downloader pages--gallery, watcher, urls, and simple--have a revamped status system. All the text that shows how file or gallery downloads are going is now generated in a better way, with more error states (e.g. it will tell you when your gallery stopped because it hit the file limit, or when one of the emergency pause states under the network menu has kicked in), and logic in edge cases is improved. Everything is unified now, so the texts are the same across all pages. Also, if a gallery query or watched thread is 'pending', its text now reports that it is waiting for a work slot, rather than staying blank. There _shouldn't_ be any situations now where a downloader is unpaused with work to do but has blank status.
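To make the idea concrete, here is a minimal sketch of the kind of unified 'one place decides the status text' logic described above, including the new 'waiting for a work slot' pending state. This is not hydrus's actual code; the class and names are invented for illustration.

# illustrative only -- DownloaderStatus and get_status_text are made-up names,
# just showing the shape of a unified status-text decision
from enum import Enum, auto

class DownloaderStatus(Enum):
    PAUSED = auto()
    NETWORK_PAUSED = auto()        # an emergency pause under the network menu
    FILE_LIMIT_REACHED = auto()
    WAITING_FOR_WORK_SLOT = auto() # pending, but no free work slot yet
    WORKING = auto()
    DONE = auto()

def get_status_text(status: DownloaderStatus, files_done: int, files_total: int) -> str:
    if status is DownloaderStatus.PAUSED:
        return 'paused'
    if status is DownloaderStatus.NETWORK_PAUSED:
        return 'all network traffic is paused'
    if status is DownloaderStatus.FILE_LIMIT_REACHED:
        return 'stopped: file limit reached'
    if status is DownloaderStatus.WAITING_FOR_WORK_SLOT:
        return 'waiting for a work slot'   # this case used to show a blank string
    if status is DownloaderStatus.WORKING:
        return f'working: {files_done}/{files_total}'
    return 'done'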

If you use the multiple local file services system, the archive/delete filter now presents more options when you are done. If the files are in more than one local file service, you can choose where you delete them from, including all applicable. This was confusing and opaque before, so I hope this makes what is happening clearer and gives you more choice.
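As a rough illustration of how the delete choices could be assembled from the files' local file service memberships, here is a small sketch. The data shapes and names are invented, and the real dialog also puts the parent page's location at the top of the list, which this sketch does not model.

# illustrative only: given which local file services each file is in,
# build (service name, file count) delete options for the commit dialog
from collections import Counter

def build_delete_options(file_service_memberships: dict[str, list[str]]) -> list[tuple[str, int]]:
    # file_service_memberships maps file hash -> local file services it is in
    counts = Counter()
    for services in file_service_memberships.values():
        for service in services:
            counts[service] += 1
    options = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    if len(counts) > 1:
        # only offer 'purge from everywhere' when the files span multiple services
        options.append(('all my files', len(file_service_memberships)))
    return options

# e.g. two files in 'my files', one of them also in 'favourites':
# build_delete_options({'a1b2': ['my files'], 'c3d4': ['my files', 'favourites']})
# -> [('my files', 2), ('favourites', 1), ('all my files', 2)]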

I _believe_ I have fixed an important bug some users were having with PTR processing. There was an annoying issue about a 'definitions' file being seen as a 'content' file, or vice versa, that the automatic maintenance could not fix. I finally managed to reproduce the issue and fixed it. I have scheduled a fix in this week's update, so if you have been hit by this, please wait for one more round of file maintenance 'metadata' scans, and then unpause the PTR one more time. Essentially, I think I fixed the automatic maintenance. Let me know how you get on!
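For anyone curious what a 'definitions or content?' decision can look like, here is a purely illustrative sketch. The post does not say exactly how the fixed check works; this just shows the general idea of classifying an update file from its decoded payload rather than from generic mime sniffing, with an assumed 'update_type' field that is not hydrus's real format.

# illustrative only -- not hydrus's real update format or repair code
import json

UPDATE_TYPE_DEFINITIONS = 'definitions update'
UPDATE_TYPE_CONTENT = 'content update'

def classify_update_file(raw_bytes: bytes) -> str:
    payload = json.loads(raw_bytes)
    if not isinstance(payload, dict):
        raise ValueError('not a recognisable repository update file')
    update_type = payload.get('update_type')   # assumed field name
    if update_type in (UPDATE_TYPE_DEFINITIONS, UPDATE_TYPE_CONTENT):
        return update_type
    raise ValueError('not a recognisable repository update file')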

full list

- downloader pages:
- greatly improved the status reporting for downloader pages. the way the little text updates on your file and gallery progress are generated and presented is overhauled, and texts are unified across the different downloader pages. you now get specific texts on all possible reasons the queue cannot currently process, such as the emergency pause states under the _network_ menu or specific info like hitting the file limit, and all the code involved here is much cleaner
- the 'working/pending' status, when you have a whole bunch of galleries or watchers wanting to run at the same time, is now calculated more reliably, and the UI will report 'waiting for a work slot' on pending jobs. no more blank pending!
- when you pause mid-job, the 'pausing - status' text is generated a little more neatly now too
- with luck, we'll also have fewer examples of 64KB of 503 error html spamming the UI
- any critical unhandled errors during the import itself now stop that queue until a client restart and make an appropriate status text and popup (in some situations, they could previously spam every thirty seconds)
- the simple downloader and urls downloader now support the 'delay work until later' error system. actual UI for status reporting on these downloaders remains limited, however
- a bunch of misc downloader page cleanup
- archive/delete:
- the final 'commit/forget/back' confirmation dialog on the archive/delete filter now lists all the possible local file domains you could delete from with separate file counts and 'commit' buttons, including 'all my files' if there are multiple, defaulting to the parent page's location at the top of the list. this lets you do a 'yes, purge all these from everywhere' delete or a 'no, just from here' delete as needed and generally makes what is going on more visible
- fixed archive/delete commit for users with the 'archived file delete lock' turned on
- .
- misc:
- fixed a bug in the parsing sanity check that makes sure bad 'last modified' timestamps are not added. some ~1970-01-01 results were slipping through. on update, all modified dates within a week of this epoch will be retroactively removed (there is a tiny sketch of this check after this list)
- the 'connection' panel in the options now lets you configure how many times a network request can retry connections and requests. the logic behind these values is improved, too--network jobs now count connection and request errors separately
- optimised the master tag update routine when you petition tags
- the Client API help for /add_tags/add_tags now clarifies that deleting a tag that does not exist _will_ make a change--it makes a deletion record (there is a hedged example request after this list)
- thanks to a user, the 'getting started with files' help has had a pass
- I looked into memory bloat some users are seeing after media viewer use, but I couldn't reproduce it locally. I am now making a plan to finally integrate a memory profiler and add some memory debug UI so we can better see what is going on when a couple gigs suddenly appear
- .
- important repository processing fixes:
- I've been trying to chase down a persistent processing bug some users got, where no matter what resyncs or checks they do, a content update seems to be cast as a definition update. fingers crossed, I have finally fixed it this week. it turns out there was a bug near my 'is this a definition or a content update?' check that is used for auto-repair maintenance here (long story short, ffmpeg was false-positive discovering mpegs in json). whatever the case, I have scheduled all users for a repository update file metadata check, so with luck anyone with a bad record will be fixed automatically within a few hours of background work. anyone who encounters this problem in future should be fixed by the automatic repair too. thank you very much to the patient users who sent in reports about this and worked with me to figure this out. please try processing again, and let me know if you still have any issues
- I also cleaned some of the maintenance code, and made it more aggressive, so 'do a full metadata resync' is now even more uncompromising
- also, the repository updates file service gets a bit of cleanup. it seems some ghost files have snuck in there over time, and today their records are corrected. the bug that let this happen in the first place is also fixed
- there remains an issue where some users' clients have tried to hit the PTR with 404ing update file hashes. I am still investigating this
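Here is the tiny sketch of the 'last modified' plausibility check mentioned in the misc section. The one-week-from-epoch cutoff comes from the changelog above; the function name is invented.

# illustrative only: reject parsed 'last modified' timestamps within a week of the unix epoch
WEEK_IN_SECONDS = 7 * 24 * 60 * 60

def modified_timestamp_is_plausible(timestamp: int) -> bool:
    # ~1970-01-01 almost certainly means a parsing failure, not a real modified date
    return timestamp > WEEK_IN_SECONDS

And here is a hedged example of the /add_tags/add_tags behaviour clarified above: asking to delete a tag a file never had still records a deletion. The access key, file hash, and tag service key are placeholders, and the exact parameter names can differ between client versions, so check the Client API help for your version.

# a sketch of a delete request against the Client API (default port 45869)
import json
import urllib.request

API = 'http://127.0.0.1:45869'
HEADERS = {
    'Hydrus-Client-API-Access-Key': 'YOUR_ACCESS_KEY_HERE',
    'Content-Type': 'application/json',
}

body = {
    'hash': '<64-character sha256 hash of the file>',
    'service_keys_to_actions_to_tags': {
        '<tag service key hex>': {
            '1': ['some tag the file never actually had'],   # action 1 = delete
        },
    },
}

request = urllib.request.Request(
    f'{API}/add_tags/add_tags',
    data=json.dumps(body).encode('utf-8'),
    headers=HEADERS,
    method='POST',
)
urllib.request.urlopen(request)   # if this succeeds, the client has recorded the deletion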

next week

I ended up doing more cleanup this week than I expected, but I'm happy to have the downloader pages reporting better. They were a real knot before. I want to spend a little admin time next week, triaging final multiple local file services work and planning future server improvements for when that is done, and then I think I'd like to focus on more small jobs, including some github issues.


