Anypony can get started with something! Here is a simple (and incomplete when it comes to more in-depth stuff) guide with some suggestions on starting out.

Archiving:
Simply starting out:
Individualized archiving and record-keeping (I am especially interested in basic accounts and timelines of websites, see: >>/4085/). If you have any memories to share, that is nice as well! Even old screencaps and old accounts of what a place was like! (If this thread seems too busy or your post seems a bit off from the main objectives, I again point to /culture/: >>/10356/)


For videos: for YouTube (and many other websites!), yt-dlp is the best thing available right now.
https://github.com/yt-dlp/yt-dlp
Don't forget the comments! 
Use yt-dlp with --write-comments, or this script here: https://github.com/egbertbouman/youtube-comment-downloader which we had been using rather intensively in the past. NOTE: neither is working for me as of writing this; I am uncertain whether they were broken by a YouTube update or whether it is just something on my end, and I don't have much time to test at the moment.
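As a quick sketch, a basic invocation that also grabs comments could look like this (the URL is a placeholder; adjust to taste):

  yt-dlp --write-comments --write-info-json "https://www.youtube.com/watch?v=VIDEO_ID"

--write-comments fetches the comments and stores them in the video's .info.json metadata file, which --write-info-json writes to disk next to the video itself.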



For full websites, the best option is grab-site from the Archive Team. It is a specialty tool that archives websites in the WARC file format:
https://github.com/ArchiveTeam/grab-site
Note that Windows support is experimental. 
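A minimal run, assuming you have it installed on a Linux box (the URL is a placeholder; check the project README for the full list of flags):

  grab-site https://example.com/

It writes its output, including the WARC, into a new directory for that crawl. There is also a --no-offsite-links flag if you want to keep the crawl from wandering onto other domains.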

Using Linux, you most likely have Wget installed already. It can handle simple websites and single web pages easily enough, although more complicated sites will NOT work well. It can write WARCs, but with some caveats (see: https://wiki.archiveteam.org/index.php/Wget_with_WARC_output ).

On Windows, PowerShell has a wget command, but it is very different under the hood (it is just an alias for Invoke-WebRequest), and Windows also ships its own version of curl by default. I'm not sure how useful either of these is in an archival context compared to their Linux counterparts, though they may have potential uses in some situations. There is a version of GNU Wget for Windows, but it appears to be very outdated.
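A sketch of a simple mirror with WARC output (the URL and WARC name are placeholders):

  wget --mirror --page-requisites --warc-file=example-site https://example.com/

--mirror recurses through the site, --page-requisites pulls in the images/CSS/scripts needed to render each page, and --warc-file writes everything into example-site.warc.gz in addition to the normal files on disk.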

httrack is a program that has seen some use around here in the past. It might have a use, but its web crawls are incompatible with WARCs:
https://www.httrack.com/
It does have Windows support and I am still interested in investigating potential uses (weirdly, it was sometimes able to grab complex sites relatively well for me), but it is a much lower-tier option due to the lack of WARC support, its own separate web-crawl format, and uncertainty about how active the developer still is.
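For the record, the basic command-line form from HTTrack's own documentation looks like this (the URL, output directory, and filter are placeholders):

  httrack "https://www.example.com/" -O ./example-mirror "+*.example.com/*" -v

-O sets the output directory, the "+*.example.com/*" filter keeps the crawl on-site, and -v gives verbose output.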


Linux: You might want to have this installed for the tools mentioned above. Never fear, you can do that on Windows now!
https://learn.microsoft.com/en-us/linux/install (I need to look up a better YouTube tutorial or something.)
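The short version, assuming a reasonably up-to-date Windows 10 or 11: open PowerShell or Command Prompt as administrator and run

  wsl --install

This installs the Windows Subsystem for Linux with Ubuntu as the default distribution; expect to reboot afterwards. The tools above (yt-dlp, grab-site, Wget) can then be installed inside that Ubuntu environment.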