Now that the software is running with (at least for me) a low level of jank, it seems worth considering what we do with the years of accumulated sneer-strata over at the old place. Just speaking for myself, I think it would be nice if we had a static-site backup of the whole shindig. Unfortunately, since I’m a physicist by trade, anything I do with webstuff tends to involve starting from scratch with compass, straightedge and wget. There’s got to be a better method of archiving.
The other, not-mutually-exclusive option I can think of is to manually rerun “SneerClub classics”, the posts that one way or another helped define what sneering is all about.
N.B. Some of today's test posts involved more serious-discussion-style writing and have accordingly been marked NSFW.
They’re freeze peach bro-y, but archiveteam might have backups at some point: https://wiki.archiveteam.org/index.php/Reddit
Anyone know how to manipulate compressed JSON (.zst) files? I was able to snarf the SneerClub data from a torrent there that goes up to December 2022.
Wake up babe, .zst files just dropped: comments and submissions.
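If anyone wants to poke at those dumps without decompressing them to disk first, here's a rough Python sketch using the zstandard package. The filename and field name are placeholders for whatever the actual files use, so treat it as a starting point rather than a recipe.

```python
# Rough sketch: stream-read a zstd-compressed, newline-delimited JSON dump
# without decompressing it to disk first. Assumes `pip install zstandard`.
# The bumped max_window_size is needed for long-window archives like the
# Pushshift-style subreddit dumps.
import io
import json

import zstandard


def read_zst_jsonl(path):
    """Yield one parsed JSON object per line of a .zst-compressed file."""
    with open(path, "rb") as fh:
        dctx = zstandard.ZstdDecompressor(max_window_size=2**31)
        with dctx.stream_reader(fh) as reader:
            for line in io.TextIOWrapper(reader, encoding="utf-8"):
                line = line.strip()
                if line:
                    yield json.loads(line)


if __name__ == "__main__":
    # Placeholder filename and field name; adjust to the actual dump.
    for i, post in enumerate(read_zst_jsonl("SneerClub_submissions.zst")):
        print(post.get("title", "<no title>"))
        if i >= 4:
            break
```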
To get the data from this year, I tried the straightforward thing of just doing save-full-webpage in Firefox for each post. This was tedious, but I didn’t feel like figuring out how to get any automated downloading tool to work with my login details so that it could grab the NSFW posts. The result is ~2 gigs, most of which is probably redundant infrastructure. An oddity: trying to save a thread always failed on the first attempt but worked when I clicked “retry download”.
nice! I’ll grab the archives and see how well they combine with the output of this tool: https://github.com/aliparlakci/bulk-downloader-for-reddit
Sounds like a good plan.
some work in progress on this is available here. the SneerClub directory is the output of the bulk downloader for all 1000 (deduplicated) posts it could grab from each of SneerClub's hot, top, new, rising, and controversial tabs, and the jsonl files are just the ones you posted, decompressed for convenience. so far I'm just using jq to process the data sets. the SneerClub directory has 1940 posts with nested comments and attached media where the downloader could parse it; the archive team files have 3851 posts and 100149 comments in a (much less convenient) flattened format without media. both sets have a few posts from 2015, so I'll need to do more looking to see how much we've salvaged overall.

oh yeah I think that's just zstandard! it's fairly easy to decompress if you've got access to a Linux machine or similar, where it's just unzstd if you've got the zstd package for your distro installed.

any chance they've got the script they used available? we could use it to grab everything from this year and complete the archive.
I don’t think the script is available (and it may be nonfunctional now, going by the terse notes at the above-linked wiki page).
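To follow up on the jq tallying above, here's a minimal Python sketch for counting submissions per year in one of the decompressed jsonl files, which should make it easier to see how much of the subreddit has actually been salvaged and where the gaps are. The field names (id, created_utc) are assumed from typical Reddit dump formats, and the path is a placeholder.

```python
# Sketch: tally submissions per year in a decompressed jsonl dump and collect
# ids so the archive-team data can be deduplicated against newer grabs.
# Field names (id, created_utc) and the path are assumptions, not confirmed
# against the actual files.
import json
from collections import Counter
from datetime import datetime, timezone


def posts_per_year(jsonl_path):
    years = Counter()
    seen_ids = set()
    with open(jsonl_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            post = json.loads(line)
            seen_ids.add(post.get("id"))
            ts = int(post.get("created_utc", 0))
            years[datetime.fromtimestamp(ts, tz=timezone.utc).year] += 1
    return years, seen_ids


if __name__ == "__main__":
    years, ids = posts_per_year("SneerClub_submissions.jsonl")
    for year in sorted(years):
        print(year, years[year])
    print(f"{len(ids)} unique submission ids")
```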