Anon 06/08/2021 (Tue) 16:22:57 No.7615
I found some time to fork the comment-downloader and redo my changes properly this time, instead of throwing them together in 30 minutes. Here are the changes:
> -y now has properly tested and working URL parsing instead of whatever the old one was, so both short and long URLs are supported (see the sketch after this list)
> the output file no longer needs to be specified for single downloads; it saves to [id].json by default
> if from-file downloading (-f) is used, -o acts as the output directory (so it no longer dumps everything into "./")
> from-file downloading is much more robust now: the resulting filenames are sanitized into normal strings (no \n or $ ending up in filename.json), and this happens automatically, so -t was removed (also sketched after this list)
> the limit option should now work, if it is ever needed
> added error messages for known failure modes, so when Google breaks it again we will know
> added an -a option that saves directly to a valid JSON array, so fixing the output afterwards is no longer needed (sketched after the next paragraph)
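For reference, roughly what the URL parsing and filename sanitization could look like; a minimal sketch, not the actual fork's code, and the function names and the exact character set stripped are my own assumptions:

```python
import re
from urllib.parse import urlparse, parse_qs

def extract_video_id(url):
    """Pull the 11-char video ID out of long (youtube.com/watch?v=ID)
    and short (youtu.be/ID) URL forms, or accept a bare ID."""
    parsed = urlparse(url)
    if parsed.hostname == "youtu.be":
        return parsed.path.lstrip("/")
    if parsed.hostname and parsed.hostname.endswith("youtube.com"):
        qs = parse_qs(parsed.query)
        if "v" in qs:
            return qs["v"][0]
    # fall back to treating the input as a bare ID
    if re.fullmatch(r"[0-9A-Za-z_-]{11}", url):
        return url
    raise ValueError("could not parse a video id from %r" % url)

def sanitize_filename(name):
    """Turn an arbitrary title into a safe '<name>.json' base:
    no newlines, no $ or other shell/path metacharacters."""
    name = name.strip().replace("\n", " ")
    return re.sub(r'[\\/:*?"<>|$`]', "_", name)
```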
A function that fetches the video name could be added, but I expect this to be used with the files generated by youtube-dl, and those already contain the name, so I don't think there is any need for that.
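And a minimal sketch of the -a behaviour, assuming the default output is newline-delimited JSON (one object per line), which is not itself a valid JSON document:

```python
import json

def write_comments(comments, path, as_array=False):
    """Default: newline-delimited JSON, one comment object per line.
    With as_array (-a): one valid JSON array, parseable as-is."""
    with open(path, "w", encoding="utf-8") as fp:
        if as_array:
            json.dump(list(comments), fp, ensure_ascii=False, indent=2)
        else:
            for comment in comments:
                fp.write(json.dumps(comment, ensure_ascii=False) + "\n")
```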

>>7576
>If that is not possible or easy would splitting into pages work?
It would have to be split up anyway, so yes. I wanted the pages to autoload as you scroll down, but we could just link them as well. Now we just have to figure out the optimal number of comments per page.
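Something like this could do the splitting on our side; a minimal sketch assuming one JSON array of comments per video, with placeholder filenames and page size:

```python
import json
import math

def split_into_pages(src_json, per_page=500):
    """Split one big comment dump into fixed-size page files the
    viewer can either link to or lazy-load on scroll."""
    with open(src_json, encoding="utf-8") as fp:
        comments = json.load(fp)
    pages = math.ceil(len(comments) / per_page)
    for n in range(pages):
        chunk = comments[n * per_page:(n + 1) * per_page]
        with open("%s.page%03d.json" % (src_json, n + 1), "w",
                  encoding="utf-8") as out:
            json.dump(chunk, out, ensure_ascii=False)
```

per_page=500 is just a first guess; we can tune it once we see how big real dumps get.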
>They have the content but don't show what it was like or the discourse around it
Exactly what I had in mind.

>>7580
