Anon 12/19/2023 (Tue) 02:30 No.9072 del
>>9032
More than 14,000 Google Drive file IDs have been saved to web.archive.org that weren't in there before I ran spn.sh this month. Lately I'm continuing to get more Google Drive file ID WBM CDX records from JSON files to see what else hasn't been saved to the WBM.
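The general shape of that kind of saving run can be sketched like this. Assumptions: the input/output file names, the sample IDs, and the exact spn.sh invocation are all placeholders; check spn.sh's own documentation for its real usage.

```shell
# Hedged sketch: turn a list of Google Drive file IDs into URLs and
# hand them to spn.sh (Save Page Now). File names, the sample IDs, and
# the spn.sh invocation are assumptions, not the exact commands used.

# sample input (hypothetical IDs):
printf '%s\n' 1A2b3C4d 5E6f7G8h > drive_ids.txt

# build one Drive URL per ID:
while read -r id; do
  printf 'https://drive.google.com/file/d/%s/view\n' "$id"
done < drive_ids.txt > drive_urls.txt

# only invoke spn.sh if it's actually present alongside this script:
if [ -x ./spn.sh ]; then
  ./spn.sh drive_urls.txt
fi
```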

>>9053
>workflow to fix for missing blocks
Issues: (a) it doesn't handle the case where a block is in neither repo; (b) it's maybe not very I/O-friendly. I wrote something months ago that handled (a), but I didn't record it well and it didn't work very well. Possible fixes for (b):
1. Get an index of a folder from $has_all, check which of those blocks $has_part already has ("ipfs pin add --recursive=false <IDs>" in a loop), export the missing blocks to CAR files, import the .car files, then pin it in $has_part.
2. Other than working with one big CAR file: run $has_all and $has_part on two different computers and repair $has_part's copy over the network.
3. Instead of running "ipfs pin add --progress ..." repeatedly to find each next missing block, get an index each time (something like "ipfs refs -r --format="<src> -> <dst> = <linkname>" [cid]"); the concern is that, as I understand it, there's no way to stop "ipfs pin add" from re-hashing everything each time.
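A minimal sketch of the CAR-file fix for (b): diff the block lists of the two repos, then move only the missing blocks over as CAR files. The repo paths, root CID, and file names are placeholders; "ipfs dag export"/"ipfs dag import" do the CAR round-trip.

```shell
# Sketch: find blocks present in $has_all but missing from $has_part,
# then transfer them as CAR files. All paths and the root CID passed to
# repair() are placeholders.
set -u

# Set difference: lines of file $1 not present in file $2.
missing_of() { comm -23 <(sort "$1") <(sort "$2"); }

repair() {
  local has_all="$1" has_part="$2" root="$3"
  # full block list of the folder, from the complete repo:
  IPFS_PATH="$has_all"  ipfs refs -r --unique "$root" > all_blocks.txt
  # everything the partial repo already has locally:
  IPFS_PATH="$has_part" ipfs refs local               > local_blocks.txt
  # export each missing block from $has_all as a CAR, import into $has_part:
  missing_of all_blocks.txt local_blocks.txt |
  while read -r cid; do
    IPFS_PATH="$has_all"  ipfs dag export "$cid" > "$cid.car"
    IPFS_PATH="$has_part" ipfs dag import "$cid.car"
  done
  # finally pin the root in the repaired repo:
  IPFS_PATH="$has_part" ipfs pin add --progress "$root"
}

# usage (placeholders): repair /path/to/has_all /path/to/has_part bafy...cid
```

Note that "ipfs dag export <cid>" exports the whole DAG under that CID, so exporting an intermediate node transfers more than one block; for leaf blocks it's exactly one.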

>Elegant method didn't work for some reason. Oh, it's because it uses sh and not bash
It does in fact work after replacing "sh -c" with "bash -c".
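The original one-liner isn't quoted here, but a generic example of a bashism that misbehaves under a strict POSIX sh (e.g. dash) is brace expansion:

```shell
# Under dash or another strict POSIX sh, "{0..2}" is not expanded:
sh -c 'echo {0..2}'     # may print the literal "{0..2}"
# bash performs brace expansion:
bash -c 'echo {0..2}'   # prints "0 1 2"
```

(On systems where /bin/sh is bash, the first line happens to work too, which is exactly how this kind of breakage hides until the script runs elsewhere.)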

Porn from
>https://cf-ipfs.com/ipfs/bafybeifemum5o7icqdt4xtvgddobsmtc6u7pnfxiq7ij5z3bw6o433coza/files/zonkpunch/7679099-1ced-Derpy_Hooves.jpg
>https://gateway.ipfs.cybernode.ai/ipfs/bafybeifemum5o7icqdt4xtvgddobsmtc6u7pnfxiq7ij5z3bw6o433coza/files/zonkpunch/7679099-4760-Flash.zip
>https://gateway.ipfs.cybernode.ai/ipfs/bafybeifemum5o7icqdt4xtvgddobsmtc6u7pnfxiq7ij5z3bw6o433coza/files/zonkpunch/7679099-7ec3-Animation.zip
>https://gateway.ipfs.cybernode.ai/ipfs/bafybeifemum5o7icqdt4xtvgddobsmtc6u7pnfxiq7ij5z3bw6o433coza/files/zonkpunch/7679099-a341-Art.zip
>$ archivemount -o readonly bafybeififjokga6nbht2x3kmj2puprikiomk2uwhruchxbtfbojxcaf2oy /mnt/z # /mnt/z/Asset/Particles*
I've been indexing that 2.6-terabyte bafy...coza folder, which has some MLP-related data. The Python script I made and am using to index it definitely downloads way less than ipfs's built-in method (I'm pretty sure this is still the case), but it still downloads hundreds of megabytes at certain intervals. (So also occasionally run "ipfs repo gc >/dev/null" while it's going.)
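One way to automate that occasional gc while the indexer runs; "index_folder.py" is a placeholder name for the indexing script, and the interval is arbitrary:

```shell
# Run the indexing script in the background and garbage-collect the
# IPFS repo every 10 minutes until it exits. "index_folder.py" is a
# placeholder name, not the actual script.
run_with_gc() {
  python3 index_folder.py "$@" &
  local pid=$!
  while kill -0 "$pid" 2>/dev/null; do
    sleep 600
    ipfs repo gc >/dev/null 2>&1
  done
  wait "$pid"
}
```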