Topics:
* save all of your IPFS CIDs to an MFS folder
* create a ZFS snapshot
* related to copying petabytes of data
IPFS advice: keep an MFS folder where you save every important CID that you make; call it "created". Example layout (commands to set it up are sketched after this list):
. "/created/cids/" - contains https://ipfs.ssi.eecc.de/ipfs/bafybeicm3xkycjr6tlcc7yfmqt4r53navwonvj7dl73g3fd7cqtu2lq3v4 which includes details on a possible "ipfs refs" indexing bug
. "/partial/" - for partially-downloaded data/folders, such as a Wikipedia-on-IPFS CID
. "/shared/cid/" - for all CIDs that you shared with users, contains all of the CIDs that you posted in an IRC channel or wherever (like if you shared them via an HTTP website)
>>/11356/
I think this is how to correctly make a ZFS snapshot of a pool named "zc":
> $ sudo zpool set listsnapshots=on zc
> $ sudo zfs snapshot zc@s2024-11-26
> $ zfs list -t snapshot
> NAME             USED  AVAIL  REFER  MOUNTPOINT
> zc@s2024-11-26  38.4M      -  14.9T  -
> $ # ~38 MB used for roughly >9 million files. Info: https://docs.oracle.com/cd/E19253-01/819-5461/gbiqe/index.html
Next question: how do I "travel back in time" and browse a past snapshot that still contains files which have since been deleted from the live (non-snapshot) filesystem?
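For reference, one way: ZFS exposes every snapshot read-only under the hidden ".zfs" directory at the dataset's mountpoint, so, assuming the pool is mounted at /zc:
> $ ls /zc/.zfs/snapshot/s2024-11-26/
> $ # read-only view of everything as it was at snapshot time, deleted files included
> $ sudo zfs set snapdir=visible zc
> $ # optional: makes the .zfs directory show up in a plain "ls /zc"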
>>/11359/
> That would be interesting to calculate
Yeah, calculations at that level = "fun math".
> I wonder how realistic it is for bandwidth to handle that? You'd need a good internet connection and even then it wouldn't be quick.
archive.org has a 10GiB/s or 50GiB/s connection. The bottleneck would be whatever speed the Filecoin peers have, likely less than 50 GiB/s. It's significantly faster if the transfer doesn't happen transcontinentally, i.e. if it stays within the same continent. The Internet is basically a worldwide network of routers with computers connected to it: each person's computer reaches it through a router, over a line owned by an ISP, to an ISP building full of computers and routers, and unless I'm mistaken the ISPs also own the "pipes" and the peering stations between those buildings. There are various hops along the way; run ping or traceroute to see them - for example http://trace.die.net/search/?q=endchan.org (rough transfer-time numbers follow the trace):
. be6245.rcr51.b004747-3.lax05.atlas.cogentco.com (38.104.85.137 [W])
.. be3584.ccr41.lax05.atlas.cogentco.com (154.54.85.229 [W])
... (38.104.84.254 [W])
.... (141.101.72.25 [W])
..... endchan.org (104.21.48.128 [W])
Those *.cogentco.com domain names don't serve web pages; they're more like Internet-backbone ISP buildings. The point of bringing this up is that those ISP "datacenters" probably all have ~100 GiB/s connections, so the bottleneck wouldn't be there. In some countries (Korea, poor countries), I think those ISP datacenters have significantly slower connections.
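To put a rough number on the bandwidth question: a back-of-the-envelope estimate, assuming ~100 PB (about the scale mentioned below) moved over a single sustained 50 GiB/s link with zero overhead:
> $ # seconds to move 100 PB (100 * 10^15 bytes) at 50 GiB/s, converted to days:
> $ echo '100 * 10^15 / (50 * 2^30) / 86400' | bc
> 21
> $ # ~21.5 days in the ideal case; slower Filecoin peers would stretch that a lot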
> How would Filecoin compare to the cost of paying for thousands of drives? Hilariously that still may be cheaper when we count power and upkeep. I still think a physical transfer and copy would probably be quicker even if you bought all the bandwidth you could.
There have been physical transfers of data with Filecoin before; maybe https://sealstorage.io/ does that. I think I read that they have something like 100 petabytes. Having 100 terabytes is entry-level with Filecoin. 100 TB costs roughly $2,000; 100 PB = roughly $2,000,000.
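Sanity check on that scaling, assuming the ~$2,000-per-100-TB price scales linearly (100 PB = 1000 × 100 TB):
> $ echo '2000 * 1000' | bc
> 2000000
> $ # i.e. roughly $20/TB either way, so 100 PB ≈ $2,000,000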
Image = MFS-related.