Sweetie_Belle_-_Giddy_Up_AI_Cover-FlutterBee_Anonymous_Bee-20240202-youtube-1920x1080-X711GSgpIjE.webm
(11.51 MB, 1920x1080 vp9)
Sweetie Belle is supposed to have a beautiful singing voice. This AI cover ain't it.

 >>/11258/
Unintentional markup: double underscore -> underline.

> semi-complex code that I wrote (multiple files for the same project), may as well stick it in GitHub so at least it's easier to manage and update.
Partly done - this repo is missing some versions and is ahead of or behind in places:
https://github.com/ProximaNova/yt-channel-dl

 >>/11257/
> Simple Derpi code
Another explanation is below.

Optional - input data I was checking; use whatever files you have in your case:
$ find /zc/ipfs/blocks/52/ -type f | tail -n+10001 | head -n200 \
  | xargs -d "\n" sh -c 'for args do file "$args"; done' _ \
  | grep "PNG image data\|JPEG image data\|GIF image data\|WebM" \
  | sed "s/:/ :/g"
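Since file(1) accepts multiple paths, roughly the same check should also work without the inner sh loop - a sketch, with the same GNU xargs -d assumption as above:
$ find /zc/ipfs/blocks/52/ -type f | tail -n+10001 | head -n200 \
  | xargs -d "\n" file \
  | grep "PNG image data\|JPEG image data\|GIF image data\|WebM" \
  | sed "s/:/ :/g"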
Optional - with LXQt you basically can't upload files in a web browser via absolute paths, so run this to copy the file into a folder that doesn't have a huge number of files in it:
$ read -p "path: " p; utc; cp --update=none "$p" .
Required:
$ read -p "src:" src; read -p "tags:" tags; read -p "url:" u\
rl; read -p "time:" time; datetime=$(date +%s); echo -e "{ \\
"source\": \"$src\", \"url\": \"$url \", \"uploaded\": \"$ti\
me\", \"tags\": \"$tags\" }" > $datetime.json; echo $datetim\
e.json; cat $datetime.json; cat $datetime.json | xsel -ib; i\
d="$(echo $url | sed "s/.*\///g")"; curl -sL https://derpibo\
oru.org/$id > $datetime.$id.htm; curl -sL https://derpibooru\
.org/api/v1/json/images/$id > $datetime.$id.json
Output data in my case (not including files saved between the QmZv...ftQs snapshot and now):
https://gateway.ipfs.io/ipfs/QmZvjEZTzakCm1BdLYYcFFdZm22apKzfL5hfUaG4pvftQs/fucklxqt
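If you have the IPFS CLI, that output directory can also be listed without the gateway, for example:
$ ipfs ls QmZvjEZTzakCm1BdLYYcFFdZm22apKzfL5hfUaG4pvftQs/fucklxqt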

How the required part of the code works: the idea is to save source + tags + URL + upload time locally and not rely on Derpibooru. Derpibooru only accepts http(s) URLs in the source field, and it applies dumb tag aliases like "english text"->"dialog"; this code saves the tags you actually wrote instead of whatever that site changed them to. The code is like filling out paperwork: it asks you for the four fields (source, tags, URL, time), you paste those four things in, and it saves that data to a JSON file and also downloads Derpibooru's webpage and API JSON for that ID. For example:
> $ read -p "src:" src; read -p "tags:"[...]
> src:/zc/ipfs/blocks/52/CIQJZLOOZRLQCCVBLGYN5DVWQB4YITFFLRJEEFQCKXI34LRBS3AV52Q.data
> tags:safe, pony, tree, solo, hat, camping, campsite, clothes, full body
> url:https://derpibooru.org/images/3474316
> time:2024-10-28T02:24:19Z
> 1730082311.json
> { "source": "/zc/ipfs/blocks/52/CIQJZLOOZRLQCCVBLGYN5DVWQB4YITFFLRJEEFQCKXI34LRBS3AV52Q.data", "url": "https://derpibooru.org/images/3474316 ", "uploaded": "2024-10-28T02:24:19Z", "tags": "safe, pony, tree, solo, hat, camping, campsite, clothes, full body" }
> $ # created/downloaded files: "1730082311.3474316.htm", "1730082311.3474316.json", "1730082311.json"
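Same steps as that one-liner, laid out as a commented script for readability - a sketch, assuming bash and jq (jq is my substitution for the echo-built JSON so the quoting is handled for you); inputs and output filenames match the example above:
#!/bin/bash
# Save source/tags/url/upload-time for one image locally, then archive
# Derpibooru's webpage and API JSON for the same ID.
read -p "src: " src     # local path or original source of the file
read -p "tags: " tags   # the tags as you wrote them, comma-separated
read -p "url: " url     # e.g. https://derpibooru.org/images/3474316
read -p "time: " time   # upload time, e.g. 2024-10-28T02:24:19Z
datetime=$(date +%s)    # current unix time, used as the filename prefix
id=${url##*/}           # image ID = everything after the last "/"
# Write the four fields to <unixtime>.json, show it, and copy it to the clipboard.
jq -n --arg source "$src" --arg url "$url" --arg uploaded "$time" --arg tags "$tags" \
  '{source: $source, url: $url, uploaded: $uploaded, tags: $tags}' > "$datetime.json"
echo "$datetime.json"
cat "$datetime.json"
xsel -ib < "$datetime.json"
# Archive Derpibooru's HTML page and API JSON for this ID.
curl -sL "https://derpibooru.org/$id" > "$datetime.$id.htm"
curl -sL "https://derpibooru.org/api/v1/json/images/$id" > "$datetime.$id.json"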