Continues here: AllDebrid host support status (round 2)
I purchased a month of premium subscription to check the status of the hosts, especially rapidgator (https://rapidgator.net/*) and ddownload (https://ddownload.com/*). I tried to capture the status every single day, but that wasn't possible; still, 26 days were logged.
For legibility, the 26 days are split into two parts, so the tables are not as cramped and the names in the Host column (like "ddownload") don't wrap across up to six lines.
PART 1: DAYS 1-13 / 26
Premium hosts
Host | 14/05/2025 | 15/05/2025 | 17/05/2025 | 18/05/2025 | 19/05/2025 | 20/05/2025 | 21/05/2025 | 22/05/2025 | 23/05/2025 | 24/05/2025 | 25/05/2025 | 26/05/2025 | 27/05/2025 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1fichier | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
4shared | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
alfafile | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
apkadmin | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
cloudvideo | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
ddownload | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dropapk | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dropgalaxy | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
fastbit | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
file-upload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
fileal | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
filedot | ✘ | ✘ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ | ✔ | ✘ | ✘ | ✔ |
filespace | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
gigapeta | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ | ✔ | ✔ | ✘ | ✘ | ✔ | ✘ |
| ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
hexupload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
hitfile | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ |
isra | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
katfile | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
mediafire | ✘ | ✔ | ✘ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ |
mega | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
mexashare | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
modsbase | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
mp4upload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
prefiles | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
rapidgator | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
scribd | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
sendit | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
sharemods | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
simfileshare | ✘ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ |
turbobit | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ |
upload42 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadboy | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadev | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadrar | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploady | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
userscloud | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
vidoza | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
vipfile | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
wayupload | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ |
world-files | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
worldbytez | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
Free hosts
Host | |
---|---|
dailyuploads | |
exload | |
filerio | |
filezip | |
hot4share | |
indishare | |
mixdrop | |
uploadbank | |
uploadbox | |
usersdrive | |
Free stream hosts
Host | 14/05/2025 | 15/05/2025 | 17/05/2025 | 18/05/2025 | 19/05/2025 | 20/05/2025 | 21/05/2025 | 22/05/2025 | 23/05/2025 | 24/05/2025 | 25/05/2025 | 26/05/2025 | 27/05/2025 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4tube | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
archive.org | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Beeg | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Canalplus | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
ComedyCentral | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dailymotion | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
DrTuber | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
lynda | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
niconico | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
NYTimes | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Odnoklassniki | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
PornHub | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
PornoXO | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
RedTube | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
RTBF | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
RTS | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
rtve.es | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
rutube | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
soundcloud | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
SpankBang | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Steam | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
SunPorno | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
twitch | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
vimeo | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
XHamster | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
XNXX | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
XVideos | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
XXXYMovies | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
YouJizz | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
PART 2: DAYS 14-26 / 26
Premium hosts
Host | 28/05/2025 | 29/05/2025 | 31/05/2025 | 01/06/2025 | 03/06/2025 | 04/06/2025 | 05/06/2025 | 06/06/2025 | 08/06/2025 | 09/06/2025 | 10/06/2025 | 11/06/2025 | 12/06/2025 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1fichier | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
4shared | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
alfafile | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
apkadmin | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
cloudvideo | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
ddownload | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dropapk | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dropgalaxy | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
fastbit | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
file-upload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
fileal | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
filedot | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✔ | ✘ | ✔ | ✔ | ✘ |
filespace | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
gigapeta | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
hexupload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
hitfile | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ |
isra | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
katfile | ✔ | ✘ | ✘ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
mediafire | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
mega | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
mexashare | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
modsbase | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
mp4upload | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
prefiles | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
rapidgator | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
scribd | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
sendit | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
sharemods | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
simfileshare | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✔ | ✔ | ✔ | ✔ |
turbobit | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ |
upload42 | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadboy | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadev | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploadrar | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
uploady | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
userscloud | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
vidoza | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
vipfile | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
wayupload | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✘ |
world-files | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
worldbytez | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
Free hosts
Host | |
---|---|
dailyuploads | |
exload | |
filerio | |
filezip | |
hot4share | |
indishare | |
mixdrop | |
uploadbank | |
uploadbox | |
usersdrive | |
Free stream hosts
Host | 28/05/2025 | 29/05/2025 | 31/05/2025 | 01/06/2025 | 03/06/2025 | 04/06/2025 | 05/06/2025 | 06/06/2025 | 08/06/2025 | 09/06/2025 | 10/06/2025 | 11/06/2025 | 12/06/2025 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4tube | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
archive.org | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Beeg | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Canalplus | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
ComedyCentral | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
dailymotion | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
DrTuber | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
lynda | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
niconico | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
NYTimes | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Odnoklassniki | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
PornHub | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
PornoXO | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
RedTube | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
RTBF | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
RTS | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
rtve.es | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
rutube | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
soundcloud | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ |
SpankBang | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Steam | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
SunPorno | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
twitch | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
vimeo | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
XHamster | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
XNXX | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
XVideos | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ | ✘ |
XXXYMovies | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
YouJizz | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ |
HOW THE TABLES WERE GENERATED
The Python script that generated the HTML table code is this:
#!/usr/bin/env python3
"""
usage: python alldebrid_status.py <dom_file_single_day.html> <out_file_combined.html>

1. Parse the AllDebrid status DOM (argument 1)
2. If <out_file_combined.html> is new/empty -> create three tables with today's column
   else -> append today's column to the existing tables
"""
import sys, collections, html
from pathlib import Path

# --- dependency -------------------------------------------------------------
try:
    from bs4 import BeautifulSoup
except ImportError:
    sys.exit("\nMissing dependency ─ install it with:\n    pip install beautifulsoup4\n")
# ----------------------------------------------------------------------------

TICK  = "✔"  # ✓
CROSS = "✘"  # ✗

# ─────────────────────────────────────────────────────────────────────────────
# Helpers for the *new* scrape
# ─────────────────────────────────────────────────────────────────────────────
def status_from_img(img):
    if not img:
        return "not"
    src = (img.get("src") or "").lower()
    alt = (img.get("alt") or "").lower()
    if "up.gif" in src or "hoster up" in alt:
        return "up"
    if "down.gif" in src or "hoster down" in alt:
        return "down"
    return "not"

def date_from_cell(cell):
    span = cell.find("span", attrs={"data-fdate": True})
    return (span.text.split(",")[0].strip() if span else "")

def collect_fresh(soup):
    """
    Return {category: OrderedDict(host -> (symbol, date))}; symbol is ✓/✗/''
    """
    fresh = collections.OrderedDict()
    for block in soup.select("table.comparatif_block"):
        header = block.find("td", class_="tdtop")
        if not header:
            continue
        cat = header.get_text(strip=True)
        rows = collections.OrderedDict()
        for tr in block.select("tr.g1"):
            tds = tr.find_all("td")
            if len(tds) != 2:
                continue
            host_i = tds[0].find("i")
            host = host_i["alt"] if host_i and host_i.has_attr("alt") else tds[0].get_text(" ", strip=True)
            status = status_from_img(tds[1].find("img"))
            symbol = TICK if status == "up" else CROSS if status == "down" else ""
            rows[host] = (symbol, date_from_cell(tds[1]))
        fresh[cat] = rows
    return fresh

# ─────────────────────────────────────────────────────────────────────────────
# Helpers for the *existing* output file
# ─────────────────────────────────────────────────────────────────────────────
def parse_existing_tables(soup):
    """
    Return mapping:
        {category: {"headers": [date1, date2, ...],
                    "rows": OrderedDict(host -> [symbols…])}}
    """
    out = {}
    for h3 in soup.find_all("h3"):
        cat = h3.get_text(strip=True)
        table = h3.find_next("table")
        if not table:
            continue
        ths = table.find("thead").find_all("th")[1:]  # skip "Host"
        heads = [th.get_text(strip=True) for th in ths]
        rows = collections.OrderedDict()
        for tr in table.find("tbody").find_all("tr"):
            tds = tr.find_all("td")
            host = tds[0].get_text(strip=True)
            syms = [html.unescape(td.decode_contents()).strip() for td in tds[1:]]
            rows[host] = syms
        out[cat] = {"headers": heads, "rows": rows}
    return out

# ─────────────────────────────────────────────────────────────────────────────
# Merge logic
# ─────────────────────────────────────────────────────────────────────────────
def merge(existing, fresh):
    """
    existing -> result of parse_existing_tables (may be {})
    fresh    -> result of collect_fresh
    Mutates & returns 'existing' structure with the new day appended.
    """
    for cat, new_rows in fresh.items():
        ex = existing.setdefault(cat, {"headers": [], "rows": collections.OrderedDict()})
        # choose date for this column (first non-empty from fresh block)
        today = next((d for _, d in new_rows.values() if d), "")
        if today in ex["headers"]:  # already updated today, skip
            continue
        ex["headers"].append(today)
        col_idx = len(ex["headers"]) - 1
        # pad existing rows
        for host, syms in ex["rows"].items():
            while len(syms) <= col_idx - 1:
                syms.append("")  # safety pad in case of mismatch
            syms.append("")      # placeholder for today's sym
        # fill today's column
        for host, (sym, _) in new_rows.items():
            if host not in ex["rows"]:
                ex["rows"][host] = [""] * col_idx + [sym]
            else:
                ex["rows"][host][col_idx] = sym
    return existing

# ─────────────────────────────────────────────────────────────────────────────
# Build HTML from merged data
# ─────────────────────────────────────────────────────────────────────────────
def build_html(merged):
    html_parts = []
    for cat, block in merged.items():
        hdrs = block["headers"]
        rows = block["rows"]  # OrderedDict(host -> [symbols…])
        html_parts.append(f"<h3>{cat}</h3>")
        html_parts.append('<table border="1" cellpadding="4" cellspacing="0">')
        # header row
        ths = "</th><th>".join(["Host"] + hdrs)
        html_parts.append(f"<thead><tr><th>{ths}</th></tr></thead><tbody>")
        # === alphabetical order here ===
        for host in sorted(rows.keys(), key=str.lower):
            syms = rows[host]
            tds = "".join(f'<td style="text-align:center">{s}</td>' for s in syms)
            html_parts.append(f"<tr><td>{host}</td>{tds}</tr>")
        html_parts.append("</tbody></table>\n")
    return "\n".join(html_parts)

# ─────────────────────────────────────────────────────────────────────────────
def main(src_path, dst_path):
    # 1. Scrape today's status
    fresh_soup = BeautifulSoup(Path(src_path).read_text("utf-8", errors="ignore"), "html.parser")
    fresh = collect_fresh(fresh_soup)
    # 2. Load previous output (if any)
    dst = Path(dst_path)
    if dst.exists() and dst.stat().st_size:
        ex_soup = BeautifulSoup(dst.read_text("utf-8", errors="ignore"), "html.parser")
        existing = parse_existing_tables(ex_soup)
    else:
        existing = {}
    # 3. Merge & save
    merged_html = build_html(merge(existing, fresh))
    dst.write_text(merged_html, encoding="utf-8")
    print(f"✓ Updated {dst_path}")

# ─────────────────────────────────────────────────────────────────────────────
if __name__ == "__main__":
    if len(sys.argv) != 3:
        print("Usage: python alldebrid_status.py <dom_file_single_day.html> <out_file_combined.html>")
        print("Adds the day from the input file to the tables in the output HTML file, unless that day is already present")
        sys.exit(1)
    main(sys.argv[1], sys.argv[2])
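For reference, a typical daily run looks like this (the file names are only illustrative; the script accepts any saved DOM file and any output file):

python alldebrid_status.py dom_2025-05-14.html alldebrid_combined.html

Re-running it with the same day's DOM is harmless: the merge step skips any date that is already present in the table headers.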
All of the code was generated by ChatGPT o3 using vibe coding, that is, by uploading a DOM file as an attachment to the ChatGPT agent. The DOM code is not what you see in the page's "View source"; instead, open Developer Tools while browsing https://alldebrid.com/status/ and use "Copy outer HTML". That page requires being logged in with an active paid subscription; otherwise it redirects to https://alldebrid.com/offer/. Vibe coding works by telling the model what I want, without any coding knowledge, and correcting it when it's wrong until the result is achieved. At the end I modified some text strings myself to make them more explanatory. I asked it to do it in Python because it's quite well developed at this point.
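If you want to verify that the DOM you copied from Developer Tools is actually usable before feeding it to the script, a minimal sanity check like the following works; it assumes the same table.comparatif_block / td.tdtop markup that collect_fresh() relies on, and the file name check_dom.py is just a suggestion:

#!/usr/bin/env python3
# check_dom.py - quick sanity check of a saved AllDebrid status DOM.
# Assumes the markup that alldebrid_status.py's collect_fresh() expects.
import sys
from pathlib import Path
from bs4 import BeautifulSoup

soup = BeautifulSoup(Path(sys.argv[1]).read_text("utf-8", errors="ignore"), "html.parser")
blocks = soup.select("table.comparatif_block")
if not blocks:
    sys.exit("✘ No 'table.comparatif_block' found - wrong page saved, or you were redirected to /offer/.")
for block in blocks:
    header = block.find("td", class_="tdtop")
    cat = header.get_text(strip=True) if header else "(no category header)"
    print(f"{cat}: {len(block.select('tr.g1'))} hosts")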
Once the HTML for all the tables has been generated with alldebrid_status.py, a single HTML file may accumulate too many days, and when the table is pasted into Blogger some cells wrap their content across too many lines. To count how many distinct days there are, or to extract a subset of them into another HTML file, I requested this Python 3 script from ChatGPT o3 the same way, with vibe coding:
#!/usr/bin/env python3
"""
Count or extract day-columns from an AllDebrid status table.

USAGE
-----
python count_days_range.py -count INPUT.html
python count_days_range.py -range A-B INPUT.html OUTPUT.html

OPTIONS
-------
-count       List all unique DD/MM/YYYY headers and quit.
-range A-B   Keep only the inclusive date range A-B (1-based)
             in the output file, padding missing cells with ''.

The script validates arguments, protects existing files, and prints
help automatically when mis-used.
"""
from __future__ import annotations

import re, sys, argparse, datetime as dt
from pathlib import Path
from collections import OrderedDict
from typing import List, Dict

from bs4 import BeautifulSoup  # pip install beautifulsoup4

# ────────────── constants ───────────────────────────────────────────────────
DATE_RX = re.compile(r"\d{2}/\d{2}/\d{4}")  # simple DD/MM/YYYY

def _str2date(s: str) -> dt.date:
    """DD/MM/YYYY → datetime.date (throws ValueError on bad format)."""
    d, m, y = map(int, s.split("/"))
    return dt.date(y, m, d)

def _date2str(d: dt.date) -> str:
    """datetime.date → DD/MM/YYYY."""
    return f"{d.day:02d}/{d.month:02d}/{d.year}"

# ────────────── HTML ⇄ Python helpers ───────────────────────────────────────
def parse_tables(html: str) -> Dict[str, Dict[str, OrderedDict[str, List[str]]]]:
    """
    Return {category: {"headers": [dd/mm/yyyy…], "rows": OrderedDict(host -> [syms…])}}
    Blank <th> cells are ignored when harvesting headers.
    """
    soup = BeautifulSoup(html, "html.parser")
    out: Dict[str, Dict[str, OrderedDict[str, List[str]]]] = {}
    for h3 in soup.find_all("h3"):
        cat = h3.get_text(strip=True)
        tbl = h3.find_next("table")
        if not tbl:
            continue
        ths = [th.get_text(strip=True) for th in tbl.thead.tr.find_all("th")[1:]]
        # keep only real date strings
        headers = [h for h in ths if DATE_RX.fullmatch(h)]
        rows = OrderedDict()
        for tr in tbl.tbody.find_all("tr"):
            tds = tr.find_all("td")
            host = tds[0].get_text(strip=True)
            syms = [td.get_text(strip=True) for td in tds[1:]]
            rows[host] = syms
        out[cat] = {"headers": headers, "rows": rows}
    return out

def build_html(struct: Dict[str, Dict[str, OrderedDict[str, List[str]]]]) -> str:
    """Serialise the internal structure back to (pretty simple) HTML."""
    parts: List[str] = []
    for cat, blk in struct.items():
        hdrs, rows = blk["headers"], blk["rows"]
        parts.append(f"<h3>{cat}</h3>")
        parts.append('<table border="1" cellpadding="4" cellspacing="0">')
        parts.append("<thead><tr><th>Host</th>" +
                     "".join(f"<th>{h}</th>" for h in hdrs) +
                     "</tr></thead><tbody>")
        for host in sorted(rows.keys(), key=str.lower):
            cells = "".join(f'<td style="text-align:center">{c}</td>'
                            for c in rows[host])
            parts.append(f"<tr><td>{host}</td>{cells}</tr>")
        parts.append("</tbody></table>\n")
    return "\n".join(parts)

# ────────────── public features ────────────────────────────────────────────
def gather_global_dates(data) -> List[str]:
    """Return sorted list of **all distinct** date headers across categories."""
    all_dates = {h for blk in data.values() for h in blk["headers"]}
    return [_date2str(d) for d in sorted(map(_str2date, all_dates))]

def extract_range(data, dates_slice: List[str]):
    """
    Mutate *data* in-place so every category shows exactly `dates_slice`
    as its headers, inserting '' where a host didn’t have that day.
    """
    for blk in data.values():
        hdrs, rows = blk["headers"], blk["rows"]
        # map existing header → index for quick lookup
        index = {h: i for i, h in enumerate(hdrs)}
        blk["headers"] = dates_slice  # uniform header set
        for host, syms in rows.items():
            new_row = []
            for d in dates_slice:
                if d in index:
                    # symbol may be missing if header count > row length
                    j = index[d]
                    new_row.append(syms[j] if j < len(syms) else "")
                else:
                    new_row.append("")
            rows[host] = new_row

# ────────────── CLI boilerplate ────────────────────────────────────────────
def build_parser():
    p = argparse.ArgumentParser(
        prog="count_days_range.py",
        description="Count or range-extract DD/MM/YYYY columns in an AllDebrid table.")
    mx = p.add_mutually_exclusive_group(required=True)
    mx.add_argument("-count", action="store_true",
                    help="List every unique day header then quit.")
    mx.add_argument("-range", metavar="A-B",
                    help="Copy inclusive range A-B of the *global* timeline to OUTPUT.html")
    p.add_argument("files", nargs="+", metavar="FILE",
                   help="Input file (and output file when using -range).")
    return p

def main(argv: List[str] | None = None):
    args = build_parser().parse_args(argv)
    in_path = Path(args.files[0])
    if not in_path.is_file():
        sys.exit(f"✘ input file '{in_path}' not found.")
    html = in_path.read_text("utf-8", errors="ignore")
    data = parse_tables(html)
    timeline = gather_global_dates(data)  # sorted once

    # ── count ──────────────────────────────────────────
    if args.count:
        if len(args.files) != 1:
            sys.exit("✘ Only one file is allowed with -count.")
        print("\n".join(timeline))  # one per line
        print(f"\nTotal: {len(timeline)} day(s)")
        return

    # ── range ──────────────────────────────────────────
    if len(args.files) != 2:
        sys.exit("✘ Please provide INPUT and OUTPUT files with -range.")
    try:
        a, b = map(int, args.range.split("-"))
    except ValueError:
        sys.exit("✘ Range must be A-B (e.g. 1-13).")
    if a < 1 or b < a or b > len(timeline):
        sys.exit(f"✘ Range must be within 1-{len(timeline)} (found {a}-{b}).")
    out_path = Path(args.files[1])
    if out_path.exists():
        sys.exit(f"✘ output file '{out_path}' already exists.")
    slice_dates = timeline[a - 1:b]   # inclusive slice
    extract_range(data, slice_dates)  # mutate in-place
    out_path.write_text(build_html(data), encoding="utf-8")
    print(f"✓ Wrote '{out_path}' ({len(slice_dates)} day columns).")

if __name__ == "__main__":
    main()
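These are the invocations that correspond to the two parts of this post (assuming the combined file is named alldebrid_combined.html; the output names are illustrative):

python count_days_range.py -count alldebrid_combined.html
python count_days_range.py -range 1-13 alldebrid_combined.html part1_days_01-13.html
python count_days_range.py -range 14-26 alldebrid_combined.html part2_days_14-26.html

The -count run first confirms that the global timeline holds 26 days; the two -range runs then write the inclusive slices used for Part 1 and Part 2.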
NOTE: when possible, run these scripts inside a Python virtual environment (venv) to avoid conflicts with the operating system's Python packages.
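On Linux or macOS that looks like this (on Windows, activate with .venv\Scripts\activate instead):

python3 -m venv .venv
source .venv/bin/activate
pip install beautifulsoup4

beautifulsoup4 is the only third-party dependency of both scripts.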