Z-Library Returns on the Clearnet in Full Hydra-Mode (torrentfreak.com)
460 points by bertman on Feb 13, 2023 | hide | past | favorite | 138 comments


Let’s keep iterating on the takedown evasion strategies until they’re impenetrable. It’s the only hope the People have of actually being in control of anything important.


The TOR version of Z-Library was up and running all that time, including auth. So what we should evaluate is easier-to-access takedown evasion strategies.


TOR is already pretty easy to access via TOR browser so anything else that requires additional client software is probably not going to have better adoption.


if you use the brave browser, TOR is always a keyboard shortcut away


> It’s the only hope the People have of actually being in control of anything important.

Well I wouldn't go that far. There are more $5 wrenches than there are people.

https://xkcd.com/538/


Yep, aka rubber-hose cryptanalysis.

The real test here was Assange, who embarrassed the U.S. military by publishing drone footage of them killing civilians, not to mention everything else.

They got him on an individual level (IMHO by blatantly discarding any remaining vestigial pretense of abiding by the law) but-- the site is up.


The site is up but it's no longer what it once was. It's disappeared from news reports. Nor has another one taken its place.

After the prosecution of Assange and Manning, and with Snowden in exile, they've managed to basically stop the whole whistleblower phenomenon in its tracks. Which was probably a bigger goal than one easily-replaced website.

It feels a bit like the Arab spring, a lot of hope in the beginning and then it all fizzled out.


> Nor has another one taken its place.

Yes it has, the spook-friendly Bellingcat. It filled a Wikileaks-shaped hole.


That comic did a huge disservice to computer security, and it's still being thrown around years later.

There's a huge difference between knowing that your data has been compromised (you hand over the keys to avoid torture) and not knowing. That alone justifies every hoop you have to jump through to keep your data encrypted.

Besides, it doesn't apply here, since onion services were created specifically to host content anonymously: they don't know whom to torture.


You can make systems that even the creator cannot take down.


That only works if you can track down who the people are.


It works if you can track down and wrench a tiny % of them, causing the rest to wonder if they're next.

See: Belarus, 2020 or Iran, 2022 for recent examples


Current shadow libraries (zlib, libgen, scihub) suffer from centralized data hosting and opaque librarians/custodians (who modify metadata and gate inclusion/exclusion of content). We already have the tools to solve this.

1. Files are stored in a distributed fashion and referred to via their content hash. We already have IPFS for this.

2. Library metadata can be packaged up into a SQLite DB file. The DB would contain IPFS hashes, book names, authors, etc.

3. Teams of volunteers assemble and publish the library metadata DB files. There can be multiple teams, each with their own policies. The latest library files can be published via RSS. Each team can have their own upload portal.

4. A desktop app can pull multiple RSS feeds for multiple libraries. The libraries can be combined together and be searched easily on the client side. Users can search for content via the latest library metadata files, locally on their desktop. Content can be downloaded via IPFS.

5. The desktop app can also double as an IPFS host, allowing users to choose specific files to pin or simply allocate an amount of space for the purpose (100 GB, etc). There could also be servers that aggregate pinning info to make sure no gaps are there.

5. For ease of access, people can run websites that preclude the need to setup your own desktop app / download libraries.

6. Library teams can publish metadata DBs and content via torrents, too, for long-term/disaster-recovery/archival purposes.

This would be a true hydra. No one centralized team, no reliance on DNS. If one team's library set up goes down, you can use another's.
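A minimal sketch of what step 2's metadata DB could look like, using SQLite as proposed. The table and column names here are my own illustrative assumptions, not an existing schema:

```python
import sqlite3

# Build a toy version of the library metadata DB described above.
# Table/column names are illustrative assumptions, not a real schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE books (
        ipfs_cid TEXT PRIMARY KEY,  -- content hash: same file, same CID, everywhere
        title    TEXT NOT NULL,
        author   TEXT,
        isbn     TEXT,
        year     INTEGER
    )
""")
conn.execute(
    "INSERT INTO books VALUES (?, ?, ?, ?, ?)",
    ("bafybeiexamplecid", "Example Title", "A. Author", "978-0000000000", 2020),
)

# Step 4's client-side search: a simple substring match over title/author.
rows = conn.execute(
    "SELECT ipfs_cid, title FROM books WHERE title LIKE ? OR author LIKE ?",
    ("%Example%", "%Example%"),
).fetchall()
print(rows)
```

Because the whole library is one SQLite file, a client can download it once over IPFS and run every search locally, with no server round-trips.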


1. Yes, IPFS could solve that, but it relies on people hosting content. Previous examples of content-based addressing showed that little-accessed content tends to disappear as nodes go offline over the years. This would need to be solved, and I think the only way to solve it is to have a battery of centralized IPFS servers mirroring each other, which defeats the "fully distributed" setup.

2. this would also need to be hosted and could be taken down. You'd need to mirror this too, but that's a simpler problem to solve (gigabytes instead of terabytes)

3. the upload portals and the RSS feeds would, again, be centralized or would have to change so regularly that they become impractical

In the end you would wind up with a dozen (a hundred? more?) different z-libraries, which would actually make things worse from a preservation standpoint: only the most popular content would be shared, and libraries focused on rare/exotic/fringe material would be in danger of being lost, since they'd have fewer volunteers/mirrors/seeds/...

Also, freenet and other projects already showed that end-users allocating some storage and using that to spread data around is not an easy problem, the fluctuation in end-nodes is so big that it slows down the entire network to a crawl. I'm not sure this problem has been solved yet.


1. You're absolutely right that IPFS alone isn't a good way to guarantee durability. I think you'd need a second level of archiving done in bulk. Thousands of books collated into various torrents that people can help seed. This has already been done for LibGen. IPFS does provide a common hash space for everyone to rally around, and does make it easy to download single books. It's also easy to speed up downloads by using an IPFS gateway - the actual protocol is slow. I don't expect most users to actually pin files or anything.

2. Well, users and librarians need some way to find each other. That's true in any system, and that communication medium can allow certain kinds of attacks (a website on the public internet, word-of-mouth, Telegram groups). If all someone needs is an IPFS hash of a recent library metadata DB (a SQLite file), any means of communication will suffice. I think this approach allows for centralization (sure, keep a website up as long as the authorities don't care) but also gracefully allows for all manner of decentralization (use any of the above methods to distribute the metadata DB).

3. Any many-to-one system with curation (librarians) will have weak points. The idea is you can set up upload portals across any communication medium (a regular website, a dark-net site, a Telegram group, email) - and the libraries take care of collating the information. The social grouping is what matters more (libraries vs uploaders vs downloaders) - and we want to make it tech agnostic and, therefore, more resilient.

This system will be stable I think, for two reasons:

1. Network and branding effects will naturally create a few big libraries. People will use the familiar, useful ones. See how a handful of torrent search sites took up the bulk of the traffic back in the heyday of torrents. Most users will probably use a website, and the ones that are easiest to use will probably get the most traffic.

2. The resilience of the system is only needed once in a while. A set of libraries will emerge, there'll be enforcement actions that might break them apart, and then new ones will pick up the pieces (easily, because the metadata and content are open). So we want to provide the openness for this to actually happen.


The core problem with IPFS and friends is that the vast majority of Web access these days occurs on battery-powered devices that need to sleep their connections often and for long periods or battery life will plummet. End users aren't going to accept even 10% worse battery life (and it'd likely be worse than that) just so they can participate in "the swarm" and have higher latency and slower transfer on all their requests.


You could keep the metadata DB in IPFS too, along with the books. The only thing that needs to be regularly republished is the CID of the metadata DB, which fits in a tweet.

In fact, it fits in a tweet with enough room for steganographic deniability hijinks. You could publish a browser plugin which transforms tweets into IPFS CIDs according to a non-cryptographic hash algorithm. That way the cleartext of your tweet is not takedown-worthy, nor is the plugin, but the two together let users transform the seemingly innocuous tweet into the metadata DB update they need.
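One concrete way the "seemingly innocuous tweet" half could work is a synonym-pair code: each word slot in the tweet carries one bit, chosen by which of two interchangeable words appears. This is a toy steganographic sketch, not the hash-based construction described above, and the word pairs are made up:

```python
# Hide bits in an innocuous sentence: slot i emits PAIRS[i][bit].
# A toy sketch of hiding a payload in plain-looking text; the pairs
# and payload are illustrative, not any real plugin's scheme.
PAIRS = [("good", "great"), ("book", "read"), ("today", "now"),
         ("really", "very"), ("nice", "fine"), ("day", "time"),
         ("happy", "glad"), ("small", "tiny")]

def encode(bits):
    assert len(bits) <= len(PAIRS)
    return " ".join(PAIRS[i][b] for i, b in enumerate(bits))

def decode(sentence):
    return [PAIRS[i].index(w) for i, w in enumerate(sentence.split())]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
tweet = encode(bits)
assert decode(tweet) == bits
print(tweet)
```

At one bit per word, a full 256-bit CID would need far more slots than fit in a tweet, so a practical scheme would have to pack several bits per word choice (larger synonym sets) or point at a shorter lookup key.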


That's a great idea, and also amusing. :)

It's amazing that we can refer to data in a globally-unique way with small content-based hashes. Hash collisions aren't usually worth worrying about.

Another benefit is that it's easy to store large numbers of hashes with basic metadata.

SHA-256 hashes are 32 bytes. If it takes 512 bytes on average to store author/title/publish-date/ISBN, then the hash is a small part of the total per item (though not well-compressible). You can store the info for 2 million books in about a gigabyte.

Shadow librarians can also publish curated collections of books. I know a guy who tried to do this in a systematic way for college-level history textbooks covering a wide swathe of the world's history. The entire catalog with metadata and hashes is probably only a few hundred KB.
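Spelling out that storage math, with the 32-byte hash and the 512-byte metadata figure assumed above:

```python
# Back-of-envelope catalog size: content hash plus per-book metadata.
HASH_BYTES = 32    # SHA-256 digest
META_BYTES = 512   # assumed average for author/title/date/ISBN

per_item = HASH_BYTES + META_BYTES
books = 2_000_000
total_gb = books * per_item / 1e9
print(f"{per_item} bytes/item, {books:,} books ≈ {total_gb:.2f} GB")
```

So a complete catalog of a couple million books is on the order of a gigabyte uncompressed, and a curated collection of a few hundred titles only a few hundred KB.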


The tricky part is finding a balance between the blockchain approach, where everybody pins every item (wasteful) and the other end of the spectrum where people only pin the things they're thinking about right now (lossy).

There's some middle ground where we coordinate who pins what, with just enough redundancy that we don't have to worry about data disappearing, but not so much that any one of us bears an unnecessarily large burden.

I don't think we've quite figured that part out yet.


You bring up a great insight - that blockchain/pinning are on opposite ends of the spectrum of what one can choose to store, if one participates in the system at all.

We do have a good example of this already I think: torrent client peer lists. When participating in a torrent, we can see the current availability of the data in the swarm, displayed usually as a bar visualization, with each pixel being a chunk. The darker the chunk, the more peers have a copy of that chunk. The result is also summarized in a number capturing the general health of the swarm wrt hosting the torrent.

All we need to do, then, is to have a mechanism where 1) the client has a list of items to pin 2) the client uses this existing swarm-tracking mechanism to figure out which files need more hosts 3) the client picks the best ones to host given the available space/network constraints. One can be smarter than just picking the lowest-seeded file. If a host is known to have files consistently, but is offline for a few hours a day, the client can be smart and not worry about immediately getting those files, perhaps spending available resources on less-seeded files.

This is possible with current technology. We can do a simple version of this via a torrent client plugin, reading the list of files from RSS or a special text file.
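A toy version of that pin-selection logic, sketched as a greedy pass over swarm health data (the function name, file names, and numbers are all made up; a real client would also factor in host uptime as described above):

```python
# Greedy pin selection: pin the least-seeded files that fit in a
# local space budget. A sketch of the client logic described above.
def pick_pins(files, budget_bytes):
    """files: list of (name, size_bytes, seed_count); returns names to pin."""
    chosen, used = [], 0
    for name, size, seeds in sorted(files, key=lambda f: f[2]):
        if used + size <= budget_bytes:
            chosen.append(name)
            used += size
    return chosen

swarm = [("rare.pdf", 40_000_000, 1),
         ("popular.epub", 5_000_000, 90),
         ("niche.djvu", 70_000_000, 3)]
print(pick_pins(swarm, 110_000_000))  # → ['rare.pdf', 'niche.djvu']
```

The well-seeded file is skipped and the budget goes to the two files the swarm is closest to losing, which is exactly the manual triage the SciHub seeding page asked volunteers to do.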

I've seen communities do this manually actually. For hosting SciHub torrents, the community made a web page that showed the current number of known seeds per torrent. Users were responsible for picking whichever ones, usually the lowest-seeded ones, and seeding them. We can remove this tedious and error-prone work.

Doing this per IPFS file will probably take up too many resources. Perhaps we need a standard IPFS<->torrent correspondence. Something as simple as a text file in the root of the torrent file structure, a file that maps IPFS hash <-> file inside the torrent. This way an IPFS swarm and a torrent swarm can work together. You get the easy retrieval of IPFS and the increased durability of torrent swarms.


> Doing this per IPFS file will probably take up too many resources

I think we can come up with some scheme where it doesn't have to be centrally hosted. Like if my public key is 18 mod 256, then it's up to me to pin all of the files I rely on whose CID is also 18 mod 256.

If you've got thousands of users doing this, each one of them has to bear only 1/256th the burden.
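A minimal sketch of that deterministic sharding rule, assuming both the public key and the CID are hashed into one of 256 buckets (the function names are hypothetical):

```python
import hashlib

# A peer is responsible for a file when the file's CID lands in the
# peer's bucket. Toy version of the "public key is 18 mod 256" scheme.
def bucket(data: bytes, shards: int = 256) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % shards

def my_responsibility(my_pubkey: bytes, cid: bytes) -> bool:
    return bucket(my_pubkey) == bucket(cid)

# Every peer can evaluate this locally, with no coordination server:
print(bucket(b"my-public-key"), bucket(b"bafybeiexamplecid"))
```

Since the rule is a pure function of public data, anyone can also audit whether a given peer is pinning what it's supposed to.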

I imagine incentive schemes where we keep track of which peers have gotten which files from our nodes, and then later randomly check to see if they're doing their part and pinning what they got from us.

We'd all put $5 into a pot at the beginning of the month, and at the end of the month we'd share our data re: who seeded and who leeched. Maybe the bottom 10% leechers get no money back, the top 10% seeders get $10 back and everybody else gets their $5 back.

So it's like it costs $5 to access the first time, but if you leave a node running (like maybe you have a phone with a cracked screen that you just leave plugged in) then you'll never have to pay that $5 again, and if you're lucky you'll get $5 from a leecher.
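A toy settlement function for that $5-pot scheme, ranking peers by bytes seeded (the function name and figures follow the description above; real tie-breaking and fraud-proofing are out of scope):

```python
# Monthly settlement: bottom 10% of seeders forfeit their $5 stake,
# top 10% get $10, everyone else gets their $5 back.
def settle(seeded_bytes: dict, stake=5, bonus=10):
    ranked = sorted(seeded_bytes, key=seeded_bytes.get)  # least seeded first
    cut = max(1, len(ranked) // 10)                      # 10% at each end
    payouts = {p: stake for p in ranked}
    for p in ranked[:cut]:
        payouts[p] = 0        # leechers forfeit the stake
    for p in ranked[-cut:]:
        payouts[p] = bonus    # top seeders collect it
    return payouts

peers = {f"peer{i}": i * 10**9 for i in range(10)}  # peer0 seeds least
print(settle(peers))
```

Note the pot balances: each forfeited $5 stake funds one top seeder's $5 bonus, so the scheme needs no outside money as long as the two cutoffs are the same size.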

Of course it doesn't make sense for all files anywhere, but in context with a project, like Z-library, where somebody is curating the files that are in scope. Otherwise an attacker could flood the network with noise that hashed in such a way that it affected only their target (and their target's 1/256 slice of the community).


… taking this a step further… you could place the message you speak of tweeting into an ‘oh by’ code[1] … and then just chalk it onto the sidewalk.

Now passers-by can receive the message, time shifted, without the Internet.

[1] 0x.co


I'd like a stencil that can be configured to display an arbitrary QR code so you can do it with spray chalk.


Actually, you can do another cool thing, too. The IPFS ecosystem has IPNS, which generates a name backed by a public/private keypair that can point to an IPFS hash. This way, a shadow-librarian group can have one IPNS name that always points to the latest catalog.


Right, I had forgotten about IPNS. I remember some thread (a while back) where people were complaining that it was terribly slow, so I wrote it off.

Now that I think about it again, I'm not sure what you'd use IPFS for that would require fast IPNS resolution anyway. Having a version of the library that's an hour old is... just fine, actually.


https://annas-archive.org does a lot of this, and has been my go-to for books for a while now.


Yes! Anna has been active in helping duplicate Zlib, among other efforts. No one person can do it all. We need to pitch in!


It has already been done, you will be glad to learn:

https://bafybeigpp6mtsmjngaqscfkjwzivbptt4ui5yb7uih6qe43obof...

For me it has worked in Brave, Chrome, Firefox, and Safari Tech Preview, with or without IPFS and IPFS Companion installed (with IPFS it works much better). It hasn't worked in Safari on iOS or in non-Chrome browsers on Android.

It's not very fast and sometimes requires reloading the page, but the overall impression is awesome.


Thank you! This looks promising!


Guys, you already have this; it's called Nexus. It even has torrents.

https://bafybeigpp6mtsmjngaqscfkjwzivbptt4ui5yb7uih6qe43obof...

You should check them out.


An alternative that works today is #bookz on undernet.


Domain names turned out to be a weak point susceptible to attack by the statists. To route around this weakness, an array of names is used.

However, there is still the matter of having an account to get to these names. Which was the original reason the statists went after them in the first place. The users themselves will thus become the next target, just like in the days of Napster.


To me, that was the real strength of IPv6. (I know, I know: an inefficient protocol with a complex upgrade path led to near-negligible adoption.)

NAT "fixed" the problem of address exhaustion, but it killed the old internet. You cannot run your own network anymore. In the "old" times, I gave you a phone number or IP address and that's it, direct connection. All anyone could do was show up and take the computer to stop that. Sure there's a phone company or ISP involved, but they just powered the pump, you completely controlled what went through it.

Now I can't do that. They ran out of addresses, and I share an address with X unknown others. So I can't give you a home address, just a bank of doors. I could give you an apartment number, but that's also shifting transparently, so number X to you is number Y to someone else.

IPv6 would have solved the problem of exhaustion while preserving the right to an address. I could be some number permanently and you could reliably find a connection to my system using it. In that world I could set up a private DNS service in my house no one can alter without physically plugging in. Then have that store records to other addresses. Every part of that chain requires someone finding you and showing up at your door to disrupt.

Instead now I have to pay digital ocean 5 bucks to keep an address for me so anything can find me via them. A bunch of servers in my home effectively an island without a coordinate until DO points me out on request. Like having all mail addresses be to the local town hall for them to forward to me. Sure maybe you trust your local town hall, but they are fundamentally beholden to someone else.

With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP. Which requires nation-state levels of mobilization just to block an address, with fallout affecting literally thousands of others. They'd have to nuke a block to suppress any site, only for that site to find another address and be back to normal within minutes. Instead they do a WHOIS, send a scary email and boom, you're unknown, unfindable and disconnected. Hoping that word of mouth brings people to your new "address" exactly like losing your phone (and SIM) while abroad.

I know it sucks as a protocol but v6 to me is a massive extremely important development that would change how the internet, and from that all communication, works.


> With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP.

Private individuals have access to IPv4 blocks and maintain their own sovereign networks. That fact doesn't change the reality that most people, most of the time, pay a network operator (ISP, telecom) to operate their network. Network operators aren't going anywhere, and they still maintain full control over how packets transit their networks. In the case of WWAN networks, they will also know roughly where you are.

All IPv6 does is expand the address space and put the price of an address within reach of anyone... but it doesn't change the knowledge or hardware required to run your own network.


IP addresses are just a different type of name, and also assigned by hierarchical entities. NAT isn't the issue, rather it's the incumbent power structures gradually tightening the identity/control screws. If you have a public IP on your physical connection and use that for banned publishing, they go after the account holder listed for the physical connection, which eventually gets back to you - the same as if you obtain that public IP from Digital Ocean or a tunnel broker.

The only way around that is using naming systems that don't rely on centralized authorities, or at least can't be coerced by governments.


I miss the days of sending someone a letter with some cash for them to associate address A with line B. All I'd have to do to stay essentially anonymous is find someone with bad record keeping.

Suddenly someone shows up with address A and threats, and then drowns trying to interpret that person's mappings. While that's happening, I can find 5 other someones, and suddenly I have 6 addresses, all of which link essentially ephemerally to my system. Someone else does that for their mapping system, and you get to Dijkstra levels of working out how to block connections.

After like 3 levels of middlemen even centralized authorities just struggle to do the actual work of blocking, outside of just issuing the order.


On the one hand, hosting companies don't like getting raided by the feds and taken offline because one of their customers is doing something objectionable.

So I doubt those 5 new addresses will remain live for all that much longer. When you're on the lam, digitally or physically, or both, you find out who your real friends are, real quick.

On the other hand, I can type "tpb" into Google and get to a bittorrent of Disney's latest hits in less than 5 clicks, so maybe the copyright regime doesn't have an omnipotent hand on the Internet.


I'm not sure how to deliver packets on the internet without destination IP addresses of some sort.


The technique is to make it so that the destination/host the IP packets are going to isn't important. Say accessing a TOR hidden service - the IP address the packets are going to is that of any TOR node. To be useful, such an overlay network requires a different naming protocol (in this case, the TOR hidden service one), that allows services to have persistent identities without needing to publish DNS names or IP addresses.

Your traffic is still going to specific IP address(es), but this isn't useful for someone trying to censor, unless they can persecute those running TOR nodes and/or prevent access to all TOR nodes.


You seem to be describing an overlay network rather than a method of avoiding destination addresses in IP packets sent over the internet.


> With IPv6 support and adoption a whole network could be set up independent of any other authority besides BGP. Which requires nation-state levels of mobilization just to block an address, with fallout affecting literally thousands of others.

This is not how it works. Taking down a single IPv6 address (or a whole AS) is a very simple thing and is done daily to combat spam and DDoS attacks, without requiring "nation-state levels of mobilization" (whatever that means). Also, there is essentially no "fallout" at all in IPv6, and there isn't any fallout in IPv4 either, since BGP routes can be as specific as a single host.


Can't they just send a scary email to the AS administrator who then removes the offending address block from its routing tables? Or are you imagining folks migrating to ones that don't respond to such requests?


Even if you have your own IP block, ASN, are set up with multiple BGP peers/upstreams, they can always go to those upstreams and have you filtered/blocked. IPv6 is cheap and plentiful, that’s all.


Your ISP is sharing an IP with other customers? I have never, ever seen that across three countries' worth of residential ISPs. I doubt it's possible and want to make sure it's true (and concerning).


If you have ever used mobile data, you've shared your IP address with other customers. Many residential ISPs around the world also use CGNAT. I had to call the customer support of mine to have a dedicated IP address. Other providers may force customers to pay for a static IP address if they want to avoid CGNAT.


The ISP I am using does have NAT, but I was able to disable the NAT using the modem setting. (When they replaced the modem, I told them what setting I changed and they were able to put that setting into the new modem too) It is a dynamic address, but I can accept incoming connections and the IP address rarely changes (although it has happened before).


That's NAT on your modem+router, not from your ISP.

Carrier-Grade NAT (CGNAT) results in you not getting a public IP at all.


Having your own address in most places is a part of a "dedicated business line". My ISP in Switzerland literally refuses to issue so called "static" addresses at all, business or not.


You'll see it called CGNAT (Carrier-Grade NAT), and it can be a really big annoyance for a lot of things. I usually see it on mobile/cell connections, but I've heard of some DSL providers here in the States using it too.


Metronet in the US does CGNAT. I’ve had them for about a year and a half. Hasn’t caused me any real issues other than the occasional captcha.


4chan is a funny one. Apparently I had the IP of someone who posted "child models".

I'm pretty sure that wasn't me unless I have an alter ego called mister Hyde.


How does Mr. Hyde always get away with it?


It’s quickly becoming the norm rather than the exception due to IPv4 exhaustion.


I tried it. The url I got looked like guid.domain.net. At first I was thinking that the guid part must be unique for every user, but then the domain.net part is still susceptible to being seized. So… without being able to compare the url I got with other people, I’m left wondering how this actually works.


FTA:

"The domain names in question are subdomains of newly registered TLDs that rely on different domain name registries."

There are multiple TLDs/SLDs involved (and the pool will likely grow over time)


> domain.net part is still susceptible to being seized

Yes it is, but how do you discover the domains? There could be just a few hundred users per domain. Then you have to expend substantial effort to seize each domain.

Meanwhile any affected user just moves to their second domain. Even if the authorities got much better at taking down domains the only issue would be increasing the number of extra domains per user.

I can't see how the authorities can beat this.


I had to use https://singlelogin.me/ for it to generate the special domain, so can't the central https://singlelogin.me/ domain be seized at some point?


Yeah, but that one is only used for creating the unique domain, and taking it down wouldn't affect user domains. So if it goes down, they'll just have to throw the registration page up on a new domain.


From the article:

> If users can’t access the universal login page, Z-Library says they can log in through TOR or I2P and get their personal clearnet domains there.


> I tried it. The url I got looked like guid.domain.net. At first I was thinking that the guid part must be unique for every user, but then the domain.net part is still susceptible to being seized.

The GUID seems to be unique per-user, but also per ccTLD (mine are GUID.DOMAIN.cz and OTHERGUID.DOMAIN.ph).

I would guess that the pool of registered DOMAIN.ccTLD will grow faster than they can be blocked or seized, that new user per-domain GUIDs can be issued on demand onto different registered domains, and that there is an unused reserve of registered domains ready for deployment.


How did they take down the domain last time? Was it by picking on the registrar?

In my country they do this by asking all the ISPs to block the domain on their DNS servers. This works for 90% of the population, but all you have to do is change your DNS server to something other than what the ISP gives you and you're good to go.

Also, I just don’t get how current approach is any better. As far as I understand, there’s still a single point of failure, i.e. the site you get your “personal” domain from.


I'm honestly shocked no one has openly laughed themselves silly at the idea of "personalized domains" for a site openly engaging in piracy... because surely that wouldn't be a way to build a stronger case against individual users engaged in piracy, riiiiiight?


Depends. If the database linking those domains to accounts is obtained by whatever law enforcement agency goes after them, then sure. But the detection network used for torrent piracy doesn't work in this situation, so it requires that database to be seized, and it's also possible the information isn't stored after it's processed. Because copyright trolls can't get the information the way they do with torrents, they can't easily send bullshit threatening notices to end users, and it is extremely doubtful that the FBI would provide linking information for individual prosecution. I don't believe there is any example of that, though I'd be interested to hear otherwise.


I have heard of people using throwaway emails and VPNs for this sort of stuff!


You can get the personal domain over Tor and then use the domain on the regular net.


How are they able to afford so many domain names? And what stops the state from just asking the domain registrar for the details of who purchased the domains? Besides, there are only so many domain registrars in the world; eventually you'll lose the ability to purchase new domains.

And if the actual web server is behind a service like cloudflare, the state can just ask cloudflare for the IP of the real server, then ask the datacenter who owns the server at IP x ...


>what stops the state from just asking the domain registrar for the details of who purchased the domains

Some domain registrars don't ask for your personal data, and the ones that do won't verify it.

>then ask the datacenter who owns the server at IP x ...

Many datacenters in China and Russia don't care about some warez, and if the zlib staff pays over Tor with cryptocurrency, the datacenter also doesn't know who rents the server.


Users don't actually get a personalized domain for each account, only a subdomain.


That title is straight out of a cyberpunk novel.


I don’t know what the solution to this problem is, but this library is extremely useful. Most of the time you don’t want to read a book in its entirety, but mostly to check something, read a section, or browse to see if what you’re looking for is there.

Eventually, you may buy a book that you know is worth it. Right now even the table of contents may not be available before buying.


Why did they go so hard after Z-Library when all of the other Library Genesis mirrors are still up and running? Is it because Z-Library have ads and paid subscriptions while the others are non-profit volunteers?


They have limited resources, better to go after the one getting 99% of the traffic.


Z-lib has more content than libgen, was better known, and had more traffic.


> Z-Library’s very existence was put to the test last November when U.S. law enforcement seized over 200 domain names connected to the site.

From the list:

> 1lib.ae

> 1lib.in

> 1lib.io

> 1lib.mx

> b-ok.as

> b-ok.cc

> booksc.eu

> booksc.me

How does U.S. law enforcement get to seize domains under other countries' ccTLDs? Were the respective countries involved or did they just bully the operators who probably have name servers in the US?

And seizing domains is questionable in the first place - there isn't anything illegal about the domains themselves, and shouldn't the name -> IP association fall under free speech in the US?


How does this affect book sales? Do average people know this site exists and depend on it, or mostly the tech savvy types frequent zlib? A site like this could be a godsend, but what about the authors? I can see the benefit of this if it would have a lead time of let's say T+1 years (assuming that the first year is most important for sales and not sales over time).


I can't speak for all categories of books, but here's an actual author's opinion on the matter for text books: http://from-a-to-remzi.blogspot.com/2014/01/the-case-for-fre...


Wait, aren't "they" going to gather evidence on the user of "my account", associate the downloads from this domain with "me", and come after me asking hard questions?


I’m so happy to see this available to everybody! Z-lib is definitely in my top 5 favorite websites of all time.


Are people making money off of z-library, or is it being run purely out of principle/generosity, or what?


I'd expect enterprising bad actors to upload malicious PDFs and rent out compromised machines.

Site operators could potentially re-render uploads and inject their own exploits.


You'd be wrong.


Good to know that there are no bad people doing these sorts of things.


This seems like a worse system than just using Tor. What is the point in making the system available on clearnet?

Isn't it more work to go to the single sign-on, create an account, and save the custom domain names than to just install the Tor browser and go to the zlib onion address?


you are using your existing login so you just have to use singlelogin as a gateway.


What's the difference between Library Genesis and Z-Library? Aren't they the same catalogue?


Z-Library is a superset of Libgen, if I remember correctly it has lots of files that haven't been cleaned up enough to put into Libgen.


Z-library began as a Libgen mirror, but stopped mirroring uploads back. In some cases, it has documents that Libgen does not.


z-library uses a freemium model, a for-profit model.


Do you have evidence that they make a profit, or are you just recasting the way they accept donations as "a for-profit model"?


Last time I tried, they prevented me from downloading a 6th ebook unless I paid. That’s a bit more aggressive than “accepting donations”.


I don't defend them pushing for payment, but did you not also have the option of A: waiting 24 hours and then downloading it, or B: registering an account to get 20 downloads per day?


I saw that today (and created an account); I don't remember it being like that when I used it last time, but I could well be wrong, it was a couple of years ago. Sure, you could wait 24 hours, or get a new IP address and clear cookies to bypass the limitation altogether. I can't say how much money they made with this, but that's the definition of freemium.


Freemium is a for-profit model; whether they actually make a profit is beside the point.


Does anyone have links or good intro videos about what Hydra-mode is?

Reading through the article, it seems the domain name is not publicly exposed and a new domain (?) is created on the fly. I'm not sure I understand how it works, but every user who logs in through an intermediary gets their own domain, and that's their strategy to keep the shop open for now.


> Does anyone have links or good intro videos about what Hydra-mode is?

Probably just a reference to Greek mythology: chop off one head, and two grow back. An analogue of spawning multiple new domains whenever a site is taken down.

https://en.wikipedia.org/wiki/Lernaean_Hydra


Later versions of the Hydra story add a regeneration feature to the monster: for every head chopped off, the Hydra would regrow two heads.

The crucial line in that article; amusingly enough, it almost reads like a changelog entry.


Where do these books come from in the first place? Have publishers systems been leaked/hacked?


Library Genesis has an "Upload" button on the top bar, the common login and password for which can be found with a simple Google search. They might also run more organised scanning/sourcing operations on their forums.


> Where do these books come from in the first place? Have publishers systems been leaked/hacked?

Most of zlib is libgen, and I think libgen relies on user uploads and sourcing from their forums


At least in torrenting communities, a lot of these books come from library platforms like OverDrive. Most public libraries give access to OverDrive, and removing the DRM is straightforward.


In the first place, the books come from authors, who write words based on what a muse or ChatGPT tells them.


Touché


They also had a similar response to Telegram banning their bot: Each user gets their own personal Telegram bot.


I wonder what they use to build this Hydra feature...


ELI5 why the gov doesn’t just seize the servers.


You have to know where something is to seize it, and operations across international lines are hard to do. They likely have a series of bouncers/reverse proxies in front of the "main" backend infrastructure, and it is also likely that they rotate their bouncers regularly: different datacenters, countries, etc.


Don't the proxies just point back to the main infrastructure? How do the site operators deal with bandwidth usage, QoS, and all the edge VPSes/proxies?


Generally, something like an nginx reverse proxy is pretty performant. The opsec gains come from rotating the infrastructure you run on. If you have something like ingress -> middle -> backend and regularly change hosts, then by the time someone is able to get a court order to seize the ingress, you've already moved on and they need to start the process over.

In terms of system hardening, since the outer machines are almost bare, they are hard to hack. Attacking the backend server isn't easy either, assuming the webadmin knows what they are doing: things like blocking outgoing traffic and configuring the system not to leak the backend server's IP.
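The rotation idea can be sketched as a toy scheduler: the public ingress host changes on a schedule, so any single seized ingress is already stale by the time a court order lands. This is purely illustrative; the host names are made up, and nothing here reflects Z-Library's actual setup.

```python
import hashlib

# Hypothetical pool of disposable ingress hosts (all names made up).
INGRESS_POOL = [
    "ingress-a.example.net",
    "ingress-b.example.net",
    "ingress-c.example.net",
]

def current_ingress(epoch_day: int) -> str:
    """Deterministically pick the ingress host for a given day.

    Hashing the day number means every client derives the same host
    without any central announcement, while the host still rotates.
    """
    digest = hashlib.sha256(str(epoch_day).encode()).digest()
    return INGRESS_POOL[digest[0] % len(INGRESS_POOL)]
```

A real deployment would also retire old hosts and spin up fresh VPSes, but the scheduling idea is the same.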


> ELI5 why the gov doesn’t just seize the servers.

Because “the government” is not a single unitary global institution.


...yet


How could it be? Why would countries like China, Russia, or Thailand align with the US on that subject when they clearly have no interest in doing so? Being overly cynical sounds good to some people, but it does not result in an accurate understanding of how the world works.


I understand your interpretation of a one word response as idle cynicism for its own sake. Truly though, the history of civilization is one of ever-consolidating power, roughly correlated with technological advance. There are of course local exceptions (both in the geographical and temporal sense) to the overall trend, but barring a massive technological setback, I really don't see how we won't eventually end up under the rule of a single government.


The desire for control will unite even philosophically disparate governments. Pay attention.


In the real world, the opposite is happening right now.


Because you can buy an infinite number of globally located, cheap, disposable front end VPSes starting at $1 a month or less that hide your critical infrastructure.


The servers can easily be duplicated in infinite numbers.


Which government? Where are the servers located? Whose jurisdiction? Are all the servers in the same place?


It seems the US Postal Inspection Service has some sort of judicial power over domains especially, possibly related to the DNS root (through ICANN) being based in the US.


There are plenty of cases of .com and .net domains being seized by US federal court order: the registrar locks the ability to log in and administer the domain, sets new authoritative nameservers, the government publishes a new authoritative zonefile for it, and puts up a one-page "Seized by FBI" notice on an httpd of their own somewhere.

https://www.google.com/search?client=firefox-b-d&q=.com+doma...


Can you elaborate on this?


I'm possibly wrong about the Postal Service; I just searched for seizure news involving them, there seemed to be many, and I made more of a leap than was warranted. I guess they just do ops and the seizures are via DoJ/FBI. But in any case, ICANN is well known to control DNS and is a US org.


Interesting. It does seem like they are involved, but I do not think the USPIS has jurisdiction over cybercrime unless it involves mail, although they can participate in inter-agency operations.

> Cybercrimes are crimes committed through the Internet or using computer devices. These crimes almost always intersect with the postal system. That’s why the Postal Inspection Service is committed to protecting the public from criminals who steal digital information for financial gain, revenge, or even political advantage.

> These crimes almost always intersect with the postal system.

I don't understand this part at all.

https://www.uspis.gov/tips-prevention/cybercrime


Neither does anyone else: "In order to preserve operational effectiveness, the U.S. Postal Inspection Service does not discuss its protocols, investigative methods, or tools," and "In sum, not much is known about iCOP" - https://www.snopes.com/articles/393823/usps-icop/

One of the first articles that brought this up: https://news.yahoo.com/the-postal-service-is-running-a-runni...

See also: https://www.vox.com/policy-and-politics/2020/8/20/21377305/p...


They're not in a country that wants that to happen.


Dumb question.. how do I find the URL?


dumb answer: you asked a dumb question. You can start somewhere around https://singlelogin.me/


So where does one go to log in and be redirected to their personal domain?


Per the TorrentFreak article: https://singlelogin.me/ to log in to Z-lib


They give you your domains when you log in on Tor.


people think z-library is good because it gives access to everyone, but really it gives more access to those who have capital to use the content to train AI.... it's very foolish


i frankly don't see what on earth this actually does. when users sign up, their account gets hooked up with a randomly generated subdomain of some 1lib.(some tld) domain. what fundamentally makes it harder for these domains to be revoked than the 200 or whatever that were revoked before? i don't think it would be that hard for lawyers to just generate a bunch of fake accounts, take note of the domains, and report them. or even better, just go after the singlelogin.me domain
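For context, the per-account scheme the article describes could be as simple as the following sketch. The base domains and the `personal_domain` helper are hypothetical stand-ins; this only illustrates why enumerating and reporting subdomains one by one scales poorly for the other side.

```python
import secrets

# Stand-in base domains (not real); the article suggests a pool of them.
BASE_DOMAINS = ["1lib.example", "b-ok.example"]

def personal_domain(base_domains=BASE_DOMAINS) -> str:
    """Return a random per-account subdomain, e.g. 'a3f91c02.1lib.example'.

    Each account sees only its own subdomain, so mass-reporting requires
    mass account creation, and revoking one subdomain affects one user.
    """
    label = secrets.token_hex(4)  # 8 random hex characters
    return f"{label}.{secrets.choice(base_domains)}"
```

The single point of failure the comment identifies (the login gateway and the base domains themselves) is unaffected by this, which is exactly the objection being raised.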


[flagged]


Z-Library the book/article piracy project, not zlib the compression library


my comment was a joke :(


I actually wondered the same thing since I’ve never heard of z-library


Wow really? I used it for damn near a decade.


Haha, sorry. It's very hard to tell, since that's a real confusion a lot of people have.


No, Z-library is not related to zip or compression.


But given how pervasive zlib is, it's probably used somewhere in its infrastructure.



