Here's a list of PageSucker's main features:
- Various powerful URL filters to restrict downloads to only the desired files or file types. Among them is a pattern-matching filter that supports both Perl-like regular expressions and standard DOS-like wildcard patterns (a filter of this kind is sketched after this list).
- Automatic rebuilding of the server's file system hierarchy on the client side (i.e. files are not all downloaded into one common directory; instead, the server's directory tree is recreated where possible, as sketched below).
- Option to download pages only up to a given recursion depth (link distance from the start page).
- Multithreaded: more than one connection to a server can be opened at a time, which can reduce download times by a factor of 10 or more compared with traditional single-connection download programs (e.g. FTP clients). A depth-limited, multithreaded crawl loop is sketched after this list.
- Handles the special case where the server's file system has fewer restrictions on file naming than the local machine: the names of downloaded files are shortened as necessary and stripped of characters that are illegal in the local file system (see the sanitization sketch below).
- Automatic prevention of file overwriting: if a file with the same name already exists, the new file is renamed so that the old file is not destroyed.
- Limited capability to correct broken HTML pages, so that the downloaded copies might work better than the original pages.
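The pattern filter mentioned in the list is easy to illustrate. Below is a minimal Python sketch of a filter that accepts either a Perl-like regular expression or a DOS-like wildcard pattern; the UrlFilter class and its matches method are illustrative names, not PageSucker's actual API.

```python
# Minimal sketch of a URL pattern filter: a filter either accepts or
# rejects a candidate URL. UrlFilter/matches are illustrative names.
import fnmatch
import re

class UrlFilter:
    def __init__(self, pattern: str, is_regex: bool):
        if is_regex:
            # Perl-like regular expression, e.g. r".*\.(gif|jpe?g)$"
            self._regex = re.compile(pattern)
        else:
            # DOS-like wildcard pattern, e.g. "*.html",
            # translated into an equivalent regular expression
            self._regex = re.compile(fnmatch.translate(pattern))

    def matches(self, url: str) -> bool:
        return self._regex.match(url) is not None

# Example: accept only HTML pages and JPEG images.
html_filter = UrlFilter("*.htm?", is_regex=False)
jpeg_filter = UrlFilter(r".*\.jpe?g$", is_regex=True)
print(html_filter.matches("http://example.com/index.html"))  # True
print(jpeg_filter.matches("http://example.com/pic.jpeg"))    # True
```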
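Recreating the server's directory tree can be pictured as deriving a local path from each URL's host and path components. The following sketch makes that mapping explicit; local_path_for and mirror_root are hypothetical names, and the exact mapping PageSucker uses may differ.

```python
# A sketch of recreating the server's directory tree locally, assuming
# the local path is built from the URL's host and path components.
import os
from urllib.parse import urlparse

def local_path_for(url: str, mirror_root: str) -> str:
    parsed = urlparse(url)
    # Treat a trailing "/" as a request for the directory's index page.
    path = parsed.path or "/"
    if path.endswith("/"):
        path += "index.html"
    # http://example.com/a/b.html -> mirror_root/example.com/a/b.html
    return os.path.join(mirror_root, parsed.netloc,
                        *path.lstrip("/").split("/"))

target = local_path_for("http://example.com/docs/guide.html", "mirror")
os.makedirs(os.path.dirname(target), exist_ok=True)  # recreate the tree
print(target)  # mirror/example.com/docs/guide.html
```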
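Recursion-depth limiting and multithreading combine naturally in a breadth-first crawl: all pages at one depth are fetched in parallel before the next depth is expanded. The sketch below shows one way to do this; MAX_DEPTH, NUM_THREADS, and the naive link extraction are assumptions made for illustration, not PageSucker's internals.

```python
# A sketch of a depth-limited, multithreaded crawl. Saving pages to
# disk is omitted; MAX_DEPTH and NUM_THREADS are illustrative settings.
import re
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urljoin
from urllib.request import urlopen

MAX_DEPTH = 2     # recursion depth limit
NUM_THREADS = 4   # parallel connections to the server

def fetch(url: str) -> str:
    try:
        with urlopen(url, timeout=10) as response:
            return response.read().decode("utf-8", errors="replace")
    except OSError:
        return ""  # skip unreachable pages in this sketch

def extract_links(base_url: str, html: str) -> list[str]:
    # Naive href extraction; a real crawler would use an HTML parser.
    return [urljoin(base_url, href)
            for href in re.findall(r'href="([^"]+)"', html)]

def crawl(start_url: str) -> None:
    seen = {start_url}
    frontier = [start_url]
    with ThreadPoolExecutor(max_workers=NUM_THREADS) as pool:
        for depth in range(MAX_DEPTH + 1):
            # Fetch the whole frontier in parallel, one thread per page.
            pages = list(pool.map(fetch, frontier))
            if depth == MAX_DEPTH:
                break  # do not follow links past the depth limit
            next_frontier = []
            for url, html in zip(frontier, pages):
                for link in extract_links(url, html):
                    if link not in seen:
                        seen.add(link)
                        next_frontier.append(link)
            frontier = next_frontier

crawl("http://example.com/")
```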
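Finally, name shortening and overwrite prevention can be sketched as two small steps: sanitize the server's file name for the local file system, then pick a non-colliding path. MAX_NAME_LEN, ILLEGAL_CHARS, and the numbering scheme below are illustrative assumptions, not PageSucker's actual rules.

```python
# A sketch of making server file names safe locally and avoiding
# overwrites. The length limit and illegal-character set are assumed.
import os
import re

MAX_NAME_LEN = 31                        # e.g. classic Mac OS limit
ILLEGAL_CHARS = re.compile(r'[\\/:*?"<>|]')

def sanitize(name: str) -> str:
    name = ILLEGAL_CHARS.sub("_", name)  # strip illegal characters
    if len(name) > MAX_NAME_LEN:         # shorten, keeping the extension
        root, ext = os.path.splitext(name)
        name = root[:MAX_NAME_LEN - len(ext)] + ext
    return name

def unique_path(directory: str, name: str) -> str:
    # If "file.html" exists, try "file.1.html", "file.2.html", ...
    candidate = os.path.join(directory, name)
    root, ext = os.path.splitext(name)
    counter = 1
    while os.path.exists(candidate):
        candidate = os.path.join(directory, f"{root}.{counter}{ext}")
        counter += 1
    return candidate

print(unique_path(".", sanitize('very:long*report?"2001".html')))
```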