Diffstat (limited to 'doc')
-rw-r--r--  doc/ChangeLog                                 1474
-rw-r--r--  doc/Makefile.am                                118
-rw-r--r--  doc/Makefile.in                               1584
-rw-r--r--  doc/fdl.texi                                   507
-rw-r--r--  doc/sample.wgetrc                              125
-rw-r--r--  doc/sample.wgetrc.munged_for_texi_inclusion    125
-rw-r--r--  doc/stamp-vti                                    4
-rwxr-xr-x  doc/texi2pod.pl                                500
-rw-r--r--  doc/version.texi                                 4
-rw-r--r--  doc/wget.info                                 4556
-rw-r--r--  doc/wget.texi                                 4284
11 files changed, 13281 insertions, 0 deletions
diff --git a/doc/ChangeLog b/doc/ChangeLog
new file mode 100644
index 0000000..a2b923c
--- /dev/null
+++ b/doc/ChangeLog
@@ -0,0 +1,1474 @@
+2011-08-18 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * texi2pod.pl: Don't assume the perl executable is under /usr/bin/.
+
+2011-08-06 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (Wgetrc Commands): Document show_all_dns_entries.
+
+ * Makefile.am (wget.pod): Pass the VERSION value to texi2pod.
+
+ * texi2pod.pl: Update from GCC.
+
+2011-07-28 Noèl Köthe <noel@debian.org> (tiny change)
+
+ * wget.texi (HTTP Options): Fix typo.
+
+2011-07-26 Giuseppe Scrivano <giuseppe@southpole.se>
+
+ * wget.info (cookies): Remove reference to --cookies.
+ Reported by: Noèl Köthe.
+
+2011-07-05 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (Recursive Retrieval Options): Make clearer that recursion,
+ by default, uses 5 levels.
+ Reported by: Marc Deop <damnshock@gmail.com>.
+
+2011-03-21 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi: Do not cite the current maintainer.
+ Reported by: Micah Cowan <micah@cowan.name>.
+
+2010-12-22 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (HTTP Options): Remove sentence which doesn't reflect
+ the wget behaviour when -k -K are used with -E.
+ Reported by: pike-wget@kw.nl.
+
+2010-08-08 Reza Snowdon <vivi@mage.me.uk>
+
+ * wget.texi: Added information about the config option to the
+ 'Overview' section and a description of the option in
+ 'Logging and Input File Options'.
+
+2010-10-26 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (Download Options): Remove unclear statement about the
+ --waitretry option.
+ Reported by: Manfred Koizar <mkoi-pg@aon.at>.
+
+2010-09-25 Merinov Nikolay <kim.roader@gmail.com>
+
+ * wget.texi (Download Options): Document --unlink option.
+
+2010-09-13 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (Recursive Accept/Reject Options): Remove superfluous dot.
+ Reported by: Snader_LB.
+
+2010-07-28 Alon Bar-Lev <alon.barlev@gmail.com> (tiny change)
+
+ * texi2pod.pl: Use the warnings module only when it is available.
+
+2010-05-27 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * wget.texi (Download Options): Document that -k can be used with -O
+ only with regular files.
+
+2010-05-08 Giuseppe Scrivano <gscrivano@gnu.org>
+
+ * Makefile.am: Update copyright years.
+
+ * fdl.texi: Likewise.
+
+ * texi2pod.pl: Likewise.
+
+ * wget.texi: Likewise.
+
+2010-01-09 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): Documented
+ --no-use-server-timestamps.
+
+2009-09-04 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Time-Stamping): "older" -> "not newer".
+
+ * Makefile.am (install.man, install.wgetrc): Use $(mkinstalldirs),
+ not $(top_srcdir)/mkinstalldirs.
+
+2009-08-27 Micah Cowan <micah@cowan.name>
+
+ * texi2pod.pl: Handle @asis in table-element formatting.
+
+ * wget.texi (Exit Status): Document new exit codes.
+
+2009-08-02 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Option Syntax): "This is a complete equivalent of" ->
+ "This is completely equivalent to". Thanks to Reuben Thomas for
+ catching this.
+
+2009-07-28 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): Document "lowercase", "uppercase",
+ and the new "ascii" specifier for --restrict-file-names.
+ (HTTP Options): Rename --html-extension to --adjust-extension.
+ (Wgetrc Commands): Rename html_extension to adjust_extension.
+
+2009-07-26 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): Change --iri item to --no-iri;
+ rename --locale to --local-encoding.
+ (Wgetrc Commands): Document iri, local_encoding, remote_encoding,
+ ask_password, auth_no_challenge, and keep_session_cookies.
+
+2009-07-06 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Logging and Input File Options): Alter description of
+ --input-file, implying that --force-html isn't necessary when the
+	input is an HTML file. Improve accuracy of --base description.
+ (Wgetrc Commands): Improve accuracy of "base" description.
+ (HTTP Options): Clarify operation of --post-file.
+
+2009-07-03 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): --iri=no --> --no-iri
+ (Contributors): Add Saint Xavier.
+
+2009-06-20 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Contributors): Added Jay Krell.
+
+2009-06-14 Micah Cowan <micah@cowan.name>
+
+ * Makefile.am (wget.pod): $(srcdir)/version.texi -> version.texi
+
+2009-06-12 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): More accuracy on what happens when
+ -nd is used with -r or -p.
+
+2009-06-11 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Contributors): Added Xin Zou, Benjamin Wolsley, and
+ Robert Millan.
+
+2009-06-11 Joao Ferreira <joao@joaoff.com>
+
+ * wget.texi (Option Syntax): Fixed contradictory and confusing
+	explanation of --follow-ftp and negation.
+
+2009-06-10 Micah Cowan <micah@cowan.name>
+
+ * sample.wgetrc: Add "https_proxy" to the proxy examples. Thanks
+ to Martin Paul <martin@par.univie.ac.at> for the suggestion.
+
+2008-11-15 Steven Schubiger <stsc@members.fsf.org>
+
+ * sample.wgetrc: Comment the waitretry "default" value,
+ because there is a global one now.
+
+ * wget.texi (Download Options): Mention the global
+ default value.
+
+2008-11-10 Micah Cowan <micah@cowan.name>
+
+ * Makefile.am (EXTRA_DIST): Removed no-longer-present
+ README.maint (shouldn't have been there in the first place).
+
+	* wget.texi (Mailing Lists): Added information about the Gmane
+	portal, added subsection headings.
+
+ Update node pointers.
+
+2008-11-05 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Move --no-http-keep-alive from FTP Options to HTTP
+ Options.
+ (Mailing List): Mention moderation for unsubscribed posts, and
+ archive location.
+
+2008-11-04 Micah Cowan <micah@cowan.name>
+
+ * wget.texi, fdl.texi: Updated to FDL version 1.3.
+
+2008-10-31 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Mailing List): Update info to reflect change to
+ bug-wget@gnu.org.
+
+2008-09-30 Steven Schubiger <stsc@members.fsf.org>
+
+ * wget.texi (Wgetrc Commands): Add default_page, save_headers,
+ spider and user_agent to the list of recognized commands.
+
+2008-09-10 Michael Kessler <kessler.michael@aon.at>
+
+ * wget.texi (Robot Exclusion): Fixed typo "downloads" ->
+ "download"
+
+2008-08-03 Xavier Saint <wget@sxav.eu>
+
+ * wget.texi : Add option descriptions for the three new
+ options --iri, --locale and --remote-encoding related to
+ IRI support.
+
+ * sample.wgetrc : Add commented lines for the three new
+ command iri, locale and encoding related to IRI support.
+
+2008-08-03 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Don't set UPDATED; already set by version.texi.
+ (HTTP Options): Add --default-page option.
+
+2008-07-17 Steven Schubiger <stsc@members.fsf.org>
+
+	* wget.texi (Logging and Input File Options): Document that remote
+	URLs given to --input-file are implicitly treated as HTML, and
+	describe the baseref assumption that may be made for them.
+
+2008-06-29 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Contributors>: Added Joao Ferreira, Mike Frysinger,
+	Alain Guibert, Madhusudan Hosaagrahara, Jim Paris, Kenny
+ Parnell, Benno Schulenberg, and Pranab Shenoy. Added Steven
+ Schubiger to the "Special Thanks" section.
+
+2008-06-13 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Mailing List): The wget-notify mailing list no longer
+ receives commit notifications from the source repository.
+ (Internet Relay Chat): Activity isn't quite so low any more,
+ remove notice to that effect.
+
+2008-05-17 Steven Schubiger <stsc@members.fsf.org>
+
+ * wget.texi (Download Options): Change documentation to reflect
+ the new default value for --prefer-family.
+ (Wgetrc Commands): Same, for prefer_family wgetrc command.
+
+2008-05-12 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options): -N with -O downgraded to a
+ warning.
+
+2008-04-30 Steven Schubiger <stsc@members.fsf.org>
+
+ * wget.texi <Download Options>: Document the --ask-password
+ option.
+
+2008-04-27 Micah Cowan <micah@cowan.name>
+
+ * wget.texi (Download Options) <-O>: Elaborate on why certain
+ options make poor combinations with -O.
+
+2008-04-24 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Adjusted documentation to account for CSS support;
+ added Ted Mielczarek to contributors.
+
+2008-04-22 Mike Frysinger <vapier@gentoo.org>
+
+ * sample.wgetrc: Added prefer_family example. Resolves bug
+ #22142.
+
+2008-04-11 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Contributors>: Added Julien Buty, Alexander
+ Dergachev, and Rabin Vincent.
+
+2008-03-24 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Types of Fields>: Mentioned various caveats in the
+	behavior of accept/reject lists, deprecated the current
+ always-download-HTML feature. Added @noindent to a couple of
+ appropriate spots.
+
+2008-03-17 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Directory-Based Limits>: Mention importance of
+	trailing slashes to --no-parent.
+
+2008-02-10 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <HTTP Options>: Added documentation of
+ --auth-no-challenge.
+
+2008-02-06 Micah Cowan <micah@cowan.name>
+
+	* wget.texi <Overview>: Remove references to no-longer-supported
+ socks library.
+
+2008-01-31 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Ensure that license info appears in the info
+ version of the manual.
+
+2008-01-25 Micah Cowan <micah@cowan.name>
+
+ * Makefile.am, wget.texi: Updated copyright year.
+
+2007-12-10 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Document the --content-disposition option (and not
+ just the .wgetrc setting).
+
+2007-12-06 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: "the the" -> "the"
+
+2007-12-05 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Wgetrc Commands>: Explicitly mention that
+ --content-disposition has known issues.
+
+2007-10-13 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Mailing Lists>: Replaced mention of no-longer
+ included PATCHES file with link to relevant Wgiki page.
+ * wget.texi <Internet Relay Chat>: Added new section.
+
+2007-10-10 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Wgetrc Commands>: Fixed "doewnloads" typo.
+
+2007-10-08 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Credit to Ralf Wildenhues for automakifying patches.
+
+2007-10-05 Ralf Wildenhues <Ralf.Wildenhues@gmx.de>
+
+ * Makefile.in: Removed, replaced by Makefile.am.
+ * Makefile.am: Converted from Makefile.in.
+
+2007-10-03 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <Wgetrc Commands>: Cleaned up alphabetization,
+ more consistent use of underscores. Added a description of the
+ content_disposition wgetrc command.
+
+2007-10-01 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Updated information in Mailing Lists, Reporting
+ Bugs. Added Web Site section, and add information about Mac OS
+ X, MS-DOS, and VMS in Portability.
+
+2007-09-27 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Removed "for more details" from parenthesese
+ enclosing @pxref{}s, so that texi2pod.pl knows to remove the
+ whole reference. Made some gramattical improvements, and
+ strengthened the recommendation to use the info manual instead.
+ * texi2pod.pl: Brought in some updates from the GCC version. Not
+ an entire update, since a couple "fixes" there breaks stuff
+ here.
+
+2007-09-12 Micah Cowan <micah@cowan.name>
+
+ * wget.texi: Expanded the description of -O. Clarified the
+ detection of elements as "patterns" versus "suffixes" in -A,
+ -R. Describe -p in relation to -nc.
+
+2007-07-28 Micah Cowan <micah@cowan.name>
+
+ * wget.texi <HTTP Options>: Added --max-redirect option.
+
+2007-07-05 Micah Cowan <micah@cowan.name>
+
+ * fdl.texi:
+ Changed to match the version in gnulib.
+
+ * Makefile.in:
+ * texi2pod.pl:
+ * texinfo.tex:
+ Updated GPL reference to version 3 or later, removed FSF
+ address.
+
+ * wget.texi:
+ Slightly reworded the FDL license invocation. Replaced the
+ maintainer reference. Removed the GPL text from the manual.
+
+ * gpl.texi:
+ Removed due to discontinuation of reference in Wget manual.
+
+2006-07-10 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+ * wget.texi: Fixed rendering of --no-proxy description in the man
+ page. Added information about current maintainer.
+
+2006-06-28 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+	* wget.texi: Removed invariant status from the GPL and GFDL sections.
+ Changed UPDATED to Jun 2006. Updated copyright notice to include 2006.
+
+2006-06-26 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Recursive Accept/Reject Options): Document
+ --ignore-case.
+
+2006-06-20 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Add missing word.
+ Reported by Adrian Knoth.
+
+2006-02-05 Hrvoje Niksic <hniksic@xemacs.org>
+
+ (Download Options): Changed "a recent article" to "a 2001 article"
+ in the description of --random-wait, since the article in question
+ is not really recent.
+
+2006-02-05 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Document the modified meaning of
+ --random-wait.
+
+2005-11-15 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Document https_proxy.
+
+2005-09-02 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * sample.wgetrc: Rewrite the "passive FTP" paragraph to better
+ reflect reality.
+
+2005-08-09 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Wgetrc Commands): Removed documentation for the now
+ deleted command "kill_longer".
+
+2005-06-28 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Logging and Input File Options): Don't claim that
+ --base requires --force-html to work.
+
+2005-06-25 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Update -4/-6 documentation to
+ reflect the fact that we no longer use AI_ADDRCONFIG.
+
+2005-06-24 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * gpl.texi (GNU General Public License): Split GPL text into a
+ separate file and include it from wget.texi. Used the latest
+ template from gnu.org with the updated address of the FSF.
+
+2005-06-23 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Contributors): Updated list of principal
+ contributors.
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Mailing List): Remove reference to the wget-cvs list,
+ which no longer exists.
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Use the more standard authorship phrase "and others".
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Overview): Remove explicit vertical spacing.
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * texinfo.tex: Update with a non-prehistoric version.
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * texi2pod.pl: Locate perl using the "env" program, so we don't
+ need to modify texi2pod.
+
+ * Makefile.in (wget.pod): Work with texi2pod.pl directly instead
+ of generating it from texi2pod.pl.in.
+
+2005-06-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Wgetrc Commands): Remove the "lockable boolean"
+ feature.
+
+2005-06-20 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * ansi2knr.1: Removed.
+
+2005-06-16 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Logging and Input File Options): It's --no-verbose,
+ not --non-verbose.
+
+2005-06-06 Keith Moore <keithmo@exmsft.com>
+
+ * Makefile.in: Fix a harmless (but annoying) installation warning.
+
+2005-05-30 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Removed statement that redirect in
+ response to POST is "technically disallowed", which I cannot find
+	in either rfc2616 or rfc1945. Even if that were technically the
+	case, the prevalence of such responses would make the
+ prohibition irrelevant.
+
+2005-05-14 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Overview): Document --[no-]proxy as primarily being
+ used to turn *off* the use of proxies.
+
+2005-05-11 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTPS (SSL/TLS) Options): Explain certificate
+ checking in more detail.
+
+2005-05-08 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * texi2pod.pl.in: Allow an "EXAMPLES" section.
+
+2005-05-06 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Document empty user-agent.
+
+2005-05-06 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Explain that the read timeout
+ really refers to idle timeout.
+ (Download Options): Mention that decimal and subsecond values may
+ be used for timeouts.
+
+2005-05-05 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: We're using GFDL 1.2, not 1.1.
+
+2005-05-05 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Contributors): Updated.
+
+2005-04-27 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Fix bind address cindex entry that
+ broke concept index generation.
+
+2005-04-27 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+ * wget.texi: Fixed a broken reference to Security Considerations
+ section in tex-generated documents (like the man page).
+
+2005-04-27 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+ * wget.texi: Document --user, --password, --ftp-user and the
+	corresponding Wgetrc commands. Renamed --ftp-passwd to --ftp-password,
+	--http-passwd to --http-password and --proxy-passwd to --proxy-password.
+	Renamed ftp_passwd to ftp_password, http_passwd to http_password and
+	proxy_passwd to proxy_password. Removed documentation for the
+ deprecated login command.
+
+2005-04-27 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTPS (SSL/TLS) Options): Document --random-file.
+
+2005-04-27 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Improve wording of command descriptions.
+
+2005-04-27 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Mention --keep-session-cookies when
+ documenting --post-data.
+
+2005-04-27 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Document the new form of SSL/TLS options.
+
+2005-04-26 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Improved entry on
+ --keep-session-cookies.
+
+2005-04-26 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Directory Options): Removed stray text after
+ --protocol-directories.
+
+2005-04-26 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Option Syntax): Document boolean options. Include
+ the option syntax in the man page.
+ (Directory Options): Removed stray text after --protocol-directories.
+
+2005-04-25 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Advanced Usage): Don't advertise the non-existent
+ `-s' option.
+
+2005-04-25 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Document --retry-connrefused.
+
+2005-04-25 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * Makefile.in (wget.info): Depend on version.texi as well.
+
+ * wget.texi: Simplify copyright. Replace remaining instances of
+ --OPTION=off with --no-OPTION.
+
+2005-04-24 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Document --prefer-family.
+
+2005-04-24 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Don't claim that -6 accepts mapped
+ IPv4 addresses.
+
+2005-04-23 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Documented the SSL command-line options.
+
+2005-04-23 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Wgetrc Commands): Document ftp_passwd.
+ (FTP Options): Document --ftp-passwd.
+
+2005-04-23 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * texi2pod.pl.in: First process @@ then @}, so @samp{-wget@@} is
+ interpreted correctly.
+
+2005-04-20 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Document behavior of -6 wrt mapped IPv4 addresses.
+
+2005-04-20 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Document IPv6 related options.
+
+2005-04-18 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Update mailing list information.
+
+2005-04-18 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Don't claim that --no-dns-cache is
+ necessary for dyndns servers -- it's not.
+
+2005-04-08 Larry Jones <lawrence.jones@ugsplm.com>
+
+ * Makefile.in (wget.info): Don't use $< in an explicit rule.
+
+2005-03-22 Joseph Caretto <jcaretto@pitt.edu>
+
+ * texi2pod.pl.in: Handle asis again. It used to work (see the
+ 2001-12-11 entry), but the local change was lost in the upgrade
+ to 1.4.
+
+2005-02-11 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+ * wget.texi: Added Simone Piunno as new contributor.
+
+2005-01-01 Mauro Tortonesi <mauro@ferrara.linux.it>
+
+ * wget.texi: Updated copyright information, added new contributors.
+
+2004-11-20 Hans-Andreas Engel <engel@node.ch>
+
+ * wget.texi: Describe limitations of combining `-O' with `-k'.
+
+2004-05-13 Nico R. <n-roeser@gmx.net>
+
+ * Makefile.in: Allow building in a separate tree with source tree
+ write-protected.
+
+2004-02-22 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Contributors): Updated.
+
+2004-02-12 Jens Roesner <jens.roesner@gmx.de>
+
+ * wget.texi (Wgetrc Commands): Document `-e' here.
+
+2004-02-08 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Security Considerations): Put @item contents on a
+ separate line.
+ Reported by Ted Rodriguez-Bell.
+
+2004-02-06 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Wgetrc Commands): Document --no-http-keep-alive and
+ the corresponding Wgetrc command.
+
+2003-12-06 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Don't incorrectly claim that `-O'
+ sets the number of retries to 1.
+
+2003-12-06 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Document the new option `--protocol-directories'.
+
+2003-11-15 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Portability): Update slightly.
+
+2003-11-15 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Documented that --dont-remove-listing is now
+ --no-remove-listing.
+
+2003-11-14 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * fdl.texi: New file.
+
+ * wget.texi: Upgrade to GNU Free Documentation License 1.2.
+
+2003-11-09 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Remove next/previous/up node links. Makeinfo doesn't
+ require them, and they make the document harder to modify.
+
+2003-11-09 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: No longer document options -s, -C, -g, and -G.
+ (Contributors): Update my email address.
+
+2003-11-05 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Document `--keep-session-cookies'.
+
+2003-10-26 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Wgetrc Commands): Fixed typo.
+ From DervishD <raul@pleyades.net>.
+
+2003-10-24 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * Makefile.in (install.info): Handle the case when only wget.info
+ is generated from wget.texi. In that case, wget.info-*[0-9]
+ doesn't match anything and therefore ends up as a bogus value of
+ FILE in the loop. Fix this by not calling INSTALL_DATA on
+ nonexistent files.
+
+2003-10-07 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (HTTP Options): Documented --post-file and
+ --post-data.
+
+2003-10-01 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Renamed prep.ai.mit.edu to ftp.gnu.org.
+
+2003-10-01 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Contributors): Updated from ChangeLog entries.
+
+2003-09-21 Aaron S. Hawley <Aaron.Hawley@uvm.edu>
+
+ * wget.texi: Split version to version.texi. Tweak documentation's
+ phrasing and markup.
+
+2003-09-21 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Documented the new timeout options.
+
+2003-09-19 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Changed @itemx not preceded by @item to @item.
+
+2003-09-17 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Explain how --tries works by
+ default.
+
+2003-09-17 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Explain new --restrict-file-names
+ semantics.
+
+2003-09-16 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi: Set the man page title to a string more descriptive
+ than "Wget manual".
+
+2003-09-16 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * Makefile.in ($(TEXI2POD)): Update only the #! line.
+
+	* texi2pod.pl: New version from GCC.
+
+2003-09-16 Noel Kothe <noel@debian.org>
+
+ * wget.texi (Download Options): Fix misspelling.
+
+2003-09-15 Nicolas Schodet <schodet@efrei.fr>
+
+ * wget.texi (Download Options): Add link to Proxies.
+
+2003-09-14 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Document the new option
+ --restrict-file-names and the corresponding wgetrc command.
+
+2003-09-10 Hrvoje Niksic <hniksic@xemacs.org>
+
+ * wget.texi (Download Options): Documented new option --dns-cache.
+
+2002-04-24 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Robot Exclusion): Explain how to turn off the robot
+ exclusion support from the command line.
+ (Wgetrc Commands): Explain that the `robots' variable also takes
+ effect on the "nofollow" matching.
+
+2002-04-15 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Download Options): Fix the documentation of
+ `--progress'.
+
+2002-04-14 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Wgetrc Commands): Document `--limit-rate'.
+
+2002-04-10 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Warn about the dangers of specifying passwords on the
+ command line and in unencrypted files.
+
+2001-12-16 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Wgetrc Commands): Undocument simple_host_check.
+
+2001-12-13 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Robots): Fix broken URLs that point to the webcrawler
+ web site.
+
+2001-12-11 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (HTTP Options): Explain how to make IE produce a
+ `cookies.txt'-compatible file.
+ Reported by Herold Heiko.
+
+2001-12-11 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * texi2pod.pl.in: Handle @asis in table.
+
+2001-12-09 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Bump version to 1.8.
+
+2001-12-08 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (HTTP Options): Provide more specific information
+ about how --load-cookies is meant to be used.
+
+2001-12-08 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * texi2pod.pl: Include the EXAMPLES section.
+
+ * wget.texi (Overview): Shorten the man page DESCRIPTION.
+ (Examples): Redo the Examples chapter. Include it in the man
+ page.
+
+2001-12-01 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Update the manual with the new recursive retrieval
+ stuff.
+
+2001-11-30 Ingo T. Storm <tux-sparc@computerbild.de>
+
+ * sample.wgetrc: Document ftp_proxy, too.
+
+2001-11-04 Alan Eldridge <alane@geeksrus.net>
+
+ * wget.texi: Document --random-wait, randomwait=on/off.
+
+2001-11-23 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Download Options): Document the new `--progress'
+ option.
+
+2001-11-22 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Proxies): Fix typo.
+ (Proxies): Sync the text with the example.
+ (Wgetrc Commands): There is no -f option. It's --follow-ftp.
+ Reported by Wojtek Kotwica.
+
+2001-11-17 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * Makefile.in (install.info): If info files from the build
+ directory are not available, use the ones from $(srcdir).
+
+2001-11-16 Peter Farmer <peter.farmer@zveno.com>
+
+ * Makefile.in: Use $? instead of $<. Use TEXI2POD more
+ consistently.
+
+2001-06-16 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Updated version to 1.7.1.
+
+2001-06-15 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * Makefile.in (install.wgetrc): Use $(DESTDIR) when testing
+ whether $(WGETRC) exists.
+
+2001-06-15 Adam J. Richter <adam@yggdrasil.com>
+
+ * Makefile.in (install.wgetrc): Make `make install'
+ non-interactive in all cases.
+
+2001-06-15 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * Makefile.in (install.wgetrc): Take $(DESTDIR) into account when
+ running mkinstalldirs.
+
+2001-06-05 Jan Prikryl <prikryl@cg.tuwien.ac.at>
+
+ * Makefile.in (wget.info): Added -I$(srcdir) to support compilation
+ outside the source tree.
+ (install.man): Replaced $(srcdir)$(MAN) with $(MAN). The former
+ did not work when compiling outside the source tree.
+
+2001-05-26 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Updated version to 1.7.
+
+2001-05-31 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Mailing List): Fix the mailing list address.
+
+2001-05-27 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Copying): Clarify. Link to
+ "free-software-for-freedom.html".
+
+2001-05-26 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Contributors): Updated list of contributors.
+
+2001-05-26 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Updated version to 1.7-pre1.
+
+2001-04-28 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Wgetrc Commands): Update docs for `continue'.
+
+2001-04-27 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (HTTP Options): Document cookie options.
+
+2001-01-20 Karl Eichwalder <ke@suse.de>
+
+ * Makefile.in: Provide and use DESTDIR according to the Coding
+ Standards.
+
+2001-04-01 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Recursive Retrieval Options): Document more
+ accurately what --convert-links does.
+
+2001-03-27 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in: Moved top_builddir out of "User configuration
+ section" of top Makefile and analogous spot in this one.
+
+2001-03-26 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): Explained that you need
+ to use -r -l1 -p to get the two levels of requisites for a
+ <FRAMESET> page. Also made a few other wording improvements.
+
+2001-03-17 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in: Using '^' in the sed call caused a weird failure on
+ Solaris 2.6. Changed it to a ','. Defined top_builddir.
+
+2001-02-23 Dan Harkless <wget@harkless.org>
+
+ * wget.texi: Corrections, clarifications, and English fixes to
+ time-stamping documentation. Also moved -nr from "Recursive
+ Retrieval Options" to "FTP Options" and gave it a @cindex entry.
+ Alphabetized FTP options by long option name. Mentioned that
+ .listing symlinked to /etc/passwd is not a security hole, but that
+ other files could be, so root shouldn't run wget in user dirs.
+
+2001-02-22 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in: Make wget man page and install it if we have
+ pod2man. Added some missing '$(srcdir)/'s. Added missing
+ dependencies on install targets (allowing you to just do `make
+ install' rather than forcing you to do `make && make install').
+ Also, Makefile rules should always use output file parameters if
+ available rather than redirecting stdout with '>', or you falsely
+ satisfy dependencies if the tool you're running is missing or
+ fails -- fixed call of texi2pod.pl that did this wrong.
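+
+	A minimal sketch of the difference (`tool', `out.txt', and
+	`in.txt' are placeholders, not rules in this Makefile):
+
+	    # Risky: the shell creates out.txt via '>' even when tool
+	    # is missing, so a bogus target looks up to date next run.
+	    out.txt: in.txt
+	            tool in.txt > out.txt
+
+	    # Safer: the tool writes its own output file, so nothing
+	    # is created when it cannot run.
+	    out.txt: in.txt
+	            tool -o out.txt in.txt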
+
+ * texi2pod.pl: Removed from CVS. Now automatically generated.
+
+ * texi2pod.pl.in: This new file is processed into texi2pod.pl,
+ getting the appropriate path to the Perl 5+ executable on this
+ system and becoming executable (CVS files, by contrast, don't
+ arrive executable).
+
+2001-02-19 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Download Options): Further improvement to --continue
+ documentation -- explain interaction with -r and -N, mention
+ usefulness for downloading new sections of appended-to files, etc.
+
+2001-01-06 Jan Prikryl <prikryl@cg.tuwien.ac.at>
+
+	* wget.texi (Reporting Bugs): Deleted the sentence about Cc-ing the
+	bug report to the Wget mailing list, as the bug report address is an
+	alias for the mailing list anyway.
+ (Mailing List): Added URL for the alternate archive.
+
+	* wget.texi: Bunch of cosmetic changes.
+
+ * Makefile.in: Added targets for manpage generation using
+ texi2pod.pl and pod2man (comes with Perl5). As we cannot rely on
+ Perl5 being available on the system, manpage is not being built
+ automatically. Updated '*clean' targets to remove
+ 'sample.wgetrc.munged...', 'wget.pod', and 'wget.man'.
+
+ * texi2pod.pl: New file copied from GCC distribution to facilitate
+ automatic manpage generation.
+
+2001-01-09 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Download Options): Did a bunch of clarification and
+ correction to the description of --continue.
+
+2001-01-06 Dan Harkless <wget@harkless.org>
+
+ * ChangeLog: The '[Not in 1.6 branch.]'s were decided not to be
+ the best way to go about my aim. Removed them in favor of:
+
+ * ChangeLog-branches/1.6_branch.ChangeLog: New file.
+
+2000-12-31 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in (distclean): sample.wgetrc.munged_for_texi_inclusion
+ needs to be included in the distribution or it'll get regenerated
+ due to the wget.info dependency, and then that file will get
+ regenerated, forcing people to have makeinfo installed
+ unnecessarily. We could use a kludge of a 0-length file in the
+ distro, but the file isn't that big and should compress very well.
+
+ * wget.texi: Changed "VERSION 1.5.3+dev" to "VERSION 1.7-dev" and
+ "UPDATED Feb 2000" to "UPDATED Dec 2000". Like the comment in the
+ file says, it'd be nice if these were handled automatically...
+
+ * ChangeLog: Since this flat file doesn't have multiple branches,
+ looking at the dates would make you think that things went into
+ 1.6 that actually just went into the 1.7-dev branch. Added "[Not
+ in 1.6 branch.]" where appropriate to clarify.
+
+2000-12-10 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * Makefile.in (install.info): Info files are *not* in $(srcdir),
+ but in the current build dir.
+
+2000-11-15 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Robots): Document that we now support the meta tag
+ exclusion.
+
+2000-11-16 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Use --- consistently.
+ Spell "Wget" with starting capital letter consistently.
+ Use ``...'' or @dfn{} instead of simple double quotes where
+ appropriate.
+ Use double space as separator between sentences consistently.
+
+2000-11-15 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (Robots): Rearrange text. Mention the meta tag.
+
+2000-11-14 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Add GFDL; remove norobots specification.
+
+ * wget.texi (Sample Wgetrc): Remove warnings with lateish
+ makeinfo, mostly by changing xref{} to pxref{} when inside
+ parentheses.
+
+2000-11-10 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: cc.fer.hr -> srk.fer.hr.
+
+2000-11-05 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * Makefile.in (sample.wgetrc.munged_for_texi_inclusion): Use $(srcdir).
+
+2000-11-05 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi: Updated names of contributors.
+
+2000-10-23 Hrvoje Niksic <hniksic@arsdigita.com>
+
+ * wget.texi (HTTP Options): Remove Netscape bullying.
+
+2000-10-23 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): Improved --delete-after docs.
+ (Download Options): Documented Rob Mayoff's new --bind-address option.
+ (Wgetrc Commands): Documented Rob Mayoff's new bind_address command.
+
+2000-10-20 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): Sugg. -E on 1-page download.
+
+2000-10-19 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (HTTP Options): Documented my new -E / --html-extension.
+ (Wgetrc Commands): Documented my new html_extension option and
+ John Daily's "quad" values (which I renamed to "lockable
+ Boolean"). When I documented Damir Dzeko's --referer, I forgot to
+ add the .wgetrc equivalent; mentioned the "referrer" spelling issue.
+
+2000-10-09 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (FTP Options): --retr-symlinks wasn't documented properly.
+
+2000-08-30 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): Documented new -p option.
+	(Wgetrc Commands): Documented -p's equivalent, page_requisites.
+
+2000-08-23 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Download Options): Using -c on a file that's already fully
+ downloaded results in an unchanged file and no second ".1" copy.
+
+ * wget.texi (Logging and Input File Options): -B / --base was not
+ documented as a separate item, and the .wgetrc version was misleading.
+
+ * wget.texi (Wgetrc Commands): Changed all instances of
+ ", the same as" to the more grammatical " -- the same as".
+
+2000-08-22 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Download Options): --no-clobber's documentation was
+ severely lacking -- ameliorated the situation. Some of the
+ previously-undocumented stuff (like the multiple-file-version
+ numeric-suffixing) that's now mentioned for the first (and only)
+ time in the -nc documentation should probably be mentioned
+ elsewhere, but due to the way that wget.texi's hierarchy is laid
+ out, I had a hard time finding anywhere else appropriate.
+
+2000-07-17 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (HTTP Options): Minor clarification in "download a
+ single HTML page and all files necessary to display it" example.
+
+2000-05-22 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (HTTP Options): Damir Dzeko <ddzeko@zesoi.fer.hr> did
+ not document his new --referer option. Did so.
+
+2000-04-18 Dan Harkless <wget@harkless.org>
+
+ * sample.wgetrc: Realized I put a global setting in the local section.
+
+2000-04-13 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in (sample.wgetrc.munged_for_texi_inclusion): Added
+ build, dependencies, and distclean cleanup of this new file.
+
+ * sample.wgetrc: Uncommented waitretry and set it to 10, clarified
+ some wording, and re-wrapped some text to 71 columns due to
+ @sample indentation in wget.texi.
+
+ * wget.texi: Herold further expounded on the behavior of waitretry
+ -- reworded docs again. Changed note saying _all_ lines in
+ sample.wgetrc are commented out. Don't have an entire hand-
+ cut-and-pasted copy of sample.wgetrc in this file -- use @include.
+
+2000-04-12 Dan Harkless <wget@harkless.org>
+
+ * Makefile.in (install.wgetrc): I completely missed the message
+ that the new wgetrc wasn't being installed the first couple of
+ times I ran `make install' after changing sample.wgetrc. Added
+ blank lines around the message and a "<Hit RETURN to
+ acknowledge>", and reworded the message to be a bit more clear.
+
+ * sample.wgetrc: Added entries for backup_converted and waitretry.
+
+ * wget.texi (Download Options and Wgetrc Commands): Herold Heiko
+ <Heiko.Herold@previnet.it>'s new --waitretry option was
+ undocumented until now. Reworded the suggested documentation he
+ sent to the list.
+
+2000-03-10 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): In -K description,
+ added a link to the discussion of interaction with -N.
+ (Recursive Accept/Reject Options): Did some alphabetizing and added
+ descriptions of new --follow-tags and -G / --ignore-tags options.
+ (Following Links): Changed "the loads of" to "loads of".
+ (Wgetrc Commands): Added descriptions of new follow_tags and
+ ignore_tags commands.
+
+2000-03-02 Daniel S. Lewart <d-lewart@uiuc.edu>
+
+ * wget.texi: Fix spelling and grammatical mistakes.
+
+2000-03-02 Hrvoje Niksic <hniksic@iskon.hr>
+
+ * wget.texi (Contributors): Update contributors list.
+
+2000-03-01 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (HTTP Time-Stamping Internals): Added a note about my
+ newly-implemented interaction between -K and -N.
+
+2000-02-29 Dan Harkless <wget@harkless.org>
+
+ * wget.texi: Updated version to 1.5.3+dev, updated copyrights to
+ 2000, changed Hrvoje's old, invalid email address to his new one,
+ and added " and the developers" to the .texi file's byline.
+
+2000-02-18 Dan Harkless <wget@harkless.org>
+
+ * wget.texi (Recursive Retrieval Options): Documented my new -K /
+ --backup-converted option.
+ (Wgetrc Commands): Documented backup_converted equivalent.
+
+1998-09-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (HTTP Options): Warn against masquerading as Mozilla.
+
+1998-05-24 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (clean): Remove HTML files.
+
+1998-05-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Various updates.
+ (Proxies): New node.
+
+1998-05-09 Hrvoje Niksic <hniksic@srce.hr>
+
+ * texinfo.tex: New file.
+
+1998-05-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (dvi): New target.
+
+1998-05-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Recursive Retrieval): Fix typo. Suggested by
+ Francois Pinard.
+
+1998-04-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Fixed @dircategory, courtesy Karl Eichwalder.
+
+1998-03-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in: Don't attempt to (un)install the man-page.
+
+1998-03-30 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.1: Removed it.
+
+1998-03-29 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Invoking): Split into more sections, analogous to
+ output of `wget --help'.
+ (HTTP Options): Document --user-agent.
+
+1998-03-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Contributors): Updated with oodles of new names.
+
+1998-02-22 Karl Eichwalder <ke@suse.de>
+
+ * Makefile.in (install.info): only info files (no *info.orig,
+ etc.).
+
+1998-01-31 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (install.wgetrc): Don't use `!'.
+
+1998-01-28 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Expanded.
+
+1998-01-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document `--cache'.
+ (Contributors): Added Brian.
+
+1997-07-26 Francois Pinard <pinard@iro.umontreal.ca>
+
+ * Makefile.in (install.wgetrc): Print the sample.wgetrc warning
+ only if the files actually differ.
+
+1998-01-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in: Use `test ...' rather than `[ ... ]'.
+
+	* wget.texi (Advanced Options): Explained suffixes.
+
+1998-01-23 Karl Heuer <kwzh@gnu.org>
+
+ * wget.texi (Advanced Options): Updated.
+
+1997-12-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Mailing List): Update.
+
+1997-04-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document `--follow-ftp'.
+
+1997-02-17 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --proxy-user and
+ --proxy-passwd.
+
+1997-02-14 Karl Eichwalder <ke@ke.Central.DE>
+
+ * Makefile.in (install.wgetrc): Never ever nuke an existing rc file.
+
+1997-02-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Updated and revised.
+
+ * wget.texi (Contributors): Update.
+ (Advanced Options): Removed bogus **/* example.
+
+ * wget.texi: Use ``...'' instead of "...".
+
+1997-02-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Domain Acceptance): Document --exclude-domains.
+
+1997-01-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --ignore-length.
+
+1997-01-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Time-Stamping): New node.
+
+1997-01-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * Makefile.in (distclean): Don't remove wget.info*.
+
+1997-01-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Mailing List): Update archive.
+ (Portability): Update the Windows port by Budor.
+
+1996-12-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Security Considerations): New node.
+
+1996-12-19 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --passive.
+
+1996-12-12 Dieter Baron <dillo@danbala.tuwien.ac.at>
+
+ * wget.texi (Advanced Usage): Would reference prep instead of
+ wuarchive.
+
+1996-11-25 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Documented --retr-symlinks.
+
+1996-11-23 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document --delete-after.
+
+1996-11-22 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Portability): Add IRIX and FreeBSD as the "regular"
+ platforms.
+
+1996-11-20 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Document dot-style.
+
+1996-11-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Dot customization example.
+ (Sample Wgetrc): Likewise.
+
+1996-11-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Wgetrc Syntax): Explained emptying lists.
+
+1996-11-13 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document includes/excludes.
+ (Wgetrc Commands): Likewise.
+
+1996-11-10 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Options): Document headers.
+
+1996-11-07 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.wgetrc: Added header examples.
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * sample.wgetrc: Rewritten.
+
+ * Makefile.in (install.wgetrc): Install sample.wgetrc.
+ (uninstall.info): Use $(RM).
+
+1996-11-06 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Docfixes.
+
+1996-11-03 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Proofread; *many* docfixes.
+
+1996-11-02 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Introduction): Updated robots mailing list address.
+
+1996-11-01 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi: Minor docfixes.
+
+1996-10-26 Hrvoje Niksic <hniksic@srce.hr>
+
+ * wget.texi (Advanced Usage): Document passwords better.
+
+ * Makefile.in (distclean): Remove wget.1 on make distclean.
+
+ * wget.texi (Option Syntax): Explain --.
+
+1996-10-21 Hrvoje Niksic <hniksic@srce.hr>
+
+ * fetch.texi (No Parent): update.
+
+1996-10-18 Hrvoje Niksic <hniksic@srce.hr>
+
+ * fetch.texi (Advanced Options): Docfix.
+
+1996-10-17 Tage Stabell-Kulo <tage@acm.org>
+
+ * geturl.texi (Advanced Options): Sort alphabetically.
+
+1996-10-16 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Describe -nr.
+ (Advanced Usage): Moved -O pipelines to Guru Usage.
+ (Simple Usage): Update.
+ (Advanced Options): Docfix.
+
+ * Makefile.in (RM): RM = rm -f.
+
+1996-10-15 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Guru Usage): Add proxy-filling example.
+
+1996-10-12 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Added --spider.
+
+1996-10-08 Hrvoje Niksic <hniksic@srce.hr>
+
+ * geturl.texi (Advanced Options): Added -X.
+
+ * Makefile.in: Added $(srcdir) where appropriate (I hope).
diff --git a/doc/Makefile.am b/doc/Makefile.am
new file mode 100644
index 0000000..b90f68b
--- /dev/null
+++ b/doc/Makefile.am
@@ -0,0 +1,118 @@
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997, 2007, 2008, 2009, 2010, 2011 Free
+# Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+#
+# Version: @VERSION@
+#
+
+# Program to convert DVI files to PostScript
+DVIPS = dvips -D 300
+# Program to convert texinfo files to html
+TEXI2HTML = texi2html -expandinfo -split_chapter
+
+manext = 1
+RM = rm -f
+
+TEXI2POD = $(srcdir)/texi2pod.pl
+POD2MAN = @POD2MAN@
+MAN = wget.$(manext)
+WGETRC = $(sysconfdir)/wgetrc
+SAMPLERCTEXI = sample.wgetrc.munged_for_texi_inclusion
+
+#
+# Dependencies for building
+#
+
+man_MANS = $(MAN)
+
+all: wget.info @COMMENT_IF_NO_POD2MAN@$(MAN)
+
+everything: all wget_us.ps wget_a4.ps wget_toc.html
+
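+# Texinfo treats '@' as its command character, so double every '@' in
+# sample.wgetrc before the result is @include'd from wget.texi.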
+$(SAMPLERCTEXI): $(srcdir)/sample.wgetrc
+ sed s/@/@@/g $? > $@
+
+info_TEXINFOS = wget.texi
+wget_TEXINFOS = fdl.texi sample.wgetrc.munged_for_texi_inclusion
+
+EXTRA_DIST = sample.wgetrc \
+ $(SAMPLERCTEXI) \
+ texi2pod.pl
+
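+# Man page pipeline: texi2pod.pl converts the Texinfo manual to POD,
+# then pod2man (below) formats that POD as the man page.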
+wget.pod: $(srcdir)/wget.texi version.texi
+ $(TEXI2POD) -D VERSION="$(VERSION)" $(srcdir)/wget.texi $@
+
+$(MAN): wget.pod
+ $(POD2MAN) --center="GNU Wget" --release="GNU Wget @VERSION@" $? > $@
+
+#wget.cat: $(MAN)
+# nroff -man $? > $@
+
+wget_us.ps: wget.dvi
+ $(DVIPS) -t letter -o $@ wget.dvi
+
+wget_a4.ps: wget.dvi
+ $(DVIPS) -t a4 -o $@ wget.dvi
+
+wget_toc.html: $(srcdir)/wget.texi
+ $(TEXI2HTML) $(srcdir)/wget.texi
+
+#
+# Dependencies for installing
+#
+
+# install all the documentation
+install-data-local: install.wgetrc @COMMENT_IF_NO_POD2MAN@install.man
+
+# uninstall all the documentation
+uninstall-local: @COMMENT_IF_NO_POD2MAN@uninstall.man
+
+
+# install man page, creating install directory if necessary
+install.man: $(MAN)
+ $(mkinstalldirs) $(DESTDIR)$(mandir)/man$(manext)
+ $(INSTALL_DATA) $(MAN) $(DESTDIR)$(mandir)/man$(manext)/$(MAN)
+
+# install sample.wgetrc
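+# Never clobber an existing wgetrc: if one is present and differs from
+# sample.wgetrc, install the sample as $(WGETRC).new and print a
+# warning so the new lines can be merged by hand.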
+install.wgetrc: $(srcdir)/sample.wgetrc
+ $(mkinstalldirs) $(DESTDIR)$(sysconfdir)
+ @if test -f $(DESTDIR)$(WGETRC); then \
+ if cmp -s $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC); then echo ""; \
+ else \
+ echo ' $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC).new'; \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC).new; \
+ echo; \
+ echo "WARNING: Differing \`$(DESTDIR)$(WGETRC)'"; \
+ echo " exists and has been spared. You might want to"; \
+ echo " consider merging in the new lines from"; \
+ echo " \`$(DESTDIR)$(WGETRC).new'."; \
+ echo; \
+ fi; \
+ else \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC); \
+ fi
+
+# uninstall man page
+uninstall.man:
+ $(RM) $(DESTDIR)$(mandir)/man$(manext)/$(MAN)
+
+#
+# Dependencies for cleanup
+#
+
+CLEANFILES = *~ *.bak *.cat *.pod
+DISTCLEANFILES = $(MAN)
diff --git a/doc/Makefile.in b/doc/Makefile.in
new file mode 100644
index 0000000..8eeca87
--- /dev/null
+++ b/doc/Makefile.in
@@ -0,0 +1,1584 @@
+# Makefile.in generated by automake 1.11.1 from Makefile.am.
+# @configure_input@
+
+# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
+# 2003, 2004, 2005, 2006, 2007, 2008, 2009 Free Software Foundation,
+# Inc.
+# This Makefile.in is free software; the Free Software Foundation
+# gives unlimited permission to copy and/or distribute it,
+# with or without modifications, as long as this notice is preserved.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
+# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+# PARTICULAR PURPOSE.
+
+@SET_MAKE@
+
+# Makefile for `wget' utility
+# Copyright (C) 1995, 1996, 1997, 2007, 2008, 2009, 2010, 2011 Free
+# Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+#
+# Version: @VERSION@
+#
+VPATH = @srcdir@
+pkgdatadir = $(datadir)/@PACKAGE@
+pkgincludedir = $(includedir)/@PACKAGE@
+pkglibdir = $(libdir)/@PACKAGE@
+am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
+install_sh_DATA = $(install_sh) -c -m 644
+install_sh_PROGRAM = $(install_sh) -c
+install_sh_SCRIPT = $(install_sh) -c
+INSTALL_HEADER = $(INSTALL_DATA)
+transform = $(program_transform_name)
+NORMAL_INSTALL = :
+PRE_INSTALL = :
+POST_INSTALL = :
+NORMAL_UNINSTALL = :
+PRE_UNINSTALL = :
+POST_UNINSTALL = :
+build_triplet = @build@
+host_triplet = @host@
+subdir = doc
+DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in \
+ $(srcdir)/stamp-vti $(srcdir)/version.texi $(wget_TEXINFOS) \
+ ChangeLog
+ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
+am__aclocal_m4_deps = $(top_srcdir)/m4/00gnulib.m4 \
+ $(top_srcdir)/m4/alloca.m4 $(top_srcdir)/m4/arpa_inet_h.m4 \
+ $(top_srcdir)/m4/asm-underscore.m4 \
+ $(top_srcdir)/m4/clock_time.m4 $(top_srcdir)/m4/close.m4 \
+ $(top_srcdir)/m4/codeset.m4 $(top_srcdir)/m4/configmake.m4 \
+ $(top_srcdir)/m4/dirname.m4 \
+ $(top_srcdir)/m4/double-slash-root.m4 $(top_srcdir)/m4/dup2.m4 \
+ $(top_srcdir)/m4/environ.m4 $(top_srcdir)/m4/errno_h.m4 \
+ $(top_srcdir)/m4/error.m4 $(top_srcdir)/m4/extensions.m4 \
+ $(top_srcdir)/m4/fatal-signal.m4 $(top_srcdir)/m4/fcntl-o.m4 \
+ $(top_srcdir)/m4/fcntl.m4 $(top_srcdir)/m4/fcntl_h.m4 \
+ $(top_srcdir)/m4/float_h.m4 $(top_srcdir)/m4/fseek.m4 \
+ $(top_srcdir)/m4/fseeko.m4 $(top_srcdir)/m4/futimens.m4 \
+ $(top_srcdir)/m4/getaddrinfo.m4 $(top_srcdir)/m4/getdelim.m4 \
+ $(top_srcdir)/m4/getdtablesize.m4 $(top_srcdir)/m4/getline.m4 \
+ $(top_srcdir)/m4/getopt.m4 $(top_srcdir)/m4/getpass.m4 \
+ $(top_srcdir)/m4/gettext.m4 $(top_srcdir)/m4/gettime.m4 \
+ $(top_srcdir)/m4/gettimeofday.m4 $(top_srcdir)/m4/glibc21.m4 \
+ $(top_srcdir)/m4/gnulib-common.m4 \
+ $(top_srcdir)/m4/gnulib-comp.m4 $(top_srcdir)/m4/hostent.m4 \
+ $(top_srcdir)/m4/iconv.m4 $(top_srcdir)/m4/iconv_h.m4 \
+ $(top_srcdir)/m4/include_next.m4 $(top_srcdir)/m4/inet_ntop.m4 \
+ $(top_srcdir)/m4/inline.m4 $(top_srcdir)/m4/intlmacosx.m4 \
+ $(top_srcdir)/m4/intmax_t.m4 $(top_srcdir)/m4/inttypes_h.m4 \
+ $(top_srcdir)/m4/ioctl.m4 $(top_srcdir)/m4/largefile.m4 \
+ $(top_srcdir)/m4/lib-ld.m4 $(top_srcdir)/m4/lib-link.m4 \
+ $(top_srcdir)/m4/lib-prefix.m4 \
+ $(top_srcdir)/m4/localcharset.m4 $(top_srcdir)/m4/locale-fr.m4 \
+ $(top_srcdir)/m4/locale-ja.m4 $(top_srcdir)/m4/locale-zh.m4 \
+ $(top_srcdir)/m4/lock.m4 $(top_srcdir)/m4/longlong.m4 \
+ $(top_srcdir)/m4/lseek.m4 $(top_srcdir)/m4/lstat.m4 \
+ $(top_srcdir)/m4/malloc.m4 $(top_srcdir)/m4/mbrtowc.m4 \
+ $(top_srcdir)/m4/mbsinit.m4 $(top_srcdir)/m4/mbstate_t.m4 \
+ $(top_srcdir)/m4/mbtowc.m4 $(top_srcdir)/m4/md5.m4 \
+ $(top_srcdir)/m4/memchr.m4 $(top_srcdir)/m4/mkdir.m4 \
+ $(top_srcdir)/m4/mmap-anon.m4 $(top_srcdir)/m4/mode_t.m4 \
+ $(top_srcdir)/m4/multiarch.m4 $(top_srcdir)/m4/netdb_h.m4 \
+ $(top_srcdir)/m4/netinet_in_h.m4 $(top_srcdir)/m4/nls.m4 \
+ $(top_srcdir)/m4/nocrash.m4 $(top_srcdir)/m4/open.m4 \
+ $(top_srcdir)/m4/pipe2.m4 $(top_srcdir)/m4/po.m4 \
+ $(top_srcdir)/m4/posix_spawn.m4 $(top_srcdir)/m4/printf.m4 \
+ $(top_srcdir)/m4/quote.m4 $(top_srcdir)/m4/quotearg.m4 \
+ $(top_srcdir)/m4/rawmemchr.m4 $(top_srcdir)/m4/realloc.m4 \
+ $(top_srcdir)/m4/sched_h.m4 $(top_srcdir)/m4/select.m4 \
+ $(top_srcdir)/m4/servent.m4 $(top_srcdir)/m4/sig_atomic_t.m4 \
+ $(top_srcdir)/m4/sigaction.m4 $(top_srcdir)/m4/signal_h.m4 \
+ $(top_srcdir)/m4/signalblocking.m4 $(top_srcdir)/m4/sigpipe.m4 \
+ $(top_srcdir)/m4/size_max.m4 $(top_srcdir)/m4/snprintf.m4 \
+ $(top_srcdir)/m4/socketlib.m4 $(top_srcdir)/m4/sockets.m4 \
+ $(top_srcdir)/m4/socklen.m4 $(top_srcdir)/m4/sockpfaf.m4 \
+ $(top_srcdir)/m4/spawn-pipe.m4 $(top_srcdir)/m4/spawn_h.m4 \
+ $(top_srcdir)/m4/stat-time.m4 $(top_srcdir)/m4/stat.m4 \
+ $(top_srcdir)/m4/stdbool.m4 $(top_srcdir)/m4/stddef_h.m4 \
+ $(top_srcdir)/m4/stdint.m4 $(top_srcdir)/m4/stdint_h.m4 \
+ $(top_srcdir)/m4/stdio_h.m4 $(top_srcdir)/m4/stdlib_h.m4 \
+ $(top_srcdir)/m4/strcase.m4 $(top_srcdir)/m4/strcasestr.m4 \
+ $(top_srcdir)/m4/strchrnul.m4 $(top_srcdir)/m4/strerror.m4 \
+ $(top_srcdir)/m4/strerror_r.m4 $(top_srcdir)/m4/string_h.m4 \
+ $(top_srcdir)/m4/strings_h.m4 $(top_srcdir)/m4/sys_ioctl_h.m4 \
+ $(top_srcdir)/m4/sys_select_h.m4 \
+ $(top_srcdir)/m4/sys_socket_h.m4 \
+ $(top_srcdir)/m4/sys_stat_h.m4 $(top_srcdir)/m4/sys_time_h.m4 \
+ $(top_srcdir)/m4/sys_types_h.m4 $(top_srcdir)/m4/sys_uio_h.m4 \
+ $(top_srcdir)/m4/sys_wait_h.m4 $(top_srcdir)/m4/threadlib.m4 \
+ $(top_srcdir)/m4/time_h.m4 $(top_srcdir)/m4/timespec.m4 \
+ $(top_srcdir)/m4/unistd-safer.m4 $(top_srcdir)/m4/unistd_h.m4 \
+ $(top_srcdir)/m4/unlocked-io.m4 $(top_srcdir)/m4/utimbuf.m4 \
+ $(top_srcdir)/m4/utimens.m4 $(top_srcdir)/m4/utimes.m4 \
+ $(top_srcdir)/m4/vasnprintf.m4 $(top_srcdir)/m4/vasprintf.m4 \
+ $(top_srcdir)/m4/wait-process.m4 $(top_srcdir)/m4/waitpid.m4 \
+ $(top_srcdir)/m4/warn-on-use.m4 $(top_srcdir)/m4/wchar_h.m4 \
+ $(top_srcdir)/m4/wchar_t.m4 $(top_srcdir)/m4/wctype_h.m4 \
+ $(top_srcdir)/m4/wget.m4 $(top_srcdir)/m4/wint_t.m4 \
+ $(top_srcdir)/m4/write.m4 $(top_srcdir)/m4/xalloc.m4 \
+ $(top_srcdir)/m4/xsize.m4 $(top_srcdir)/configure.ac
+am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
+ $(ACLOCAL_M4)
+mkinstalldirs = $(install_sh) -d
+CONFIG_HEADER = $(top_builddir)/src/config.h
+CONFIG_CLEAN_FILES =
+CONFIG_CLEAN_VPATH_FILES =
+SOURCES =
+DIST_SOURCES =
+INFO_DEPS = $(srcdir)/wget.info
+TEXINFO_TEX = $(top_srcdir)/build-aux/texinfo.tex
+am__TEXINFO_TEX_DIR = $(top_srcdir)/build-aux
+DVIS = wget.dvi
+PDFS = wget.pdf
+PSS = wget.ps
+HTMLS = wget.html
+TEXINFOS = wget.texi
+TEXI2DVI = texi2dvi
+TEXI2PDF = $(TEXI2DVI) --pdf --batch
+MAKEINFOHTML = $(MAKEINFO) --html
+AM_MAKEINFOHTMLFLAGS = $(AM_MAKEINFOFLAGS)
+am__installdirs = "$(DESTDIR)$(infodir)"
+am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
+am__vpath_adj = case $$p in \
+ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
+ *) f=$$p;; \
+ esac;
+am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
+am__install_max = 40
+am__nobase_strip_setup = \
+ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
+am__nobase_strip = \
+ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
+am__nobase_list = $(am__nobase_strip_setup); \
+ for p in $$list; do echo "$$p $$p"; done | \
+ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
+ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
+ if (++n[$$2] == $(am__install_max)) \
+ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
+ END { for (dir in files) print dir, files[dir] }'
+am__base_list = \
+ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
+ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
+DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
+pkglibexecdir = @pkglibexecdir@
+ACLOCAL = @ACLOCAL@
+ALLOCA = @ALLOCA@
+ALLOCA_H = @ALLOCA_H@
+AMTAR = @AMTAR@
+APPLE_UNIVERSAL_BUILD = @APPLE_UNIVERSAL_BUILD@
+AR = @AR@
+ARFLAGS = @ARFLAGS@
+ASM_SYMBOL_PREFIX = @ASM_SYMBOL_PREFIX@
+AUTOCONF = @AUTOCONF@
+AUTOHEADER = @AUTOHEADER@
+AUTOMAKE = @AUTOMAKE@
+AWK = @AWK@
+BITSIZEOF_PTRDIFF_T = @BITSIZEOF_PTRDIFF_T@
+BITSIZEOF_SIG_ATOMIC_T = @BITSIZEOF_SIG_ATOMIC_T@
+BITSIZEOF_SIZE_T = @BITSIZEOF_SIZE_T@
+BITSIZEOF_WCHAR_T = @BITSIZEOF_WCHAR_T@
+BITSIZEOF_WINT_T = @BITSIZEOF_WINT_T@
+CC = @CC@
+CCDEPMODE = @CCDEPMODE@
+CFLAGS = @CFLAGS@
+COMMENT_IF_NO_POD2MAN = @COMMENT_IF_NO_POD2MAN@
+CONFIG_INCLUDE = @CONFIG_INCLUDE@
+CPP = @CPP@
+CPPFLAGS = @CPPFLAGS@
+CYGPATH_W = @CYGPATH_W@
+DEFS = @DEFS@
+DEPDIR = @DEPDIR@
+ECHO_C = @ECHO_C@
+ECHO_N = @ECHO_N@
+ECHO_T = @ECHO_T@
+EGREP = @EGREP@
+EMULTIHOP_HIDDEN = @EMULTIHOP_HIDDEN@
+EMULTIHOP_VALUE = @EMULTIHOP_VALUE@
+ENOLINK_HIDDEN = @ENOLINK_HIDDEN@
+ENOLINK_VALUE = @ENOLINK_VALUE@
+EOVERFLOW_HIDDEN = @EOVERFLOW_HIDDEN@
+EOVERFLOW_VALUE = @EOVERFLOW_VALUE@
+ERRNO_H = @ERRNO_H@
+EXEEXT = @EXEEXT@
+FLOAT_H = @FLOAT_H@
+GETADDRINFO_LIB = @GETADDRINFO_LIB@
+GETOPT_H = @GETOPT_H@
+GETTEXT_MACRO_VERSION = @GETTEXT_MACRO_VERSION@
+GLIBC21 = @GLIBC21@
+GMSGFMT = @GMSGFMT@
+GMSGFMT_015 = @GMSGFMT_015@
+GNULIB_ACCEPT = @GNULIB_ACCEPT@
+GNULIB_ACCEPT4 = @GNULIB_ACCEPT4@
+GNULIB_ATOLL = @GNULIB_ATOLL@
+GNULIB_BIND = @GNULIB_BIND@
+GNULIB_BTOWC = @GNULIB_BTOWC@
+GNULIB_CALLOC_POSIX = @GNULIB_CALLOC_POSIX@
+GNULIB_CANONICALIZE_FILE_NAME = @GNULIB_CANONICALIZE_FILE_NAME@
+GNULIB_CHOWN = @GNULIB_CHOWN@
+GNULIB_CLOSE = @GNULIB_CLOSE@
+GNULIB_CONNECT = @GNULIB_CONNECT@
+GNULIB_DPRINTF = @GNULIB_DPRINTF@
+GNULIB_DUP2 = @GNULIB_DUP2@
+GNULIB_DUP3 = @GNULIB_DUP3@
+GNULIB_ENVIRON = @GNULIB_ENVIRON@
+GNULIB_EUIDACCESS = @GNULIB_EUIDACCESS@
+GNULIB_FACCESSAT = @GNULIB_FACCESSAT@
+GNULIB_FCHDIR = @GNULIB_FCHDIR@
+GNULIB_FCHMODAT = @GNULIB_FCHMODAT@
+GNULIB_FCHOWNAT = @GNULIB_FCHOWNAT@
+GNULIB_FCLOSE = @GNULIB_FCLOSE@
+GNULIB_FCNTL = @GNULIB_FCNTL@
+GNULIB_FFLUSH = @GNULIB_FFLUSH@
+GNULIB_FFS = @GNULIB_FFS@
+GNULIB_FFSL = @GNULIB_FFSL@
+GNULIB_FFSLL = @GNULIB_FFSLL@
+GNULIB_FGETC = @GNULIB_FGETC@
+GNULIB_FGETS = @GNULIB_FGETS@
+GNULIB_FOPEN = @GNULIB_FOPEN@
+GNULIB_FPRINTF = @GNULIB_FPRINTF@
+GNULIB_FPRINTF_POSIX = @GNULIB_FPRINTF_POSIX@
+GNULIB_FPURGE = @GNULIB_FPURGE@
+GNULIB_FPUTC = @GNULIB_FPUTC@
+GNULIB_FPUTS = @GNULIB_FPUTS@
+GNULIB_FREAD = @GNULIB_FREAD@
+GNULIB_FREOPEN = @GNULIB_FREOPEN@
+GNULIB_FSCANF = @GNULIB_FSCANF@
+GNULIB_FSEEK = @GNULIB_FSEEK@
+GNULIB_FSEEKO = @GNULIB_FSEEKO@
+GNULIB_FSTATAT = @GNULIB_FSTATAT@
+GNULIB_FSYNC = @GNULIB_FSYNC@
+GNULIB_FTELL = @GNULIB_FTELL@
+GNULIB_FTELLO = @GNULIB_FTELLO@
+GNULIB_FTRUNCATE = @GNULIB_FTRUNCATE@
+GNULIB_FUTIMENS = @GNULIB_FUTIMENS@
+GNULIB_FWRITE = @GNULIB_FWRITE@
+GNULIB_GETADDRINFO = @GNULIB_GETADDRINFO@
+GNULIB_GETC = @GNULIB_GETC@
+GNULIB_GETCHAR = @GNULIB_GETCHAR@
+GNULIB_GETCWD = @GNULIB_GETCWD@
+GNULIB_GETDELIM = @GNULIB_GETDELIM@
+GNULIB_GETDOMAINNAME = @GNULIB_GETDOMAINNAME@
+GNULIB_GETDTABLESIZE = @GNULIB_GETDTABLESIZE@
+GNULIB_GETGROUPS = @GNULIB_GETGROUPS@
+GNULIB_GETHOSTNAME = @GNULIB_GETHOSTNAME@
+GNULIB_GETLINE = @GNULIB_GETLINE@
+GNULIB_GETLOADAVG = @GNULIB_GETLOADAVG@
+GNULIB_GETLOGIN = @GNULIB_GETLOGIN@
+GNULIB_GETLOGIN_R = @GNULIB_GETLOGIN_R@
+GNULIB_GETPAGESIZE = @GNULIB_GETPAGESIZE@
+GNULIB_GETPEERNAME = @GNULIB_GETPEERNAME@
+GNULIB_GETS = @GNULIB_GETS@
+GNULIB_GETSOCKNAME = @GNULIB_GETSOCKNAME@
+GNULIB_GETSOCKOPT = @GNULIB_GETSOCKOPT@
+GNULIB_GETSUBOPT = @GNULIB_GETSUBOPT@
+GNULIB_GETTIMEOFDAY = @GNULIB_GETTIMEOFDAY@
+GNULIB_GETUSERSHELL = @GNULIB_GETUSERSHELL@
+GNULIB_GRANTPT = @GNULIB_GRANTPT@
+GNULIB_GROUP_MEMBER = @GNULIB_GROUP_MEMBER@
+GNULIB_ICONV = @GNULIB_ICONV@
+GNULIB_INET_NTOP = @GNULIB_INET_NTOP@
+GNULIB_INET_PTON = @GNULIB_INET_PTON@
+GNULIB_IOCTL = @GNULIB_IOCTL@
+GNULIB_ISWBLANK = @GNULIB_ISWBLANK@
+GNULIB_ISWCTYPE = @GNULIB_ISWCTYPE@
+GNULIB_LCHMOD = @GNULIB_LCHMOD@
+GNULIB_LCHOWN = @GNULIB_LCHOWN@
+GNULIB_LINK = @GNULIB_LINK@
+GNULIB_LINKAT = @GNULIB_LINKAT@
+GNULIB_LISTEN = @GNULIB_LISTEN@
+GNULIB_LSEEK = @GNULIB_LSEEK@
+GNULIB_LSTAT = @GNULIB_LSTAT@
+GNULIB_MALLOC_POSIX = @GNULIB_MALLOC_POSIX@
+GNULIB_MBRLEN = @GNULIB_MBRLEN@
+GNULIB_MBRTOWC = @GNULIB_MBRTOWC@
+GNULIB_MBSCASECMP = @GNULIB_MBSCASECMP@
+GNULIB_MBSCASESTR = @GNULIB_MBSCASESTR@
+GNULIB_MBSCHR = @GNULIB_MBSCHR@
+GNULIB_MBSCSPN = @GNULIB_MBSCSPN@
+GNULIB_MBSINIT = @GNULIB_MBSINIT@
+GNULIB_MBSLEN = @GNULIB_MBSLEN@
+GNULIB_MBSNCASECMP = @GNULIB_MBSNCASECMP@
+GNULIB_MBSNLEN = @GNULIB_MBSNLEN@
+GNULIB_MBSNRTOWCS = @GNULIB_MBSNRTOWCS@
+GNULIB_MBSPBRK = @GNULIB_MBSPBRK@
+GNULIB_MBSPCASECMP = @GNULIB_MBSPCASECMP@
+GNULIB_MBSRCHR = @GNULIB_MBSRCHR@
+GNULIB_MBSRTOWCS = @GNULIB_MBSRTOWCS@
+GNULIB_MBSSEP = @GNULIB_MBSSEP@
+GNULIB_MBSSPN = @GNULIB_MBSSPN@
+GNULIB_MBSSTR = @GNULIB_MBSSTR@
+GNULIB_MBSTOK_R = @GNULIB_MBSTOK_R@
+GNULIB_MBTOWC = @GNULIB_MBTOWC@
+GNULIB_MEMCHR = @GNULIB_MEMCHR@
+GNULIB_MEMMEM = @GNULIB_MEMMEM@
+GNULIB_MEMPCPY = @GNULIB_MEMPCPY@
+GNULIB_MEMRCHR = @GNULIB_MEMRCHR@
+GNULIB_MKDIRAT = @GNULIB_MKDIRAT@
+GNULIB_MKDTEMP = @GNULIB_MKDTEMP@
+GNULIB_MKFIFO = @GNULIB_MKFIFO@
+GNULIB_MKFIFOAT = @GNULIB_MKFIFOAT@
+GNULIB_MKNOD = @GNULIB_MKNOD@
+GNULIB_MKNODAT = @GNULIB_MKNODAT@
+GNULIB_MKOSTEMP = @GNULIB_MKOSTEMP@
+GNULIB_MKOSTEMPS = @GNULIB_MKOSTEMPS@
+GNULIB_MKSTEMP = @GNULIB_MKSTEMP@
+GNULIB_MKSTEMPS = @GNULIB_MKSTEMPS@
+GNULIB_MKTIME = @GNULIB_MKTIME@
+GNULIB_NANOSLEEP = @GNULIB_NANOSLEEP@
+GNULIB_NONBLOCKING = @GNULIB_NONBLOCKING@
+GNULIB_OBSTACK_PRINTF = @GNULIB_OBSTACK_PRINTF@
+GNULIB_OBSTACK_PRINTF_POSIX = @GNULIB_OBSTACK_PRINTF_POSIX@
+GNULIB_OPEN = @GNULIB_OPEN@
+GNULIB_OPENAT = @GNULIB_OPENAT@
+GNULIB_PERROR = @GNULIB_PERROR@
+GNULIB_PIPE = @GNULIB_PIPE@
+GNULIB_PIPE2 = @GNULIB_PIPE2@
+GNULIB_POPEN = @GNULIB_POPEN@
+GNULIB_POSIX_SPAWN = @GNULIB_POSIX_SPAWN@
+GNULIB_POSIX_SPAWNATTR_DESTROY = @GNULIB_POSIX_SPAWNATTR_DESTROY@
+GNULIB_POSIX_SPAWNATTR_GETFLAGS = @GNULIB_POSIX_SPAWNATTR_GETFLAGS@
+GNULIB_POSIX_SPAWNATTR_GETPGROUP = @GNULIB_POSIX_SPAWNATTR_GETPGROUP@
+GNULIB_POSIX_SPAWNATTR_GETSCHEDPARAM = @GNULIB_POSIX_SPAWNATTR_GETSCHEDPARAM@
+GNULIB_POSIX_SPAWNATTR_GETSCHEDPOLICY = @GNULIB_POSIX_SPAWNATTR_GETSCHEDPOLICY@
+GNULIB_POSIX_SPAWNATTR_GETSIGDEFAULT = @GNULIB_POSIX_SPAWNATTR_GETSIGDEFAULT@
+GNULIB_POSIX_SPAWNATTR_GETSIGMASK = @GNULIB_POSIX_SPAWNATTR_GETSIGMASK@
+GNULIB_POSIX_SPAWNATTR_INIT = @GNULIB_POSIX_SPAWNATTR_INIT@
+GNULIB_POSIX_SPAWNATTR_SETFLAGS = @GNULIB_POSIX_SPAWNATTR_SETFLAGS@
+GNULIB_POSIX_SPAWNATTR_SETPGROUP = @GNULIB_POSIX_SPAWNATTR_SETPGROUP@
+GNULIB_POSIX_SPAWNATTR_SETSCHEDPARAM = @GNULIB_POSIX_SPAWNATTR_SETSCHEDPARAM@
+GNULIB_POSIX_SPAWNATTR_SETSCHEDPOLICY = @GNULIB_POSIX_SPAWNATTR_SETSCHEDPOLICY@
+GNULIB_POSIX_SPAWNATTR_SETSIGDEFAULT = @GNULIB_POSIX_SPAWNATTR_SETSIGDEFAULT@
+GNULIB_POSIX_SPAWNATTR_SETSIGMASK = @GNULIB_POSIX_SPAWNATTR_SETSIGMASK@
+GNULIB_POSIX_SPAWNP = @GNULIB_POSIX_SPAWNP@
+GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDCLOSE = @GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDCLOSE@
+GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDDUP2 = @GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDDUP2@
+GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDOPEN = @GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDOPEN@
+GNULIB_POSIX_SPAWN_FILE_ACTIONS_DESTROY = @GNULIB_POSIX_SPAWN_FILE_ACTIONS_DESTROY@
+GNULIB_POSIX_SPAWN_FILE_ACTIONS_INIT = @GNULIB_POSIX_SPAWN_FILE_ACTIONS_INIT@
+GNULIB_PREAD = @GNULIB_PREAD@
+GNULIB_PRINTF = @GNULIB_PRINTF@
+GNULIB_PRINTF_POSIX = @GNULIB_PRINTF_POSIX@
+GNULIB_PSELECT = @GNULIB_PSELECT@
+GNULIB_PTHREAD_SIGMASK = @GNULIB_PTHREAD_SIGMASK@
+GNULIB_PTSNAME = @GNULIB_PTSNAME@
+GNULIB_PUTC = @GNULIB_PUTC@
+GNULIB_PUTCHAR = @GNULIB_PUTCHAR@
+GNULIB_PUTENV = @GNULIB_PUTENV@
+GNULIB_PUTS = @GNULIB_PUTS@
+GNULIB_PWRITE = @GNULIB_PWRITE@
+GNULIB_RANDOM_R = @GNULIB_RANDOM_R@
+GNULIB_RAWMEMCHR = @GNULIB_RAWMEMCHR@
+GNULIB_READ = @GNULIB_READ@
+GNULIB_READLINK = @GNULIB_READLINK@
+GNULIB_READLINKAT = @GNULIB_READLINKAT@
+GNULIB_REALLOC_POSIX = @GNULIB_REALLOC_POSIX@
+GNULIB_REALPATH = @GNULIB_REALPATH@
+GNULIB_RECV = @GNULIB_RECV@
+GNULIB_RECVFROM = @GNULIB_RECVFROM@
+GNULIB_REMOVE = @GNULIB_REMOVE@
+GNULIB_RENAME = @GNULIB_RENAME@
+GNULIB_RENAMEAT = @GNULIB_RENAMEAT@
+GNULIB_RMDIR = @GNULIB_RMDIR@
+GNULIB_RPMATCH = @GNULIB_RPMATCH@
+GNULIB_SCANF = @GNULIB_SCANF@
+GNULIB_SELECT = @GNULIB_SELECT@
+GNULIB_SEND = @GNULIB_SEND@
+GNULIB_SENDTO = @GNULIB_SENDTO@
+GNULIB_SETENV = @GNULIB_SETENV@
+GNULIB_SETSOCKOPT = @GNULIB_SETSOCKOPT@
+GNULIB_SHUTDOWN = @GNULIB_SHUTDOWN@
+GNULIB_SIGACTION = @GNULIB_SIGACTION@
+GNULIB_SIGNAL_H_SIGPIPE = @GNULIB_SIGNAL_H_SIGPIPE@
+GNULIB_SIGPROCMASK = @GNULIB_SIGPROCMASK@
+GNULIB_SLEEP = @GNULIB_SLEEP@
+GNULIB_SNPRINTF = @GNULIB_SNPRINTF@
+GNULIB_SOCKET = @GNULIB_SOCKET@
+GNULIB_SPRINTF_POSIX = @GNULIB_SPRINTF_POSIX@
+GNULIB_STAT = @GNULIB_STAT@
+GNULIB_STDIO_H_NONBLOCKING = @GNULIB_STDIO_H_NONBLOCKING@
+GNULIB_STDIO_H_SIGPIPE = @GNULIB_STDIO_H_SIGPIPE@
+GNULIB_STPCPY = @GNULIB_STPCPY@
+GNULIB_STPNCPY = @GNULIB_STPNCPY@
+GNULIB_STRCASESTR = @GNULIB_STRCASESTR@
+GNULIB_STRCHRNUL = @GNULIB_STRCHRNUL@
+GNULIB_STRDUP = @GNULIB_STRDUP@
+GNULIB_STRERROR = @GNULIB_STRERROR@
+GNULIB_STRERROR_R = @GNULIB_STRERROR_R@
+GNULIB_STRNCAT = @GNULIB_STRNCAT@
+GNULIB_STRNDUP = @GNULIB_STRNDUP@
+GNULIB_STRNLEN = @GNULIB_STRNLEN@
+GNULIB_STRPBRK = @GNULIB_STRPBRK@
+GNULIB_STRPTIME = @GNULIB_STRPTIME@
+GNULIB_STRSEP = @GNULIB_STRSEP@
+GNULIB_STRSIGNAL = @GNULIB_STRSIGNAL@
+GNULIB_STRSTR = @GNULIB_STRSTR@
+GNULIB_STRTOD = @GNULIB_STRTOD@
+GNULIB_STRTOK_R = @GNULIB_STRTOK_R@
+GNULIB_STRTOLL = @GNULIB_STRTOLL@
+GNULIB_STRTOULL = @GNULIB_STRTOULL@
+GNULIB_STRVERSCMP = @GNULIB_STRVERSCMP@
+GNULIB_SYMLINK = @GNULIB_SYMLINK@
+GNULIB_SYMLINKAT = @GNULIB_SYMLINKAT@
+GNULIB_SYSTEM_POSIX = @GNULIB_SYSTEM_POSIX@
+GNULIB_TIMEGM = @GNULIB_TIMEGM@
+GNULIB_TIME_R = @GNULIB_TIME_R@
+GNULIB_TMPFILE = @GNULIB_TMPFILE@
+GNULIB_TOWCTRANS = @GNULIB_TOWCTRANS@
+GNULIB_TTYNAME_R = @GNULIB_TTYNAME_R@
+GNULIB_UNISTD_H_GETOPT = @GNULIB_UNISTD_H_GETOPT@
+GNULIB_UNISTD_H_NONBLOCKING = @GNULIB_UNISTD_H_NONBLOCKING@
+GNULIB_UNISTD_H_SIGPIPE = @GNULIB_UNISTD_H_SIGPIPE@
+GNULIB_UNLINK = @GNULIB_UNLINK@
+GNULIB_UNLINKAT = @GNULIB_UNLINKAT@
+GNULIB_UNLOCKPT = @GNULIB_UNLOCKPT@
+GNULIB_UNSETENV = @GNULIB_UNSETENV@
+GNULIB_USLEEP = @GNULIB_USLEEP@
+GNULIB_UTIMENSAT = @GNULIB_UTIMENSAT@
+GNULIB_VASPRINTF = @GNULIB_VASPRINTF@
+GNULIB_VDPRINTF = @GNULIB_VDPRINTF@
+GNULIB_VFPRINTF = @GNULIB_VFPRINTF@
+GNULIB_VFPRINTF_POSIX = @GNULIB_VFPRINTF_POSIX@
+GNULIB_VFSCANF = @GNULIB_VFSCANF@
+GNULIB_VPRINTF = @GNULIB_VPRINTF@
+GNULIB_VPRINTF_POSIX = @GNULIB_VPRINTF_POSIX@
+GNULIB_VSCANF = @GNULIB_VSCANF@
+GNULIB_VSNPRINTF = @GNULIB_VSNPRINTF@
+GNULIB_VSPRINTF_POSIX = @GNULIB_VSPRINTF_POSIX@
+GNULIB_WAITPID = @GNULIB_WAITPID@
+GNULIB_WCPCPY = @GNULIB_WCPCPY@
+GNULIB_WCPNCPY = @GNULIB_WCPNCPY@
+GNULIB_WCRTOMB = @GNULIB_WCRTOMB@
+GNULIB_WCSCASECMP = @GNULIB_WCSCASECMP@
+GNULIB_WCSCAT = @GNULIB_WCSCAT@
+GNULIB_WCSCHR = @GNULIB_WCSCHR@
+GNULIB_WCSCMP = @GNULIB_WCSCMP@
+GNULIB_WCSCOLL = @GNULIB_WCSCOLL@
+GNULIB_WCSCPY = @GNULIB_WCSCPY@
+GNULIB_WCSCSPN = @GNULIB_WCSCSPN@
+GNULIB_WCSDUP = @GNULIB_WCSDUP@
+GNULIB_WCSLEN = @GNULIB_WCSLEN@
+GNULIB_WCSNCASECMP = @GNULIB_WCSNCASECMP@
+GNULIB_WCSNCAT = @GNULIB_WCSNCAT@
+GNULIB_WCSNCMP = @GNULIB_WCSNCMP@
+GNULIB_WCSNCPY = @GNULIB_WCSNCPY@
+GNULIB_WCSNLEN = @GNULIB_WCSNLEN@
+GNULIB_WCSNRTOMBS = @GNULIB_WCSNRTOMBS@
+GNULIB_WCSPBRK = @GNULIB_WCSPBRK@
+GNULIB_WCSRCHR = @GNULIB_WCSRCHR@
+GNULIB_WCSRTOMBS = @GNULIB_WCSRTOMBS@
+GNULIB_WCSSPN = @GNULIB_WCSSPN@
+GNULIB_WCSSTR = @GNULIB_WCSSTR@
+GNULIB_WCSTOK = @GNULIB_WCSTOK@
+GNULIB_WCSWIDTH = @GNULIB_WCSWIDTH@
+GNULIB_WCSXFRM = @GNULIB_WCSXFRM@
+GNULIB_WCTOB = @GNULIB_WCTOB@
+GNULIB_WCTOMB = @GNULIB_WCTOMB@
+GNULIB_WCTRANS = @GNULIB_WCTRANS@
+GNULIB_WCTYPE = @GNULIB_WCTYPE@
+GNULIB_WCWIDTH = @GNULIB_WCWIDTH@
+GNULIB_WMEMCHR = @GNULIB_WMEMCHR@
+GNULIB_WMEMCMP = @GNULIB_WMEMCMP@
+GNULIB_WMEMCPY = @GNULIB_WMEMCPY@
+GNULIB_WMEMMOVE = @GNULIB_WMEMMOVE@
+GNULIB_WMEMSET = @GNULIB_WMEMSET@
+GNULIB_WRITE = @GNULIB_WRITE@
+GNULIB__EXIT = @GNULIB__EXIT@
+GREP = @GREP@
+HAVE_ACCEPT4 = @HAVE_ACCEPT4@
+HAVE_ARPA_INET_H = @HAVE_ARPA_INET_H@
+HAVE_ATOLL = @HAVE_ATOLL@
+HAVE_BTOWC = @HAVE_BTOWC@
+HAVE_CANONICALIZE_FILE_NAME = @HAVE_CANONICALIZE_FILE_NAME@
+HAVE_CHOWN = @HAVE_CHOWN@
+HAVE_DECL_ENVIRON = @HAVE_DECL_ENVIRON@
+HAVE_DECL_FCHDIR = @HAVE_DECL_FCHDIR@
+HAVE_DECL_FPURGE = @HAVE_DECL_FPURGE@
+HAVE_DECL_FREEADDRINFO = @HAVE_DECL_FREEADDRINFO@
+HAVE_DECL_FSEEKO = @HAVE_DECL_FSEEKO@
+HAVE_DECL_FTELLO = @HAVE_DECL_FTELLO@
+HAVE_DECL_GAI_STRERROR = @HAVE_DECL_GAI_STRERROR@
+HAVE_DECL_GETADDRINFO = @HAVE_DECL_GETADDRINFO@
+HAVE_DECL_GETDELIM = @HAVE_DECL_GETDELIM@
+HAVE_DECL_GETDOMAINNAME = @HAVE_DECL_GETDOMAINNAME@
+HAVE_DECL_GETLINE = @HAVE_DECL_GETLINE@
+HAVE_DECL_GETLOADAVG = @HAVE_DECL_GETLOADAVG@
+HAVE_DECL_GETLOGIN_R = @HAVE_DECL_GETLOGIN_R@
+HAVE_DECL_GETNAMEINFO = @HAVE_DECL_GETNAMEINFO@
+HAVE_DECL_GETPAGESIZE = @HAVE_DECL_GETPAGESIZE@
+HAVE_DECL_GETUSERSHELL = @HAVE_DECL_GETUSERSHELL@
+HAVE_DECL_INET_NTOP = @HAVE_DECL_INET_NTOP@
+HAVE_DECL_INET_PTON = @HAVE_DECL_INET_PTON@
+HAVE_DECL_LOCALTIME_R = @HAVE_DECL_LOCALTIME_R@
+HAVE_DECL_MEMMEM = @HAVE_DECL_MEMMEM@
+HAVE_DECL_MEMRCHR = @HAVE_DECL_MEMRCHR@
+HAVE_DECL_OBSTACK_PRINTF = @HAVE_DECL_OBSTACK_PRINTF@
+HAVE_DECL_SETENV = @HAVE_DECL_SETENV@
+HAVE_DECL_SNPRINTF = @HAVE_DECL_SNPRINTF@
+HAVE_DECL_STRDUP = @HAVE_DECL_STRDUP@
+HAVE_DECL_STRERROR_R = @HAVE_DECL_STRERROR_R@
+HAVE_DECL_STRNCASECMP = @HAVE_DECL_STRNCASECMP@
+HAVE_DECL_STRNDUP = @HAVE_DECL_STRNDUP@
+HAVE_DECL_STRNLEN = @HAVE_DECL_STRNLEN@
+HAVE_DECL_STRSIGNAL = @HAVE_DECL_STRSIGNAL@
+HAVE_DECL_STRTOK_R = @HAVE_DECL_STRTOK_R@
+HAVE_DECL_TTYNAME_R = @HAVE_DECL_TTYNAME_R@
+HAVE_DECL_UNSETENV = @HAVE_DECL_UNSETENV@
+HAVE_DECL_VSNPRINTF = @HAVE_DECL_VSNPRINTF@
+HAVE_DECL_WCTOB = @HAVE_DECL_WCTOB@
+HAVE_DECL_WCWIDTH = @HAVE_DECL_WCWIDTH@
+HAVE_DPRINTF = @HAVE_DPRINTF@
+HAVE_DUP2 = @HAVE_DUP2@
+HAVE_DUP3 = @HAVE_DUP3@
+HAVE_EUIDACCESS = @HAVE_EUIDACCESS@
+HAVE_FACCESSAT = @HAVE_FACCESSAT@
+HAVE_FCHDIR = @HAVE_FCHDIR@
+HAVE_FCHMODAT = @HAVE_FCHMODAT@
+HAVE_FCHOWNAT = @HAVE_FCHOWNAT@
+HAVE_FCNTL = @HAVE_FCNTL@
+HAVE_FEATURES_H = @HAVE_FEATURES_H@
+HAVE_FFS = @HAVE_FFS@
+HAVE_FFSL = @HAVE_FFSL@
+HAVE_FFSLL = @HAVE_FFSLL@
+HAVE_FSEEKO = @HAVE_FSEEKO@
+HAVE_FSTATAT = @HAVE_FSTATAT@
+HAVE_FSYNC = @HAVE_FSYNC@
+HAVE_FTELLO = @HAVE_FTELLO@
+HAVE_FTRUNCATE = @HAVE_FTRUNCATE@
+HAVE_FUTIMENS = @HAVE_FUTIMENS@
+HAVE_GETDTABLESIZE = @HAVE_GETDTABLESIZE@
+HAVE_GETGROUPS = @HAVE_GETGROUPS@
+HAVE_GETHOSTNAME = @HAVE_GETHOSTNAME@
+HAVE_GETLOGIN = @HAVE_GETLOGIN@
+HAVE_GETOPT_H = @HAVE_GETOPT_H@
+HAVE_GETPAGESIZE = @HAVE_GETPAGESIZE@
+HAVE_GETSUBOPT = @HAVE_GETSUBOPT@
+HAVE_GETTIMEOFDAY = @HAVE_GETTIMEOFDAY@
+HAVE_GRANTPT = @HAVE_GRANTPT@
+HAVE_GROUP_MEMBER = @HAVE_GROUP_MEMBER@
+HAVE_INTTYPES_H = @HAVE_INTTYPES_H@
+HAVE_ISWBLANK = @HAVE_ISWBLANK@
+HAVE_ISWCNTRL = @HAVE_ISWCNTRL@
+HAVE_LCHMOD = @HAVE_LCHMOD@
+HAVE_LCHOWN = @HAVE_LCHOWN@
+HAVE_LIBGNUTLS = @HAVE_LIBGNUTLS@
+HAVE_LIBSSL = @HAVE_LIBSSL@
+HAVE_LINK = @HAVE_LINK@
+HAVE_LINKAT = @HAVE_LINKAT@
+HAVE_LONG_LONG_INT = @HAVE_LONG_LONG_INT@
+HAVE_LSTAT = @HAVE_LSTAT@
+HAVE_MBRLEN = @HAVE_MBRLEN@
+HAVE_MBRTOWC = @HAVE_MBRTOWC@
+HAVE_MBSINIT = @HAVE_MBSINIT@
+HAVE_MBSLEN = @HAVE_MBSLEN@
+HAVE_MBSNRTOWCS = @HAVE_MBSNRTOWCS@
+HAVE_MBSRTOWCS = @HAVE_MBSRTOWCS@
+HAVE_MEMCHR = @HAVE_MEMCHR@
+HAVE_MEMPCPY = @HAVE_MEMPCPY@
+HAVE_MKDIRAT = @HAVE_MKDIRAT@
+HAVE_MKDTEMP = @HAVE_MKDTEMP@
+HAVE_MKFIFO = @HAVE_MKFIFO@
+HAVE_MKFIFOAT = @HAVE_MKFIFOAT@
+HAVE_MKNOD = @HAVE_MKNOD@
+HAVE_MKNODAT = @HAVE_MKNODAT@
+HAVE_MKOSTEMP = @HAVE_MKOSTEMP@
+HAVE_MKOSTEMPS = @HAVE_MKOSTEMPS@
+HAVE_MKSTEMP = @HAVE_MKSTEMP@
+HAVE_MKSTEMPS = @HAVE_MKSTEMPS@
+HAVE_NANOSLEEP = @HAVE_NANOSLEEP@
+HAVE_NETDB_H = @HAVE_NETDB_H@
+HAVE_NETINET_IN_H = @HAVE_NETINET_IN_H@
+HAVE_OPENAT = @HAVE_OPENAT@
+HAVE_OS_H = @HAVE_OS_H@
+HAVE_PIPE = @HAVE_PIPE@
+HAVE_PIPE2 = @HAVE_PIPE2@
+HAVE_POSIX_SIGNALBLOCKING = @HAVE_POSIX_SIGNALBLOCKING@
+HAVE_POSIX_SPAWN = @HAVE_POSIX_SPAWN@
+HAVE_POSIX_SPAWNATTR_T = @HAVE_POSIX_SPAWNATTR_T@
+HAVE_POSIX_SPAWN_FILE_ACTIONS_T = @HAVE_POSIX_SPAWN_FILE_ACTIONS_T@
+HAVE_PREAD = @HAVE_PREAD@
+HAVE_PSELECT = @HAVE_PSELECT@
+HAVE_PTHREAD_SIGMASK = @HAVE_PTHREAD_SIGMASK@
+HAVE_PTSNAME = @HAVE_PTSNAME@
+HAVE_PWRITE = @HAVE_PWRITE@
+HAVE_RANDOM_H = @HAVE_RANDOM_H@
+HAVE_RANDOM_R = @HAVE_RANDOM_R@
+HAVE_RAWMEMCHR = @HAVE_RAWMEMCHR@
+HAVE_READLINK = @HAVE_READLINK@
+HAVE_READLINKAT = @HAVE_READLINKAT@
+HAVE_REALPATH = @HAVE_REALPATH@
+HAVE_RENAMEAT = @HAVE_RENAMEAT@
+HAVE_RPMATCH = @HAVE_RPMATCH@
+HAVE_SA_FAMILY_T = @HAVE_SA_FAMILY_T@
+HAVE_SCHED_H = @HAVE_SCHED_H@
+HAVE_SETENV = @HAVE_SETENV@
+HAVE_SIGACTION = @HAVE_SIGACTION@
+HAVE_SIGHANDLER_T = @HAVE_SIGHANDLER_T@
+HAVE_SIGINFO_T = @HAVE_SIGINFO_T@
+HAVE_SIGNED_SIG_ATOMIC_T = @HAVE_SIGNED_SIG_ATOMIC_T@
+HAVE_SIGNED_WCHAR_T = @HAVE_SIGNED_WCHAR_T@
+HAVE_SIGNED_WINT_T = @HAVE_SIGNED_WINT_T@
+HAVE_SIGSET_T = @HAVE_SIGSET_T@
+HAVE_SLEEP = @HAVE_SLEEP@
+HAVE_SPAWN_H = @HAVE_SPAWN_H@
+HAVE_STDINT_H = @HAVE_STDINT_H@
+HAVE_STPCPY = @HAVE_STPCPY@
+HAVE_STPNCPY = @HAVE_STPNCPY@
+HAVE_STRCASECMP = @HAVE_STRCASECMP@
+HAVE_STRCASESTR = @HAVE_STRCASESTR@
+HAVE_STRCHRNUL = @HAVE_STRCHRNUL@
+HAVE_STRINGS_H = @HAVE_STRINGS_H@
+HAVE_STRPBRK = @HAVE_STRPBRK@
+HAVE_STRPTIME = @HAVE_STRPTIME@
+HAVE_STRSEP = @HAVE_STRSEP@
+HAVE_STRTOD = @HAVE_STRTOD@
+HAVE_STRTOLL = @HAVE_STRTOLL@
+HAVE_STRTOULL = @HAVE_STRTOULL@
+HAVE_STRUCT_ADDRINFO = @HAVE_STRUCT_ADDRINFO@
+HAVE_STRUCT_RANDOM_DATA = @HAVE_STRUCT_RANDOM_DATA@
+HAVE_STRUCT_SCHED_PARAM = @HAVE_STRUCT_SCHED_PARAM@
+HAVE_STRUCT_SIGACTION_SA_SIGACTION = @HAVE_STRUCT_SIGACTION_SA_SIGACTION@
+HAVE_STRUCT_SOCKADDR_STORAGE = @HAVE_STRUCT_SOCKADDR_STORAGE@
+HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY = @HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY@
+HAVE_STRUCT_TIMEVAL = @HAVE_STRUCT_TIMEVAL@
+HAVE_STRVERSCMP = @HAVE_STRVERSCMP@
+HAVE_SYMLINK = @HAVE_SYMLINK@
+HAVE_SYMLINKAT = @HAVE_SYMLINKAT@
+HAVE_SYS_BITYPES_H = @HAVE_SYS_BITYPES_H@
+HAVE_SYS_INTTYPES_H = @HAVE_SYS_INTTYPES_H@
+HAVE_SYS_IOCTL_H = @HAVE_SYS_IOCTL_H@
+HAVE_SYS_LOADAVG_H = @HAVE_SYS_LOADAVG_H@
+HAVE_SYS_PARAM_H = @HAVE_SYS_PARAM_H@
+HAVE_SYS_SELECT_H = @HAVE_SYS_SELECT_H@
+HAVE_SYS_SOCKET_H = @HAVE_SYS_SOCKET_H@
+HAVE_SYS_TIME_H = @HAVE_SYS_TIME_H@
+HAVE_SYS_TYPES_H = @HAVE_SYS_TYPES_H@
+HAVE_SYS_UIO_H = @HAVE_SYS_UIO_H@
+HAVE_TIMEGM = @HAVE_TIMEGM@
+HAVE_TYPE_VOLATILE_SIG_ATOMIC_T = @HAVE_TYPE_VOLATILE_SIG_ATOMIC_T@
+HAVE_UNISTD_H = @HAVE_UNISTD_H@
+HAVE_UNLINKAT = @HAVE_UNLINKAT@
+HAVE_UNLOCKPT = @HAVE_UNLOCKPT@
+HAVE_UNSIGNED_LONG_LONG_INT = @HAVE_UNSIGNED_LONG_LONG_INT@
+HAVE_USLEEP = @HAVE_USLEEP@
+HAVE_UTIMENSAT = @HAVE_UTIMENSAT@
+HAVE_VASPRINTF = @HAVE_VASPRINTF@
+HAVE_VDPRINTF = @HAVE_VDPRINTF@
+HAVE_WCHAR_H = @HAVE_WCHAR_H@
+HAVE_WCHAR_T = @HAVE_WCHAR_T@
+HAVE_WCPCPY = @HAVE_WCPCPY@
+HAVE_WCPNCPY = @HAVE_WCPNCPY@
+HAVE_WCRTOMB = @HAVE_WCRTOMB@
+HAVE_WCSCASECMP = @HAVE_WCSCASECMP@
+HAVE_WCSCAT = @HAVE_WCSCAT@
+HAVE_WCSCHR = @HAVE_WCSCHR@
+HAVE_WCSCMP = @HAVE_WCSCMP@
+HAVE_WCSCOLL = @HAVE_WCSCOLL@
+HAVE_WCSCPY = @HAVE_WCSCPY@
+HAVE_WCSCSPN = @HAVE_WCSCSPN@
+HAVE_WCSDUP = @HAVE_WCSDUP@
+HAVE_WCSLEN = @HAVE_WCSLEN@
+HAVE_WCSNCASECMP = @HAVE_WCSNCASECMP@
+HAVE_WCSNCAT = @HAVE_WCSNCAT@
+HAVE_WCSNCMP = @HAVE_WCSNCMP@
+HAVE_WCSNCPY = @HAVE_WCSNCPY@
+HAVE_WCSNLEN = @HAVE_WCSNLEN@
+HAVE_WCSNRTOMBS = @HAVE_WCSNRTOMBS@
+HAVE_WCSPBRK = @HAVE_WCSPBRK@
+HAVE_WCSRCHR = @HAVE_WCSRCHR@
+HAVE_WCSRTOMBS = @HAVE_WCSRTOMBS@
+HAVE_WCSSPN = @HAVE_WCSSPN@
+HAVE_WCSSTR = @HAVE_WCSSTR@
+HAVE_WCSTOK = @HAVE_WCSTOK@
+HAVE_WCSWIDTH = @HAVE_WCSWIDTH@
+HAVE_WCSXFRM = @HAVE_WCSXFRM@
+HAVE_WCTRANS_T = @HAVE_WCTRANS_T@
+HAVE_WCTYPE_H = @HAVE_WCTYPE_H@
+HAVE_WCTYPE_T = @HAVE_WCTYPE_T@
+HAVE_WINSOCK2_H = @HAVE_WINSOCK2_H@
+HAVE_WINT_T = @HAVE_WINT_T@
+HAVE_WMEMCHR = @HAVE_WMEMCHR@
+HAVE_WMEMCMP = @HAVE_WMEMCMP@
+HAVE_WMEMCPY = @HAVE_WMEMCPY@
+HAVE_WMEMMOVE = @HAVE_WMEMMOVE@
+HAVE_WMEMSET = @HAVE_WMEMSET@
+HAVE_WS2TCPIP_H = @HAVE_WS2TCPIP_H@
+HAVE__BOOL = @HAVE__BOOL@
+HAVE__EXIT = @HAVE__EXIT@
+HOSTENT_LIB = @HOSTENT_LIB@
+ICONV_CONST = @ICONV_CONST@
+ICONV_H = @ICONV_H@
+INCLUDE_NEXT = @INCLUDE_NEXT@
+INCLUDE_NEXT_AS_FIRST_DIRECTIVE = @INCLUDE_NEXT_AS_FIRST_DIRECTIVE@
+INET_NTOP_LIB = @INET_NTOP_LIB@
+INSTALL = @INSTALL@
+INSTALL_DATA = @INSTALL_DATA@
+INSTALL_PROGRAM = @INSTALL_PROGRAM@
+INSTALL_SCRIPT = @INSTALL_SCRIPT@
+INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
+INTLLIBS = @INTLLIBS@
+INTL_MACOSX_LIBS = @INTL_MACOSX_LIBS@
+LDFLAGS = @LDFLAGS@
+LEX = @LEX@
+LEXLIB = @LEXLIB@
+LEX_OUTPUT_ROOT = @LEX_OUTPUT_ROOT@
+LIBGNUTLS = @LIBGNUTLS@
+LIBGNUTLS_PREFIX = @LIBGNUTLS_PREFIX@
+LIBGNU_LIBDEPS = @LIBGNU_LIBDEPS@
+LIBGNU_LTLIBDEPS = @LIBGNU_LTLIBDEPS@
+LIBICONV = @LIBICONV@
+LIBINTL = @LIBINTL@
+LIBMULTITHREAD = @LIBMULTITHREAD@
+LIBOBJS = @LIBOBJS@
+LIBPTH = @LIBPTH@
+LIBPTH_PREFIX = @LIBPTH_PREFIX@
+LIBS = @LIBS@
+LIBSOCKET = @LIBSOCKET@
+LIBSSL = @LIBSSL@
+LIBSSL_PREFIX = @LIBSSL_PREFIX@
+LIBTHREAD = @LIBTHREAD@
+LIB_CLOCK_GETTIME = @LIB_CLOCK_GETTIME@
+LOCALCHARSET_TESTS_ENVIRONMENT = @LOCALCHARSET_TESTS_ENVIRONMENT@
+LOCALE_FR_UTF8 = @LOCALE_FR_UTF8@
+LOCALE_JA = @LOCALE_JA@
+LOCALE_ZH_CN = @LOCALE_ZH_CN@
+LTLIBGNUTLS = @LTLIBGNUTLS@
+LTLIBICONV = @LTLIBICONV@
+LTLIBINTL = @LTLIBINTL@
+LTLIBMULTITHREAD = @LTLIBMULTITHREAD@
+LTLIBOBJS = @LTLIBOBJS@
+LTLIBPTH = @LTLIBPTH@
+LTLIBSSL = @LTLIBSSL@
+LTLIBTHREAD = @LTLIBTHREAD@
+MAKEINFO = @MAKEINFO@
+MKDIR_P = @MKDIR_P@
+MSGFMT = @MSGFMT@
+MSGFMT_015 = @MSGFMT_015@
+MSGMERGE = @MSGMERGE@
+NETINET_IN_H = @NETINET_IN_H@
+NEXT_ARPA_INET_H = @NEXT_ARPA_INET_H@
+NEXT_AS_FIRST_DIRECTIVE_ARPA_INET_H = @NEXT_AS_FIRST_DIRECTIVE_ARPA_INET_H@
+NEXT_AS_FIRST_DIRECTIVE_ERRNO_H = @NEXT_AS_FIRST_DIRECTIVE_ERRNO_H@
+NEXT_AS_FIRST_DIRECTIVE_FCNTL_H = @NEXT_AS_FIRST_DIRECTIVE_FCNTL_H@
+NEXT_AS_FIRST_DIRECTIVE_FLOAT_H = @NEXT_AS_FIRST_DIRECTIVE_FLOAT_H@
+NEXT_AS_FIRST_DIRECTIVE_GETOPT_H = @NEXT_AS_FIRST_DIRECTIVE_GETOPT_H@
+NEXT_AS_FIRST_DIRECTIVE_ICONV_H = @NEXT_AS_FIRST_DIRECTIVE_ICONV_H@
+NEXT_AS_FIRST_DIRECTIVE_NETDB_H = @NEXT_AS_FIRST_DIRECTIVE_NETDB_H@
+NEXT_AS_FIRST_DIRECTIVE_NETINET_IN_H = @NEXT_AS_FIRST_DIRECTIVE_NETINET_IN_H@
+NEXT_AS_FIRST_DIRECTIVE_SCHED_H = @NEXT_AS_FIRST_DIRECTIVE_SCHED_H@
+NEXT_AS_FIRST_DIRECTIVE_SIGNAL_H = @NEXT_AS_FIRST_DIRECTIVE_SIGNAL_H@
+NEXT_AS_FIRST_DIRECTIVE_SPAWN_H = @NEXT_AS_FIRST_DIRECTIVE_SPAWN_H@
+NEXT_AS_FIRST_DIRECTIVE_STDDEF_H = @NEXT_AS_FIRST_DIRECTIVE_STDDEF_H@
+NEXT_AS_FIRST_DIRECTIVE_STDINT_H = @NEXT_AS_FIRST_DIRECTIVE_STDINT_H@
+NEXT_AS_FIRST_DIRECTIVE_STDIO_H = @NEXT_AS_FIRST_DIRECTIVE_STDIO_H@
+NEXT_AS_FIRST_DIRECTIVE_STDLIB_H = @NEXT_AS_FIRST_DIRECTIVE_STDLIB_H@
+NEXT_AS_FIRST_DIRECTIVE_STRINGS_H = @NEXT_AS_FIRST_DIRECTIVE_STRINGS_H@
+NEXT_AS_FIRST_DIRECTIVE_STRING_H = @NEXT_AS_FIRST_DIRECTIVE_STRING_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_IOCTL_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_IOCTL_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_SELECT_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_SELECT_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_SOCKET_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_SOCKET_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_STAT_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_STAT_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_TIME_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_TIME_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_TYPES_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_TYPES_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_UIO_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_UIO_H@
+NEXT_AS_FIRST_DIRECTIVE_SYS_WAIT_H = @NEXT_AS_FIRST_DIRECTIVE_SYS_WAIT_H@
+NEXT_AS_FIRST_DIRECTIVE_TIME_H = @NEXT_AS_FIRST_DIRECTIVE_TIME_H@
+NEXT_AS_FIRST_DIRECTIVE_UNISTD_H = @NEXT_AS_FIRST_DIRECTIVE_UNISTD_H@
+NEXT_AS_FIRST_DIRECTIVE_WCHAR_H = @NEXT_AS_FIRST_DIRECTIVE_WCHAR_H@
+NEXT_AS_FIRST_DIRECTIVE_WCTYPE_H = @NEXT_AS_FIRST_DIRECTIVE_WCTYPE_H@
+NEXT_ERRNO_H = @NEXT_ERRNO_H@
+NEXT_FCNTL_H = @NEXT_FCNTL_H@
+NEXT_FLOAT_H = @NEXT_FLOAT_H@
+NEXT_GETOPT_H = @NEXT_GETOPT_H@
+NEXT_ICONV_H = @NEXT_ICONV_H@
+NEXT_NETDB_H = @NEXT_NETDB_H@
+NEXT_NETINET_IN_H = @NEXT_NETINET_IN_H@
+NEXT_SCHED_H = @NEXT_SCHED_H@
+NEXT_SIGNAL_H = @NEXT_SIGNAL_H@
+NEXT_SPAWN_H = @NEXT_SPAWN_H@
+NEXT_STDDEF_H = @NEXT_STDDEF_H@
+NEXT_STDINT_H = @NEXT_STDINT_H@
+NEXT_STDIO_H = @NEXT_STDIO_H@
+NEXT_STDLIB_H = @NEXT_STDLIB_H@
+NEXT_STRINGS_H = @NEXT_STRINGS_H@
+NEXT_STRING_H = @NEXT_STRING_H@
+NEXT_SYS_IOCTL_H = @NEXT_SYS_IOCTL_H@
+NEXT_SYS_SELECT_H = @NEXT_SYS_SELECT_H@
+NEXT_SYS_SOCKET_H = @NEXT_SYS_SOCKET_H@
+NEXT_SYS_STAT_H = @NEXT_SYS_STAT_H@
+NEXT_SYS_TIME_H = @NEXT_SYS_TIME_H@
+NEXT_SYS_TYPES_H = @NEXT_SYS_TYPES_H@
+NEXT_SYS_UIO_H = @NEXT_SYS_UIO_H@
+NEXT_SYS_WAIT_H = @NEXT_SYS_WAIT_H@
+NEXT_TIME_H = @NEXT_TIME_H@
+NEXT_UNISTD_H = @NEXT_UNISTD_H@
+NEXT_WCHAR_H = @NEXT_WCHAR_H@
+NEXT_WCTYPE_H = @NEXT_WCTYPE_H@
+OBJEXT = @OBJEXT@
+PACKAGE = @PACKAGE@
+PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
+PACKAGE_NAME = @PACKAGE_NAME@
+PACKAGE_STRING = @PACKAGE_STRING@
+PACKAGE_TARNAME = @PACKAGE_TARNAME@
+PACKAGE_URL = @PACKAGE_URL@
+PACKAGE_VERSION = @PACKAGE_VERSION@
+PATH_SEPARATOR = @PATH_SEPARATOR@
+PERL = @PERL@
+POD2MAN = @POD2MAN@
+POSUB = @POSUB@
+PRAGMA_COLUMNS = @PRAGMA_COLUMNS@
+PRAGMA_SYSTEM_HEADER = @PRAGMA_SYSTEM_HEADER@
+PTHREAD_H_DEFINES_STRUCT_TIMESPEC = @PTHREAD_H_DEFINES_STRUCT_TIMESPEC@
+PTRDIFF_T_SUFFIX = @PTRDIFF_T_SUFFIX@
+RANLIB = @RANLIB@
+REPLACE_BTOWC = @REPLACE_BTOWC@
+REPLACE_CALLOC = @REPLACE_CALLOC@
+REPLACE_CANONICALIZE_FILE_NAME = @REPLACE_CANONICALIZE_FILE_NAME@
+REPLACE_CHOWN = @REPLACE_CHOWN@
+REPLACE_CLOSE = @REPLACE_CLOSE@
+REPLACE_DPRINTF = @REPLACE_DPRINTF@
+REPLACE_DUP = @REPLACE_DUP@
+REPLACE_DUP2 = @REPLACE_DUP2@
+REPLACE_FCHOWNAT = @REPLACE_FCHOWNAT@
+REPLACE_FCLOSE = @REPLACE_FCLOSE@
+REPLACE_FCNTL = @REPLACE_FCNTL@
+REPLACE_FFLUSH = @REPLACE_FFLUSH@
+REPLACE_FOPEN = @REPLACE_FOPEN@
+REPLACE_FPRINTF = @REPLACE_FPRINTF@
+REPLACE_FPURGE = @REPLACE_FPURGE@
+REPLACE_FREOPEN = @REPLACE_FREOPEN@
+REPLACE_FSEEK = @REPLACE_FSEEK@
+REPLACE_FSEEKO = @REPLACE_FSEEKO@
+REPLACE_FSTAT = @REPLACE_FSTAT@
+REPLACE_FSTATAT = @REPLACE_FSTATAT@
+REPLACE_FTELL = @REPLACE_FTELL@
+REPLACE_FTELLO = @REPLACE_FTELLO@
+REPLACE_FUTIMENS = @REPLACE_FUTIMENS@
+REPLACE_GAI_STRERROR = @REPLACE_GAI_STRERROR@
+REPLACE_GETCWD = @REPLACE_GETCWD@
+REPLACE_GETDELIM = @REPLACE_GETDELIM@
+REPLACE_GETDOMAINNAME = @REPLACE_GETDOMAINNAME@
+REPLACE_GETGROUPS = @REPLACE_GETGROUPS@
+REPLACE_GETLINE = @REPLACE_GETLINE@
+REPLACE_GETLOGIN_R = @REPLACE_GETLOGIN_R@
+REPLACE_GETPAGESIZE = @REPLACE_GETPAGESIZE@
+REPLACE_GETTIMEOFDAY = @REPLACE_GETTIMEOFDAY@
+REPLACE_ICONV = @REPLACE_ICONV@
+REPLACE_ICONV_OPEN = @REPLACE_ICONV_OPEN@
+REPLACE_ICONV_UTF = @REPLACE_ICONV_UTF@
+REPLACE_IOCTL = @REPLACE_IOCTL@
+REPLACE_ISWBLANK = @REPLACE_ISWBLANK@
+REPLACE_ISWCNTRL = @REPLACE_ISWCNTRL@
+REPLACE_LCHOWN = @REPLACE_LCHOWN@
+REPLACE_LINK = @REPLACE_LINK@
+REPLACE_LINKAT = @REPLACE_LINKAT@
+REPLACE_LOCALTIME_R = @REPLACE_LOCALTIME_R@
+REPLACE_LSEEK = @REPLACE_LSEEK@
+REPLACE_LSTAT = @REPLACE_LSTAT@
+REPLACE_MALLOC = @REPLACE_MALLOC@
+REPLACE_MBRLEN = @REPLACE_MBRLEN@
+REPLACE_MBRTOWC = @REPLACE_MBRTOWC@
+REPLACE_MBSINIT = @REPLACE_MBSINIT@
+REPLACE_MBSNRTOWCS = @REPLACE_MBSNRTOWCS@
+REPLACE_MBSRTOWCS = @REPLACE_MBSRTOWCS@
+REPLACE_MBSTATE_T = @REPLACE_MBSTATE_T@
+REPLACE_MBTOWC = @REPLACE_MBTOWC@
+REPLACE_MEMCHR = @REPLACE_MEMCHR@
+REPLACE_MEMMEM = @REPLACE_MEMMEM@
+REPLACE_MKDIR = @REPLACE_MKDIR@
+REPLACE_MKFIFO = @REPLACE_MKFIFO@
+REPLACE_MKNOD = @REPLACE_MKNOD@
+REPLACE_MKSTEMP = @REPLACE_MKSTEMP@
+REPLACE_MKTIME = @REPLACE_MKTIME@
+REPLACE_NANOSLEEP = @REPLACE_NANOSLEEP@
+REPLACE_NULL = @REPLACE_NULL@
+REPLACE_OBSTACK_PRINTF = @REPLACE_OBSTACK_PRINTF@
+REPLACE_OPEN = @REPLACE_OPEN@
+REPLACE_OPENAT = @REPLACE_OPENAT@
+REPLACE_PERROR = @REPLACE_PERROR@
+REPLACE_POPEN = @REPLACE_POPEN@
+REPLACE_POSIX_SPAWN = @REPLACE_POSIX_SPAWN@
+REPLACE_PREAD = @REPLACE_PREAD@
+REPLACE_PRINTF = @REPLACE_PRINTF@
+REPLACE_PSELECT = @REPLACE_PSELECT@
+REPLACE_PTHREAD_SIGMASK = @REPLACE_PTHREAD_SIGMASK@
+REPLACE_PUTENV = @REPLACE_PUTENV@
+REPLACE_PWRITE = @REPLACE_PWRITE@
+REPLACE_READ = @REPLACE_READ@
+REPLACE_READLINK = @REPLACE_READLINK@
+REPLACE_REALLOC = @REPLACE_REALLOC@
+REPLACE_REALPATH = @REPLACE_REALPATH@
+REPLACE_REMOVE = @REPLACE_REMOVE@
+REPLACE_RENAME = @REPLACE_RENAME@
+REPLACE_RENAMEAT = @REPLACE_RENAMEAT@
+REPLACE_RMDIR = @REPLACE_RMDIR@
+REPLACE_SELECT = @REPLACE_SELECT@
+REPLACE_SETENV = @REPLACE_SETENV@
+REPLACE_SLEEP = @REPLACE_SLEEP@
+REPLACE_SNPRINTF = @REPLACE_SNPRINTF@
+REPLACE_SPRINTF = @REPLACE_SPRINTF@
+REPLACE_STAT = @REPLACE_STAT@
+REPLACE_STDIO_READ_FUNCS = @REPLACE_STDIO_READ_FUNCS@
+REPLACE_STDIO_WRITE_FUNCS = @REPLACE_STDIO_WRITE_FUNCS@
+REPLACE_STPNCPY = @REPLACE_STPNCPY@
+REPLACE_STRCASESTR = @REPLACE_STRCASESTR@
+REPLACE_STRCHRNUL = @REPLACE_STRCHRNUL@
+REPLACE_STRDUP = @REPLACE_STRDUP@
+REPLACE_STRERROR = @REPLACE_STRERROR@
+REPLACE_STRERROR_R = @REPLACE_STRERROR_R@
+REPLACE_STRNCAT = @REPLACE_STRNCAT@
+REPLACE_STRNDUP = @REPLACE_STRNDUP@
+REPLACE_STRNLEN = @REPLACE_STRNLEN@
+REPLACE_STRSIGNAL = @REPLACE_STRSIGNAL@
+REPLACE_STRSTR = @REPLACE_STRSTR@
+REPLACE_STRTOD = @REPLACE_STRTOD@
+REPLACE_STRTOK_R = @REPLACE_STRTOK_R@
+REPLACE_SYMLINK = @REPLACE_SYMLINK@
+REPLACE_TIMEGM = @REPLACE_TIMEGM@
+REPLACE_TMPFILE = @REPLACE_TMPFILE@
+REPLACE_TOWLOWER = @REPLACE_TOWLOWER@
+REPLACE_TTYNAME_R = @REPLACE_TTYNAME_R@
+REPLACE_UNLINK = @REPLACE_UNLINK@
+REPLACE_UNLINKAT = @REPLACE_UNLINKAT@
+REPLACE_UNSETENV = @REPLACE_UNSETENV@
+REPLACE_USLEEP = @REPLACE_USLEEP@
+REPLACE_UTIMENSAT = @REPLACE_UTIMENSAT@
+REPLACE_VASPRINTF = @REPLACE_VASPRINTF@
+REPLACE_VDPRINTF = @REPLACE_VDPRINTF@
+REPLACE_VFPRINTF = @REPLACE_VFPRINTF@
+REPLACE_VPRINTF = @REPLACE_VPRINTF@
+REPLACE_VSNPRINTF = @REPLACE_VSNPRINTF@
+REPLACE_VSPRINTF = @REPLACE_VSPRINTF@
+REPLACE_WCRTOMB = @REPLACE_WCRTOMB@
+REPLACE_WCSNRTOMBS = @REPLACE_WCSNRTOMBS@
+REPLACE_WCSRTOMBS = @REPLACE_WCSRTOMBS@
+REPLACE_WCSWIDTH = @REPLACE_WCSWIDTH@
+REPLACE_WCTOB = @REPLACE_WCTOB@
+REPLACE_WCTOMB = @REPLACE_WCTOMB@
+REPLACE_WCWIDTH = @REPLACE_WCWIDTH@
+REPLACE_WRITE = @REPLACE_WRITE@
+SCHED_H = @SCHED_H@
+SERVENT_LIB = @SERVENT_LIB@
+SET_MAKE = @SET_MAKE@
+SHELL = @SHELL@
+SIG_ATOMIC_T_SUFFIX = @SIG_ATOMIC_T_SUFFIX@
+SIZE_T_SUFFIX = @SIZE_T_SUFFIX@
+STDBOOL_H = @STDBOOL_H@
+STDDEF_H = @STDDEF_H@
+STDINT_H = @STDINT_H@
+STRIP = @STRIP@
+SYS_IOCTL_H_HAVE_WINSOCK2_H = @SYS_IOCTL_H_HAVE_WINSOCK2_H@
+SYS_IOCTL_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS = @SYS_IOCTL_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS@
+SYS_TIME_H_DEFINES_STRUCT_TIMESPEC = @SYS_TIME_H_DEFINES_STRUCT_TIMESPEC@
+TIME_H_DEFINES_STRUCT_TIMESPEC = @TIME_H_DEFINES_STRUCT_TIMESPEC@
+UNDEFINE_STRTOK_R = @UNDEFINE_STRTOK_R@
+UNISTD_H_HAVE_WINSOCK2_H = @UNISTD_H_HAVE_WINSOCK2_H@
+UNISTD_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS = @UNISTD_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS@
+USE_NLS = @USE_NLS@
+VERSION = @VERSION@
+WCHAR_T_SUFFIX = @WCHAR_T_SUFFIX@
+WINT_T_SUFFIX = @WINT_T_SUFFIX@
+XGETTEXT = @XGETTEXT@
+XGETTEXT_015 = @XGETTEXT_015@
+XGETTEXT_EXTRA_OPTIONS = @XGETTEXT_EXTRA_OPTIONS@
+abs_builddir = @abs_builddir@
+abs_srcdir = @abs_srcdir@
+abs_top_builddir = @abs_top_builddir@
+abs_top_srcdir = @abs_top_srcdir@
+ac_ct_CC = @ac_ct_CC@
+am__include = @am__include@
+am__leading_dot = @am__leading_dot@
+am__quote = @am__quote@
+am__tar = @am__tar@
+am__untar = @am__untar@
+bindir = @bindir@
+build = @build@
+build_alias = @build_alias@
+build_cpu = @build_cpu@
+build_os = @build_os@
+build_vendor = @build_vendor@
+builddir = @builddir@
+datadir = @datadir@
+datarootdir = @datarootdir@
+docdir = @docdir@
+dvidir = @dvidir@
+exec_prefix = @exec_prefix@
+gl_LIBOBJS = @gl_LIBOBJS@
+gl_LTLIBOBJS = @gl_LTLIBOBJS@
+gltests_LIBOBJS = @gltests_LIBOBJS@
+gltests_LTLIBOBJS = @gltests_LTLIBOBJS@
+gltests_WITNESS = @gltests_WITNESS@
+host = @host@
+host_alias = @host_alias@
+host_cpu = @host_cpu@
+host_os = @host_os@
+host_vendor = @host_vendor@
+htmldir = @htmldir@
+includedir = @includedir@
+infodir = @infodir@
+install_sh = @install_sh@
+libdir = @libdir@
+libexecdir = @libexecdir@
+lispdir = @lispdir@
+localedir = @localedir@
+localstatedir = @localstatedir@
+mandir = @mandir@
+mkdir_p = @mkdir_p@
+oldincludedir = @oldincludedir@
+pdfdir = @pdfdir@
+prefix = @prefix@
+program_transform_name = @program_transform_name@
+psdir = @psdir@
+sbindir = @sbindir@
+sharedstatedir = @sharedstatedir@
+srcdir = @srcdir@
+sysconfdir = @sysconfdir@
+target_alias = @target_alias@
+top_build_prefix = @top_build_prefix@
+top_builddir = @top_builddir@
+top_srcdir = @top_srcdir@
+
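
Everything up to this point is generated boilerplate: the long runs of
GNULIB_*, HAVE_*, NEXT_* and REPLACE_* assignments come from gnulib and
Automake, and every @VAR@ placeholder is substituted by ./config.status
when the final Makefile is produced from this Makefile.in. A minimal
illustration of that substitution, with hypothetical configure results
on the right:

    prefix = @prefix@        ->   prefix = /usr/local
    infodir = @infodir@      ->   infodir = ${datarootdir}/info
    VERSION = @VERSION@      ->   VERSION = 1.13
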
+# Program to convert DVI files to PostScript
+DVIPS = dvips -D 300
+# Program to convert Texinfo files to HTML
+TEXI2HTML = texi2html -expandinfo -split_chapter
+manext = 1
+RM = rm -f
+TEXI2POD = $(srcdir)/texi2pod.pl
+MAN = wget.$(manext)
+WGETRC = $(sysconfdir)/wgetrc
+SAMPLERCTEXI = sample.wgetrc.munged_for_texi_inclusion
+
+#
+# Dependencies for building
+#
+man_MANS = $(MAN)
+info_TEXINFOS = wget.texi
+wget_TEXINFOS = fdl.texi sample.wgetrc.munged_for_texi_inclusion
+EXTRA_DIST = sample.wgetrc \
+ $(SAMPLERCTEXI) \
+ texi2pod.pl
+
+
+#
+# Dependencies for cleanup
+#
+CLEANFILES = *~ *.bak *.cat *.pod
+DISTCLEANFILES = $(MAN)
+all: all-am
+
+.SUFFIXES:
+.SUFFIXES: .dvi .html .info .pdf .ps .texi
+$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
+ @for dep in $?; do \
+ case '$(am__configure_deps)' in \
+ *$$dep*) \
+ ( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
+ && { if test -f $@; then exit 0; else break; fi; }; \
+ exit 1;; \
+ esac; \
+ done; \
+ echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu doc/Makefile'; \
+ $(am__cd) $(top_srcdir) && \
+ $(AUTOMAKE) --gnu doc/Makefile
+.PRECIOUS: Makefile
+Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
+ @case '$?' in \
+ *config.status*) \
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
+ *) \
+ echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
+ cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
+ esac;
+
+$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+
+$(top_srcdir)/configure: $(am__configure_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(ACLOCAL_M4): $(am__aclocal_m4_deps)
+ cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
+$(am__aclocal_m4_deps):
+
+.texi.info:
+ restore=: && backupdir="$(am__leading_dot)am$$$$" && \
+ am__cwd=`pwd` && $(am__cd) $(srcdir) && \
+ rm -rf $$backupdir && mkdir $$backupdir && \
+ if ($(MAKEINFO) --version) >/dev/null 2>&1; then \
+ for f in $@ $@-[0-9] $@-[0-9][0-9] $(@:.info=).i[0-9] $(@:.info=).i[0-9][0-9]; do \
+ if test -f $$f; then mv $$f $$backupdir; restore=mv; else :; fi; \
+ done; \
+ else :; fi && \
+ cd "$$am__cwd"; \
+ if $(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $@ $<; \
+ then \
+ rc=0; \
+ $(am__cd) $(srcdir); \
+ else \
+ rc=$$?; \
+ $(am__cd) $(srcdir) && \
+ $$restore $$backupdir/* `echo "./$@" | sed 's|[^/]*$$||'`; \
+ fi; \
+ rm -rf $$backupdir; exit $$rc
+
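
The .texi.info rule above is Automake's standard backup-and-restore
dance: existing Info output is stashed in a scratch directory before
makeinfo runs, and is moved back if the build fails, so a broken edit
to wget.texi never destroys the last good wget.info. A stand-alone
sketch of the same pattern in plain shell (file names match this
package, everything else is schematic):

    backupdir=.am$$
    rm -rf "$backupdir" && mkdir "$backupdir"
    for f in wget.info wget.info-[0-9] wget.info-[0-9][0-9]; do
      test -f "$f" && mv "$f" "$backupdir"     # stash previous output
    done
    if makeinfo -o wget.info wget.texi; then
      rm -rf "$backupdir"                      # success: drop backups
    else
      for f in "$backupdir"/*; do test -e "$f" && mv "$f" .; done
      rm -rf "$backupdir"                      # failure: restore
    fi
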
+.texi.dvi:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2DVI) $<
+
+.texi.pdf:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ MAKEINFO='$(MAKEINFO) $(AM_MAKEINFOFLAGS) $(MAKEINFOFLAGS) -I $(srcdir)' \
+ $(TEXI2PDF) $<
+
+.texi.html:
+ rm -rf $(@:.html=.htp)
+ if $(MAKEINFOHTML) $(AM_MAKEINFOHTMLFLAGS) $(MAKEINFOFLAGS) -I $(srcdir) \
+ -o $(@:.html=.htp) $<; \
+ then \
+ rm -rf $@; \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ mv $(@:.html=) $@; else mv $(@:.html=.htp) $@; fi; \
+ else \
+ if test ! -d $(@:.html=.htp) && test -d $(@:.html=); then \
+ rm -rf $(@:.html=); else rm -Rf $(@:.html=.htp) $@; fi; \
+ exit 1; \
+ fi
+$(srcdir)/wget.info: wget.texi $(srcdir)/version.texi $(wget_TEXINFOS)
+wget.dvi: wget.texi $(srcdir)/version.texi $(wget_TEXINFOS)
+wget.pdf: wget.texi $(srcdir)/version.texi $(wget_TEXINFOS)
+wget.html: wget.texi $(srcdir)/version.texi $(wget_TEXINFOS)
+$(srcdir)/version.texi: $(srcdir)/stamp-vti
+$(srcdir)/stamp-vti: wget.texi $(top_srcdir)/configure
+ @(dir=.; test -f ./wget.texi || dir=$(srcdir); \
+ set `$(SHELL) $(top_srcdir)/build-aux/mdate-sh $$dir/wget.texi`; \
+ echo "@set UPDATED $$1 $$2 $$3"; \
+ echo "@set UPDATED-MONTH $$2 $$3"; \
+ echo "@set EDITION $(VERSION)"; \
+ echo "@set VERSION $(VERSION)") > vti.tmp
+ @cmp -s vti.tmp $(srcdir)/version.texi \
+ || (echo "Updating $(srcdir)/version.texi"; \
+ cp vti.tmp $(srcdir)/version.texi)
+ -@rm -f vti.tmp
+ @cp $(srcdir)/version.texi $@
+
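
The stamp-vti rule keeps version.texi current without causing needless
rebuilds: build-aux/mdate-sh extracts the modification date of
wget.texi, the @set lines are written to vti.tmp, and cmp -s lets the
result replace version.texi only when something actually changed. The
generated file is just four Texinfo variable settings; with
hypothetical date and version values it looks like:

    @set UPDATED 18 August 2011
    @set UPDATED-MONTH August 2011
    @set EDITION 1.13
    @set VERSION 1.13
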
+mostlyclean-vti:
+ -rm -f vti.tmp
+
+maintainer-clean-vti:
+ -rm -f $(srcdir)/stamp-vti $(srcdir)/version.texi
+.dvi.ps:
+ TEXINPUTS="$(am__TEXINFO_TEX_DIR)$(PATH_SEPARATOR)$$TEXINPUTS" \
+ $(DVIPS) -o $@ $<
+
+uninstall-dvi-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(DVIS)'; test -n "$(dvidir)" || list=; \
+ for p in $$list; do \
+ $(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(dvidir)/$$f'"; \
+ rm -f "$(DESTDIR)$(dvidir)/$$f"; \
+ done
+
+uninstall-html-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(HTMLS)'; test -n "$(htmldir)" || list=; \
+ for p in $$list; do \
+ $(am__strip_dir) \
+ echo " rm -rf '$(DESTDIR)$(htmldir)/$$f'"; \
+ rm -rf "$(DESTDIR)$(htmldir)/$$f"; \
+ done
+
+uninstall-info-am:
+ @$(PRE_UNINSTALL)
+ @if test -d '$(DESTDIR)$(infodir)' && \
+ (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' --remove '$(DESTDIR)$(infodir)/$$relfile'"; \
+ if install-info --info-dir="$(DESTDIR)$(infodir)" --remove "$(DESTDIR)$(infodir)/$$relfile"; \
+ then :; else test ! -f "$(DESTDIR)$(infodir)/$$relfile" || exit 1; fi; \
+ done; \
+ else :; fi
+ @$(NORMAL_UNINSTALL)
+ @list='$(INFO_DEPS)'; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ relfile_i=`echo "$$relfile" | sed 's|\.info$$||;s|$$|.i|'`; \
+ (if test -d "$(DESTDIR)$(infodir)" && cd "$(DESTDIR)$(infodir)"; then \
+ echo " cd '$(DESTDIR)$(infodir)' && rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]"; \
+ rm -f $$relfile $$relfile-[0-9] $$relfile-[0-9][0-9] $$relfile_i[0-9] $$relfile_i[0-9][0-9]; \
+ else :; fi); \
+ done
+
+uninstall-pdf-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PDFS)'; test -n "$(pdfdir)" || list=; \
+ for p in $$list; do \
+ $(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(pdfdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(pdfdir)/$$f"; \
+ done
+
+uninstall-ps-am:
+ @$(NORMAL_UNINSTALL)
+ @list='$(PSS)'; test -n "$(psdir)" || list=; \
+ for p in $$list; do \
+ $(am__strip_dir) \
+ echo " rm -f '$(DESTDIR)$(psdir)/$$f'"; \
+ rm -f "$(DESTDIR)$(psdir)/$$f"; \
+ done
+
+dist-info: $(INFO_DEPS)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; \
+ for base in $$list; do \
+ case $$base in \
+ $(srcdir)/*) base=`echo "$$base" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$base; then d=.; else d=$(srcdir); fi; \
+ base_i=`echo "$$base" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for file in $$d/$$base $$d/$$base-[0-9] $$d/$$base-[0-9][0-9] $$d/$$base_i[0-9] $$d/$$base_i[0-9][0-9]; do \
+ if test -f $$file; then \
+ relfile=`expr "$$file" : "$$d/\(.*\)"`; \
+ test -f "$(distdir)/$$relfile" || \
+ cp -p $$file "$(distdir)/$$relfile"; \
+ else :; fi; \
+ done; \
+ done
+
+mostlyclean-aminfo:
+ -rm -rf wget.aux wget.cp wget.cps wget.fn wget.fns wget.ky wget.kys \
+ wget.log wget.pg wget.pgs wget.tmp wget.toc wget.tp wget.tps \
+ wget.vr wget.vrs
+
+clean-aminfo:
+ -test -z "wget.dvi wget.pdf wget.ps wget.html" \
+ || rm -rf wget.dvi wget.pdf wget.ps wget.html
+
+maintainer-clean-aminfo:
+ @list='$(INFO_DEPS)'; for i in $$list; do \
+ i_i=`echo "$$i" | sed 's|\.info$$||;s|$$|.i|'`; \
+ echo " rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]"; \
+ rm -f $$i $$i-[0-9] $$i-[0-9][0-9] $$i_i[0-9] $$i_i[0-9][0-9]; \
+ done
+tags: TAGS
+TAGS:
+
+ctags: CTAGS
+CTAGS:
+
+
+distdir: $(DISTFILES)
+ @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
+ list='$(DISTFILES)'; \
+ dist_files=`for file in $$list; do echo $$file; done | \
+ sed -e "s|^$$srcdirstrip/||;t" \
+ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
+ case $$dist_files in \
+ */*) $(MKDIR_P) `echo "$$dist_files" | \
+ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
+ sort -u` ;; \
+ esac; \
+ for file in $$dist_files; do \
+ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
+ if test -d $$d/$$file; then \
+ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
+ if test -d "$(distdir)/$$file"; then \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
+ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
+ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
+ fi; \
+ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
+ else \
+ test -f "$(distdir)/$$file" \
+ || cp -p $$d/$$file "$(distdir)/$$file" \
+ || exit 1; \
+ fi; \
+ done
+ $(MAKE) $(AM_MAKEFLAGS) \
+ top_distdir="$(top_distdir)" distdir="$(distdir)" \
+ dist-info
+check-am: all-am
+check: check-am
+all-am: Makefile $(INFO_DEPS)
+installdirs:
+ for dir in "$(DESTDIR)$(infodir)"; do \
+ test -z "$$dir" || $(MKDIR_P) "$$dir"; \
+ done
+install: install-am
+install-exec: install-exec-am
+install-data: install-data-am
+uninstall: uninstall-am
+
+install-am: all-am
+ @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
+
+installcheck: installcheck-am
+install-strip:
+ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
+ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
+ `test -z '$(STRIP)' || \
+ echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
+mostlyclean-generic:
+
+clean-generic:
+ -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES)
+
+distclean-generic:
+ -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
+ -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
+ -test -z "$(DISTCLEANFILES)" || rm -f $(DISTCLEANFILES)
+
+maintainer-clean-generic:
+	@echo "This command is intended for maintainers to use;"
+	@echo "it deletes files that may require special tools to rebuild."
+clean: clean-am
+
+clean-am: clean-aminfo clean-generic mostlyclean-am
+
+distclean: distclean-am
+ -rm -f Makefile
+distclean-am: clean-am distclean-generic
+
+dvi: dvi-am
+
+dvi-am: $(DVIS)
+
+html: html-am
+
+html-am: $(HTMLS)
+
+info: info-am
+
+info-am: $(INFO_DEPS)
+
+install-data-am: install-data-local install-info-am
+
+install-dvi: install-dvi-am
+
+install-dvi-am: $(DVIS)
+ @$(NORMAL_INSTALL)
+ test -z "$(dvidir)" || $(MKDIR_P) "$(DESTDIR)$(dvidir)"
+ @list='$(DVIS)'; test -n "$(dvidir)" || list=; \
+ for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; \
+ done | $(am__base_list) | \
+ while read files; do \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(dvidir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(dvidir)" || exit $$?; \
+ done
+install-exec-am:
+
+install-html: install-html-am
+
+install-html-am: $(HTMLS)
+ @$(NORMAL_INSTALL)
+ test -z "$(htmldir)" || $(MKDIR_P) "$(DESTDIR)$(htmldir)"
+ @list='$(HTMLS)'; list2=; test -n "$(htmldir)" || list=; \
+ for p in $$list; do \
+ if test -f "$$p" || test -d "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ $(am__strip_dir) \
+ if test -d "$$d$$p"; then \
+ echo " $(MKDIR_P) '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(MKDIR_P) "$(DESTDIR)$(htmldir)/$$f" || exit 1; \
+ echo " $(INSTALL_DATA) '$$d$$p'/* '$(DESTDIR)$(htmldir)/$$f'"; \
+ $(INSTALL_DATA) "$$d$$p"/* "$(DESTDIR)$(htmldir)/$$f" || exit $$?; \
+ else \
+ list2="$$list2 $$d$$p"; \
+ fi; \
+ done; \
+ test -z "$$list2" || { echo "$$list2" | $(am__base_list) | \
+ while read files; do \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(htmldir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(htmldir)" || exit $$?; \
+ done; }
+install-info: install-info-am
+
+install-info-am: $(INFO_DEPS)
+ @$(NORMAL_INSTALL)
+ test -z "$(infodir)" || $(MKDIR_P) "$(DESTDIR)$(infodir)"
+ @srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
+ list='$(INFO_DEPS)'; test -n "$(infodir)" || list=; \
+ for file in $$list; do \
+ case $$file in \
+ $(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
+ esac; \
+ if test -f $$file; then d=.; else d=$(srcdir); fi; \
+ file_i=`echo "$$file" | sed 's|\.info$$||;s|$$|.i|'`; \
+ for ifile in $$d/$$file $$d/$$file-[0-9] $$d/$$file-[0-9][0-9] \
+ $$d/$$file_i[0-9] $$d/$$file_i[0-9][0-9] ; do \
+ if test -f $$ifile; then \
+ echo "$$ifile"; \
+ else : ; fi; \
+ done; \
+ done | $(am__base_list) | \
+ while read files; do \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(infodir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(infodir)" || exit $$?; done
+ @$(POST_INSTALL)
+ @if (install-info --version && \
+ install-info --version 2>&1 | sed 1q | grep -i -v debian) >/dev/null 2>&1; then \
+ list='$(INFO_DEPS)'; test -n "$(infodir)" || list=; \
+ for file in $$list; do \
+ relfile=`echo "$$file" | sed 's|^.*/||'`; \
+ echo " install-info --info-dir='$(DESTDIR)$(infodir)' '$(DESTDIR)$(infodir)/$$relfile'";\
+ install-info --info-dir="$(DESTDIR)$(infodir)" "$(DESTDIR)$(infodir)/$$relfile" || :;\
+ done; \
+ else : ; fi
+install-man:
+
+install-pdf: install-pdf-am
+
+install-pdf-am: $(PDFS)
+ @$(NORMAL_INSTALL)
+ test -z "$(pdfdir)" || $(MKDIR_P) "$(DESTDIR)$(pdfdir)"
+ @list='$(PDFS)'; test -n "$(pdfdir)" || list=; \
+ for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; \
+ done | $(am__base_list) | \
+ while read files; do \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(pdfdir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(pdfdir)" || exit $$?; done
+install-ps: install-ps-am
+
+install-ps-am: $(PSS)
+ @$(NORMAL_INSTALL)
+ test -z "$(psdir)" || $(MKDIR_P) "$(DESTDIR)$(psdir)"
+ @list='$(PSS)'; test -n "$(psdir)" || list=; \
+ for p in $$list; do \
+ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
+ echo "$$d$$p"; \
+ done | $(am__base_list) | \
+ while read files; do \
+ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(psdir)'"; \
+ $(INSTALL_DATA) $$files "$(DESTDIR)$(psdir)" || exit $$?; done
+installcheck-am:
+
+maintainer-clean: maintainer-clean-am
+ -rm -f Makefile
+maintainer-clean-am: distclean-am maintainer-clean-aminfo \
+ maintainer-clean-generic maintainer-clean-vti
+
+mostlyclean: mostlyclean-am
+
+mostlyclean-am: mostlyclean-aminfo mostlyclean-generic mostlyclean-vti
+
+pdf: pdf-am
+
+pdf-am: $(PDFS)
+
+ps: ps-am
+
+ps-am: $(PSS)
+
+uninstall-am: uninstall-dvi-am uninstall-html-am uninstall-info-am \
+ uninstall-local uninstall-pdf-am uninstall-ps-am
+
+.MAKE: install-am install-strip
+
+.PHONY: all all-am check check-am clean clean-aminfo clean-generic \
+ dist-info distclean distclean-generic distdir dvi dvi-am html \
+ html-am info info-am install install-am install-data \
+ install-data-am install-data-local install-dvi install-dvi-am \
+ install-exec install-exec-am install-html install-html-am \
+ install-info install-info-am install-man install-pdf \
+ install-pdf-am install-ps install-ps-am install-strip \
+ installcheck installcheck-am installdirs maintainer-clean \
+ maintainer-clean-aminfo maintainer-clean-generic \
+ maintainer-clean-vti mostlyclean mostlyclean-aminfo \
+ mostlyclean-generic mostlyclean-vti pdf pdf-am ps ps-am \
+ uninstall uninstall-am uninstall-dvi-am uninstall-html-am \
+ uninstall-info-am uninstall-local uninstall-pdf-am \
+ uninstall-ps-am
+
+
+all: wget.info @COMMENT_IF_NO_POD2MAN@$(MAN)
+
+everything: all wget_us.ps wget_a4.ps wget_toc.html
+
+$(SAMPLERCTEXI): $(srcdir)/sample.wgetrc
+ sed s/@/@@/g $? > $@
+
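
The sed command above doubles every at-sign because @ is the Texinfo
escape character; the munged copy of sample.wgetrc can then be included
verbatim in wget.texi. For example, a hypothetical wgetrc line such as

    http_proxy = http://user@proxy.example.com/

comes out of the filter as

    http_proxy = http://user@@proxy.example.com/
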
+wget.pod: $(srcdir)/wget.texi version.texi
+ $(TEXI2POD) -D VERSION="$(VERSION)" $(srcdir)/wget.texi $@
+
+$(MAN): wget.pod
+ $(POD2MAN) --center="GNU Wget" --release="GNU Wget @VERSION@" $? > $@
+
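
The man page is produced in two stages rather than written by hand:
texi2pod.pl reduces the Texinfo manual to Perl POD, and pod2man formats
the POD into roff. Run outside of make, the equivalent pipeline looks
roughly like this (the version string is hypothetical):

    perl texi2pod.pl -D VERSION="1.13" wget.texi wget.pod
    pod2man --center="GNU Wget" --release="GNU Wget 1.13" wget.pod > wget.1
    man ./wget.1            # preview the result
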
+#wget.cat: $(MAN)
+# nroff -man $? > $@
+
+wget_us.ps: wget.dvi
+ $(DVIPS) -t letter -o $@ wget.dvi
+
+wget_a4.ps: wget.dvi
+ $(DVIPS) -t a4 -o $@ wget.dvi
+
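
wget_us.ps and wget_a4.ps differ only in the paper size handed to dvips
(-t letter vs. -t a4); both start from the same wget.dvi, and -D 300
selects 300 dpi output. By hand, the A4 variant would be roughly:

    texi2dvi wget.texi                          # writes wget.dvi
    dvips -D 300 -t a4 -o wget_a4.ps wget.dvi
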
+wget_toc.html: $(srcdir)/wget.texi
+ $(TEXI2HTML) $(srcdir)/wget.texi
+
+#
+# Dependencies for installing
+#
+
+# install all the documentation
+install-data-local: install.wgetrc @COMMENT_IF_NO_POD2MAN@install.man
+
+# uninstall all the documentation
+uninstall-local: @COMMENT_IF_NO_POD2MAN@uninstall.man
+
+# install man page, creating install directory if necessary
+install.man: $(MAN)
+ $(mkinstalldirs) $(DESTDIR)$(mandir)/man$(manext)
+ $(INSTALL_DATA) $(MAN) $(DESTDIR)$(mandir)/man$(manext)/$(MAN)
+
+# install sample.wgetrc
+install.wgetrc: $(srcdir)/sample.wgetrc
+ $(mkinstalldirs) $(DESTDIR)$(sysconfdir)
+ @if test -f $(DESTDIR)$(WGETRC); then \
+ if cmp -s $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC); then echo ""; \
+ else \
+ echo ' $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC).new'; \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC).new; \
+ echo; \
+ echo "WARNING: Differing \`$(DESTDIR)$(WGETRC)'"; \
+ echo " exists and has been spared. You might want to"; \
+ echo " consider merging in the new lines from"; \
+ echo " \`$(DESTDIR)$(WGETRC).new'."; \
+ echo; \
+ fi; \
+ else \
+ $(INSTALL_DATA) $(srcdir)/sample.wgetrc $(DESTDIR)$(WGETRC); \
+ fi
+
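
install.wgetrc deliberately refuses to clobber a system wgetrc: if none
exists, the sample is installed as $(WGETRC); if an identical copy is
already there, nothing happens; and if the file differs, the sample is
installed alongside it as $(WGETRC).new with a warning, leaving local
edits intact. Stripped of the make quoting, the decision is just:

    if test -f "$WGETRC"; then
      if cmp -s sample.wgetrc "$WGETRC"; then
        :                                  # identical: nothing to do
      else
        cp sample.wgetrc "$WGETRC.new"     # differs: install beside it
      fi
    else
      cp sample.wgetrc "$WGETRC"           # first install
    fi
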
+# uninstall man page
+uninstall.man:
+ $(RM) $(DESTDIR)$(mandir)/man$(manext)/$(MAN)
+
+# Tell versions [3.59,3.63) of GNU make to not export all variables.
+# Otherwise a system limit (for SysV at least) may be exceeded.
+.NOEXPORT:
diff --git a/doc/fdl.texi b/doc/fdl.texi
new file mode 100644
index 0000000..1c83c00
--- /dev/null
+++ b/doc/fdl.texi
@@ -0,0 +1,507 @@
+@c The GNU Free Documentation License.
+@center Version 1.3, 3 November 2008
+
+@c This file is intended to be included within another document,
+@c hence no sectioning command or @node.
+
+@display
+Copyright @copyright{} 2000, 2001, 2002, 2007, 2008, 2009, 2010, 2011
+Free Software Foundation, Inc.
+@uref{http://fsf.org/}
+
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+@end display
+
+@enumerate 0
+@item
+PREAMBLE
+
+The purpose of this License is to make a manual, textbook, or other
+functional and useful document @dfn{free} in the sense of freedom: to
+assure everyone the effective freedom to copy and redistribute it,
+with or without modifying it, either commercially or noncommercially.
+Secondarily, this License preserves for the author and publisher a way
+to get credit for their work, while not being considered responsible
+for modifications made by others.
+
+This License is a kind of ``copyleft'', which means that derivative
+works of the document must themselves be free in the same sense. It
+complements the GNU General Public License, which is a copyleft
+license designed for free software.
+
+We have designed this License in order to use it for manuals for free
+software, because free software needs free documentation: a free
+program should come with manuals providing the same freedoms that the
+software does. But this License is not limited to software manuals;
+it can be used for any textual work, regardless of subject matter or
+whether it is published as a printed book. We recommend this License
+principally for works whose purpose is instruction or reference.
+
+@item
+APPLICABILITY AND DEFINITIONS
+
+This License applies to any manual or other work, in any medium, that
+contains a notice placed by the copyright holder saying it can be
+distributed under the terms of this License. Such a notice grants a
+world-wide, royalty-free license, unlimited in duration, to use that
+work under the conditions stated herein. The ``Document'', below,
+refers to any such manual or work. Any member of the public is a
+licensee, and is addressed as ``you''. You accept the license if you
+copy, modify or distribute the work in a way requiring permission
+under copyright law.
+
+A ``Modified Version'' of the Document means any work containing the
+Document or a portion of it, either copied verbatim, or with
+modifications and/or translated into another language.
+
+A ``Secondary Section'' is a named appendix or a front-matter section
+of the Document that deals exclusively with the relationship of the
+publishers or authors of the Document to the Document's overall
+subject (or to related matters) and contains nothing that could fall
+directly within that overall subject. (Thus, if the Document is in
+part a textbook of mathematics, a Secondary Section may not explain
+any mathematics.) The relationship could be a matter of historical
+connection with the subject or with related matters, or of legal,
+commercial, philosophical, ethical or political position regarding
+them.
+
+The ``Invariant Sections'' are certain Secondary Sections whose titles
+are designated, as being those of Invariant Sections, in the notice
+that says that the Document is released under this License. If a
+section does not fit the above definition of Secondary then it is not
+allowed to be designated as Invariant. The Document may contain zero
+Invariant Sections. If the Document does not identify any Invariant
+Sections then there are none.
+
+The ``Cover Texts'' are certain short passages of text that are listed,
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that
+the Document is released under this License. A Front-Cover Text may
+be at most 5 words, and a Back-Cover Text may be at most 25 words.
+
+A ``Transparent'' copy of the Document means a machine-readable copy,
+represented in a format whose specification is available to the
+general public, that is suitable for revising the document
+straightforwardly with generic text editors or (for images composed of
+pixels) generic paint programs or (for drawings) some widely available
+drawing editor, and that is suitable for input to text formatters or
+for automatic translation to a variety of formats suitable for input
+to text formatters. A copy made in an otherwise Transparent file
+format whose markup, or absence of markup, has been arranged to thwart
+or discourage subsequent modification by readers is not Transparent.
+An image format is not Transparent if used for any substantial amount
+of text. A copy that is not ``Transparent'' is called ``Opaque''.
+
+Examples of suitable formats for Transparent copies include plain
+@sc{ascii} without markup, Texinfo input format, La@TeX{} input
+format, @acronym{SGML} or @acronym{XML} using a publicly available
+@acronym{DTD}, and standard-conforming simple @acronym{HTML},
+PostScript or @acronym{PDF} designed for human modification. Examples
+of transparent image formats include @acronym{PNG}, @acronym{XCF} and
+@acronym{JPG}. Opaque formats include proprietary formats that can be
+read and edited only by proprietary word processors, @acronym{SGML} or
+@acronym{XML} for which the @acronym{DTD} and/or processing tools are
+not generally available, and the machine-generated @acronym{HTML},
+PostScript or @acronym{PDF} produced by some word processors for
+output purposes only.
+
+The ``Title Page'' means, for a printed book, the title page itself,
+plus such following pages as are needed to hold, legibly, the material
+this License requires to appear in the title page. For works in
+formats which do not have any title page as such, ``Title Page'' means
+the text near the most prominent appearance of the work's title,
+preceding the beginning of the body of the text.
+
+The ``publisher'' means any person or entity that distributes copies
+of the Document to the public.
+
+A section ``Entitled XYZ'' means a named subunit of the Document whose
+title either is precisely XYZ or contains XYZ in parentheses following
+text that translates XYZ in another language. (Here XYZ stands for a
+specific section name mentioned below, such as ``Acknowledgements'',
+``Dedications'', ``Endorsements'', or ``History''.) To ``Preserve the Title''
+of such a section when you modify the Document means that it remains a
+section ``Entitled XYZ'' according to this definition.
+
+The Document may include Warranty Disclaimers next to the notice which
+states that this License applies to the Document. These Warranty
+Disclaimers are considered to be included by reference in this
+License, but only as regards disclaiming warranties: any other
+implication that these Warranty Disclaimers may have is void and has
+no effect on the meaning of this License.
+
+@item
+VERBATIM COPYING
+
+You may copy and distribute the Document in any medium, either
+commercially or noncommercially, provided that this License, the
+copyright notices, and the license notice saying this License applies
+to the Document are reproduced in all copies, and that you add no other
+conditions whatsoever to those of this License. You may not use
+technical measures to obstruct or control the reading or further
+copying of the copies you make or distribute. However, you may accept
+compensation in exchange for copies. If you distribute a large enough
+number of copies you must also follow the conditions in section 3.
+
+You may also lend copies, under the same conditions stated above, and
+you may publicly display copies.
+
+@item
+COPYING IN QUANTITY
+
+If you publish printed copies (or copies in media that commonly have
+printed covers) of the Document, numbering more than 100, and the
+Document's license notice requires Cover Texts, you must enclose the
+copies in covers that carry, clearly and legibly, all these Cover
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
+the back cover. Both covers must also clearly and legibly identify
+you as the publisher of these copies. The front cover must present
+the full title with all words of the title equally prominent and
+visible. You may add other material on the covers in addition.
+Copying with changes limited to the covers, as long as they preserve
+the title of the Document and satisfy these conditions, can be treated
+as verbatim copying in other respects.
+
+If the required texts for either cover are too voluminous to fit
+legibly, you should put the first ones listed (as many as fit
+reasonably) on the actual cover, and continue the rest onto adjacent
+pages.
+
+If you publish or distribute Opaque copies of the Document numbering
+more than 100, you must either include a machine-readable Transparent
+copy along with each Opaque copy, or state in or with each Opaque copy
+a computer-network location from which the general network-using
+public has access to download using public-standard network protocols
+a complete Transparent copy of the Document, free of added material.
+If you use the latter option, you must take reasonably prudent steps,
+when you begin distribution of Opaque copies in quantity, to ensure
+that this Transparent copy will remain thus accessible at the stated
+location until at least one year after the last time you distribute an
+Opaque copy (directly or through your agents or retailers) of that
+edition to the public.
+
+It is requested, but not required, that you contact the authors of the
+Document well before redistributing any large number of copies, to give
+them a chance to provide you with an updated version of the Document.
+
+@item
+MODIFICATIONS
+
+You may copy and distribute a Modified Version of the Document under
+the conditions of sections 2 and 3 above, provided that you release
+the Modified Version under precisely this License, with the Modified
+Version filling the role of the Document, thus licensing distribution
+and modification of the Modified Version to whoever possesses a copy
+of it. In addition, you must do these things in the Modified Version:
+
+@enumerate A
+@item
+Use in the Title Page (and on the covers, if any) a title distinct
+from that of the Document, and from those of previous versions
+(which should, if there were any, be listed in the History section
+of the Document). You may use the same title as a previous version
+if the original publisher of that version gives permission.
+
+@item
+List on the Title Page, as authors, one or more persons or entities
+responsible for authorship of the modifications in the Modified
+Version, together with at least five of the principal authors of the
+Document (all of its principal authors, if it has fewer than five),
+unless they release you from this requirement.
+
+@item
+State on the Title page the name of the publisher of the
+Modified Version, as the publisher.
+
+@item
+Preserve all the copyright notices of the Document.
+
+@item
+Add an appropriate copyright notice for your modifications
+adjacent to the other copyright notices.
+
+@item
+Include, immediately after the copyright notices, a license notice
+giving the public permission to use the Modified Version under the
+terms of this License, in the form shown in the Addendum below.
+
+@item
+Preserve in that license notice the full lists of Invariant Sections
+and required Cover Texts given in the Document's license notice.
+
+@item
+Include an unaltered copy of this License.
+
+@item
+Preserve the section Entitled ``History'', Preserve its Title, and add
+to it an item stating at least the title, year, new authors, and
+publisher of the Modified Version as given on the Title Page. If
+there is no section Entitled ``History'' in the Document, create one
+stating the title, year, authors, and publisher of the Document as
+given on its Title Page, then add an item describing the Modified
+Version as stated in the previous sentence.
+
+@item
+Preserve the network location, if any, given in the Document for
+public access to a Transparent copy of the Document, and likewise
+the network locations given in the Document for previous versions
+it was based on. These may be placed in the ``History'' section.
+You may omit a network location for a work that was published at
+least four years before the Document itself, or if the original
+publisher of the version it refers to gives permission.
+
+@item
+For any section Entitled ``Acknowledgements'' or ``Dedications'', Preserve
+the Title of the section, and preserve in the section all the
+substance and tone of each of the contributor acknowledgements and/or
+dedications given therein.
+
+@item
+Preserve all the Invariant Sections of the Document,
+unaltered in their text and in their titles. Section numbers
+or the equivalent are not considered part of the section titles.
+
+@item
+Delete any section Entitled ``Endorsements''. Such a section
+may not be included in the Modified Version.
+
+@item
+Do not retitle any existing section to be Entitled ``Endorsements'' or
+to conflict in title with any Invariant Section.
+
+@item
+Preserve any Warranty Disclaimers.
+@end enumerate
+
+If the Modified Version includes new front-matter sections or
+appendices that qualify as Secondary Sections and contain no material
+copied from the Document, you may at your option designate some or all
+of these sections as invariant. To do this, add their titles to the
+list of Invariant Sections in the Modified Version's license notice.
+These titles must be distinct from any other section titles.
+
+You may add a section Entitled ``Endorsements'', provided it contains
+nothing but endorsements of your Modified Version by various
+parties---for example, statements of peer review or that the text has
+been approved by an organization as the authoritative definition of a
+standard.
+
+You may add a passage of up to five words as a Front-Cover Text, and a
+passage of up to 25 words as a Back-Cover Text, to the end of the list
+of Cover Texts in the Modified Version. Only one passage of
+Front-Cover Text and one of Back-Cover Text may be added by (or
+through arrangements made by) any one entity. If the Document already
+includes a cover text for the same cover, previously added by you or
+by arrangement made by the same entity you are acting on behalf of,
+you may not add another; but you may replace the old one, on explicit
+permission from the previous publisher that added the old one.
+
+The author(s) and publisher(s) of the Document do not by this License
+give permission to use their names for publicity for or to assert or
+imply endorsement of any Modified Version.
+
+@item
+COMBINING DOCUMENTS
+
+You may combine the Document with other documents released under this
+License, under the terms defined in section 4 above for modified
+versions, provided that you include in the combination all of the
+Invariant Sections of all of the original documents, unmodified, and
+list them all as Invariant Sections of your combined work in its
+license notice, and that you preserve all their Warranty Disclaimers.
+
+The combined work need only contain one copy of this License, and
+multiple identical Invariant Sections may be replaced with a single
+copy. If there are multiple Invariant Sections with the same name but
+different contents, make the title of each such section unique by
+adding at the end of it, in parentheses, the name of the original
+author or publisher of that section if known, or else a unique number.
+Make the same adjustment to the section titles in the list of
+Invariant Sections in the license notice of the combined work.
+
+In the combination, you must combine any sections Entitled ``History''
+in the various original documents, forming one section Entitled
+``History''; likewise combine any sections Entitled ``Acknowledgements'',
+and any sections Entitled ``Dedications''. You must delete all
+sections Entitled ``Endorsements.''
+
+@item
+COLLECTIONS OF DOCUMENTS
+
+You may make a collection consisting of the Document and other documents
+released under this License, and replace the individual copies of this
+License in the various documents with a single copy that is included in
+the collection, provided that you follow the rules of this License for
+verbatim copying of each of the documents in all other respects.
+
+You may extract a single document from such a collection, and distribute
+it individually under this License, provided you insert a copy of this
+License into the extracted document, and follow this License in all
+other respects regarding verbatim copying of that document.
+
+@item
+AGGREGATION WITH INDEPENDENT WORKS
+
+A compilation of the Document or its derivatives with other separate
+and independent documents or works, in or on a volume of a storage or
+distribution medium, is called an ``aggregate'' if the copyright
+resulting from the compilation is not used to limit the legal rights
+of the compilation's users beyond what the individual works permit.
+When the Document is included in an aggregate, this License does not
+apply to the other works in the aggregate which are not themselves
+derivative works of the Document.
+
+If the Cover Text requirement of section 3 is applicable to these
+copies of the Document, then if the Document is less than one half of
+the entire aggregate, the Document's Cover Texts may be placed on
+covers that bracket the Document within the aggregate, or the
+electronic equivalent of covers if the Document is in electronic form.
+Otherwise they must appear on printed covers that bracket the whole
+aggregate.
+
+@item
+TRANSLATION
+
+Translation is considered a kind of modification, so you may
+distribute translations of the Document under the terms of section 4.
+Replacing Invariant Sections with translations requires special
+permission from their copyright holders, but you may include
+translations of some or all Invariant Sections in addition to the
+original versions of these Invariant Sections. You may include a
+translation of this License, and all the license notices in the
+Document, and any Warranty Disclaimers, provided that you also include
+the original English version of this License and the original versions
+of those notices and disclaimers. In case of a disagreement between
+the translation and the original version of this License or a notice
+or disclaimer, the original version will prevail.
+
+If a section in the Document is Entitled ``Acknowledgements'',
+``Dedications'', or ``History'', the requirement (section 4) to Preserve
+its Title (section 1) will typically require changing the actual
+title.
+
+@item
+TERMINATION
+
+You may not copy, modify, sublicense, or distribute the Document
+except as expressly provided under this License. Any attempt
+otherwise to copy, modify, sublicense, or distribute it is void, and
+will automatically terminate your rights under this License.
+
+However, if you cease all violation of this License, then your license
+from a particular copyright holder is reinstated (a) provisionally,
+unless and until the copyright holder explicitly and finally
+terminates your license, and (b) permanently, if the copyright holder
+fails to notify you of the violation by some reasonable means prior to
+60 days after the cessation.
+
+Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, receipt of a copy of some or all of the same material does
+not give you any rights to use it.
+
+@item
+FUTURE REVISIONS OF THIS LICENSE
+
+The Free Software Foundation may publish new, revised versions
+of the GNU Free Documentation License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns. See
+@uref{http://www.gnu.org/copyleft/}.
+
+Each version of the License is given a distinguishing version number.
+If the Document specifies that a particular numbered version of this
+License ``or any later version'' applies to it, you have the option of
+following the terms and conditions either of that specified version or
+of any later version that has been published (not as a draft) by the
+Free Software Foundation. If the Document does not specify a version
+number of this License, you may choose any version ever published (not
+as a draft) by the Free Software Foundation. If the Document
+specifies that a proxy can decide which future versions of this
+License can be used, that proxy's public statement of acceptance of a
+version permanently authorizes you to choose that version for the
+Document.
+
+@item
+RELICENSING
+
+``Massive Multiauthor Collaboration Site'' (or ``MMC Site'') means any
+World Wide Web server that publishes copyrightable works and also
+provides prominent facilities for anybody to edit those works. A
+public wiki that anybody can edit is an example of such a server. A
+``Massive Multiauthor Collaboration'' (or ``MMC'') contained in the
+site means any set of copyrightable works thus published on the MMC
+site.
+
+``CC-BY-SA'' means the Creative Commons Attribution-Share Alike 3.0
+license published by Creative Commons Corporation, a not-for-profit
+corporation with a principal place of business in San Francisco,
+California, as well as future copyleft versions of that license
+published by that same organization.
+
+``Incorporate'' means to publish or republish a Document, in whole or
+in part, as part of another Document.
+
+An MMC is ``eligible for relicensing'' if it is licensed under this
+License, and if all works that were first published under this License
+somewhere other than this MMC, and subsequently incorporated in whole
+or in part into the MMC, (1) had no cover texts or invariant sections,
+and (2) were thus incorporated prior to November 1, 2008.
+
+The operator of an MMC Site may republish an MMC contained in the site
+under CC-BY-SA on the same site at any time before August 1, 2009,
+provided the MMC is eligible for relicensing.
+
+@end enumerate
+
+@page
+@heading ADDENDUM: How to use this License for your documents
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and
+license notices just after the title page:
+
+@smallexample
+@group
+ Copyright (C) @var{year} @var{your name}.
+ Permission is granted to copy, distribute and/or modify this document
+ under the terms of the GNU Free Documentation License, Version 1.3
+ or any later version published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
+ Texts. A copy of the license is included in the section entitled ``GNU
+ Free Documentation License''.
+@end group
+@end smallexample
+
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
+replace the ``with@dots{}Texts.'' line with this:
+
+@smallexample
+@group
+ with the Invariant Sections being @var{list their titles}, with
+ the Front-Cover Texts being @var{list}, and with the Back-Cover Texts
+ being @var{list}.
+@end group
+@end smallexample
+
+If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License,
+to permit their use in free software.
+
+@c Local Variables:
+@c ispell-local-pdict: "ispell-dict"
+@c End:
+
diff --git a/doc/sample.wgetrc b/doc/sample.wgetrc
new file mode 100644
index 0000000..064ceb1
--- /dev/null
+++ b/doc/sample.wgetrc
@@ -0,0 +1,125 @@
+###
+### Sample Wget initialization file .wgetrc
+###
+
+## You can use this file to change the default behaviour of wget or to
+## avoid having to type many many command-line options. This file does
+## not contain a comprehensive list of commands -- look at the manual
+## to find out what you can put into this file.
+##
+## The Wget initialization file can reside in /usr/local/etc/wgetrc
+## (global, for all users) or $HOME/.wgetrc (for a single user).
+##
+## To use the settings in this file, you will have to uncomment them,
+## as well as change them, in most cases, as the values on the
+## commented-out lines are the default values (e.g. "off").
+
+
+##
+## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+## Think well before you change them, since they may reduce wget's
+## functionality, and make it behave contrary to the documentation:
+##
+
+# You can set a retrieval quota (handy for reining in inexperienced
+# users) by specifying a value optionally followed by 'K' (kilobytes)
+# or 'M' (megabytes). The default quota is unlimited.
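+# (For example, "quota = 5M" would stop retrieving once five
+# megabytes had been downloaded in total.)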
+#quota = inf
+
+# You can lower (or raise) the default number of retries when
+# downloading a file (default is 20).
+#tries = 20
+
+# Lowering the maximum depth of the recursive retrieval is handy to
+# prevent newbies from going too "deep" when they unwittingly start
+# the recursive retrieval. The default is 5.
+#reclevel = 5
+
+# By default Wget uses "passive FTP" transfer where the client
+# initiates the data connection to the server rather than the other
+# way around. That is required on systems behind NAT where the client
+# computer cannot be easily reached from the Internet. However, some
+# firewall software explicitly supports active FTP and in fact has
+# problems with passive transfers. If you are in such an
+# environment, use "passive_ftp = off" to revert to active FTP.
+#passive_ftp = off
+
+# The "wait" command below makes Wget wait between every connection.
+# If, instead, you want Wget to wait only between retries of failed
+# downloads, set waitretry to the maximum number of seconds to wait (Wget
+# will use "linear backoff", waiting 1 second after the first failure
+# on a file, 2 seconds after the second failure, etc. up to this max).
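+# (With "waitretry = 10", for instance, the successive waits would be
+# 1, 2, ... up to 10 seconds -- at most 55 seconds in total over the
+# first ten retries of a single file.)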
+#waitretry = 10
+
+
+##
+## Local settings (for a user to set in his $HOME/.wgetrc). It is
+## *highly* undesirable to put these settings in the global file, since
+## they are potentially dangerous to "normal" users.
+##
+## Even when setting them in your own ~/.wgetrc, you should know
+## what you are doing first.
+##
+
+# Set this to on to use timestamping by default:
+#timestamping = off
+
+# It is a good idea to make Wget send your email address in a `From:'
+# header with your request (so that server administrators can contact
+# you in case of errors). Wget does *not* send `From:' by default.
+#header = From: Your Name <username@site.domain>
+
+# You can set up other headers, like Accept-Language. Accept-Language
+# is *not* sent by default.
+#header = Accept-Language: en
+
+# You can set the default proxies for Wget to use for http, https, and ftp.
+# They will override the value in the environment.
+#https_proxy = http://proxy.yoyodyne.com:18023/
+#http_proxy = http://proxy.yoyodyne.com:18023/
+#ftp_proxy = http://proxy.yoyodyne.com:18023/
+
+# If you do not want to use a proxy at all, set this to off.
+#use_proxy = on
+
+# You can customize how retrieval progress is displayed. Valid
+# options are default, binary, mega and micro.
+#dot_style = default
+
+# Setting this to off makes Wget not download /robots.txt. Be sure to
+# know *exactly* what /robots.txt is and how it is used before changing
+# the default!
+#robots = on
+
+# It can be useful to make Wget wait between connections. Set this to
+# the number of seconds you want Wget to wait.
+#wait = 0
+
+# You can force the creation of a directory structure, even if a
+# single file is being retrieved, by setting this to on.
+#dirstruct = off
+
+# You can turn on recursive retrieving by default (don't do this if
+# you are not sure you know what it means) by setting this to on.
+#recursive = off
+
+# To always back up file X as X.orig before converting its links (due
+# to -k / --convert-links / convert_links = on having been specified),
+# set this variable to on:
+#backup_converted = off
+
+# To have Wget follow FTP links from HTML files by default, set this
+# to on:
+#follow_ftp = off
+
+# To try IPv6 addresses first:
+#prefer-family = IPv6
+
+# Set default IRI support state
+#iri = off
+
+# Force the default system encoding
+#locale = UTF-8
+
+# Force the default remote server encoding
+#remoteencoding = UTF-8
diff --git a/doc/sample.wgetrc.munged_for_texi_inclusion b/doc/sample.wgetrc.munged_for_texi_inclusion
new file mode 100644
index 0000000..9b34de6
--- /dev/null
+++ b/doc/sample.wgetrc.munged_for_texi_inclusion
@@ -0,0 +1,125 @@
+###
+### Sample Wget initialization file .wgetrc
+###
+
+## You can use this file to change the default behaviour of wget or to
+## avoid having to type many many command-line options. This file does
+## not contain a comprehensive list of commands -- look at the manual
+## to find out what you can put into this file.
+##
+## The Wget initialization file can reside in /usr/local/etc/wgetrc
+## (global, for all users) or $HOME/.wgetrc (for a single user).
+##
+## To use the settings in this file, you will have to uncomment them,
+## as well as change them, in most cases, as the values on the
+## commented-out lines are the default values (e.g. "off").
+
+
+##
+## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+## Think well before you change them, since they may reduce wget's
+## functionality, and make it behave contrary to the documentation:
+##
+
+# You can set a retrieval quota (handy for reining in inexperienced
+# users) by specifying a value optionally followed by 'K' (kilobytes)
+# or 'M' (megabytes). The default quota is unlimited.
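+# (For example, "quota = 5M" would stop retrieving once five
+# megabytes had been downloaded in total.)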
+#quota = inf
+
+# You can lower (or raise) the default number of retries when
+# downloading a file (default is 20).
+#tries = 20
+
+# Lowering the maximum depth of the recursive retrieval is handy to
+# prevent newbies from going too "deep" when they unwittingly start
+# the recursive retrieval. The default is 5.
+#reclevel = 5
+
+# By default Wget uses "passive FTP" transfer where the client
+# initiates the data connection to the server rather than the other
+# way around. That is required on systems behind NAT where the client
+# computer cannot be easily reached from the Internet. However, some
+# firewall software explicitly supports active FTP and in fact has
+# problems with passive transfers. If you are in such an
+# environment, use "passive_ftp = off" to revert to active FTP.
+#passive_ftp = off
+
+# The "wait" command below makes Wget wait between every connection.
+# If, instead, you want Wget to wait only between retries of failed
+# downloads, set waitretry to the maximum number of seconds to wait (Wget
+# will use "linear backoff", waiting 1 second after the first failure
+# on a file, 2 seconds after the second failure, etc. up to this max).
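+# (With "waitretry = 10", for instance, the successive waits would be
+# 1, 2, ... up to 10 seconds -- at most 55 seconds in total over the
+# first ten retries of a single file.)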
+#waitretry = 10
+
+
+##
+## Local settings (for a user to set in his $HOME/.wgetrc). It is
+## *highly* undesirable to put these settings in the global file, since
+## they are potentially dangerous to "normal" users.
+##
+## Even when setting them in your own ~/.wgetrc, you should know
+## what you are doing first.
+##
+
+# Set this to on to use timestamping by default:
+#timestamping = off
+
+# It is a good idea to make Wget send your email address in a `From:'
+# header with your request (so that server administrators can contact
+# you in case of errors). Wget does *not* send `From:' by default.
+#header = From: Your Name <username@@site.domain>
+
+# You can set up other headers, like Accept-Language. Accept-Language
+# is *not* sent by default.
+#header = Accept-Language: en
+
+# You can set the default proxies for Wget to use for http, https, and ftp.
+# They will override the value in the environment.
+#https_proxy = http://proxy.yoyodyne.com:18023/
+#http_proxy = http://proxy.yoyodyne.com:18023/
+#ftp_proxy = http://proxy.yoyodyne.com:18023/
+
+# If you do not want to use a proxy at all, set this to off.
+#use_proxy = on
+
+# You can customize how retrieval progress is displayed. Valid
+# options are default, binary, mega and micro.
+#dot_style = default
+
+# Setting this to off makes Wget not download /robots.txt. Be sure to
+# know *exactly* what /robots.txt is and how it is used before changing
+# the default!
+#robots = on
+
+# It can be useful to make Wget wait between connections. Set this to
+# the number of seconds you want Wget to wait.
+#wait = 0
+
+# You can force the creation of a directory structure, even if a
+# single file is being retrieved, by setting this to on.
+#dirstruct = off
+
+# You can turn on recursive retrieving by default (don't do this if
+# you are not sure you know what it means) by setting this to on.
+#recursive = off
+
+# To always back up file X as X.orig before converting its links (due
+# to -k / --convert-links / convert_links = on having been specified),
+# set this variable to on:
+#backup_converted = off
+
+# To have Wget follow FTP links from HTML files by default, set this
+# to on:
+#follow_ftp = off
+
+# To try IPv6 addresses first:
+#prefer-family = IPv6
+
+# Set default IRI support state
+#iri = off
+
+# Force the default system encoding
+#locale = UTF-8
+
+# Force the default remote server encoding
+#remoteencoding = UTF-8
diff --git a/doc/stamp-vti b/doc/stamp-vti
new file mode 100644
index 0000000..0ec97f0
--- /dev/null
+++ b/doc/stamp-vti
@@ -0,0 +1,4 @@
+@set UPDATED 6 August 2011
+@set UPDATED-MONTH August 2011
+@set EDITION 1.13.4
+@set VERSION 1.13.4
diff --git a/doc/texi2pod.pl b/doc/texi2pod.pl
new file mode 100755
index 0000000..9c0e94c
--- /dev/null
+++ b/doc/texi2pod.pl
@@ -0,0 +1,500 @@
+#! /usr/bin/env perl
+
+# Copyright (C) 1999, 2000, 2001, 2003, 2010 Free Software Foundation, Inc.
+
+# This file is part of GCC.
+
+# GCC is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3, or (at your option)
+# any later version.
+
+# GCC is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with GCC; see the file COPYING. If not, write to
+# the Free Software Foundation, 51 Franklin Street, Fifth Floor,
+# Boston MA 02110-1301, USA.
+
+# This does trivial (and I mean _trivial_) conversion of Texinfo
+# markup to Perl POD format. It's intended to be used to extract
+# something suitable for a manpage from a Texinfo document.
+
+$output = 0;
+$skipping = 0;
+%sects = ();
+$section = "";
+@icstack = ();
+@endwstack = ();
+@skstack = ();
+@instack = ();
+$shift = "";
+%defs = ();
+$fnno = 1;
+$inf = "";
+$ibase = "";
+@ipath = ();
+
+while ($_ = shift) {
+ if (/^-D(.*)$/) {
+ if ($1 ne "") {
+ $flag = $1;
+ } else {
+ $flag = shift;
+ }
+ $value = "";
+ ($flag, $value) = ($flag =~ /^([^=]+)(?:=(.+))?/);
+ die "no flag specified for -D\n"
+ unless $flag ne "";
+ die "flags may only contain letters, digits, hyphens, dashes and underscores\n"
+ unless $flag =~ /^[a-zA-Z0-9_-]+$/;
+ $defs{$flag} = $value;
+ } elsif (/^-I(.*)$/) {
+ if ($1 ne "") {
+ $flag = $1;
+ } else {
+ $flag = shift;
+ }
+ push (@ipath, $flag);
+ } elsif (/^-/) {
+ usage();
+ } else {
+ $in = $_, next unless defined $in;
+ $out = $_, next unless defined $out;
+ usage();
+ }
+}
+
+if (defined $in) {
+ $inf = gensym();
+ open($inf, "<$in") or die "opening \"$in\": $!\n";
+ $ibase = $1 if $in =~ m|^(.+)/[^/]+$|;
+} else {
+ $inf = \*STDIN;
+}
+
+if (defined $out) {
+ open(STDOUT, ">$out") or die "opening \"$out\": $!\n";
+}
+
+while(defined $inf) {
+while(<$inf>) {
+ # Certain commands are discarded without further processing.
+ /^\@(?:
+ [a-z]+index # @*index: useful only in complete manual
+ |need # @need: useful only in printed manual
+ |(?:end\s+)?group # @group .. @end group: ditto
+ |page # @page: ditto
+ |node # @node: useful only in .info file
+ |(?:end\s+)?ifnottex # @ifnottex .. @end ifnottex: use contents
+ )\b/x and next;
+
+ chomp;
+
+ # Look for filename and title markers.
+ /^\@setfilename\s+([^.]+)/ and $fn = $1, next;
+ /^\@settitle\s+([^.]+)/ and $tl = postprocess($1), next;
+
+ # Identify a man title but keep only the one we are interested in.
+ /^\@c\s+man\s+title\s+([A-Za-z0-9-]+)\s+(.+)/ and do {
+ if (exists $defs{$1}) {
+ $fn = $1;
+ $tl = postprocess($2);
+ }
+ next;
+ };
+
+ # Look for blocks surrounded by @c man begin SECTION ... @c man end.
+ # This really oughta be @ifman ... @end ifman and the like, but such
+ # would require rev'ing all other Texinfo translators.
+ /^\@c\s+man\s+begin\s+([A-Z]+)\s+([A-Za-z0-9-]+)/ and do {
+ $output = 1 if exists $defs{$2};
+ $sect = $1;
+ next;
+ };
+ /^\@c\s+man\s+begin\s+([A-Z]+)/ and $sect = $1, $output = 1, next;
+ /^\@c\s+man\s+end/ and do {
+ $sects{$sect} = "" unless exists $sects{$sect};
+ $sects{$sect} .= postprocess($section);
+ $section = "";
+ $output = 0;
+ next;
+ };
+
+ # handle variables
+ /^\@set\s+([a-zA-Z0-9_-]+)\s*(.*)$/ and do {
+ $defs{$1} = $2;
+ next;
+ };
+ /^\@clear\s+([a-zA-Z0-9_-]+)/ and do {
+ delete $defs{$1};
+ next;
+ };
+
+ next unless $output;
+
+ # Discard comments. (Can't do it above, because then we'd never see
+ # @c man lines.)
+ /^\@c\b/ and next;
+
+ # End-block handler goes up here because it needs to operate even
+ # if we are skipping.
+ /^\@end\s+([a-z]+)/ and do {
+ # Ignore @end foo, where foo is not an operation which may
+ # cause us to skip, if we are presently skipping.
+ my $ended = $1;
+ next if $skipping && $ended !~ /^(?:ifset|ifclear|ignore|menu|iftex|copying)$/;
+
+ die "\@end $ended without \@$ended at line $.\n" unless defined $endw;
+ die "\@$endw ended by \@end $ended at line $.\n" unless $ended eq $endw;
+
+ $endw = pop @endwstack;
+
+ if ($ended =~ /^(?:ifset|ifclear|ignore|menu|iftex)$/) {
+ $skipping = pop @skstack;
+ next;
+ } elsif ($ended =~ /^(?:example|smallexample|display)$/) {
+ $shift = "";
+ $_ = ""; # need a paragraph break
+ } elsif ($ended =~ /^(?:itemize|enumerate|[fv]?table)$/) {
+ $_ = "\n=back\n";
+ $ic = pop @icstack;
+ } elsif ($ended eq "multitable") {
+ $_ = "\n=back\n";
+ } else {
+ die "unknown command \@end $ended at line $.\n";
+ }
+ };
+
+ # We must handle commands which can cause skipping even while we
+ # are skipping, otherwise we will not process nested conditionals
+ # correctly.
+ /^\@ifset\s+([a-zA-Z0-9_-]+)/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = "ifset";
+ $skipping = 1 unless exists $defs{$1};
+ next;
+ };
+
+ /^\@ifclear\s+([a-zA-Z0-9_-]+)/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = "ifclear";
+ $skipping = 1 if exists $defs{$1};
+ next;
+ };
+
+ /^\@(ignore|menu|iftex|copying)\b/ and do {
+ push @endwstack, $endw;
+ push @skstack, $skipping;
+ $endw = $1;
+ $skipping = 1;
+ next;
+ };
+
+ next if $skipping;
+
+ # Character entities. First the ones that can be replaced by raw text
+ # or discarded outright:
+ s/\@copyright\{\}/(c)/g;
+ s/\@dots\{\}/.../g;
+ s/\@enddots\{\}/..../g;
+ s/\@([.!? ])/$1/g;
+ s/\@[:-]//g;
+ s/\@bullet(?:\{\})?/*/g;
+ s/\@TeX\{\}/TeX/g;
+ s/\@pounds\{\}/\#/g;
+ s/\@minus(?:\{\})?/-/g;
+ s/\\,/,/g;
+
+ # Now the ones that have to be replaced by special escapes
+ # (which will be turned back into text by unmunge())
+ # Replace @@ before @{ and @} in order to parse @samp{@@} correctly.
+ s/&/&amp;/g;
+ s/\@\@/&at;/g;
+ s/\@\{/&lbrace;/g;
+ s/\@\}/&rbrace;/g;
+ s/\@`\{(.)\}/&$1grave;/g;
+
+ # Inside a verbatim block, handle @var, @samp and @url specially.
+ if ($shift ne "") {
+ s/\@var\{([^\}]*)\}/<$1>/g;
+ s/\@samp\{([^\}]*)\}/"$1"/g;
+ s/\@url\{([^\}]*)\}/<$1>/g;
+ }
+
+ # POD doesn't interpret E<> inside a verbatim block.
+ if ($shift eq "") {
+ s/</&lt;/g;
+ s/>/&gt;/g;
+ } else {
+ s/</&LT;/g;
+ s/>/&GT;/g;
+ }
+
+ # Single line command handlers.
+
+ /^\@include\s+(.+)$/ and do {
+ push @instack, $inf;
+ $inf = gensym();
+ $file = postprocess($1);
+
+ # Try cwd and $ibase, then explicit -I paths.
+ $done = 0;
+ foreach $path ("", $ibase, @ipath) {
+ $mypath = $file;
+ $mypath = $path . "/" . $mypath if ($path ne "");
+ open($inf, "<" . $mypath) and ($done = 1, last);
+ }
+ die "cannot find $file" if !$done;
+ next;
+ };
+
+ /^\@(?:section|unnumbered|unnumberedsec|center|heading)\s+(.+)$/
+ and $_ = "\n=head2 $1\n";
+ /^\@subsection\s+(.+)$/
+ and $_ = "\n=head3 $1\n";
+ /^\@subsubsection\s+(.+)$/
+ and $_ = "\n=head4 $1\n";
+
+ # Block command handlers:
+ /^\@itemize(?:\s+(\@[a-z]+|\*|-))?/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ if (defined $1) {
+ $ic = $1;
+ } else {
+ $ic = '*';
+ }
+ $_ = "\n=over 4\n";
+ $endw = "itemize";
+ };
+
+ /^\@enumerate(?:\s+([a-zA-Z0-9]+))?/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ if (defined $1) {
+ $ic = $1 . ".";
+ } else {
+ $ic = "1.";
+ }
+ $_ = "\n=over 4\n";
+ $endw = "enumerate";
+ };
+
+ /^\@multitable\s.*/ and do {
+ push @endwstack, $endw;
+ $endw = "multitable";
+ $_ = "\n=over 4\n";
+ };
+
+ /^\@([fv]?table)\s+(\@[a-z]+)/ and do {
+ push @endwstack, $endw;
+ push @icstack, $ic;
+ $endw = $1;
+ $ic = $2;
+ $ic =~ s/\@(?:samp|strong|key|gcctabopt|env)/B/;
+ $ic =~ s/\@(?:code|kbd)/C/;
+ $ic =~ s/\@(?:dfn|var|emph|cite|i)/I/;
+ $ic =~ s/\@(?:file)/F/;
+ $ic =~ s/\@(?:asis)//;
+ $_ = "\n=over 4\n";
+ };
+
+ /^\@((?:small)?example|display)/ and do {
+ push @endwstack, $endw;
+ $endw = $1;
+ $shift = "\t";
+ $_ = ""; # need a paragraph break
+ };
+
+ /^\@item\s+(.*\S)\s*$/ and $endw eq "multitable" and do {
+ @columns = ();
+ for $column (split (/\s*\@tab\s*/, $1)) {
+ # @strong{...} is used as a @headitem work-alike
+ $column =~ s/^\@strong{(.*)}$/$1/;
+ push @columns, $column;
+ }
+ $_ = "\n=item ".join (" : ", @columns)."\n";
+ };
+
+ /^\@itemx?\s*(.+)?$/ and do {
+ if (defined $1) {
+ if ($ic) {
+ if ($endw eq "enumerate") {
+ $_ = "\n=item $ic $1\n";
+ $ic =~ s/(\d+)/$1 + 1/eg;
+ } else {
+ # Entity escapes prevent munging by the <>
+ # processing below.
+ $_ = "\n=item $ic\&LT;$1\&GT;\n";
+ }
+ } else {
+ $_ = "\n=item $1\n";
+ }
+ } else {
+ $_ = "\n=item $ic\n";
+ $ic =~ y/A-Ya-y/B-Zb-z/;
+ $ic =~ s/(\d+)/$1 + 1/eg;
+ }
+ };
+
+ $section .= $shift.$_."\n";
+}
+# End of current file.
+close($inf);
+$inf = pop @instack;
+}
+
+die "No filename or title\n" unless defined $fn && defined $tl;
+
+$sects{NAME} = "$fn \- $tl\n";
+$sects{FOOTNOTES} .= "=back\n" if exists $sects{FOOTNOTES};
+
+for $sect (qw(NAME SYNOPSIS DESCRIPTION OPTIONS ENVIRONMENT FILES
+ BUGS NOTES FOOTNOTES SEEALSO AUTHOR COPYRIGHT)) {
+ if(exists $sects{$sect}) {
+ $head = $sect;
+ $head =~ s/SEEALSO/SEE ALSO/;
+ print "=head1 $head\n\n";
+ print scalar unmunge ($sects{$sect});
+ print "\n";
+ }
+}
+
+sub usage
+{
+ die "usage: $0 [-D toggle...] [infile [outfile]]\n";
+}
+
+sub postprocess
+{
+ local $_ = $_[0];
+
+ # @value{foo} is replaced by whatever 'foo' is defined as.
+ while (m/(\@value\{([a-zA-Z0-9_-]+)\})/g) {
+ if (! exists $defs{$2}) {
+ print STDERR "Option $2 not defined\n";
+ s/\Q$1\E//;
+ } else {
+ $value = $defs{$2};
+ s/\Q$1\E/$value/;
+ }
+ }
+
+ # Formatting commands.
+ # Temporary escape for @r.
+ s/\@r\{([^\}]*)\}/R<$1>/g;
+ s/\@(?:dfn|var|emph|cite|i)\{([^\}]*)\}/I<$1>/g;
+ s/\@(?:code|kbd)\{([^\}]*)\}/C<$1>/g;
+ s/\@(?:samp|strong|key|option|env|command|b)\{([^\}]*)\}/B<$1>/g;
+ s/\@sc\{([^\}]*)\}/\U$1/g;
+ s/\@acronym\{([^\}]*)\}/\U$1/g;
+ s/\@file\{([^\}]*)\}/F<$1>/g;
+ s/\@w\{([^\}]*)\}/S<$1>/g;
+ s/\@(?:dmn|math)\{([^\}]*)\}/$1/g;
+ s/\@\///g;
+
+ # keep references of the form @ref{...}, print them bold
+ s/\@(?:ref)\{([^\}]*)\}/B<$1>/g;
+
+ # Change double single quotes to double quotes.
+ s/''/"/g;
+ s/``/"/g;
+
+ # Cross references are thrown away, as are @noindent and @refill.
+ # (@noindent is impossible in .pod, and @refill is unnecessary.)
+ # @* is also impossible in .pod; we discard it and any newline that
+ # follows it. Similarly, our macro @gol must be discarded.
+
+ s/\(?\@xref\{(?:[^\}]*)\}(?:[^.<]|(?:<[^<>]*>))*\.\)?//g;
+ s/\s+\(\@pxref\{(?:[^\}]*)\}\)//g;
+ s/;\s+\@pxref\{(?:[^\}]*)\}//g;
+ s/\@noindent\s*//g;
+ s/\@refill//g;
+ s/\@gol//g;
+ s/\@\*\s*\n?//g;
+
+ # Anchors are thrown away
+ s/\@anchor\{(?:[^\}]*)\}//g;
+
+ # @uref can take one, two, or three arguments, with different
+ # semantics each time. @url and @email are just like @uref with
+ # one argument, for our purposes.
+ s/\@(?:uref|url|email)\{([^\},]*)\}/&lt;B<$1>&gt;/g;
+ s/\@uref\{([^\},]*),([^\},]*)\}/$2 (C<$1>)/g;
+ s/\@uref\{([^\},]*),([^\},]*),([^\},]*)\}/$3/g;
+
+ # Handle gccoptlist here, so it can contain the above formatting
+ # commands.
+ s/\@gccoptlist\{([^\}]*)\}/B<$1>/g;
+
+ # Un-escape <> at this point.
+ s/&LT;/</g;
+ s/&GT;/>/g;
+
+ # Now un-nest all B<>, I<>, R<>. Theoretically we could have
+ # indefinitely deep nesting; in practice, one level suffices.
+ 1 while s/([BIR])<([^<>]*)([BIR])<([^<>]*)>/$1<$2>$3<$4>$1</g;
+
+ # Replace R<...> with bare ...; eliminate empty markup, B<>;
+ # shift white space at the ends of [BI]<...> expressions outside
+ # the expression.
+ s/R<([^<>]*)>/$1/g;
+ s/[BI]<>//g;
+ s/([BI])<(\s+)([^>]+)>/$2$1<$3>/g;
+ s/([BI])<([^>]+?)(\s+)>/$1<$2>$3/g;
+
+ # Extract footnotes. This has to be done after all other
+ # processing because otherwise the regexp will choke on formatting
+ # inside @footnote.
+ while (/\@footnote/g) {
+ s/\@footnote\{([^\}]+)\}/[$fnno]/;
+ add_footnote($1, $fnno);
+ $fnno++;
+ }
+
+ return $_;
+}
+
+sub unmunge
+{
+ # Replace escaped symbols with their equivalents.
+ local $_ = $_[0];
+
+ s/&(.)grave;/E<$1grave>/g;
+ s/&lt;/E<lt>/g;
+ s/&gt;/E<gt>/g;
+ s/&lbrace;/\{/g;
+ s/&rbrace;/\}/g;
+ s/&at;/\@/g;
+ s/&amp;/&/g;
+ return $_;
+}
+
+sub add_footnote
+{
+ unless (exists $sects{FOOTNOTES}) {
+ $sects{FOOTNOTES} = "\n=over 4\n\n";
+ }
+
+ $sects{FOOTNOTES} .= "=item $fnno.\n\n"; $fnno++;
+ $sects{FOOTNOTES} .= $_[0];
+ $sects{FOOTNOTES} .= "\n\n";
+}
+
+# stolen from Symbol.pm
+{
+ my $genseq = 0;
+ sub gensym
+ {
+ my $name = "GEN" . $genseq++;
+ my $ref = \*{$name};
+ delete $::{$name};
+ return $ref;
+ }
+}
diff --git a/doc/version.texi b/doc/version.texi
new file mode 100644
index 0000000..0ec97f0
--- /dev/null
+++ b/doc/version.texi
@@ -0,0 +1,4 @@
+@set UPDATED 6 August 2011
+@set UPDATED-MONTH August 2011
+@set EDITION 1.13.4
+@set VERSION 1.13.4
diff --git a/doc/wget.info b/doc/wget.info
new file mode 100644
index 0000000..d757e71
--- /dev/null
+++ b/doc/wget.info
@@ -0,0 +1,4556 @@
+This is wget.info, produced by makeinfo version 4.13 from wget.texi.
+
+INFO-DIR-SECTION Network Applications
+START-INFO-DIR-ENTRY
+* Wget: (wget). The non-interactive network downloader.
+END-INFO-DIR-ENTRY
+
+ This file documents the GNU Wget utility for downloading network
+data.
+
+ Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
+2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
+
+ Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no
+Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
+copy of the license is included in the section entitled "GNU Free
+Documentation License".
+
+
+File: wget.info, Node: Top, Next: Overview, Prev: (dir), Up: (dir)
+
+Wget 1.13.4
+***********
+
+This file documents the GNU Wget utility for downloading network data.
+
+ Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
+2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
+
+ Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no
+Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
+copy of the license is included in the section entitled "GNU Free
+Documentation License".
+
+* Menu:
+
+* Overview:: Features of Wget.
+* Invoking:: Wget command-line arguments.
+* Recursive Download:: Downloading interlinked pages.
+* Following Links:: The available methods of chasing links.
+* Time-Stamping:: Mirroring according to time-stamps.
+* Startup File:: Wget's initialization file.
+* Examples:: Examples of usage.
+* Various:: The stuff that doesn't fit anywhere else.
+* Appendices:: Some useful references.
+* Copying this manual:: You may give out copies of this manual.
+* Concept Index:: Topics covered by this manual.
+
+
+File: wget.info, Node: Overview, Next: Invoking, Prev: Top, Up: Top
+
+1 Overview
+**********
+
+GNU Wget is a free utility for non-interactive download of files from
+the Web. It supports HTTP, HTTPS, and FTP protocols, as well as
+retrieval through HTTP proxies.
+
+ This chapter is a partial overview of Wget's features.
+
+ * Wget is non-interactive, meaning that it can work in the
+ background, while the user is not logged on. This allows you to
+ start a retrieval and disconnect from the system, letting Wget
+ finish the work. By contrast, most Web browsers require the user's
+ constant presence, which can be a great hindrance when
+ transferring a lot of data.
+
+ * Wget can follow links in HTML, XHTML, and CSS pages, to create
+ local versions of remote web sites, fully recreating the directory
+ structure of the original site. This is sometimes referred to as
+ "recursive downloading." While doing that, Wget respects the Robot
+ Exclusion Standard (`/robots.txt'). Wget can be instructed to
+ convert the links in downloaded files to point at the local files,
+ for offline viewing.
+
+ * File name wildcard matching and recursive mirroring of directories
+ are available when retrieving via FTP. Wget can read the
+ time-stamp information given by both HTTP and FTP servers, and
+ store it locally. Thus Wget can see if the remote file has
+ changed since last retrieval, and automatically retrieve the new
+ version if it has. This makes Wget suitable for mirroring of FTP
+ sites, as well as home pages.
+
+ * Wget has been designed for robustness over slow or unstable network
+ connections; if a download fails due to a network problem, it will
+ keep retrying until the whole file has been retrieved. If the
+ server supports regetting, it will instruct the server to continue
+ the download from where it left off.
+
+ * Wget supports proxy servers, which can lighten the network load,
+ speed up retrieval and provide access behind firewalls. Wget uses
+ passive FTP downloading by default, active FTP being an option.
+
+ * Wget supports IP version 6, the next generation of IP. IPv6 is
+ autodetected at compile-time, and can be disabled at either build
+ or run time. Binaries built with IPv6 support work well in both
+ IPv4-only and dual family environments.
+
+ * Built-in features offer mechanisms to tune which links you wish to
+ follow (*note Following Links::).
+
+ * The progress of individual downloads is traced using a progress
+ gauge. Interactive downloads are tracked using a
+ "thermometer"-style gauge, whereas non-interactive ones are traced
+ with dots, each dot representing a fixed amount of data received
+ (1KB by default). Either gauge can be customized to your
+ preferences.
+
+ * Most of the features are fully configurable, either through
+ command line options, or via the initialization file `.wgetrc'
+ (*note Startup File::). Wget allows you to define "global"
+ startup files (`/usr/local/etc/wgetrc' by default) for site
+ settings. You can also specify the location of a startup file with
+ the `--config' option.
+
+ * Finally, GNU Wget is free software. This means that everyone may
+ use it, redistribute it and/or modify it under the terms of the
+ GNU General Public License, as published by the Free Software
+ Foundation (see the file `COPYING' that came with GNU Wget, for
+ details).
+
+
+File: wget.info, Node: Invoking, Next: Recursive Download, Prev: Overview, Up: Top
+
+2 Invoking
+**********
+
+By default, Wget is very simple to invoke. The basic syntax is:
+
+ wget [OPTION]... [URL]...
+
+ Wget will simply download all the URLs specified on the command
+line. URL is a "Uniform Resource Locator", as defined below.
+
+ However, you may wish to change some of the default parameters of
+Wget. You can do it two ways: permanently, adding the appropriate
+command to `.wgetrc' (*note Startup File::), or specifying it on the
+command line.
+
+* Menu:
+
+* URL Format::
+* Option Syntax::
+* Basic Startup Options::
+* Logging and Input File Options::
+* Download Options::
+* Directory Options::
+* HTTP Options::
+* HTTPS (SSL/TLS) Options::
+* FTP Options::
+* Recursive Retrieval Options::
+* Recursive Accept/Reject Options::
+* Exit Status::
+
+
+File: wget.info, Node: URL Format, Next: Option Syntax, Prev: Invoking, Up: Invoking
+
+2.1 URL Format
+==============
+
+"URL" is an acronym for Uniform Resource Locator. A uniform resource
+locator is a compact string representation for a resource available via
+the Internet. Wget recognizes the URL syntax as per RFC1738. This is
+the most widely used form (square brackets denote optional parts):
+
+ http://host[:port]/directory/file
+ ftp://host[:port]/directory/file
+
+ You can also encode your username and password within a URL:
+
+ ftp://user:password@host/path
+ http://user:password@host/path
+
+ Either USER or PASSWORD, or both, may be left out. If you leave out
+either the HTTP username or password, no authentication will be sent.
+If you leave out the FTP username, `anonymous' will be used. If you
+leave out the FTP password, your email address will be supplied as a
+default password.(1)
+
+ *Important Note*: if you specify a password-containing URL on the
+command line, the username and password will be plainly visible to all
+users on the system, by way of `ps'. On multi-user systems, this is a
+big security risk. To work around it, use `wget -i -' and feed the
+URLs to Wget's standard input, each on a separate line, terminated by
+`C-d'.
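+
+ For example, with a shell whose `echo' is a built-in (so that the
+ URL does not itself appear in `ps'), you could use something like:
+
+     echo 'http://user:password@host/path' | wget -i -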
+
+ You can encode unsafe characters in a URL as `%xy', `xy' being the
+hexadecimal representation of the character's ASCII value. Some common
+unsafe characters include `%' (quoted as `%25'), `:' (quoted as `%3A'),
+and `@' (quoted as `%40'). Refer to RFC1738 for a comprehensive list
+of unsafe characters.
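+
+ For instance, to pass a password containing `@', such as `p@ss',
+ quote the `@' as `%40':
+
+     ftp://user:p%40ss@host/path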
+
+ Wget also supports the `type' feature for FTP URLs. By default, FTP
+documents are retrieved in the binary mode (type `i'), which means that
+they are downloaded unchanged. Another useful mode is the `a'
+("ASCII") mode, which converts the line delimiters between the
+different operating systems, and is thus useful for text files. Here
+is an example:
+
+ ftp://host/directory/file;type=a
+
+ Two alternative variants of URL specification are also supported,
+because of historical (hysterical?) reasons and their widespread use.
+
+ FTP-only syntax (supported by `NcFTP'):
+ host:/dir/file
+
+ HTTP-only syntax (introduced by `Netscape'):
+ host[:port]/dir/file
+
+ These two alternative forms are deprecated, and may cease being
+supported in the future.
+
+ If you do not understand the difference between these notations, or
+do not know which one to use, just use the plain ordinary format you use
+with your favorite browser, like `Lynx' or `Netscape'.
+
+ ---------- Footnotes ----------
+
+ (1) If you have a `.netrc' file in your home directory, password
+will also be searched for there.
+
+
+File: wget.info, Node: Option Syntax, Next: Basic Startup Options, Prev: URL Format, Up: Invoking
+
+2.2 Option Syntax
+=================
+
+Since Wget uses GNU getopt to process command-line arguments, every
+option has a long form along with the short one. Long options are more
+convenient to remember, but take time to type. You may freely mix
+different option styles, or specify options after the command-line
+arguments. Thus you may write:
+
+ wget -r --tries=10 http://fly.srk.fer.hr/ -o log
+
+ The space between the option accepting an argument and the argument
+may be omitted. Instead of `-o log' you can write `-olog'.
+
+ You may put several options that do not require arguments together,
+like:
+
+ wget -drc URL
+
+ This is completely equivalent to:
+
+ wget -d -r -c URL
+
+ Since the options can be specified after the arguments, you may
+terminate them with `--'. So the following will try to download URL
+`-x', reporting failure to `log':
+
+ wget -o log -- -x
+
+ The options that accept comma-separated lists all respect the
+convention that specifying an empty list clears its value. This can be
+useful to clear the `.wgetrc' settings. For instance, if your `.wgetrc'
+sets `exclude_directories' to `/cgi-bin', the following example will
+first reset it, and then set it to exclude `/~nobody' and `/~somebody'.
+You can also clear the lists in `.wgetrc' (*note Wgetrc Syntax::).
+
+ wget -X '' -X /~nobody,/~somebody
+
+ Most options that do not accept arguments are "boolean" options, so
+named because their state can be captured with a yes-or-no ("boolean")
+variable. For example, `--follow-ftp' tells Wget to follow FTP links
+from HTML files and, on the other hand, `--no-glob' tells it not to
+perform file globbing on FTP URLs. A boolean option is either
+"affirmative" or "negative" (beginning with `--no'). All such options
+share several properties.
+
+ Unless stated otherwise, it is assumed that the default behavior is
+the opposite of what the option accomplishes. For example, the
+documented existence of `--follow-ftp' assumes that the default is to
+_not_ follow FTP links from HTML pages.
+
+Affirmative options can be negated by prepending `--no-' to the
+option name; negative options can be negated by omitting the `--no-'
+prefix. This might seem superfluous--if the default for an affirmative
+option is to not do something, then why provide a way to explicitly
+turn it off? But the startup file may in fact change the default. For
+instance, using `follow_ftp = on' in `.wgetrc' makes Wget _follow_ FTP
+links by default, and using `--no-follow-ftp' is the only way to
+restore the factory default from the command line.
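+
+ To illustrate, if `.wgetrc' contains the line `follow_ftp = on',
+ then the factory behavior can be restored for a single run with
+ (the URL being just a placeholder):
+
+     wget --no-follow-ftp http://example.com/index.html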
+
+
+File: wget.info, Node: Basic Startup Options, Next: Logging and Input File Options, Prev: Option Syntax, Up: Invoking
+
+2.3 Basic Startup Options
+=========================
+
+`-V'
+`--version'
+ Display the version of Wget.
+
+`-h'
+`--help'
+ Print a help message describing all of Wget's command-line options.
+
+`-b'
+`--background'
+ Go to background immediately after startup. If no output file is
+ specified via the `-o' option, output is redirected to `wget-log'.
+
+`-e COMMAND'
+`--execute COMMAND'
+ Execute COMMAND as if it were a part of `.wgetrc' (*note Startup
+ File::). A command thus invoked will be executed _after_ the
+ commands in `.wgetrc', thus taking precedence over them. If you
+ need to specify more than one wgetrc command, use multiple
+ instances of `-e'.
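+
+     As a hypothetical illustration, the following supplies two wgetrc
+     commands for a single run:
+
+          wget -e robots=off -e tries=5 http://example.com/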
+
+
+
+File: wget.info, Node: Logging and Input File Options, Next: Download Options, Prev: Basic Startup Options, Up: Invoking
+
+2.4 Logging and Input File Options
+==================================
+
+`-o LOGFILE'
+`--output-file=LOGFILE'
+ Log all messages to LOGFILE. The messages are normally reported
+ to standard error.
+
+`-a LOGFILE'
+`--append-output=LOGFILE'
+ Append to LOGFILE. This is the same as `-o', only it appends to
+ LOGFILE instead of overwriting the old log file. If LOGFILE does
+ not exist, a new file is created.
+
+`-d'
+`--debug'
+ Turn on debug output, meaning various information important to the
+ developers of Wget when it does not work properly. Your system
+ administrator may have chosen to compile Wget without debug
+ support, in which case `-d' will not work. Please note that
+ compiling with debug support is always safe--Wget compiled with
+ the debug support will _not_ print any debug info unless requested
+ with `-d'. *Note Reporting Bugs::, for more information on how to
+ use `-d' for sending bug reports.
+
+`-q'
+`--quiet'
+ Turn off Wget's output.
+
+`-v'
+`--verbose'
+ Turn on verbose output, with all the available data. The default
+ output is verbose.
+
+`-nv'
+`--no-verbose'
+ Turn off verbose without being completely quiet (use `-q' for
+ that), which means that error messages and basic information still
+ get printed.
+
+`-i FILE'
+`--input-file=FILE'
+ Read URLs from a local or external FILE. If `-' is specified as
+ FILE, URLs are read from the standard input. (Use `./-' to read
+ from a file literally named `-'.)
+
+ If this function is used, no URLs need be present on the command
+ line. If there are URLs both on the command line and in an input
+ file, those on the command line will be the first ones to be
+ retrieved. If `--force-html' is not specified, then FILE should
+ consist of a series of URLs, one per line.
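+
+     A minimal illustration, `urls.txt' being a hypothetical file
+     that contains one URL per line:
+
+          wget -i urls.txt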
+
+ However, if you specify `--force-html', the document will be
+ regarded as `html'. In that case you may have problems with
+ relative links, which you can solve either by adding `<base
+ href="URL">' to the documents or by specifying `--base=URL' on the
+ command line.
+
+ If the FILE is an external one, the document will be automatically
+ treated as `html' if the Content-Type matches `text/html'.
+ Furthermore, the FILE's location will be implicitly used as base
+ href if none was specified.
+
+`-F'
+`--force-html'
+ When input is read from a file, force it to be treated as an HTML
+ file. This enables you to retrieve relative links from existing
+ HTML files on your local disk, by adding `<base href="URL">' to
+ HTML, or using the `--base' command-line option.
+
+`-B URL'
+`--base=URL'
+ Resolves relative links using URL as the point of reference, when
+ reading links from an HTML file specified via the
+ `-i'/`--input-file' option (together with `--force-html', or when
+ the input file was fetched remotely from a server describing it as
+ HTML). This is equivalent to the presence of a `BASE' tag in the
+ HTML input file, with URL as the value for the `href' attribute.
+
+ For instance, if you specify `http://foo/bar/a.html' for URL, and
+ Wget reads `../baz/b.html' from the input file, it would be
+ resolved to `http://foo/baz/b.html'.
+
+`--config=FILE'
+ Specify the location of a startup file you wish to use.
+
+
+File: wget.info, Node: Download Options, Next: Directory Options, Prev: Logging and Input File Options, Up: Invoking
+
+2.5 Download Options
+====================
+
+`--bind-address=ADDRESS'
+ When making client TCP/IP connections, bind to ADDRESS on the
+ local machine. ADDRESS may be specified as a hostname or IP
+ address. This option can be useful if your machine is bound to
+ multiple IPs.
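+
+     For example, on a machine with several addresses, one of them may
+     be selected like this (the address is from the documentation
+     range and is illustrative):
+
+          wget --bind-address=192.0.2.10 http://www.example.com/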
+
+`-t NUMBER'
+`--tries=NUMBER'
+ Set number of retries to NUMBER. Specify 0 or `inf' for infinite
+ retrying. The default is to retry 20 times, with the exception of
+ fatal errors like "connection refused" or "not found" (404), which
+ are not retried.
+
+`-O FILE'
+`--output-document=FILE'
+ The documents will not be written to the appropriate files, but all
+ will be concatenated together and written to FILE. If `-' is used
+ as FILE, documents will be printed to standard output, disabling
+ link conversion. (Use `./-' to print to a file literally named
+ `-'.)
+
+ Use of `-O' is _not_ intended to mean simply "use the name FILE
+ instead of the one in the URL;" rather, it is analogous to shell
+ redirection: `wget -O file http://foo' is intended to work like
+ `wget -O - http://foo > file'; `file' will be truncated
+ immediately, and _all_ downloaded content will be written there.
+
+ For this reason, `-N' (for timestamp-checking) is not supported in
+ combination with `-O': since FILE is always newly created, it will
+ always have a very new timestamp. A warning will be issued if this
+ combination is used.
+
+ Similarly, using `-r' or `-p' with `-O' may not work as you
+ expect: Wget won't just download the first file to FILE and then
+ download the rest to their normal names: _all_ downloaded content
+ will be placed in FILE. This was disabled in version 1.11, but has
+ been reinstated (with a warning) in 1.11.2, as there are some
+ cases where this behavior can actually have some use.
+
+ Note that a combination with `-k' is only permitted when
+ downloading a single document, as in that case it will just convert
+ all relative URIs to external ones; `-k' makes no sense for
+ multiple URIs when they're all being downloaded to a single file;
+ `-k' can be used only when the output is a regular file.
+
+`-nc'
+`--no-clobber'
+ If a file is downloaded more than once in the same directory,
+ Wget's behavior depends on a few options, including `-nc'. In
+ certain cases, the local file will be "clobbered", or overwritten,
+ upon repeated download. In other cases it will be preserved.
+
+ When running Wget without `-N', `-nc', `-r', or `-p', downloading
+ the same file in the same directory will result in the original
+ copy of FILE being preserved and the second copy being named
+ `FILE.1'. If that file is downloaded yet again, the third copy
+ will be named `FILE.2', and so on. (This is also the behavior
+ with `-nd', even if `-r' or `-p' are in effect.) When `-nc' is
+ specified, this behavior is suppressed, and Wget will refuse to
+ download newer copies of `FILE'. Therefore, "`no-clobber'" is
+ actually a misnomer in this mode--it's not clobbering that's
+ prevented (as the numeric suffixes were already preventing
+ clobbering), but rather the multiple version saving that's
+ prevented.
+
+ When running Wget with `-r' or `-p', but without `-N', `-nd', or
+ `-nc', re-downloading a file will result in the new copy simply
+ overwriting the old. Adding `-nc' will prevent this behavior,
+ instead causing the original version to be preserved and any newer
+ copies on the server to be ignored.
+
+ When running Wget with `-N', with or without `-r' or `-p', the
+ decision as to whether or not to download a newer copy of a file
+ depends on the local and remote timestamp and size of the file
+ (*note Time-Stamping::). `-nc' may not be specified at the same
+ time as `-N'.
+
+ Note that when `-nc' is specified, files with the suffixes `.html'
+ or `.htm' will be loaded from the local disk and parsed as if they
+ had been retrieved from the Web.
+
+`-c'
+`--continue'
+ Continue getting a partially-downloaded file. This is useful when
+ you want to finish up a download started by a previous instance of
+ Wget, or by another program. For instance:
+
+ wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
+
+ If there is a file named `ls-lR.Z' in the current directory, Wget
+ will assume that it is the first portion of the remote file, and
+ will ask the server to continue the retrieval from an offset equal
+ to the length of the local file.
+
+ Note that you don't need to specify this option if you just want
+ the current invocation of Wget to retry downloading a file should
+ the connection be lost midway through. This is the default
+ behavior. `-c' only affects resumption of downloads started
+ _prior_ to this invocation of Wget, and whose local files are
+ still sitting around.
+
+ Without `-c', the previous example would just download the remote
+ file to `ls-lR.Z.1', leaving the truncated `ls-lR.Z' file alone.
+
+ Beginning with Wget 1.7, if you use `-c' on a non-empty file, and
+ it turns out that the server does not support continued
+ downloading, Wget will refuse to start the download from scratch,
+ which would effectively ruin existing contents. If you really
+ want the download to start from scratch, remove the file.
+
+ Also beginning with Wget 1.7, if you use `-c' on a file which is of
+ equal size as the one on the server, Wget will refuse to download
+ the file and print an explanatory message. The same happens when
+ the file is smaller on the server than locally (presumably because
+ it was changed on the server since your last download
+ attempt)--because "continuing" is not meaningful, no download
+ occurs.
+
+ On the other side of the coin, while using `-c', any file that's
+ bigger on the server than locally will be considered an incomplete
+ download and only `(length(remote) - length(local))' bytes will be
+ downloaded and tacked onto the end of the local file. This
+ behavior can be desirable in certain cases--for instance, you can
+ use `wget -c' to download just the new portion that's been
+ appended to a data collection or log file.
+
+ However, if the file is bigger on the server because it's been
+ _changed_, as opposed to just _appended_ to, you'll end up with a
+ garbled file. Wget has no way of verifying that the local file is
+ really a valid prefix of the remote file. You need to be
+ especially careful of this when using `-c' in conjunction with
+ `-r', since every file will be considered as an "incomplete
+ download" candidate.
+
+ Another instance where you'll get a garbled file if you try to use
+ `-c' is if you have a lame HTTP proxy that inserts a "transfer
+ interrupted" string into the local file. In the future a
+ "rollback" option may be added to deal with this case.
+
+ Note that `-c' only works with FTP servers and with HTTP servers
+ that support the `Range' header.
+
+`--progress=TYPE'
+ Select the type of the progress indicator you wish to use. Legal
+ indicators are "dot" and "bar".
+
+ The "bar" indicator is used by default. It draws an ASCII progress
+ bar graphics (a.k.a "thermometer" display) indicating the status of
+ retrieval. If the output is not a TTY, the "dot" bar will be used
+ by default.
+
+ Use `--progress=dot' to switch to the "dot" display. It traces
+ the retrieval by printing dots on the screen, each dot
+ representing a fixed amount of downloaded data.
+
+ When using the dotted retrieval, you may also set the "style" by
+ specifying the type as `dot:STYLE'. Different styles assign
+ different meaning to one dot. With the `default' style each dot
+ represents 1K, there are ten dots in a cluster and 50 dots in a
+ line. The `binary' style has a more "computer"-like
+     orientation--8K per dot, 16-dot clusters, and 48 dots per line
+     (which makes for 384K per line).  The `mega' style is suitable for
+ downloading very large files--each dot represents 64K retrieved,
+ there are eight dots in a cluster, and 48 dots on each line (so
+ each line contains 3M).
+
+ Note that you can set the default style using the `progress'
+ command in `.wgetrc'. That setting may be overridden from the
+ command line. The exception is that, when the output is not a
+ TTY, the "dot" progress will be favored over "bar". To force the
+ bar output, use `--progress=bar:force'.
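+
+     For instance, a plausible combination of the above settings for a
+     very large download would be (URL illustrative):
+
+          wget --progress=dot:mega http://www.example.com/big.iso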
+
+`-N'
+`--timestamping'
+ Turn on time-stamping. *Note Time-Stamping::, for details.
+
+`--no-use-server-timestamps'
+ Don't set the local file's timestamp by the one on the server.
+
+     By default, when a file is downloaded, its timestamps are set to
+ match those from the remote file. This allows the use of
+ `--timestamping' on subsequent invocations of wget. However, it is
+ sometimes useful to base the local file's timestamp on when it was
+ actually downloaded; for that purpose, the
+ `--no-use-server-timestamps' option has been provided.
+
+`-S'
+`--server-response'
+ Print the headers sent by HTTP servers and responses sent by FTP
+ servers.
+
+`--spider'
+ When invoked with this option, Wget will behave as a Web "spider",
+ which means that it will not download the pages, just check that
+ they are there. For example, you can use Wget to check your
+ bookmarks:
+
+ wget --spider --force-html -i bookmarks.html
+
+ This feature needs much more work for Wget to get close to the
+ functionality of real web spiders.
+
+`-T SECONDS'
+`--timeout=SECONDS'
+ Set the network timeout to SECONDS seconds. This is equivalent to
+ specifying `--dns-timeout', `--connect-timeout', and
+ `--read-timeout', all at the same time.
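+
+     In other words, the following two commands should behave
+     identically (URL illustrative):
+
+          wget -T 10 http://www.example.com/
+          wget --dns-timeout=10 --connect-timeout=10 --read-timeout=10 \
+               http://www.example.com/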
+
+ When interacting with the network, Wget can check for timeout and
+ abort the operation if it takes too long. This prevents anomalies
+ like hanging reads and infinite connects. The only timeout
+ enabled by default is a 900-second read timeout. Setting a
+ timeout to 0 disables it altogether. Unless you know what you are
+ doing, it is best not to change the default timeout settings.
+
+ All timeout-related options accept decimal values, as well as
+ subsecond values. For example, `0.1' seconds is a legal (though
+ unwise) choice of timeout. Subsecond timeouts are useful for
+ checking server response times or for testing network latency.
+
+`--dns-timeout=SECONDS'
+ Set the DNS lookup timeout to SECONDS seconds. DNS lookups that
+ don't complete within the specified time will fail. By default,
+ there is no timeout on DNS lookups, other than that implemented by
+ system libraries.
+
+`--connect-timeout=SECONDS'
+ Set the connect timeout to SECONDS seconds. TCP connections that
+ take longer to establish will be aborted. By default, there is no
+ connect timeout, other than that implemented by system libraries.
+
+`--read-timeout=SECONDS'
+ Set the read (and write) timeout to SECONDS seconds. The "time"
+ of this timeout refers to "idle time": if, at any point in the
+ download, no data is received for more than the specified number
+ of seconds, reading fails and the download is restarted. This
+ option does not directly affect the duration of the entire
+ download.
+
+ Of course, the remote server may choose to terminate the connection
+ sooner than this option requires. The default read timeout is 900
+ seconds.
+
+`--limit-rate=AMOUNT'
+ Limit the download speed to AMOUNT bytes per second. Amount may
+ be expressed in bytes, kilobytes with the `k' suffix, or megabytes
+ with the `m' suffix. For example, `--limit-rate=20k' will limit
+ the retrieval rate to 20KB/s. This is useful when, for whatever
+ reason, you don't want Wget to consume the entire available
+ bandwidth.
+
+ This option allows the use of decimal numbers, usually in
+ conjunction with power suffixes; for example, `--limit-rate=2.5k'
+ is a legal value.
+
+ Note that Wget implements the limiting by sleeping the appropriate
+ amount of time after a network read that took less time than
+ specified by the rate. Eventually this strategy causes the TCP
+ transfer to slow down to approximately the specified rate.
+ However, it may take some time for this balance to be achieved, so
+ don't be surprised if limiting the rate doesn't work well with
+ very small files.
+
+`-w SECONDS'
+`--wait=SECONDS'
+ Wait the specified number of seconds between the retrievals. Use
+ of this option is recommended, as it lightens the server load by
+ making the requests less frequent. Instead of in seconds, the
+ time can be specified in minutes using the `m' suffix, in hours
+ using `h' suffix, or in days using `d' suffix.
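+
+     For example, either of the following inserts a pause between
+     retrievals (URL and input file illustrative):
+
+          wget -w 2 -r http://www.example.com/
+          wget --wait=1m -i urls.txt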
+
+ Specifying a large value for this option is useful if the network
+ or the destination host is down, so that Wget can wait long enough
+ to reasonably expect the network error to be fixed before the
+ retry. The waiting interval specified by this function is
+ influenced by `--random-wait', which see.
+
+`--waitretry=SECONDS'
+ If you don't want Wget to wait between _every_ retrieval, but only
+ between retries of failed downloads, you can use this option.
+ Wget will use "linear backoff", waiting 1 second after the first
+ failure on a given file, then waiting 2 seconds after the second
+ failure on that file, up to the maximum number of SECONDS you
+ specify.
+
+ By default, Wget will assume a value of 10 seconds.
+
+`--random-wait'
+ Some web sites may perform log analysis to identify retrieval
+ programs such as Wget by looking for statistically significant
+ similarities in the time between requests. This option causes the
+ time between requests to vary between 0.5 and 1.5 * WAIT seconds,
+ where WAIT was specified using the `--wait' option, in order to
+ mask Wget's presence from such analysis.
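+
+     For example, with the following command the pause between
+     requests varies between 1 and 3 seconds (0.5 and 1.5 times the
+     2-second `--wait'; URL illustrative):
+
+          wget --wait=2 --random-wait -r http://www.example.com/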
+
+ A 2001 article in a publication devoted to development on a popular
+ consumer platform provided code to perform this analysis on the
+ fly. Its author suggested blocking at the class C address level
+ to ensure automated retrieval programs were blocked despite
+ changing DHCP-supplied addresses.
+
+ The `--random-wait' option was inspired by this ill-advised
+ recommendation to block many unrelated users from a web site due
+ to the actions of one.
+
+`--no-proxy'
+ Don't use proxies, even if the appropriate `*_proxy' environment
+ variable is defined.
+
+ For more information about the use of proxies with Wget, *Note
+ Proxies::.
+
+`-Q QUOTA'
+`--quota=QUOTA'
+ Specify download quota for automatic retrievals. The value can be
+ specified in bytes (default), kilobytes (with `k' suffix), or
+ megabytes (with `m' suffix).
+
+ Note that quota will never affect downloading a single file. So
+ if you specify `wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz',
+ all of the `ls-lR.gz' will be downloaded. The same goes even when
+ several URLs are specified on the command-line. However, quota is
+ respected when retrieving either recursively, or from an input
+ file. Thus you may safely type `wget -Q2m -i sites'--download
+ will be aborted when the quota is exceeded.
+
+ Setting quota to 0 or to `inf' unlimits the download quota.
+
+`--no-dns-cache'
+ Turn off caching of DNS lookups. Normally, Wget remembers the IP
+ addresses it looked up from DNS so it doesn't have to repeatedly
+ contact the DNS server for the same (typically small) set of hosts
+ it retrieves from. This cache exists in memory only; a new Wget
+ run will contact DNS again.
+
+ However, it has been reported that in some situations it is not
+ desirable to cache host names, even for the duration of a
+ short-running application like Wget. With this option Wget issues
+ a new DNS lookup (more precisely, a new call to `gethostbyname' or
+ `getaddrinfo') each time it makes a new connection. Please note
+ that this option will _not_ affect caching that might be performed
+ by the resolving library or by an external caching layer, such as
+ NSCD.
+
+ If you don't understand exactly what this option does, you probably
+ won't need it.
+
+`--restrict-file-names=MODES'
+ Change which characters found in remote URLs must be escaped during
+ generation of local filenames. Characters that are "restricted"
+ by this option are escaped, i.e. replaced with `%HH', where `HH'
+ is the hexadecimal number that corresponds to the restricted
+ character. This option may also be used to force all alphabetical
+ cases to be either lower- or uppercase.
+
+ By default, Wget escapes the characters that are not valid or safe
+ as part of file names on your operating system, as well as control
+ characters that are typically unprintable. This option is useful
+ for changing these defaults, perhaps because you are downloading
+ to a non-native partition, or because you want to disable escaping
+ of the control characters, or you want to further restrict
+ characters to only those in the ASCII range of values.
+
+ The MODES are a comma-separated set of text values. The acceptable
+ values are `unix', `windows', `nocontrol', `ascii', `lowercase',
+ and `uppercase'. The values `unix' and `windows' are mutually
+ exclusive (one will override the other), as are `lowercase' and
+ `uppercase'. Those last are special cases, as they do not change
+ the set of characters that would be escaped, but rather force local
+ file paths to be converted either to lower- or uppercase.
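+
+     For instance, the following combines two compatible modes,
+     leaving control characters unescaped and lowercasing local file
+     names (URL illustrative):
+
+          wget --restrict-file-names=nocontrol,lowercase \
+               http://www.example.com/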
+
+ When "unix" is specified, Wget escapes the character `/' and the
+ control characters in the ranges 0-31 and 128-159. This is the
+ default on Unix-like operating systems.
+
+ When "windows" is given, Wget escapes the characters `\', `|',
+ `/', `:', `?', `"', `*', `<', `>', and the control characters in
+ the ranges 0-31 and 128-159. In addition to this, Wget in Windows
+ mode uses `+' instead of `:' to separate host and port in local
+ file names, and uses `@' instead of `?' to separate the query
+ portion of the file name from the rest. Therefore, a URL that
+ would be saved as `www.xemacs.org:4300/search.pl?input=blah' in
+ Unix mode would be saved as
+ `www.xemacs.org+4300/search.pl@input=blah' in Windows mode. This
+ mode is the default on Windows.
+
+ If you specify `nocontrol', then the escaping of the control
+ characters is also switched off. This option may make sense when
+ you are downloading URLs whose names contain UTF-8 characters, on
+ a system which can save and display filenames in UTF-8 (some
+ possible byte values used in UTF-8 byte sequences fall in the
+ range of values designated by Wget as "controls").
+
+ The `ascii' mode is used to specify that any bytes whose values
+ are outside the range of ASCII characters (that is, greater than
+ 127) shall be escaped. This can be useful when saving filenames
+ whose encoding does not match the one used locally.
+
+`-4'
+`--inet4-only'
+`-6'
+`--inet6-only'
+ Force connecting to IPv4 or IPv6 addresses. With `--inet4-only'
+ or `-4', Wget will only connect to IPv4 hosts, ignoring AAAA
+ records in DNS, and refusing to connect to IPv6 addresses
+ specified in URLs. Conversely, with `--inet6-only' or `-6', Wget
+ will only connect to IPv6 hosts and ignore A records and IPv4
+ addresses.
+
+     Neither option should be needed normally.  By default, an
+ IPv6-aware Wget will use the address family specified by the
+ host's DNS record. If the DNS responds with both IPv4 and IPv6
+ addresses, Wget will try them in sequence until it finds one it
+ can connect to. (Also see `--prefer-family' option described
+ below.)
+
+ These options can be used to deliberately force the use of IPv4 or
+ IPv6 address families on dual family systems, usually to aid
+ debugging or to deal with broken network configuration. Only one
+ of `--inet6-only' and `--inet4-only' may be specified at the same
+ time. Neither option is available in Wget compiled without IPv6
+ support.
+
+`--prefer-family=none/IPv4/IPv6'
+ When given a choice of several addresses, connect to the addresses
+ with specified address family first. The address order returned by
+ DNS is used without change by default.
+
+ This avoids spurious errors and connect attempts when accessing
+ hosts that resolve to both IPv6 and IPv4 addresses from IPv4
+ networks. For example, `www.kame.net' resolves to
+ `2001:200:0:8002:203:47ff:fea5:3085' and to `203.178.141.194'.
+ When the preferred family is `IPv4', the IPv4 address is used
+ first; when the preferred family is `IPv6', the IPv6 address is
+ used first; if the specified value is `none', the address order
+ returned by DNS is used without change.
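+
+     For example, using the host mentioned above, the IPv4 address
+     would be tried first with:
+
+          wget --prefer-family=IPv4 http://www.kame.net/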
+
+ Unlike `-4' and `-6', this option doesn't inhibit access to any
+ address family, it only changes the _order_ in which the addresses
+ are accessed. Also note that the reordering performed by this
+ option is "stable"--it doesn't affect order of addresses of the
+ same family. That is, the relative order of all IPv4 addresses
+ and of all IPv6 addresses remains intact in all cases.
+
+`--retry-connrefused'
+ Consider "connection refused" a transient error and try again.
+ Normally Wget gives up on a URL when it is unable to connect to the
+ site because failure to connect is taken as a sign that the server
+ is not running at all and that retries would not help. This
+ option is for mirroring unreliable sites whose servers tend to
+ disappear for short periods of time.
+
+`--user=USER'
+`--password=PASSWORD'
+ Specify the username USER and password PASSWORD for both FTP and
+ HTTP file retrieval. These parameters can be overridden using the
+ `--ftp-user' and `--ftp-password' options for FTP connections and
+ the `--http-user' and `--http-password' options for HTTP
+ connections.
+
+`--ask-password'
+ Prompt for a password for each connection established. Cannot be
+ specified when `--password' is being used, because they are
+ mutually exclusive.
+
+`--no-iri'
+ Turn off internationalized URI (IRI) support. Use `--iri' to turn
+ it on. IRI support is activated by default.
+
+ You can set the default state of IRI support using the `iri'
+ command in `.wgetrc'. That setting may be overridden from the
+ command line.
+
+`--local-encoding=ENCODING'
+ Force Wget to use ENCODING as the default system encoding. That
+ affects how Wget converts URLs specified as arguments from locale
+ to UTF-8 for IRI support.
+
+     Wget uses the function `nl_langinfo()' and then the `CHARSET'
+ environment variable to get the locale. If it fails, ASCII is used.
+
+ You can set the default local encoding using the `local_encoding'
+ command in `.wgetrc'. That setting may be overridden from the
+ command line.
+
+`--remote-encoding=ENCODING'
+ Force Wget to use ENCODING as the default remote server encoding.
+ That affects how Wget converts URIs found in files from remote
+     encoding to UTF-8 during a recursive fetch.  This option is only
+ useful for IRI support, for the interpretation of non-ASCII
+ characters.
+
+     For HTTP, the remote encoding can be found in the HTTP
+     `Content-Type' header and in the HTML `Content-Type' `http-equiv'
+     meta tag.
+
+ You can set the default encoding using the `remoteencoding'
+ command in `.wgetrc'. That setting may be overridden from the
+ command line.
+
+`--unlink'
+     Force Wget to unlink a file instead of clobbering the existing
+     file.  This option is useful for downloading to a directory with
+     hardlinks.
+
+
+
+File: wget.info, Node: Directory Options, Next: HTTP Options, Prev: Download Options, Up: Invoking
+
+2.6 Directory Options
+=====================
+
+`-nd'
+`--no-directories'
+ Do not create a hierarchy of directories when retrieving
+ recursively. With this option turned on, all files will get saved
+ to the current directory, without clobbering (if a name shows up
+ more than once, the filenames will get extensions `.n').
+
+`-x'
+`--force-directories'
+ The opposite of `-nd'--create a hierarchy of directories, even if
+ one would not have been created otherwise. E.g. `wget -x
+ http://fly.srk.fer.hr/robots.txt' will save the downloaded file to
+ `fly.srk.fer.hr/robots.txt'.
+
+`-nH'
+`--no-host-directories'
+ Disable generation of host-prefixed directories. By default,
+ invoking Wget with `-r http://fly.srk.fer.hr/' will create a
+ structure of directories beginning with `fly.srk.fer.hr/'. This
+ option disables such behavior.
+
+`--protocol-directories'
+ Use the protocol name as a directory component of local file
+ names. For example, with this option, `wget -r http://HOST' will
+ save to `http/HOST/...' rather than just to `HOST/...'.
+
+`--cut-dirs=NUMBER'
+ Ignore NUMBER directory components. This is useful for getting a
+ fine-grained control over the directory where recursive retrieval
+ will be saved.
+
+ Take, for example, the directory at
+ `ftp://ftp.xemacs.org/pub/xemacs/'. If you retrieve it with `-r',
+ it will be saved locally under `ftp.xemacs.org/pub/xemacs/'.
+ While the `-nH' option can remove the `ftp.xemacs.org/' part, you
+ are still stuck with `pub/xemacs'. This is where `--cut-dirs'
+ comes in handy; it makes Wget not "see" NUMBER remote directory
+ components. Here are several examples of how `--cut-dirs' option
+ works.
+
+ No options -> ftp.xemacs.org/pub/xemacs/
+ -nH -> pub/xemacs/
+ -nH --cut-dirs=1 -> xemacs/
+ -nH --cut-dirs=2 -> .
+
+ --cut-dirs=1 -> ftp.xemacs.org/xemacs/
+ ...
+
+ If you just want to get rid of the directory structure, this
+     option is similar to a combination of `-nd' and `-P'.  However,
+     unlike `-nd', `--cut-dirs' does not lose the subdirectory
+     structure--for instance, with `-nH --cut-dirs=1', a `beta/'
+     subdirectory will be placed in `xemacs/beta', as one would expect.
+
+`-P PREFIX'
+`--directory-prefix=PREFIX'
+ Set directory prefix to PREFIX. The "directory prefix" is the
+ directory where all other files and subdirectories will be saved
+ to, i.e. the top of the retrieval tree. The default is `.' (the
+ current directory).
+
+
+File: wget.info, Node: HTTP Options, Next: HTTPS (SSL/TLS) Options, Prev: Directory Options, Up: Invoking
+
+2.7 HTTP Options
+================
+
+`--default-page=NAME'
+ Use NAME as the default file name when it isn't known (i.e., for
+ URLs that end in a slash), instead of `index.html'.
+
+`-E'
+`--adjust-extension'
+ If a file of type `application/xhtml+xml' or `text/html' is
+ downloaded and the URL does not end with the regexp
+ `\.[Hh][Tt][Mm][Ll]?', this option will cause the suffix `.html'
+ to be appended to the local filename. This is useful, for
+ instance, when you're mirroring a remote site that uses `.asp'
+ pages, but you want the mirrored pages to be viewable on your
+ stock Apache server. Another good use for this is when you're
+ downloading CGI-generated materials. A URL like
+ `http://site.com/article.cgi?25' will be saved as
+ `article.cgi?25.html'.
+
+ Note that filenames changed in this way will be re-downloaded
+ every time you re-mirror a site, because Wget can't tell that the
+ local `X.html' file corresponds to remote URL `X' (since it
+     doesn't yet know that the URL produces output of type `text/html'
+     or `application/xhtml+xml').
+
+ As of version 1.12, Wget will also ensure that any downloaded
+ files of type `text/css' end in the suffix `.css', and the option
+ was renamed from `--html-extension', to better reflect its new
+ behavior. The old option name is still acceptable, but should now
+ be considered deprecated.
+
+ At some point in the future, this option may well be expanded to
+ include suffixes for other types of content, including content
+ types that are not parsed by Wget.
+
+`--http-user=USER'
+`--http-password=PASSWORD'
+ Specify the username USER and password PASSWORD on an HTTP server.
+ According to the type of the challenge, Wget will encode them
+ using either the `basic' (insecure), the `digest', or the Windows
+ `NTLM' authentication scheme.
+
+ Another way to specify username and password is in the URL itself
+ (*note URL Format::). Either method reveals your password to
+ anyone who bothers to run `ps'. To prevent the passwords from
+ being seen, store them in `.wgetrc' or `.netrc', and make sure to
+ protect those files from other users with `chmod'. If the
+ passwords are really important, do not leave them lying in those
+ files either--edit the files and delete them after Wget has
+ started the download.
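+
+     For example, restricting those files to their owner can be done
+     like this:
+
+          chmod 600 ~/.wgetrc ~/.netrc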
+
+`--no-http-keep-alive'
+ Turn off the "keep-alive" feature for HTTP downloads. Normally,
+ Wget asks the server to keep the connection open so that, when you
+ download more than one document from the same server, they get
+ transferred over the same TCP connection. This saves time and at
+ the same time reduces the load on the server.
+
+ This option is useful when, for some reason, persistent
+ (keep-alive) connections don't work for you, for example due to a
+ server bug or due to the inability of server-side scripts to cope
+ with the connections.
+
+`--no-cache'
+ Disable server-side cache. In this case, Wget will send the remote
+ server an appropriate directive (`Pragma: no-cache') to get the
+ file from the remote service, rather than returning the cached
+ version. This is especially useful for retrieving and flushing
+ out-of-date documents on proxy servers.
+
+ Caching is allowed by default.
+
+`--no-cookies'
+ Disable the use of cookies. Cookies are a mechanism for
+ maintaining server-side state. The server sends the client a
+ cookie using the `Set-Cookie' header, and the client responds with
+ the same cookie upon further requests. Since cookies allow the
+ server owners to keep track of visitors and for sites to exchange
+ this information, some consider them a breach of privacy. The
+ default is to use cookies; however, _storing_ cookies is not on by
+ default.
+
+`--load-cookies FILE'
+ Load cookies from FILE before the first HTTP retrieval. FILE is a
+ textual file in the format originally used by Netscape's
+ `cookies.txt' file.
+
+ You will typically use this option when mirroring sites that
+ require that you be logged in to access some or all of their
+ content. The login process typically works by the web server
+ issuing an HTTP cookie upon receiving and verifying your
+ credentials. The cookie is then resent by the browser when
+ accessing that part of the site, and so proves your identity.
+
+ Mirroring such a site requires Wget to send the same cookies your
+ browser sends when communicating with the site. This is achieved
+ by `--load-cookies'--simply point Wget to the location of the
+ `cookies.txt' file, and it will send the same cookies your browser
+ would send in the same situation. Different browsers keep textual
+ cookie files in different locations:
+
+ Netscape 4.x.
+ The cookies are in `~/.netscape/cookies.txt'.
+
+ Mozilla and Netscape 6.x.
+ Mozilla's cookie file is also named `cookies.txt', located
+ somewhere under `~/.mozilla', in the directory of your
+ profile. The full path usually ends up looking somewhat like
+ `~/.mozilla/default/SOME-WEIRD-STRING/cookies.txt'.
+
+ Internet Explorer.
+ You can produce a cookie file Wget can use by using the File
+ menu, Import and Export, Export Cookies. This has been
+ tested with Internet Explorer 5; it is not guaranteed to work
+ with earlier versions.
+
+ Other browsers.
+ If you are using a different browser to create your cookies,
+ `--load-cookies' will only work if you can locate or produce a
+ cookie file in the Netscape format that Wget expects.
+
+ If you cannot use `--load-cookies', there might still be an
+ alternative. If your browser supports a "cookie manager", you can
+ use it to view the cookies used when accessing the site you're
+ mirroring. Write down the name and value of the cookie, and
+ manually instruct Wget to send those cookies, bypassing the
+ "official" cookie support:
+
+ wget --no-cookies --header "Cookie: NAME=VALUE"
+
+`--save-cookies FILE'
+ Save cookies to FILE before exiting. This will not save cookies
+ that have expired or that have no expiry time (so-called "session
+ cookies"), but also see `--keep-session-cookies'.
+
+`--keep-session-cookies'
+ When specified, causes `--save-cookies' to also save session
+ cookies. Session cookies are normally not saved because they are
+ meant to be kept in memory and forgotten when you exit the browser.
+ Saving them is useful on sites that require you to log in or to
+ visit the home page before you can access some pages. With this
+ option, multiple Wget runs are considered a single browser session
+ as far as the site is concerned.
+
+ Since the cookie file format does not normally carry session
+ cookies, Wget marks them with an expiry timestamp of 0. Wget's
+ `--load-cookies' recognizes those as session cookies, but it might
+ confuse other browsers. Also note that cookies so loaded will be
+ treated as other session cookies, which means that if you want
+ `--save-cookies' to preserve them again, you must use
+ `--keep-session-cookies' again.
+
+`--ignore-length'
+ Unfortunately, some HTTP servers (CGI programs, to be more
+ precise) send out bogus `Content-Length' headers, which makes Wget
+     go wild, as it thinks the document was not fully retrieved.  You can
+ spot this syndrome if Wget retries getting the same document again
+ and again, each time claiming that the (otherwise normal)
+ connection has closed on the very same byte.
+
+ With this option, Wget will ignore the `Content-Length' header--as
+ if it never existed.
+
+`--header=HEADER-LINE'
+ Send HEADER-LINE along with the rest of the headers in each HTTP
+ request. The supplied header is sent as-is, which means it must
+ contain name and value separated by colon, and must not contain
+ newlines.
+
+ You may define more than one additional header by specifying
+ `--header' more than once.
+
+ wget --header='Accept-Charset: iso-8859-2' \
+ --header='Accept-Language: hr' \
+ http://fly.srk.fer.hr/
+
+ Specification of an empty string as the header value will clear all
+ previous user-defined headers.
+
+ As of Wget 1.10, this option can be used to override headers
+ otherwise generated automatically. This example instructs Wget to
+ connect to localhost, but to specify `foo.bar' in the `Host'
+ header:
+
+ wget --header="Host: foo.bar" http://localhost/
+
+ In versions of Wget prior to 1.10 such use of `--header' caused
+ sending of duplicate headers.
+
+`--max-redirect=NUMBER'
+ Specifies the maximum number of redirections to follow for a
+ resource. The default is 20, which is usually far more than
+ necessary. However, on those occasions where you want to allow
+ more (or fewer), this is the option to use.
+
+`--proxy-user=USER'
+`--proxy-password=PASSWORD'
+ Specify the username USER and password PASSWORD for authentication
+ on a proxy server. Wget will encode them using the `basic'
+ authentication scheme.
+
+ Security considerations similar to those with `--http-password'
+ pertain here as well.
+
+`--referer=URL'
+ Include `Referer: URL' header in HTTP request. Useful for
+ retrieving documents with server-side processing that assume they
+ are always being retrieved by interactive web browsers and only
+ come out properly when Referer is set to one of the pages that
+ point to them.
+
+`--save-headers'
+ Save the headers sent by the HTTP server to the file, preceding the
+ actual contents, with an empty line as the separator.
+
+`-U AGENT-STRING'
+`--user-agent=AGENT-STRING'
+ Identify as AGENT-STRING to the HTTP server.
+
+ The HTTP protocol allows the clients to identify themselves using a
+ `User-Agent' header field. This enables distinguishing the WWW
+ software, usually for statistical purposes or for tracing of
+ protocol violations. Wget normally identifies as `Wget/VERSION',
+ VERSION being the current version number of Wget.
+
+ However, some sites have been known to impose the policy of
+ tailoring the output according to the `User-Agent'-supplied
+ information. While this is not such a bad idea in theory, it has
+ been abused by servers denying information to clients other than
+ (historically) Netscape or, more frequently, Microsoft Internet
+ Explorer. This option allows you to change the `User-Agent' line
+ issued by Wget. Use of this option is discouraged, unless you
+ really know what you are doing.
+
+     Specifying an empty user agent with `--user-agent=""' instructs Wget
+ not to send the `User-Agent' header in HTTP requests.
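+
+     For instance, identifying as a generic browser could look like
+     this (the agent string is purely illustrative):
+
+          wget --user-agent="Mozilla/5.0 (compatible)" \
+               http://www.example.com/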
+
+`--post-data=STRING'
+`--post-file=FILE'
+ Use POST as the method for all HTTP requests and send the specified
+ data in the request body. `--post-data' sends STRING as data,
+ whereas `--post-file' sends the contents of FILE. Other than
+ that, they work in exactly the same way. In particular, they
+ _both_ expect content of the form `key1=value1&key2=value2', with
+ percent-encoding for special characters; the only difference is
+ that one expects its content as a command-line parameter and the
+ other accepts its content from a file. In particular,
+ `--post-file' is _not_ for transmitting files as form attachments:
+ those must appear as `key=value' data (with appropriate
+ percent-coding) just like everything else. Wget does not currently
+ support `multipart/form-data' for transmitting POST data; only
+ `application/x-www-form-urlencoded'. Only one of `--post-data' and
+ `--post-file' should be specified.
+
+ Please be aware that Wget needs to know the size of the POST data
+ in advance. Therefore the argument to `--post-file' must be a
+ regular file; specifying a FIFO or something like `/dev/stdin'
+ won't work. It's not quite clear how to work around this
+ limitation inherent in HTTP/1.0. Although HTTP/1.1 introduces
+ "chunked" transfer that doesn't require knowing the request length
+ in advance, a client can't use chunked unless it knows it's
+ talking to an HTTP/1.1 server. And it can't know that until it
+ receives a response, which in turn requires the request to have
+     been completed--a chicken-and-egg problem.
+
+ Note: if Wget is redirected after the POST request is completed, it
+ will not send the POST data to the redirected URL. This is because
+ URLs that process POST often respond with a redirection to a
+ regular page, which does not desire or accept POST. It is not
+ completely clear that this behavior is optimal; if it doesn't work
+ out, it might be changed in the future.
+
+     This example shows how to log in to a server using POST and then
+ proceed to download the desired pages, presumably only accessible
+ to authorized users:
+
+ # Log in to the server. This can be done only once.
+ wget --save-cookies cookies.txt \
+ --post-data 'user=foo&password=bar' \
+ http://server.com/auth.php
+
+ # Now grab the page or pages we care about.
+ wget --load-cookies cookies.txt \
+ -p http://server.com/interesting/article.php
+
+ If the server is using session cookies to track user
+ authentication, the above will not work because `--save-cookies'
+ will not save them (and neither will browsers) and the
+ `cookies.txt' file will be empty. In that case use
+ `--keep-session-cookies' along with `--save-cookies' to force
+ saving of session cookies.
+
+`--content-disposition'
+ If this is set to on, experimental (not fully-functional) support
+ for `Content-Disposition' headers is enabled. This can currently
+ result in extra round-trips to the server for a `HEAD' request,
+ and is known to suffer from a few bugs, which is why it is not
+ currently enabled by default.
+
+ This option is useful for some file-downloading CGI programs that
+ use `Content-Disposition' headers to describe what the name of a
+ downloaded file should be.
+
+`--trust-server-names'
+     If this is set to on, on a redirect the last component of the
+     redirection URL will be used as the local file name.  By default
+     the last component of the original URL is used.
+
+`--auth-no-challenge'
+ If this option is given, Wget will send Basic HTTP authentication
+ information (plaintext username and password) for all requests,
+ just like Wget 1.10.2 and prior did by default.
+
+ Use of this option is not recommended, and is intended only to
+ support some few obscure servers, which never send HTTP
+ authentication challenges, but accept unsolicited auth info, say,
+ in addition to form-based authentication.
+
+
+
+File: wget.info, Node: HTTPS (SSL/TLS) Options, Next: FTP Options, Prev: HTTP Options, Up: Invoking
+
+2.8 HTTPS (SSL/TLS) Options
+===========================
+
+To support encrypted HTTP (HTTPS) downloads, Wget must be compiled with
+an external SSL library, currently OpenSSL. If Wget is compiled
+without SSL support, none of these options are available.
+
+`--secure-protocol=PROTOCOL'
+ Choose the secure protocol to be used. Legal values are `auto',
+ `SSLv2', `SSLv3', and `TLSv1'. If `auto' is used, the SSL library
+ is given the liberty of choosing the appropriate protocol
+ automatically, which is achieved by sending an SSLv2 greeting and
+ announcing support for SSLv3 and TLSv1. This is the default.
+
+ Specifying `SSLv2', `SSLv3', or `TLSv1' forces the use of the
+ corresponding protocol. This is useful when talking to old and
+ buggy SSL server implementations that make it hard for OpenSSL to
+ choose the correct protocol version. Fortunately, such servers are
+ quite rare.
+
+`--no-check-certificate'
+ Don't check the server certificate against the available
+ certificate authorities. Also don't require the URL host name to
+ match the common name presented by the certificate.
+
+ As of Wget 1.10, the default is to verify the server's certificate
+ against the recognized certificate authorities, breaking the SSL
+ handshake and aborting the download if the verification fails.
+ Although this provides more secure downloads, it does break
+ interoperability with some sites that worked with previous Wget
+ versions, particularly those using self-signed, expired, or
+ otherwise invalid certificates. This option forces an "insecure"
+ mode of operation that turns the certificate verification errors
+ into warnings and allows you to proceed.
+
+ If you encounter "certificate verification" errors or ones saying
+ that "common name doesn't match requested host name", you can use
+ this option to bypass the verification and proceed with the
+ download. _Only use this option if you are otherwise convinced of
+ the site's authenticity, or if you really don't care about the
+ validity of its certificate._ It is almost always a bad idea not
+ to check the certificates when transmitting confidential or
+ important data.
+
+`--certificate=FILE'
+ Use the client certificate stored in FILE. This is needed for
+ servers that are configured to require certificates from the
+ clients that connect to them. Normally a certificate is not
+ required and this switch is optional.
+
+`--certificate-type=TYPE'
+ Specify the type of the client certificate. Legal values are
+ `PEM' (assumed by default) and `DER', also known as `ASN1'.
+
+`--private-key=FILE'
+ Read the private key from FILE. This allows you to provide the
+ private key in a file separate from the certificate.
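+
+     The two options are typically given together (file names
+     illustrative):
+
+          wget --certificate=client.pem --private-key=client.key \
+               https://secure.example.com/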
+
+`--private-key-type=TYPE'
+ Specify the type of the private key. Accepted values are `PEM'
+ (the default) and `DER'.
+
+`--ca-certificate=FILE'
+ Use FILE as the file with the bundle of certificate authorities
+ ("CA") to verify the peers. The certificates must be in PEM
+ format.
+
+ Without this option Wget looks for CA certificates at the
+ system-specified locations, chosen at OpenSSL installation time.
+
+`--ca-directory=DIRECTORY'
+     Specifies a directory containing CA certificates in PEM format.  Each
+ file contains one CA certificate, and the file name is based on a
+ hash value derived from the certificate. This is achieved by
+ processing a certificate directory with the `c_rehash' utility
+ supplied with OpenSSL. Using `--ca-directory' is more efficient
+ than `--ca-certificate' when many certificates are installed
+ because it allows Wget to fetch certificates on demand.
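+
+     For example, assuming the CA certificates live in an illustrative
+     directory `/etc/ssl/my-cas', it could be prepared with `c_rehash'
+     and then used like this:
+
+          c_rehash /etc/ssl/my-cas
+          wget --ca-directory=/etc/ssl/my-cas https://www.example.com/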
+
+ Without this option Wget looks for CA certificates at the
+ system-specified locations, chosen at OpenSSL installation time.
+
+`--random-file=FILE'
+ Use FILE as the source of random data for seeding the
+ pseudo-random number generator on systems without `/dev/random'.
+
+ On such systems the SSL library needs an external source of
+ randomness to initialize. Randomness may be provided by EGD (see
+ `--egd-file' below) or read from an external source specified by
+ the user. If this option is not specified, Wget looks for random
+ data in `$RANDFILE' or, if that is unset, in `$HOME/.rnd'. If
+ none of those are available, it is likely that SSL encryption will
+ not be usable.
+
+ If you're getting the "Could not seed OpenSSL PRNG; disabling SSL."
+ error, you should provide random data using some of the methods
+ described above.
+
+`--egd-file=FILE'
+ Use FILE as the EGD socket. EGD stands for "Entropy Gathering
+ Daemon", a user-space program that collects data from various
+ unpredictable system sources and makes it available to other
+ programs that might need it. Encryption software, such as the SSL
+ library, needs sources of non-repeating randomness to seed the
+ random number generator used to produce cryptographically strong
+ keys.
+
+ OpenSSL allows the user to specify his own source of entropy using
+ the `RAND_FILE' environment variable. If this variable is unset,
+ or if the specified file does not produce enough randomness,
+ OpenSSL will read random data from EGD socket specified using this
+ option.
+
+ If this option is not specified (and the equivalent startup
+ command is not used), EGD is never contacted. EGD is not needed
+ on modern Unix systems that support `/dev/random'.
+
+
+File: wget.info, Node: FTP Options, Next: Recursive Retrieval Options, Prev: HTTPS (SSL/TLS) Options, Up: Invoking
+
+2.9 FTP Options
+===============
+
+`--ftp-user=USER'
+`--ftp-password=PASSWORD'
+ Specify the username USER and password PASSWORD on an FTP server.
+ Without this, or the corresponding startup option, the password
+ defaults to `-wget@', normally used for anonymous FTP.
+
+ Another way to specify username and password is in the URL itself
+ (*note URL Format::). Either method reveals your password to
+ anyone who bothers to run `ps'. To prevent the passwords from
+ being seen, store them in `.wgetrc' or `.netrc', and make sure to
+ protect those files from other users with `chmod'. If the
+ passwords are really important, do not leave them lying in those
+ files either--edit the files and delete them after Wget has
+ started the download.
+
+`--no-remove-listing'
+ Don't remove the temporary `.listing' files generated by FTP
+ retrievals. Normally, these files contain the raw directory
+ listings received from FTP servers. Not removing them can be
+ useful for debugging purposes, or when you want to be able to
+ easily check on the contents of remote server directories (e.g. to
+ verify that a mirror you're running is complete).
+
+ Note that even though Wget writes to a known filename for this
+ file, this is not a security hole in the scenario of a user making
+ `.listing' a symbolic link to `/etc/passwd' or something and
+ asking `root' to run Wget in his or her directory. Depending on
+ the options used, either Wget will refuse to write to `.listing',
+ making the globbing/recursion/time-stamping operation fail, or the
+ symbolic link will be deleted and replaced with the actual
+ `.listing' file, or the listing will be written to a
+ `.listing.NUMBER' file.
+
+     Even though this situation isn't a problem, `root' should
+ never run Wget in a non-trusted user's directory. A user could do
+ something as simple as linking `index.html' to `/etc/passwd' and
+ asking `root' to run Wget with `-N' or `-r' so the file will be
+ overwritten.
+
+`--no-glob'
+ Turn off FTP globbing. Globbing refers to the use of shell-like
+ special characters ("wildcards"), like `*', `?', `[' and `]' to
+ retrieve more than one file from the same directory at once, like:
+
+ wget ftp://gnjilux.srk.fer.hr/*.msg
+
+ By default, globbing will be turned on if the URL contains a
+ globbing character. This option may be used to turn globbing on
+ or off permanently.
+
+ You may have to quote the URL to protect it from being expanded by
+ your shell. Globbing makes Wget look for a directory listing,
+ which is system-specific. This is why it currently works only
+ with Unix FTP servers (and the ones emulating Unix `ls' output).
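+
+     For instance, quoting the earlier example keeps the shell from
+     expanding the wildcard before Wget sees it:
+
+          wget "ftp://gnjilux.srk.fer.hr/*.msg"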
+
+`--no-passive-ftp'
+ Disable the use of the "passive" FTP transfer mode. Passive FTP
+ mandates that the client connect to the server to establish the
+ data connection rather than the other way around.
+
+ If the machine is connected to the Internet directly, both passive
+ and active FTP should work equally well. Behind most firewall and
+ NAT configurations passive FTP has a better chance of working.
+ However, in some rare firewall configurations, active FTP actually
+ works when passive FTP doesn't. If you suspect this to be the
+ case, use this option, or set `passive_ftp=off' in your init file.
+
+`--retr-symlinks'
+ Usually, when retrieving FTP directories recursively and a symbolic
+ link is encountered, the linked-to file is not downloaded.
+ Instead, a matching symbolic link is created on the local
+ filesystem. The pointed-to file will not be downloaded unless
+ this recursive retrieval would have encountered it separately and
+ downloaded it anyway.
+
+ When `--retr-symlinks' is specified, however, symbolic links are
+ traversed and the pointed-to files are retrieved. At this time,
+ this option does not cause Wget to traverse symlinks to
+ directories and recurse through them, but in the future it should
+ be enhanced to do this.
+
+ Note that when retrieving a file (not a directory) because it was
+ specified on the command-line, rather than because it was recursed
+ to, this option has no effect. Symbolic links are always
+ traversed in this case.
+
+
+File: wget.info, Node: Recursive Retrieval Options, Next: Recursive Accept/Reject Options, Prev: FTP Options, Up: Invoking
+
+2.10 Recursive Retrieval Options
+================================
+
+`-r'
+`--recursive'
+ Turn on recursive retrieving. *Note Recursive Download::, for more
+ details. The default maximum depth is 5.
+
+`-l DEPTH'
+`--level=DEPTH'
+ Specify recursion maximum depth level DEPTH (*note Recursive
+ Download::).
+
+`--delete-after'
+ This option tells Wget to delete every single file it downloads,
+ _after_ having done so. It is useful for pre-fetching popular
+ pages through a proxy, e.g.:
+
+ wget -r -nd --delete-after http://whatever.com/~popular/page/
+
+ The `-r' option is to retrieve recursively, and `-nd' to not
+ create directories.
+
+ Note that `--delete-after' deletes files on the local machine. It
+ does not issue the `DELE' command to remote FTP sites, for
+ instance. Also note that when `--delete-after' is specified,
+ `--convert-links' is ignored, so `.orig' files are simply not
+ created in the first place.
+
+`-k'
+`--convert-links'
+ After the download is complete, convert the links in the document
+ to make them suitable for local viewing. This affects not only
+ the visible hyperlinks, but any part of the document that links to
+ external content, such as embedded images, links to style sheets,
+ hyperlinks to non-HTML content, etc.
+
+     Each link will be changed in one of two ways:
+
+ * The links to files that have been downloaded by Wget will be
+ changed to refer to the file they point to as a relative link.
+
+ Example: if the downloaded file `/foo/doc.html' links to
+ `/bar/img.gif', also downloaded, then the link in `doc.html'
+ will be modified to point to `../bar/img.gif'. This kind of
+ transformation works reliably for arbitrary combinations of
+ directories.
+
+ * The links to files that have not been downloaded by Wget will
+ be changed to include host name and absolute path of the
+ location they point to.
+
+ Example: if the downloaded file `/foo/doc.html' links to
+ `/bar/img.gif' (or to `../bar/img.gif'), then the link in
+ `doc.html' will be modified to point to
+ `http://HOSTNAME/bar/img.gif'.
+
+ Because of this, local browsing works reliably: if a linked file
+ was downloaded, the link will refer to its local name; if it was
+ not downloaded, the link will refer to its full Internet address
+ rather than presenting a broken link. The fact that the former
+ links are converted to relative links ensures that you can move
+ the downloaded hierarchy to another directory.
+
+ Note that only at the end of the download can Wget know which
+ links have been downloaded. Because of that, the work done by
+ `-k' will be performed at the end of all the downloads.
+
+`-K'
+`--backup-converted'
+ When converting a file, back up the original version with a `.orig'
+ suffix. Affects the behavior of `-N' (*note HTTP Time-Stamping
+ Internals::).
+
+`-m'
+`--mirror'
+ Turn on options suitable for mirroring. This option turns on
+ recursion and time-stamping, sets infinite recursion depth and
+ keeps FTP directory listings. It is currently equivalent to `-r
+ -N -l inf --no-remove-listing'.
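+
+     That is, the following two commands should be equivalent (URL
+     illustrative):
+
+          wget -m http://www.example.com/
+          wget -r -N -l inf --no-remove-listing http://www.example.com/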
+
+`-p'
+`--page-requisites'
+ This option causes Wget to download all the files that are
+ necessary to properly display a given HTML page. This includes
+ such things as inlined images, sounds, and referenced stylesheets.
+
+ Ordinarily, when downloading a single HTML page, any requisite
+ documents that may be needed to display it properly are not
+ downloaded. Using `-r' together with `-l' can help, but since
+ Wget does not ordinarily distinguish between external and inlined
+ documents, one is generally left with "leaf documents" that are
+ missing their requisites.
+
+ For instance, say document `1.html' contains an `<IMG>' tag
+ referencing `1.gif' and an `<A>' tag pointing to external document
+ `2.html'. Say that `2.html' is similar but that its image is
+ `2.gif' and it links to `3.html'. Say this continues up to some
+ arbitrarily high number.
+
+ If one executes the command:
+
+ wget -r -l 2 http://SITE/1.html
+
+ then `1.html', `1.gif', `2.html', `2.gif', and `3.html' will be
+ downloaded. As you can see, `3.html' is without its requisite
+ `3.gif' because Wget is simply counting the number of hops (up to
+ 2) away from `1.html' in order to determine where to stop the
+ recursion. However, with this command:
+
+ wget -r -l 2 -p http://SITE/1.html
+
+ all the above files _and_ `3.html''s requisite `3.gif' will be
+ downloaded. Similarly,
+
+ wget -r -l 1 -p http://SITE/1.html
+
+ will cause `1.html', `1.gif', `2.html', and `2.gif' to be
+ downloaded. One might think that:
+
+ wget -r -l 0 -p http://SITE/1.html
+
+ would download just `1.html' and `1.gif', but unfortunately this
+ is not the case, because `-l 0' is equivalent to `-l inf'--that
+ is, infinite recursion. To download a single HTML page (or a
+ handful of them, all specified on the command-line or in a `-i'
+ URL input file) and its (or their) requisites, simply leave off
+ `-r' and `-l':
+
+ wget -p http://SITE/1.html
+
+ Note that Wget will behave as if `-r' had been specified, but only
+ that single page and its requisites will be downloaded. Links
+ from that page to external documents will not be followed.
+ Actually, to download a single page and all its requisites (even
+ if they exist on separate websites), and make sure the lot
+ displays properly locally, this author likes to use a few options
+ in addition to `-p':
+
+ wget -E -H -k -K -p http://SITE/DOCUMENT
+
+ To finish off this topic, it's worth knowing that Wget's idea of an
+ external document link is any URL specified in an `<A>' tag, an
+ `<AREA>' tag, or a `<LINK>' tag other than `<LINK
+ REL="stylesheet">'.
+
+`--strict-comments'
+ Turn on strict parsing of HTML comments. The default is to
+ terminate comments at the first occurrence of `-->'.
+
+ According to specifications, HTML comments are expressed as SGML
+ "declarations". Declaration is special markup that begins with
+ `<!' and ends with `>', such as `<!DOCTYPE ...>', that may contain
+ comments between a pair of `--' delimiters. HTML comments are
+ "empty declarations", SGML declarations without any non-comment
+ text. Therefore, `<!--foo-->' is a valid comment, and so is
+ `<!--one-- --two-->', but `<!--1--2-->' is not.
+
+ On the other hand, most HTML writers don't perceive comments as
+ anything other than text delimited with `<!--' and `-->', which is
+ not quite the same. For example, something like `<!------------>'
+ works as a valid comment as long as the number of dashes is a
+ multiple of four (!). If not, the comment technically lasts until
+ the next `--', which may be at the other end of the document.
+ Because of this, many popular browsers completely ignore the
+ specification and implement what users have come to expect:
+ comments delimited with `<!--' and `-->'.
+
+ Until version 1.9, Wget interpreted comments strictly, which
+ resulted in missing links in many web pages that displayed fine in
+ browsers, but had the misfortune of containing non-compliant
+ comments. Beginning with version 1.9, Wget has joined the ranks
+ of clients that implement "naive" comments, terminating each
+ comment at the first occurrence of `-->'.
+
+ If, for whatever reason, you want strict comment parsing, use this
+ option to turn it on.
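+
+ For example, to recursively retrieve a site with strict comment
+ parsing turned on (`SITE' being a placeholder host):
+
+ wget --strict-comments -r http://SITE/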
+
+
+File: wget.info, Node: Recursive Accept/Reject Options, Next: Exit Status, Prev: Recursive Retrieval Options, Up: Invoking
+
+2.11 Recursive Accept/Reject Options
+====================================
+
+`-A ACCLIST --accept ACCLIST'
+`-R REJLIST --reject REJLIST'
+ Specify comma-separated lists of file name suffixes or patterns to
+ accept or reject (*note Types of Files::). Note that if any of the
+ wildcard characters, `*', `?', `[' or `]', appear in an element of
+ ACCLIST or REJLIST, it will be treated as a pattern, rather than a
+ suffix.
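+
+ For example, in the list below `thumb-*' is treated as a pattern
+ because it contains `*', while `png' is treated as a plain suffix
+ (`SITE' is a placeholder; the quotes keep the shell from expanding
+ the wildcard):
+
+ wget -r -A "thumb-*,png" http://SITE/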
+
+`-D DOMAIN-LIST'
+`--domains=DOMAIN-LIST'
+ Set domains to be followed. DOMAIN-LIST is a comma-separated list
+ of domains. Note that it does _not_ turn on `-H'.
+
+`--exclude-domains DOMAIN-LIST'
+ Specify the domains that are _not_ to be followed (*note Spanning
+ Hosts::).
+
+`--follow-ftp'
+ Follow FTP links from HTML documents. Without this option, Wget
+ will ignore all the FTP links.
+
+`--follow-tags=LIST'
+ Wget has an internal table of HTML tag / attribute pairs that it
+ considers when looking for linked documents during a recursive
+ retrieval. If a user wants only a subset of those tags to be
+ considered, however, he or she should specify such tags in a
+ comma-separated LIST with this option.
+
+`--ignore-tags=LIST'
+ This is the opposite of the `--follow-tags' option. To skip
+ certain HTML tags when recursively looking for documents to
+ download, specify them in a comma-separated LIST.
+
+ In the past, this option was the best bet for downloading a single
+ page and its requisites, using a command-line like:
+
+ wget --ignore-tags=a,area -H -k -K -r http://SITE/DOCUMENT
+
+ However, the author of this option came across a page with tags
+ like `<LINK REL="home" HREF="/">' and came to the realization that
+ specifying tags to ignore was not enough. One can't just tell
+ Wget to ignore `<LINK>', because then stylesheets will not be
+ downloaded. Now the best bet for downloading a single page and
+ its requisites is the dedicated `--page-requisites' option.
+
+`--ignore-case'
+ Ignore case when matching files and directories. This influences
+ the behavior of -R, -A, -I, and -X options, as well as globbing
+ implemented when downloading from FTP sites. For example, with
+ this option, `-A *.txt' will match `file1.txt', but also
+ `file2.TXT', `file3.TxT', and so on.
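+
+ A sketch of such an invocation (`SITE' is a placeholder):
+
+ wget -r --ignore-case -A "*.txt" http://SITE/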
+
+`-H'
+`--span-hosts'
+ Enable spanning across hosts when doing recursive retrieving
+ (*note Spanning Hosts::).
+
+`-L'
+`--relative'
+ Follow relative links only. Useful for retrieving a specific home
+ page without any distractions, not even those from the same hosts
+ (*note Relative Links::).
+
+`-I LIST'
+`--include-directories=LIST'
+ Specify a comma-separated list of directories you wish to follow
+ when downloading (*note Directory-Based Limits::). Elements of
+ LIST may contain wildcards.
+
+`-X LIST'
+`--exclude-directories=LIST'
+ Specify a comma-separated list of directories you wish to exclude
+ from download (*note Directory-Based Limits::). Elements of LIST
+ may contain wildcards.
+
+`-np'
+`--no-parent'
+ Do not ever ascend to the parent directory when retrieving
+ recursively. This is a useful option, since it guarantees that
+ only the files _below_ a certain hierarchy will be downloaded.
+ *Note Directory-Based Limits::, for more details.
+
+
+File: wget.info, Node: Exit Status, Prev: Recursive Accept/Reject Options, Up: Invoking
+
+2.12 Exit Status
+================
+
+Wget may return one of several error codes if it encounters problems.
+
+0
+ No problems occurred.
+
+1
+ Generic error code.
+
+2
+ Parse error--for instance, when parsing command-line options, the
+ `.wgetrc' or `.netrc'...
+
+3
+ File I/O error.
+
+4
+ Network failure.
+
+5
+ SSL verification failure.
+
+6
+ Username/password authentication failure.
+
+7
+ Protocol errors.
+
+8
+ Server issued an error response.
+
+ With the exceptions of 0 and 1, the lower-numbered exit codes take
+precedence over higher-numbered ones, when multiple types of errors are
+encountered.
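+
+ These codes make it possible to branch on the outcome of a download
+in a shell script. A minimal sketch (the URL is a placeholder):
+
+ wget -q http://SITE/file.txt
+ case $? in
+   0) echo "download succeeded" ;;
+   4) echo "network failure" ;;
+   8) echo "server issued an error response" ;;
+   *) echo "some other error" ;;
+ esac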
+
+ In versions of Wget prior to 1.12, Wget's exit status tended to be
+unhelpful and inconsistent. Recursive downloads would virtually always
+return 0 (success), regardless of any issues encountered, and
+non-recursive fetches only returned the status corresponding to the
+most recently-attempted download.
+
+
+File: wget.info, Node: Recursive Download, Next: Following Links, Prev: Invoking, Up: Top
+
+3 Recursive Download
+********************
+
+GNU Wget is capable of traversing parts of the Web (or a single HTTP or
+FTP server), following links and directory structure. We refer to this
+as to "recursive retrieval", or "recursion".
+
+ With HTTP URLs, Wget retrieves and parses the HTML or CSS from the
+given URL, retrieving the files the document refers to, through markup
+like `href' or `src', or CSS URI values specified using the `url()'
+functional notation. If the freshly downloaded file is also of type
+`text/html', `application/xhtml+xml', or `text/css', it will be parsed
+and followed further.
+
+ Recursive retrieval of HTTP and HTML/CSS content is "breadth-first".
+This means that Wget first downloads the requested document, then the
+documents linked from that document, then the documents linked by them,
+and so on. In other words, Wget first downloads the documents at depth
+1, then those at depth 2, and so on until the specified maximum depth.
+
+ The maximum "depth" to which the retrieval may descend is specified
+with the `-l' option. The default maximum depth is five layers.
+
+ When retrieving an FTP URL recursively, Wget will retrieve all the
+data from the given directory tree (including the subdirectories up to
+the specified depth) on the remote server, creating its mirror image
+locally. FTP retrieval is also limited by the `depth' parameter.
+Unlike HTTP recursion, FTP recursion is performed depth-first.
+
+ By default, Wget will create a local directory tree, corresponding to
+the one found on the remote server.
+
+ Recursive retrieval has a number of applications, the most
+important of which is mirroring. It is also useful for WWW
+presentations, and for any other situation where a slow network
+connection can be bypassed by storing the files locally.
+
+ You should be warned that recursive downloads can overload the remote
+servers. Because of that, many administrators frown upon them and may
+ban access from your site if they detect very fast downloads of big
+amounts of content. When downloading from Internet servers, consider
+using the `-w' option to introduce a delay between accesses to the
+server. The download will take a while longer, but the server
+administrator will not be alarmed by your rudeness.
+
+ Of course, recursive download may cause problems on your machine. If
+left to run unchecked, it can easily fill up the disk. If downloading
+from a local network, it can also consume bandwidth on the system, as
+well as memory and CPU.
+
+ Try to specify the criteria that match the kind of download you are
+trying to achieve. If you want to download only one page, use
+`--page-requisites' without any additional recursion. If you want to
+download things under one directory, use `-np' to avoid downloading
+things from other directories. If you want to download all the files
+from one directory, use `-l 1' to make sure the recursion depth never
+exceeds one. *Note Following Links::, for more information about this.
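+
+ For instance, a polite, depth-limited sketch combining these criteria
+(`SITE' is a placeholder):
+
+ wget -r -l 1 -np -w 2 http://SITE/dir/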
+
+ Recursive retrieval should be used with care. Don't say you were not
+warned.
+
+
+File: wget.info, Node: Following Links, Next: Time-Stamping, Prev: Recursive Download, Up: Top
+
+4 Following Links
+*****************
+
+When retrieving recursively, one does not wish to retrieve loads of
+unnecessary data. Most of the time users know exactly what
+they want to download, and want Wget to follow only specific links.
+
+ For example, if you wish to download the music archive from
+`fly.srk.fer.hr', you will not want to download all the home pages that
+happen to be referenced by an obscure part of the archive.
+
+ Wget possesses several mechanisms that allow you to fine-tune which
+links it will follow.
+
+* Menu:
+
+* Spanning Hosts:: (Un)limiting retrieval based on host name.
+* Types of Files:: Getting only certain files.
+* Directory-Based Limits:: Getting only certain directories.
+* Relative Links:: Follow relative links only.
+* FTP Links:: Following FTP links.
+
+
+File: wget.info, Node: Spanning Hosts, Next: Types of Files, Prev: Following Links, Up: Following Links
+
+4.1 Spanning Hosts
+==================
+
+Wget's recursive retrieval normally refuses to visit hosts different
+from the one you specified on the command line. This is a reasonable
+default; without it, every retrieval would have the potential to turn
+your Wget into a small version of Google.
+
+ However, visiting different hosts, or "host spanning," is sometimes
+a useful option. Maybe the images are served from a different server.
+Maybe you're mirroring a site that consists of pages interlinked between
+three servers. Maybe the server has two equivalent names, and the HTML
+pages refer to both interchangeably.
+
+Span to any host--`-H'
+ The `-H' option turns on host spanning, thus allowing Wget's
+ recursive run to visit any host referenced by a link. Unless
+ sufficient recursion-limiting criteria are applied (such as a
+ maximum depth), these foreign hosts will typically link to yet
+ more hosts, and so on until Wget ends up sucking up much more
+ data than you intended.
+
+Limit spanning to certain domains--`-D'
+ The `-D' option allows you to specify the domains that will be
+ followed, thus limiting the recursion only to the hosts that
+ belong to these domains. Obviously, this makes sense only in
+ conjunction with `-H'. A typical example would be downloading the
+ contents of `www.server.com', but allowing downloads from
+ `images.server.com', etc.:
+
+ wget -rH -Dserver.com http://www.server.com/
+
+ You can specify more than one address by separating them with a
+ comma, e.g. `-Ddomain1.com,domain2.com'.
+
+Keep download off certain domains--`--exclude-domains'
+ If there are domains you want to exclude specifically, you can do
+ it with `--exclude-domains', which accepts the same type of
+ arguments as `-D', but will _exclude_ all the listed domains. For
+ example, if you want to download all the hosts from `foo.edu'
+ domain, with the exception of `sunsite.foo.edu', you can do it like
+ this:
+
+ wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
+ http://www.foo.edu/
+
+
+
+File: wget.info, Node: Types of Files, Next: Directory-Based Limits, Prev: Spanning Hosts, Up: Following Links
+
+4.2 Types of Files
+==================
+
+When downloading material from the web, you will often want to restrict
+the retrieval to only certain file types. For example, if you are
+interested in downloading GIFs, you will not be overjoyed to get loads
+of PostScript documents, and vice versa.
+
+ Wget offers two options to deal with this problem. Each option
+description lists a short name, a long name, and the equivalent command
+in `.wgetrc'.
+
+`-A ACCLIST'
+`--accept ACCLIST'
+`accept = ACCLIST'
+ The argument to the `--accept' option is a list of file suffixes
+ or patterns that Wget will download during recursive retrieval. A
+ suffix is the ending part of a file name, and consists of "normal"
+ letters, e.g. `gif' or `.jpg'. A matching pattern contains
+ shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.
+
+ So, specifying `wget -A gif,jpg' will make Wget download only the
+ files ending with `gif' or `jpg', i.e. GIFs and JPEGs. On the
+ other hand, `wget -A "zelazny*196[0-9]*"' will download only files
+ beginning with `zelazny' and containing numbers from 1960 to 1969
+ anywhere within. Look up the manual of your shell for a
+ description of how pattern matching works.
+
+ Of course, any number of suffixes and patterns can be combined
+ into a comma-separated list, and given as an argument to `-A'.
+
+`-R REJLIST'
+`--reject REJLIST'
+`reject = REJLIST'
+ The `--reject' option works the same way as `--accept', only its
+ logic is the reverse; Wget will download all files _except_ the
+ ones matching the suffixes (or patterns) in the list.
+
+ So, if you want to download a whole page except for the cumbersome
+ MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.
+ Analogously, to download all files except the ones beginning with
+ `bjork', use `wget -R "bjork*"'. The quotes are to prevent
+ expansion by the shell.
+
+The `-A' and `-R' options may be combined to achieve even better
+fine-tuning of which files to retrieve. E.g. `wget -A "*zelazny*" -R
+.ps' will download all the files having `zelazny' as a part of their
+name, but _not_ the PostScript files.
+
+ Note that these two options do not affect the downloading of HTML
+files (as determined by a `.htm' or `.html' filename suffix). This
+behavior may not be desirable for all users, and may be changed for
+future versions of Wget.
+
+ Note, too, that query strings (strings at the end of a URL beginning
+with a question mark (`?')) are not included as part of the filename for
+accept/reject rules, even though these will actually contribute to the
+name chosen for the local file. It is expected that a future version of
+Wget will provide an option to allow matching against query strings.
+
+ Finally, it's worth noting that the accept/reject lists are matched
+_twice_ against downloaded files: once against the URL's filename
+portion, to determine if the file should be downloaded in the first
+place; then, after it has been accepted and successfully downloaded,
+the local file's name is also checked against the accept/reject lists
+to see if it should be removed. The rationale was that, since `.htm'
+and `.html' files are always downloaded regardless of accept/reject
+rules, they should be removed _after_ being downloaded and scanned for
+links, if they did match the accept/reject lists. However, this can
+lead to unexpected results, since the local filenames can differ from
+the original URL filenames in the following ways, all of which can
+change whether an accept/reject rule matches:
+
+ * If the local file already exists and `--no-directories' was
+ specified, a numeric suffix will be appended to the original name.
+
+ * If `--adjust-extension' was specified, the local filename might
+ have `.html' appended to it. If Wget is invoked with `-E -A.php',
+ a filename such as `index.php' will match and be accepted, but upon
+ download will be named `index.php.html', which no longer matches,
+ and so the file will be deleted.
+
+ * Query strings do not contribute to URL matching, but are included
+ in local filenames, and so _do_ contribute to filename matching.
+
+This behavior, too, is considered less-than-desirable, and may change
+in a future version of Wget.
+
+
+File: wget.info, Node: Directory-Based Limits, Next: Relative Links, Prev: Types of Files, Up: Following Links
+
+4.3 Directory-Based Limits
+==========================
+
+Regardless of other link-following facilities, it is often useful to
+restrict which files to retrieve based on the directories
+those files are placed in. There can be many reasons for this--the
+home pages may be organized in a reasonable directory structure; or some
+directories may contain useless information, e.g. `/cgi-bin' or `/dev'
+directories.
+
+ Wget offers three different options to deal with this requirement.
+Each option description lists a short name, a long name, and the
+equivalent command in `.wgetrc'.
+
+`-I LIST'
+`--include LIST'
+`include_directories = LIST'
+ The `-I' option accepts a comma-separated list of directories included
+ in the retrieval. Any other directories will simply be ignored.
+ The directories are absolute paths.
+
+ So, if you wish to download from `http://host/people/bozo/'
+ following only links to bozo's colleagues in the `/people'
+ directory and the bogus scripts in `/cgi-bin', you can specify:
+
+ wget -I /people,/cgi-bin http://host/people/bozo/
+
+`-X LIST'
+`--exclude LIST'
+`exclude_directories = LIST'
+ The `-X' option is exactly the reverse of `-I'--this is a list of
+ directories _excluded_ from the download. E.g. if you do not want
+ Wget to download things from `/cgi-bin' directory, specify `-X
+ /cgi-bin' on the command line.
+
+ The same as with `-A'/`-R', these two options can be combined to
+ get a better fine-tuning of downloading subdirectories. E.g. if
+ you want to load all the files from `/pub' hierarchy except for
+ `/pub/worthless', specify `-I/pub -X/pub/worthless'.
+
+`-np'
+`--no-parent'
+`no_parent = on'
+ The simplest, and often very useful, way of limiting directories
+ is disallowing retrieval of the links that refer to the hierarchy
+ "above" the beginning directory, i.e. disallowing ascent to the
+ parent directory/directories.
+
+ The `--no-parent' option (short `-np') is useful in this case.
+ Using it guarantees that you will never leave the existing
+ hierarchy. Supposing you issue Wget with:
+
+ wget -r --no-parent http://somehost/~luzer/my-archive/
+
+ You may rest assured that none of the references to
+ `/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
+ followed. Only the archive you are interested in will be
+ downloaded. Essentially, `--no-parent' is similar to
+ `-I/~luzer/my-archive', only it handles redirections in a more
+ intelligent fashion.
+
+ *Note* that, for HTTP (and HTTPS), the trailing slash is very
+ important to `--no-parent'. HTTP has no concept of a
+ "directory"--Wget relies on you to indicate what's a directory and
+ what isn't. In `http://foo/bar/', Wget will consider `bar' to be a
+ directory, while in `http://foo/bar' (no trailing slash), `bar'
+ will be considered a filename (so `--no-parent' would be
+ meaningless, as its parent is `/').
+
+
+File: wget.info, Node: Relative Links, Next: FTP Links, Prev: Directory-Based Limits, Up: Following Links
+
+4.4 Relative Links
+==================
+
+When `-L' is turned on, only the relative links are ever followed.
+Relative links are here defined as those that do not refer to the web
+server root. For example, these links are relative:
+
+ <a href="foo.gif">
+ <a href="foo/bar.gif">
+ <a href="../foo/bar.gif">
+
+ These links are not relative:
+
+ <a href="/foo.gif">
+ <a href="/foo/bar.gif">
+ <a href="http://www.server.com/foo/bar.gif">
+
+ Using this option guarantees that recursive retrieval will not span
+hosts, even without `-H'. In simple cases it also allows downloads to
+"just work" without having to convert links.
+
+ This option is probably not very useful and might be removed in a
+future release.
+
+
+File: wget.info, Node: FTP Links, Prev: Relative Links, Up: Following Links
+
+4.5 Following FTP Links
+=======================
+
+The rules for FTP are somewhat specific, as it is necessary for them to
+be. FTP links in HTML documents are often included for purposes of
+reference, and it is often inconvenient to download them by default.
+
+ To have FTP links followed from HTML documents, you need to specify
+the `--follow-ftp' option. Having done that, FTP links will span hosts
+regardless of `-H' setting. This is logical, as FTP links rarely point
+to the same host where the HTTP server resides. For similar reasons,
+the `-L' option has no effect on such downloads. On the other hand,
+domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
+normally.
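+
+ For instance, to recurse through an HTML tree and also fetch the FTP
+links it references (a sketch; `SITE' is a placeholder):
+
+ wget -r --follow-ftp http://SITE/index.html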
+
+ Also note that followed links to FTP directories will not be
+retrieved recursively further.
+
+
+File: wget.info, Node: Time-Stamping, Next: Startup File, Prev: Following Links, Up: Top
+
+5 Time-Stamping
+***************
+
+One of the most important aspects of mirroring information from the
+Internet is updating your archives.
+
+ Downloading the whole archive again and again, just to replace a few
+changed files, is expensive in terms of wasted bandwidth, money, and
+the time needed to do the update. This is why all the mirroring tools
+offer the option of incremental updating.
+
+ Such an updating mechanism means that the remote server is scanned in
+search of "new" files. Only those new files will be downloaded in the
+place of the old ones.
+
+ A file is considered new if one of these two conditions is met:
+
+ 1. A file of that name does not already exist locally.
+
+ 2. A file of that name does exist, but the remote file was modified
+ more recently than the local file.
+
+ To implement this, the program needs to be aware of the time of last
+modification of both local and remote files. We call this information
+the "time-stamp" of a file.
+
+ Time-stamping in GNU Wget is turned on using the `--timestamping'
+(`-N') option, or through the `timestamping = on' directive in
+`.wgetrc'.
+With this option, for each file it intends to download, Wget will check
+whether a local file of the same name exists. If it does, and the
+remote file is not newer, Wget will not download it.
+
+ If the local file does not exist, or the sizes of the files do not
+match, Wget will download the remote file no matter what the time-stamps
+say.
+
+* Menu:
+
+* Time-Stamping Usage::
+* HTTP Time-Stamping Internals::
+* FTP Time-Stamping Internals::
+
+
+File: wget.info, Node: Time-Stamping Usage, Next: HTTP Time-Stamping Internals, Prev: Time-Stamping, Up: Time-Stamping
+
+5.1 Time-Stamping Usage
+=======================
+
+The usage of time-stamping is simple. Say you would like to download a
+file so that it keeps its date of modification.
+
+ wget -S http://www.gnu.ai.mit.edu/
+
+ A simple `ls -l' shows that the time stamp on the local file equals
+that of the `Last-Modified' header, as returned by the server. As
+you can see, the time-stamping info is preserved locally, even without
+`-N' (at least for HTTP).
+
+ Several days later, you would like Wget to check if the remote file
+has changed, and download it if it has.
+
+ wget -N http://www.gnu.ai.mit.edu/
+
+ Wget will ask the server for the last-modified date. If the local
+file has the same timestamp as the server, or a newer one, the remote
+file will not be re-fetched. However, if the remote file is more
+recent, Wget will proceed to fetch it.
+
+ The same goes for FTP. For example:
+
+ wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
+
+ (The quotes around that URL are to prevent the shell from trying to
+interpret the `*'.)
+
+ After download, a local directory listing will show that the
+timestamps match those on the remote server. Reissuing the command
+with `-N' will make Wget re-fetch _only_ the files that have been
+modified since the last download.
+
+ If you wished to mirror the GNU archive every week, you would use a
+command like the following, weekly:
+
+ wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
+
+ Note that time-stamping will only work for files for which the server
+gives a timestamp. For HTTP, this depends on getting a `Last-Modified'
+header. For FTP, this depends on getting a directory listing with
+dates in a format that Wget can parse (*note FTP Time-Stamping
+Internals::).
+
+
+File: wget.info, Node: HTTP Time-Stamping Internals, Next: FTP Time-Stamping Internals, Prev: Time-Stamping Usage, Up: Time-Stamping
+
+5.2 HTTP Time-Stamping Internals
+================================
+
+Time-stamping in HTTP is implemented by checking the `Last-Modified'
+header. If you wish to retrieve the file `foo.html' through HTTP, Wget
+will check whether `foo.html' exists locally. If it doesn't,
+`foo.html' will be retrieved unconditionally.
+
+ If the file does exist locally, Wget will first check its local
+time-stamp (similar to the way `ls -l' checks it), and then send a
+`HEAD' request to the remote server, requesting information about the
+remote file.
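+
+ You can observe this exchange by combining `-N' with `-S', which
+prints the server responses (a sketch; the URL is a placeholder):
+
+ wget -N -S http://SITE/foo.html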
+
+ The `Last-Modified' header is examined to find which file was
+modified more recently (which makes it "newer"). If the remote file is
+newer, it will be downloaded; if it is older, Wget will give up.(1)
+
+ When `--backup-converted' (`-K') is specified in conjunction with
+`-N', server file `X' is compared to local file `X.orig', if extant,
+rather than being compared to local file `X', which will always differ
+if it's been converted by `--convert-links' (`-k').
+
+ Arguably, HTTP time-stamping should be implemented using the
+`If-Modified-Since' request.
+
+ ---------- Footnotes ----------
+
+ (1) As an additional check, Wget will look at the `Content-Length'
+header, and compare the sizes; if they are not the same, the remote
+file will be downloaded no matter what the time-stamp says.
+
+
+File: wget.info, Node: FTP Time-Stamping Internals, Prev: HTTP Time-Stamping Internals, Up: Time-Stamping
+
+5.3 FTP Time-Stamping Internals
+===============================
+
+In theory, FTP time-stamping works much the same as HTTP, only FTP has
+no headers--time-stamps must be ferreted out of directory listings.
+
+ If an FTP download is recursive or uses globbing, Wget will use the
+FTP `LIST' command to get a file listing for the directory containing
+the desired file(s). It will try to analyze the listing, treating it
+like Unix `ls -l' output, extracting the time-stamps. The rest is
+exactly the same as for HTTP. Note that when retrieving individual
+files from an FTP server without using globbing or recursion, listing
+files will not be downloaded (and thus files will not be time-stamped)
+unless `-N' is specified.
+
+ The assumption that every directory listing is a Unix-style listing
+may sound extremely constraining, but in practice it is not, as many
+non-Unix FTP servers use the Unixoid listing format because most (all?)
+of the clients understand it. Bear in mind that RFC959 defines no
+standard way to get a file list, let alone the time-stamps. We can
+only hope that a future standard will define this.
+
+ Another non-standard solution is the use of the `MDTM' command,
+supported by some FTP servers (including the popular
+`wu-ftpd'), which returns the exact time of the specified file. Wget
+may support this command in the future.
+
+
+File: wget.info, Node: Startup File, Next: Examples, Prev: Time-Stamping, Up: Top
+
+6 Startup File
+**************
+
+Once you know how to change default settings of Wget through command
+line arguments, you may wish to make some of those settings permanent.
+You can do that in a convenient way by creating the Wget startup
+file--`.wgetrc'.
+
+ Besides `.wgetrc', the "main" initialization file, it is
+convenient to have a special facility for storing passwords. Thus Wget
+reads and interprets the contents of `$HOME/.netrc', if it finds it.
+You can find the `.netrc' format described in your system manuals.
+
+ Wget reads `.wgetrc' upon startup, recognizing a limited set of
+commands.
+
+* Menu:
+
+* Wgetrc Location:: Location of various wgetrc files.
+* Wgetrc Syntax:: Syntax of wgetrc.
+* Wgetrc Commands:: List of available commands.
+* Sample Wgetrc:: A wgetrc example.
+
+
+File: wget.info, Node: Wgetrc Location, Next: Wgetrc Syntax, Prev: Startup File, Up: Startup File
+
+6.1 Wgetrc Location
+===================
+
+When initializing, Wget will look for a "global" startup file,
+`/usr/local/etc/wgetrc' by default (or some prefix other than
+`/usr/local', if Wget was not installed there) and read commands from
+there, if it exists.
+
+ Then it will look for the user's file. If the environment variable
+`WGETRC' is set, Wget will try to load that file. Failing that, no
+further attempts will be made.
+
+ If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.
+
+ The fact that the user's settings are loaded after the system-wide
+ones means that in case of collision the user's wgetrc _overrides_ the
+system-wide wgetrc (in `/usr/local/etc/wgetrc' by default). Fascist
+admins, away!
+
+
+File: wget.info, Node: Wgetrc Syntax, Next: Wgetrc Commands, Prev: Wgetrc Location, Up: Startup File
+
+6.2 Wgetrc Syntax
+=================
+
+The syntax of a wgetrc command is simple:
+
+ variable = value
+
+ The "variable" will also be called "command". Valid "values" are
+different for different commands.
+
+ The commands are case-insensitive and underscore-insensitive. Thus
+`DIr__PrefiX' is the same as `dirprefix'. Empty lines, lines beginning
+with `#' and lines containing white-space only are discarded.
+
+ Commands that expect a comma-separated list will clear the list on an
+empty command. So, if you wish to reset the rejection list specified in
+global `wgetrc', you can do it with:
+
+ reject =
+
+
+File: wget.info, Node: Wgetrc Commands, Next: Sample Wgetrc, Prev: Wgetrc Syntax, Up: Startup File
+
+6.3 Wgetrc Commands
+===================
+
+The complete set of commands is listed below. Legal values are listed
+after the `='. Simple Boolean values can be set or unset using `on'
+and `off' or `1' and `0'.
+
+ Some commands take pseudo-arbitrary values. ADDRESS values can be
+hostnames or dotted-quad IP addresses. N can be any positive integer,
+or `inf' for infinity, where appropriate. STRING values can be any
+non-empty string.
+
+ Most of these commands have direct command-line equivalents. Also,
+any wgetrc command can be specified on the command line using the
+`--execute' switch (*note Basic Startup Options::).
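+
+ For example, any command below can be supplied on the command line
+this way (a sketch):
+
+ wget -e tries=10 -e wait=2 http://SITE/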
+
+accept/reject = STRING
+ Same as `-A'/`-R' (*note Types of Files::).
+
+add_hostdir = on/off
+ Enable/disable host-prefixed file names. `-nH' disables it.
+
+ask_password = on/off
+ Prompt for a password for each connection established. Cannot be
+ specified when `--password' is being used, because they are
+ mutually exclusive. Equivalent to `--ask-password'.
+
+auth_no_challenge = on/off
+ If this option is given, Wget will send Basic HTTP authentication
+ information (plaintext username and password) for all requests. See
+ `--auth-no-challenge'.
+
+background = on/off
+ Enable/disable going to background--the same as `-b' (which
+ enables it).
+
+backup_converted = on/off
+ Enable/disable saving pre-converted files with the suffix
+ `.orig'--the same as `-K' (which enables it).
+
+base = STRING
+ Consider relative URLs in input files (specified via the `input'
+ command or the `--input-file'/`-i' option, together with
+ `force_html' or `--force-html') as being relative to STRING--the
+ same as `--base=STRING'.
+
+bind_address = ADDRESS
+ Bind to ADDRESS, like the `--bind-address=ADDRESS'.
+
+ca_certificate = FILE
+ Set the certificate authority bundle file to FILE. The same as
+ `--ca-certificate=FILE'.
+
+ca_directory = DIRECTORY
+ Set the directory used for certificate authorities. The same as
+ `--ca-directory=DIRECTORY'.
+
+cache = on/off
+ When set to off, disallow server-caching. See the `--no-cache'
+ option.
+
+certificate = FILE
+ Set the client certificate file name to FILE. The same as
+ `--certificate=FILE'.
+
+certificate_type = STRING
+ Specify the type of the client certificate, legal values being
+ `PEM' (the default) and `DER' (aka ASN1). The same as
+ `--certificate-type=STRING'.
+
+check_certificate = on/off
+ If this is set to off, the server certificate is not checked
+ against the specified client authorities. The default is "on".
+ The same as `--check-certificate'.
+
+connect_timeout = N
+ Set the connect timeout--the same as `--connect-timeout'.
+
+content_disposition = on/off
+ Turn on recognition of the (non-standard) `Content-Disposition'
+ HTTP header--if set to `on', the same as `--content-disposition'.
+
+trust_server_names = on/off
+ If set to on, use the last component of a redirection URL for the
+ local file name.
+
+continue = on/off
+ If set to on, force continuation of preexistent partially retrieved
+ files. See `-c' before setting it.
+
+convert_links = on/off
+ Convert non-relative links locally. The same as `-k'.
+
+cookies = on/off
+ When set to off, disallow cookies. See the `--cookies' option.
+
+cut_dirs = N
+ Ignore N remote directory components. Equivalent to
+ `--cut-dirs=N'.
+
+debug = on/off
+ Debug mode, same as `-d'.
+
+default_page = STRING
+ Default page name--the same as `--default-page=STRING'.
+
+delete_after = on/off
+ Delete after download--the same as `--delete-after'.
+
+dir_prefix = STRING
+ Top of directory tree--the same as `-P STRING'.
+
+dirstruct = on/off
+ Turning dirstruct on or off--the same as `-x' or `-nd',
+ respectively.
+
+dns_cache = on/off
+ Turn DNS caching on/off. Since DNS caching is on by default, this
+ option is normally used to turn it off and is equivalent to
+ `--no-dns-cache'.
+
+dns_timeout = N
+ Set the DNS timeout--the same as `--dns-timeout'.
+
+domains = STRING
+ Same as `-D' (*note Spanning Hosts::).
+
+dot_bytes = N
+ Specify the number of bytes "contained" in a dot, as seen
+ throughout the retrieval (1024 by default). You can postfix the
+ value with `k' or `m', representing kilobytes and megabytes,
+ respectively. With dot settings you can tailor the dot retrieval
+ to suit your needs, or you can use the predefined "styles" (*note
+ Download Options::).
+
+dot_spacing = N
+ Specify the number of dots in a single cluster (10 by default).
+
+dots_in_line = N
+ Specify the number of dots that will be printed in each line
+ throughout the retrieval (50 by default).
+
+egd_file = FILE
+ Use FILE as the EGD socket file name. The same as
+ `--egd-file=FILE'.
+
+exclude_directories = STRING
+ Specify a comma-separated list of directories you wish to exclude
+ from download--the same as `-X STRING' (*note Directory-Based
+ Limits::).
+
+exclude_domains = STRING
+ Same as `--exclude-domains=STRING' (*note Spanning Hosts::).
+
+follow_ftp = on/off
+ Follow FTP links from HTML documents--the same as `--follow-ftp'.
+
+follow_tags = STRING
+ Only follow certain HTML tags when doing a recursive retrieval,
+ just like `--follow-tags=STRING'.
+
+force_html = on/off
+ If set to on, force the input filename to be regarded as an HTML
+ document--the same as `-F'.
+
+ftp_password = STRING
+ Set your FTP password to STRING. Without this setting, the
+ password defaults to `-wget@', which is a useful default for
+ anonymous FTP access.
+
+ This command used to be named `passwd' prior to Wget 1.10.
+
+ftp_proxy = STRING
+ Use STRING as FTP proxy, instead of the one specified in
+ environment.
+
+ftp_user = STRING
+ Set FTP user to STRING.
+
+ This command used to be named `login' prior to Wget 1.10.
+
+glob = on/off
+ Turn globbing on/off--the same as `--glob' and `--no-glob'.
+
+header = STRING
+ Define a header for HTTP downloads, like using `--header=STRING'.
+
+adjust_extension = on/off
+ Add a `.html' extension to `text/html' or `application/xhtml+xml'
+ files that lack one, or a `.css' extension to `text/css' files
+ that lack one, like `-E'. Previously named `html_extension' (still
+ acceptable, but deprecated).
+
+http_keep_alive = on/off
+ Turn the keep-alive feature on or off (defaults to on). Turning it
+ off is equivalent to `--no-http-keep-alive'.
+
+http_password = STRING
+ Set HTTP password, equivalent to `--http-password=STRING'.
+
+http_proxy = STRING
+ Use STRING as HTTP proxy, instead of the one specified in
+ environment.
+
+http_user = STRING
+ Set HTTP user to STRING, equivalent to `--http-user=STRING'.
+
+https_proxy = STRING
+ Use STRING as HTTPS proxy, instead of the one specified in
+ environment.
+
+ignore_case = on/off
+ When set to on, match files and directories case insensitively; the
+ same as `--ignore-case'.
+
+ignore_length = on/off
+ When set to on, ignore `Content-Length' header; the same as
+ `--ignore-length'.
+
+ignore_tags = STRING
+ Ignore certain HTML tags when doing a recursive retrieval, like
+ `--ignore-tags=STRING'.
+
+include_directories = STRING
+ Specify a comma-separated list of directories you wish to follow
+ when downloading--the same as `-I STRING'.
+
+iri = on/off
+ When set to on, enable internationalized URI (IRI) support; the
+ same as `--iri'.
+
+inet4_only = on/off
+ Force connecting to IPv4 addresses, off by default. You can put
+ this in the global init file to disable Wget's attempts to resolve
+ and connect to IPv6 hosts. Available only if Wget was compiled
+ with IPv6 support. The same as `--inet4-only' or `-4'.
+
+inet6_only = on/off
+ Force connecting to IPv6 addresses, off by default. Available
+ only if Wget was compiled with IPv6 support. The same as
+ `--inet6-only' or `-6'.
+
+input = FILE
+ Read the URLs from FILE, like `-i FILE'.
+
+keep_session_cookies = on/off
+ When specified, causes `save_cookies = on' to also save session
+ cookies. See `--keep-session-cookies'.
+
+limit_rate = RATE
+ Limit the download speed to no more than RATE bytes per second.
+ The same as `--limit-rate=RATE'.
+
+load_cookies = FILE
+ Load cookies from FILE. See `--load-cookies FILE'.
+
+local_encoding = ENCODING
+ Force Wget to use ENCODING as the default system encoding. See
+ `--local-encoding'.
+
+logfile = FILE
+ Set logfile to FILE, the same as `-o FILE'.
+
+max_redirect = NUMBER
+ Specifies the maximum number of redirections to follow for a
+ resource. See `--max-redirect=NUMBER'.
+
+mirror = on/off
+ Turn mirroring on/off. The same as `-m'.
+
+netrc = on/off
+ Turn reading netrc on or off.
+
+no_clobber = on/off
+ Same as `-nc'.
+
+no_parent = on/off
+ Disallow retrieving outside the directory hierarchy, like
+ `--no-parent' (*note Directory-Based Limits::).
+
+no_proxy = STRING
+ Use STRING as the comma-separated list of domains to avoid in
+ proxy loading, instead of the one specified in environment.
+
+output_document = FILE
+ Set the output filename--the same as `-O FILE'.
+
+page_requisites = on/off
+ Download all ancillary documents necessary for a single HTML page
+ to display properly--the same as `-p'.
+
+passive_ftp = on/off
+ Change setting of passive FTP, equivalent to the `--passive-ftp'
+ option.
+
+password = STRING
+ Specify password STRING for both FTP and HTTP file retrieval.
+ This command can be overridden using the `ftp_password' and
+ `http_password' command for FTP and HTTP respectively.
+
+post_data = STRING
+ Use POST as the method for all HTTP requests and send STRING in
+ the request body. The same as `--post-data=STRING'.
+
+post_file = FILE
+ Use POST as the method for all HTTP requests and send the contents
+ of FILE in the request body. The same as `--post-file=FILE'.
+
+prefer_family = none/IPv4/IPv6
+ When given a choice of several addresses, connect to the addresses
+ with specified address family first. The address order returned by
+ DNS is used without change by default. The same as
+ `--prefer-family', which see for a detailed discussion of why this
+ is useful.
+
+private_key = FILE
+ Set the private key file to FILE. The same as
+ `--private-key=FILE'.
+
+private_key_type = STRING
+ Specify the type of the private key, legal values being `PEM' (the
+ default) and `DER' (aka ASN1). The same as
+ `--private-key-type=STRING'.
+
+progress = STRING
+ Set the type of the progress indicator. Legal types are `dot' and
+ `bar'. Equivalent to `--progress=STRING'.
+
+protocol_directories = on/off
+ When set, use the protocol name as a directory component of local
+ file names. The same as `--protocol-directories'.
+
+proxy_password = STRING
+ Set proxy authentication password to STRING, like
+ `--proxy-password=STRING'.
+
+proxy_user = STRING
+ Set proxy authentication user name to STRING, like
+ `--proxy-user=STRING'.
+
+quiet = on/off
+ Quiet mode--the same as `-q'.
+
+quota = QUOTA
+ Specify the download quota, which is useful to put in the global
+ `wgetrc'. When download quota is specified, Wget will stop
+ retrieving after the download sum has become greater than quota.
+ The quota can be specified in bytes (default), kbytes (`k'
+ appended) or mbytes (`m' appended). Thus `quota = 5m' will set
+ the quota to 5 megabytes. Note that the user's startup file
+ overrides system settings.
+
+random_file = FILE
+ Use FILE as a source of randomness on systems lacking
+ `/dev/random'.
+
+random_wait = on/off
+ Turn random between-request wait times on or off. The same as
+ `--random-wait'.
+
+read_timeout = N
+ Set the read (and write) timeout--the same as `--read-timeout=N'.
+
+reclevel = N
+ Recursion level (depth)--the same as `-l N'.
+
+recursive = on/off
+ Recursive on/off--the same as `-r'.
+
+referer = STRING
+ Set HTTP `Referer:' header just like `--referer=STRING'. (Note
+ that it was the folks who wrote the HTTP spec who got the spelling
+ of "referrer" wrong.)
+
+relative_only = on/off
+ Follow only relative links--the same as `-L' (*note Relative
+ Links::).
+
+remote_encoding = ENCODING
+ Force Wget to use ENCODING as the default remote server encoding.
+ See `--remote-encoding'.
+
+remove_listing = on/off
+ If set to on, remove FTP listings downloaded by Wget. Setting it
+ to off is the same as `--no-remove-listing'.
+
+restrict_file_names = unix/windows
+ Restrict the file names generated by Wget from URLs. See
+ `--restrict-file-names' for a more detailed description.
+
+retr_symlinks = on/off
+ When set to on, retrieve symbolic links as if they were plain
+ files; the same as `--retr-symlinks'.
+
+retry_connrefused = on/off
+ When set to on, consider "connection refused" a transient
+ error--the same as `--retry-connrefused'.
+
+robots = on/off
+ Specify whether the norobots convention is respected by Wget, "on"
+ by default. This switch controls both the `/robots.txt' and the
+ `nofollow' aspect of the spec. *Note Robot Exclusion::, for more
+ details about this. Be sure you know what you are doing before
+ turning this off.
+
+save_cookies = FILE
+ Save cookies to FILE. The same as `--save-cookies FILE'.
+
+save_headers = on/off
+ Same as `--save-headers'.
+
+secure_protocol = STRING
+ Choose the secure protocol to be used. Legal values are `auto'
+ (the default), `SSLv2', `SSLv3', and `TLSv1'. The same as
+ `--secure-protocol=STRING'.
+
+server_response = on/off
+ Choose whether or not to print the HTTP and FTP server
+ responses--the same as `-S'.
+
+show_all_dns_entries = on/off
+ When a DNS name is resolved, show all the IP addresses, not just
+ the first three.
+
+span_hosts = on/off
+ Same as `-H'.
+
+spider = on/off
+ Same as `--spider'.
+
+strict_comments = on/off
+ Same as `--strict-comments'.
+
+timeout = N
+ Set all applicable timeout values to N, the same as `-T N'.
+
+timestamping = on/off
+ Turn timestamping on/off. The same as `-N' (*note
+ Time-Stamping::).
+
+use_server_timestamps = on/off
+ If set to `off', Wget won't set the local file's timestamp by the
+ one on the server (same as `--no-use-server-timestamps').
+
+tries = N
+ Set number of retries per URL--the same as `-t N'.
+
+use_proxy = on/off
+ When set to off, don't use proxy even when proxy-related
+ environment variables are set. In that case it is the same as
+ using `--no-proxy'.
+
+user = STRING
+ Specify username STRING for both FTP and HTTP file retrieval.
+ This command can be overridden using the `ftp_user' and
+ `http_user' command for FTP and HTTP respectively.
+
+user_agent = STRING
+ User agent identification sent to the HTTP Server--the same as
+ `--user-agent=STRING'.
+
+verbose = on/off
+ Turn verbose on/off--the same as `-v'/`-nv'.
+
+wait = N
+ Wait N seconds between retrievals--the same as `-w N'.
+
+wait_retry = N
+ Wait up to N seconds between retries of failed retrievals
+ only--the same as `--waitretry=N'. Note that this is turned on by
+ default in the global `wgetrc'.
+
+
+File: wget.info, Node: Sample Wgetrc, Prev: Wgetrc Commands, Up: Startup File
+
+6.4 Sample Wgetrc
+=================
+
+This is the sample initialization file, as given in the distribution.
+It is divided in two sections--one for global usage (suitable for a
+global startup file), and one for local usage (suitable for
+`$HOME/.wgetrc').
+Be careful about the things you change.
+
+ Note that almost all the lines are commented out. For a command to
+have any effect, you must remove the `#' character at the beginning of
+its line.
+
+ ###
+ ### Sample Wget initialization file .wgetrc
+ ###
+
+ ## You can use this file to change the default behaviour of wget or to
+ ## avoid having to type many many command-line options. This file does
+ ## not contain a comprehensive list of commands -- look at the manual
+ ## to find out what you can put into this file.
+ ##
+ ## Wget initialization file can reside in /usr/local/etc/wgetrc
+ ## (global, for all users) or $HOME/.wgetrc (for a single user).
+ ##
+ ## To use the settings in this file, you will have to uncomment them,
+ ## as well as change them, in most cases, as the values on the
+ ## commented-out lines are the default values (e.g. "off").
+
+
+ ##
+ ## Global settings (useful for setting up in /usr/local/etc/wgetrc).
+ ## Think well before you change them, since they may reduce wget's
+ ## functionality, and make it behave contrary to the documentation:
+ ##
+
+ # You can set retrieve quota for beginners by specifying a value
+ # optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
+ # default quota is unlimited.
+ #quota = inf
+
+ # You can lower (or raise) the default number of retries when
+ # downloading a file (default is 20).
+ #tries = 20
+
+ # Lowering the maximum depth of the recursive retrieval is handy to
+ # prevent newbies from going too "deep" when they unwittingly start
+ # the recursive retrieval. The default is 5.
+ #reclevel = 5
+
+ # By default Wget uses "passive FTP" transfer where the client
+ # initiates the data connection to the server rather than the other
+ # way around. That is required on systems behind NAT where the client
+ # computer cannot be easily reached from the Internet. However, some
+ # firewall software explicitly supports active FTP and in fact has
+ # problems supporting passive transfer. If you are in such
+ # environment, use "passive_ftp = off" to revert to active FTP.
+ #passive_ftp = off
+
+ # The "wait" command below makes Wget wait between every connection.
+ # If, instead, you want Wget to wait only between retries of failed
+ # downloads, set waitretry to maximum number of seconds to wait (Wget
+ # will use "linear backoff", waiting 1 second after the first failure
+ # on a file, 2 seconds after the second failure, etc. up to this max).
+ #waitretry = 10
+
+
+ ##
+ ## Local settings (for a user to set in his $HOME/.wgetrc). It is
+ ## *highly* undesirable to put these settings in the global file, since
+ ## they are potentially dangerous to "normal" users.
+ ##
+ ## Even when setting up your own ~/.wgetrc, you should know what you
+ ## are doing before doing so.
+ ##
+
+ # Set this to on to use timestamping by default:
+ #timestamping = off
+
+ # It is a good idea to make Wget send your email address in a `From:'
+ # header with your request (so that server administrators can contact
+ # you in case of errors). Wget does *not* send `From:' by default.
+ #header = From: Your Name <username@site.domain>
+
+ # You can set up other headers, like Accept-Language. Accept-Language
+ # is *not* sent by default.
+ #header = Accept-Language: en
+
+ # You can set the default proxies for Wget to use for http, https, and ftp.
+ # They will override the value in the environment.
+ #https_proxy = http://proxy.yoyodyne.com:18023/
+ #http_proxy = http://proxy.yoyodyne.com:18023/
+ #ftp_proxy = http://proxy.yoyodyne.com:18023/
+
+ # If you do not want to use proxy at all, set this to off.
+ #use_proxy = on
+
+ # You can customize the retrieval outlook. Valid options are default,
+ # binary, mega and micro.
+ #dot_style = default
+
+ # Setting this to off makes Wget not download /robots.txt. Be sure to
+ # know *exactly* what /robots.txt is and how it is used before changing
+ # the default!
+ #robots = on
+
+ # It can be useful to make Wget wait between connections. Set this to
+ # the number of seconds you want Wget to wait.
+ #wait = 0
+
+ # You can force creating directory structure, even if a single file
+ # is being retrieved, by setting this to on.
+ #dirstruct = off
+
+ # You can turn on recursive retrieving by default (don't do this if
+ # you are not sure you know what it means) by setting this to on.
+ #recursive = off
+
+ # To always back up file X as X.orig before converting its links (due
+ # to -k / --convert-links / convert_links = on having been specified),
+ # set this variable to on:
+ #backup_converted = off
+
+ # To have Wget follow FTP links from HTML files by default, set this
+ # to on:
+ #follow_ftp = off
+
+ # To try ipv6 addresses first:
+ #prefer-family = IPv6
+
+ # Set default IRI support state
+ #iri = off
+
+ # Force the default system encoding
+ #locale = UTF-8
+
+ # Force the default remote server encoding
+ #remoteencoding = UTF-8
+
+
+File: wget.info, Node: Examples, Next: Various, Prev: Startup File, Up: Top
+
+7 Examples
+**********
+
+The examples are divided into three sections loosely based on their
+complexity.
+
+* Menu:
+
+* Simple Usage:: Simple, basic usage of the program.
+* Advanced Usage:: Advanced tips.
+* Very Advanced Usage:: The hairy stuff.
+
+
+File: wget.info, Node: Simple Usage, Next: Advanced Usage, Prev: Examples, Up: Examples
+
+7.1 Simple Usage
+================
+
+ * Say you want to download a URL. Just type:
+
+ wget http://fly.srk.fer.hr/
+
+ * But what will happen if the connection is slow, and the file is
+ lengthy? The connection will probably fail before the whole file
+ is retrieved, more than once. In this case, Wget will try getting
+ the file until it either gets the whole of it, or exceeds the
+ default number of retries (this being 20). It is easy to change
+ the number of tries to 45, to ensure that the whole file will
+ arrive safely:
+
+ wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
+
+ * Now let's leave Wget to work in the background, and write its
+ progress to log file `log'. It is tiring to type `--tries', so we
+ shall use `-t'.
+
+ wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
+
+ The ampersand at the end of the line makes sure that Wget works in
+ the background. To unlimit the number of retries, use `-t inf'.
+
+ * The use of FTP is just as simple. Wget will take care of login and
+ password.
+
+ wget ftp://gnjilux.srk.fer.hr/welcome.msg
+
+ * If you specify a directory, Wget will retrieve the directory
+ listing, parse it and convert it to HTML. Try:
+
+ wget ftp://ftp.gnu.org/pub/gnu/
+ links index.html
+
+
+File: wget.info, Node: Advanced Usage, Next: Very Advanced Usage, Prev: Simple Usage, Up: Examples
+
+7.2 Advanced Usage
+==================
+
+ * You have a file that contains the URLs you want to download? Use
+ the `-i' switch:
+
+ wget -i FILE
+
+ If you specify `-' as file name, the URLs will be read from
+ standard input.
+
+ * Create a five levels deep mirror image of the GNU web site, with
+ the same directory structure the original has, with only one try
+ per document, saving the log of the activities to `gnulog':
+
+ wget -r http://www.gnu.org/ -o gnulog
+
+ * The same as the above, but convert the links in the downloaded
+ files to point to local files, so you can view the documents
+ off-line:
+
+ wget --convert-links -r http://www.gnu.org/ -o gnulog
+
+ * Retrieve only one HTML page, but make sure that all the elements
+ needed for the page to be displayed, such as inline images and
+ external style sheets, are also downloaded. Also make sure the
+ downloaded page references the downloaded links.
+
+ wget -p --convert-links http://www.server.com/dir/page.html
+
+ The HTML page will be saved to `www.server.com/dir/page.html', and
+ the images, stylesheets, etc., somewhere under `www.server.com/',
+ depending on where they were on the remote server.
+
+ * The same as the above, but without the `www.server.com/' directory.
+ In fact, I don't want to have all those random server directories
+ anyway--just save _all_ those files under a `download/'
+ subdirectory of the current directory.
+
+ wget -p --convert-links -nH -nd -Pdownload \
+ http://www.server.com/dir/page.html
+
+ * Retrieve the index.html of `www.lycos.com', showing the original
+ server headers:
+
+ wget -S http://www.lycos.com/
+
+ * Save the server headers with the file, perhaps for post-processing.
+
+ wget --save-headers http://www.lycos.com/
+ more index.html
+
+ * Retrieve the first two levels of `wuarchive.wustl.edu', saving them
+ to `/tmp'.
+
+ wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
+
+ * You want to download all the GIFs from a directory on an HTTP
+ server. You tried `wget http://www.server.com/dir/*.gif', but that
+ didn't work because HTTP retrieval does not support globbing. In
+ that case, use:
+
+ wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
+
+ More verbose, but the effect is the same. `-r -l1' means to
+ retrieve recursively (*note Recursive Download::), with maximum
+ depth of 1. `--no-parent' means that references to the parent
+ directory are ignored (*note Directory-Based Limits::), and
+ `-A.gif' means to download only the GIF files. `-A "*.gif"' would
+ have worked too.
+
+ * Suppose you were in the middle of downloading, when Wget was
+ interrupted. Now you do not want to clobber the files already
+ present. It would be:
+
+ wget -nc -r http://www.gnu.org/
+
+ * If you want to encode your own username and password to HTTP or
+ FTP, use the appropriate URL syntax (*note URL Format::).
+
+ wget ftp://hniksic:mypassword@unix.server.com/.emacs
+
+ Note, however, that this usage is not advisable on multi-user
+ systems because it reveals your password to anyone who looks at
+ the output of `ps'.
+
+ * You would like the output documents to go to standard output
+ instead of to files?
+
+ wget -O - http://jagor.srce.hr/ http://www.srce.hr/
+
+ You can also combine the two options and make pipelines to
+ retrieve the documents from remote hotlists:
+
+ wget -O - http://cool.list.com/ | wget --force-html -i -
+
+
+File: wget.info, Node: Very Advanced Usage, Prev: Advanced Usage, Up: Examples
+
+7.3 Very Advanced Usage
+=======================
+
+ * If you wish Wget to keep a mirror of a page (or FTP
+ subdirectories), use `--mirror' (`-m'), which is the shorthand for
+ `-r -l inf -N'. You can put Wget in the crontab file asking it to
+ recheck a site each Sunday:
+
+ crontab
+ 0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
+
+ * In addition to the above, you want the links to be converted for
+ local viewing. But, after having read this manual, you know that
+ link conversion doesn't play well with timestamping, so you also
+ want Wget to back up the original HTML files before the
+ conversion. Wget invocation would look like this:
+
+ wget --mirror --convert-links --backup-converted \
+ http://www.gnu.org/ -o /home/me/weeklog
+
+ * But you've also noticed that local viewing doesn't work all that
+ well when HTML files are saved under extensions other than `.html',
+ perhaps because they were served as `index.cgi'. So you'd like
+ Wget to rename all the files served with content-type `text/html'
+ or `application/xhtml+xml' to `NAME.html'.
+
+ wget --mirror --convert-links --backup-converted \
+ --html-extension -o /home/me/weeklog \
+ http://www.gnu.org/
+
+ Or, with less typing:
+
+ wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
+
+
+File: wget.info, Node: Various, Next: Appendices, Prev: Examples, Up: Top
+
+8 Various
+*********
+
+This chapter contains all the stuff that could not fit anywhere else.
+
+* Menu:
+
+* Proxies:: Support for proxy servers.
+* Distribution:: Getting the latest version.
+* Web Site:: GNU Wget's presence on the World Wide Web.
+* Mailing Lists:: Wget mailing list for announcements and discussion.
+* Internet Relay Chat:: Wget's presence on IRC.
+* Reporting Bugs:: How and where to report bugs.
+* Portability:: The systems Wget works on.
+* Signals:: Signal-handling performed by Wget.
+
+
+File: wget.info, Node: Proxies, Next: Distribution, Prev: Various, Up: Various
+
+8.1 Proxies
+===========
+
+"Proxies" are special-purpose HTTP servers designed to transfer data
+from remote servers to local clients. One typical use of proxies is
+lightening network load for users behind a slow connection. This is
+achieved by channeling all HTTP and FTP requests through the proxy,
+which caches the transferred data. When a cached resource is
+requested again, the proxy will return the data from its cache.
+Another use for proxies is for companies that separate (for security
+reasons) their internal networks from the rest of the Internet. In
+order to obtain information
+from the Web, their users connect and retrieve remote data using an
+authorized proxy.
+
+ Wget supports proxies for both HTTP and FTP retrievals. The
+standard way to specify a proxy location, which Wget recognizes, is
+via the following environment variables:
+
+`http_proxy'
+`https_proxy'
+ If set, the `http_proxy' and `https_proxy' variables should
+ contain the URLs of the proxies for HTTP and HTTPS connections
+ respectively.
+
+`ftp_proxy'
+ This variable should contain the URL of the proxy for FTP
+ connections. It is quite common that `http_proxy' and `ftp_proxy'
+ are set to the same URL.
+
+`no_proxy'
+     This variable should contain a comma-separated list of domain
+     extensions for which the proxy should _not_ be used.  For
+     instance, if the value of `no_proxy' is `.mit.edu', the proxy
+     will not be used to retrieve documents from MIT.
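+
+   For example, in a Bourne-style shell you might set these variables
+as follows (the proxy host `proxy.example.com' is hypothetical;
+substitute your own):
+
+     # Hypothetical proxy host; adjust host and port to your site.
+     export http_proxy=http://proxy.example.com:8080/
+     export https_proxy=http://proxy.example.com:8080/
+     export ftp_proxy=http://proxy.example.com:8080/
+     export no_proxy=.mit.edu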
+
+   In addition to the environment variables, the proxy location and
+settings may be specified from within Wget itself.
+
+`--no-proxy'
+`proxy = on/off'
+     This option and the corresponding command may be used to suppress
+     the use of a proxy, even if the appropriate environment variables
+     are set.
+
+`http_proxy = URL'
+`https_proxy = URL'
+`ftp_proxy = URL'
+`no_proxy = STRING'
+ These startup file variables allow you to override the proxy
+ settings specified by the environment.
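+
+   For instance, a `.wgetrc' might contain lines like these (the proxy
+URL is again a made-up example):
+
+     # Route requests through a hypothetical local proxy.
+     http_proxy = http://proxy.example.com:8080/
+     ftp_proxy = http://proxy.example.com:8080/
+     no_proxy = .mit.edu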
+
+   Some proxy servers require authorization before you can use them.
+The authorization consists of a "username" and a "password", which must
+be sent by Wget.  As with HTTP authorization, several authentication
+schemes exist.  For proxy authorization, only the `Basic'
+authentication scheme is currently implemented.
+
+ You may specify your username and password either through the proxy
+URL or through the command-line options. Assuming that the company's
+proxy is located at `proxy.company.com' at port 8001, a proxy URL
+location containing authorization data might look like this:
+
+ http://hniksic:mypassword@proxy.company.com:8001/
+
+ Alternatively, you may use the `proxy-user' and `proxy-password'
+options, and the equivalent `.wgetrc' settings `proxy_user' and
+`proxy_password' to set the proxy username and password.
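+
+   For example, with placeholder credentials:
+
+     wget --proxy-user=hniksic --proxy-password=mypassword \
+          http://www.example.com/
+
+   As with passwords embedded in URLs, remember that credentials given
+on the command line are visible to other users via `ps'.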
+
+
+File: wget.info, Node: Distribution, Next: Web Site, Prev: Proxies, Up: Various
+
+8.2 Distribution
+================
+
+Like all GNU utilities, the latest version of Wget can be found at the
+master GNU archive site ftp.gnu.org, and its mirrors. For example,
+Wget 1.13.4 can be found at
+`ftp://ftp.gnu.org/pub/gnu/wget/wget-1.13.4.tar.gz'.
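+
+   Naturally, you can use Wget itself to retrieve it:
+
+     wget ftp://ftp.gnu.org/pub/gnu/wget/wget-1.13.4.tar.gz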
+
+
+File: wget.info, Node: Web Site, Next: Mailing Lists, Prev: Distribution, Up: Various
+
+8.3 Web Site
+============
+
+The official web site for GNU Wget is at
+`http://www.gnu.org/software/wget/'.  However, most of the useful
+information resides at "The Wget Wgiki", `http://wget.addictivecode.org/'.
+
+
+File: wget.info, Node: Mailing Lists, Next: Internet Relay Chat, Prev: Web Site, Up: Various
+
+8.4 Mailing Lists
+=================
+
+Primary List
+------------
+
+The primary mailing list for discussion, bug reports, or questions about
+GNU Wget is at <bug-wget@gnu.org>. To subscribe, send an email to
+<bug-wget-join@gnu.org>, or visit
+`http://lists.gnu.org/mailman/listinfo/bug-wget'.
+
+   You do not need to subscribe to send a message to the list; however,
+please note that messages from unsubscribed senders are moderated, and
+may take a while to hit the list--*usually around a day*.  If you want
+your message to show up immediately, please subscribe to the list
+before posting. Archives for the list may be found at
+`http://lists.gnu.org/pipermail/bug-wget/'.
+
+ An NNTP/Usenettish gateway is also available via Gmane
+(http://gmane.org/about.php). You can see the Gmane archives at
+`http://news.gmane.org/gmane.comp.web.wget.general'. Note that the
+Gmane archives conveniently include messages from both the current
+list, and the previous one. Messages also show up in the Gmane archives
+sooner than they do at `lists.gnu.org'.
+
+Bug Notices List
+----------------
+
+Additionally, there is the <wget-notify@addictivecode.org> mailing
+list. This is a non-discussion list that receives bug report
+notifications from the bug-tracker. To subscribe to this list, send an
+email to <wget-notify-join@addictivecode.org>, or visit
+`http://addictivecode.org/mailman/listinfo/wget-notify'.
+
+Obsolete Lists
+--------------
+
+Previously, the mailing list <wget@sunsite.dk> was used as the main
+discussion list, and another list, <wget-patches@sunsite.dk>, was used
+for submitting and discussing patches to GNU Wget.
+
+   Messages from <wget@sunsite.dk> are archived at
+`http://www.mail-archive.com/wget%40sunsite.dk/' and at
+`http://news.gmane.org/gmane.comp.web.wget.general' (which also
+continues to archive the current list, <bug-wget@gnu.org>).
+
+   Messages from <wget-patches@sunsite.dk> are archived at
+`http://news.gmane.org/gmane.comp.web.wget.patches'.
+
+
+File: wget.info, Node: Internet Relay Chat, Next: Reporting Bugs, Prev: Mailing Lists, Up: Various
+
+8.5 Internet Relay Chat
+=======================
+
+In addition to the mailing lists, we also have a support channel set up
+via IRC at `irc.freenode.org', `#wget'. Come check it out!
+
+
+File: wget.info, Node: Reporting Bugs, Next: Portability, Prev: Internet Relay Chat, Up: Various
+
+8.6 Reporting Bugs
+==================
+
+You are welcome to submit bug reports via the GNU Wget bug tracker (see
+`http://wget.addictivecode.org/BugTracker').
+
+ Before actually submitting a bug report, please try to follow a few
+simple guidelines.
+
+ 1. Please try to ascertain that the behavior you see really is a bug.
+ If Wget crashes, it's a bug. If Wget does not behave as
+     documented, it's a bug.  If things behave strangely, but you are
+     not sure how they are supposed to work, it might well be a
+ bug, but you might want to double-check the documentation and the
+ mailing lists (*note Mailing Lists::).
+
+ 2. Try to repeat the bug in as simple circumstances as possible.
+     E.g. if Wget crashes when invoked as `wget -rl0 -kKE -t5
+     --no-proxy http://yoyodyne.com -o /tmp/log', you should try to see
+     if the crash is repeatable, and if it will occur with a simpler set
+ of options. You might even try to start the download at the page
+ where the crash occurred to see if that page somehow triggered the
+ crash.
+
+ Also, while I will probably be interested to know the contents of
+ your `.wgetrc' file, just dumping it into the debug message is
+ probably a bad idea. Instead, you should first try to see if the
+     bug repeats with `.wgetrc' moved out of the way.  Only if it turns
+     out that `.wgetrc' settings affect the bug should you mail me the
+     relevant parts of the file.
+
+  3. Please start Wget with the `-d' option and send us the resulting
+ output (or relevant parts thereof). If Wget was compiled without
+ debug support, recompile it--it is _much_ easier to trace bugs
+ with debug support on.
+
+ Note: please make sure to remove any potentially sensitive
+ information from the debug log before sending it to the bug
+     address.  The `-d' option won't go out of its way to collect
+     sensitive information, but the log _will_ contain a fairly
+     complete transcript of Wget's communication with the server, which
+     may include passwords and pieces of downloaded data.  Since the
+     bug address is publicly archived, you may assume that all bug
+ reports are visible to the public.
+
+ 4. If Wget has crashed, try to run it in a debugger, e.g. `gdb `which
+ wget` core' and type `where' to get the backtrace. This may not
+ work if the system administrator has disabled core files, but it is
+ safe to try.
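+
+     A typical session might look like this (assuming your system
+     produces core files; backtrace output elided):
+
+          $ gdb `which wget` core
+          (gdb) where
+          ...
+          (gdb) quit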
+
+
+File: wget.info, Node: Portability, Next: Signals, Prev: Reporting Bugs, Up: Various
+
+8.7 Portability
+===============
+
+Like all GNU software, Wget works on the GNU system. However, since it
+uses GNU Autoconf for building and configuring, and mostly avoids using
+"special" features of any particular Unix, it should compile (and work)
+on all common Unix flavors.
+
+ Various Wget versions have been compiled and tested under many kinds
+of Unix systems, including GNU/Linux, Solaris, SunOS 4.x, Mac OS X, OSF
+(aka Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some
+of those systems are no longer in widespread use and may not be able to
+support recent versions of Wget. If Wget fails to compile on your
+system, we would like to know about it.
+
+ Thanks to kind contributors, this version of Wget compiles and works
+on 32-bit Microsoft Windows platforms. It has been compiled
+successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
+compilers.  Naturally, it lacks some features available on Unix, but it
+should work as a substitute for people stuck with Windows.  Note that
+Windows-specific portions of Wget are not guaranteed to be supported in
+the future, although in practice they have been supported for many
+years now.  All questions and problems in Windows usage should be
+reported to the Wget mailing list at <bug-wget@gnu.org>, where the
+volunteers who maintain the Windows-related features might look at them.
+
+ Support for building on MS-DOS via DJGPP has been contributed by
+Gisle Vanem; a port to VMS is maintained by Steven Schweda, and is
+available at `http://antinode.org/'.
+
+
+File: wget.info, Node: Signals, Prev: Portability, Up: Various
+
+8.8 Signals
+===========
+
+Since the purpose of Wget is background work, it catches the hangup
+signal (`SIGHUP').  If the output was going to standard output, it is
+redirected to a file named `wget-log'; otherwise, `SIGHUP' is simply
+ignored.  This is convenient when you wish to redirect the output of
+Wget after having started it.
+
+ $ wget http://www.gnus.org/dist/gnus.tar.gz &
+ ...
+ $ kill -HUP %%
+ SIGHUP received, redirecting output to `wget-log'.
+
+ Other than that, Wget will not try to interfere with signals in any
+way. `C-c', `kill -TERM' and `kill -KILL' should kill it alike.
+
+
+File: wget.info, Node: Appendices, Next: Copying this manual, Prev: Various, Up: Top
+
+9 Appendices
+************
+
+This chapter contains some references I consider useful.
+
+* Menu:
+
+* Robot Exclusion:: Wget's support for RES.
+* Security Considerations:: Security with Wget.
+* Contributors:: People who helped.
+
+
+File: wget.info, Node: Robot Exclusion, Next: Security Considerations, Prev: Appendices, Up: Appendices
+
+9.1 Robot Exclusion
+===================
+
+It is extremely easy to make Wget wander aimlessly around a web site,
+downloading all the available data as it goes.  `wget -r SITE', and
+you're set.  Great?  Not for the server admin.
+
+ As long as Wget is only retrieving static pages, and doing it at a
+reasonable rate (see the `--wait' option), there's not much of a
+problem. The trouble is that Wget can't tell the difference between the
+smallest static page and the most demanding CGI. A site I know has a
+section handled by a CGI Perl script that converts Info files to HTML on
+the fly. The script is slow, but works well enough for human users
+viewing an occasional Info file. However, when someone's recursive Wget
+download stumbles upon the index page that links to all the Info files
+through the script, the system is brought to its knees without providing
+anything useful to the user.  (This conversion of Info files could be
+done locally instead; Info documentation for all installed GNU software
+on a system is available through the `info' command.)
+
+ To avoid this kind of accident, as well as to preserve privacy for
+documents that need to be protected from well-behaved robots, the
+concept of "robot exclusion" was invented. The idea is that the server
+administrators and document authors can specify which portions of the
+site they wish to protect from robots and which they will permit them
+to access.
+
+ The most popular mechanism, and the de facto standard supported by
+all the major robots, is the "Robots Exclusion Standard" (RES) written
+by Martijn Koster et al. in 1994. It specifies the format of a text
+file containing directives that instruct the robots which URL paths to
+avoid. To be found by the robots, the specifications must be placed in
+`/robots.txt' in the server root, which the robots are expected to
+download and parse.
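+
+   For illustration, a minimal `/robots.txt' using such directives
+might look like this (the paths are made up):
+
+     User-agent: *
+     Disallow: /cgi-bin/
+     Disallow: /private/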
+
+   Although Wget is not a web robot in the strictest sense of the word,
+it can download large parts of a site without the user intervening to
+download each individual page.  Because of that, Wget honors RES when
+downloading recursively. For instance, when you issue:
+
+ wget -r http://www.server.com/
+
+   the index of `www.server.com' will be downloaded first.  If Wget
+finds that it wants to download more documents from that server, it will
+request `http://www.server.com/robots.txt' and, if found, use it for
+further downloads.  `robots.txt' is loaded only once per server.
+
+ Until version 1.8, Wget supported the first version of the standard,
+written by Martijn Koster in 1994 and available at
+`http://www.robotstxt.org/wc/norobots.html'. As of version 1.8, Wget
+has supported the additional directives specified in the internet draft
+`<draft-koster-robots-00.txt>' titled "A Method for Web Robots
+Control". The draft, which has as far as I know never made to an RFC,
+is available at `http://www.robotstxt.org/wc/norobots-rfc.txt'.
+
+ This manual no longer includes the text of the Robot Exclusion
+Standard.
+
+   The second, less widely known mechanism enables the author of an individual
+document to specify whether they want the links from the file to be
+followed by a robot. This is achieved using the `META' tag, like this:
+
+ <meta name="robots" content="nofollow">
+
+ This is explained in some detail at
+`http://www.robotstxt.org/wc/meta-user.html'. Wget supports this
+method of robot exclusion in addition to the usual `/robots.txt'
+exclusion.
+
+ If you know what you are doing and really really wish to turn off the
+robot exclusion, set the `robots' variable to `off' in your `.wgetrc'.
+You can achieve the same effect from the command line using the `-e'
+switch, e.g. `wget -e robots=off URL...'.
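+
+   The corresponding `.wgetrc' line is simply:
+
+     robots = off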
+
+
+File: wget.info, Node: Security Considerations, Next: Contributors, Prev: Robot Exclusion, Up: Appendices
+
+9.2 Security Considerations
+===========================
+
+When using Wget, you must be aware that it sends unencrypted passwords
+through the network, which may present a security problem. Here are the
+main issues, and some solutions.
+
+  1. The passwords on the command line are visible using `ps'.  The
+     best way around it is to use `wget -i -' and feed the URLs to
+     Wget's standard input, each on a separate line, terminated by
+     `C-d' (see the example after this list).  Another workaround is to
+     use `.netrc' to store passwords; however, storing unencrypted
+     passwords is also considered a security risk.
+
+  2. When the insecure "basic" authentication scheme is used,
+     unencrypted passwords are transmitted through network routers and
+     gateways.
+
+ 3. The FTP passwords are also in no way encrypted. There is no good
+ solution for this at the moment.
+
+ 4. Although the "normal" output of Wget tries to hide the passwords,
+ debugging logs show them, in all forms. This problem is avoided by
+ being careful when you send debug logs (yes, even when you send
+ them to me).
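+
+   As an illustration of the first workaround above, here is a sketch
+of feeding a password-bearing URL to Wget on standard input so that it
+never appears in the `ps' output (the credentials are placeholders):
+
+     $ wget -i -
+     ftp://hniksic:mypassword@unix.server.com/.emacs
+     C-d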
+
+
+File: wget.info, Node: Contributors, Prev: Security Considerations, Up: Appendices
+
+9.3 Contributors
+================
+
+GNU Wget was written by Hrvoje Niksic <hniksic@xemacs.org>.
+
+However, the development of Wget could never have gone as far as it
+has, were it not for the help of many people, either with bug reports,
+feature proposals, patches, or letters saying "Thanks!".
+
+   Special thanks go to the following people (in no particular order):
+
+ * Dan Harkless--contributed a lot of code and documentation of
+ extremely high quality, as well as the `--page-requisites' and
+ related options. He was the principal maintainer for some time and
+ released Wget 1.6.
+
+ * Ian Abbott--contributed bug fixes, Windows-related fixes, and
+ provided a prototype implementation of the breadth-first recursive
+ download. Co-maintained Wget during the 1.8 release cycle.
+
+ * The dotsrc.org crew, in particular Karsten Thygesen--donated system
+ resources such as the mailing list, web space, FTP space, and
+ version control repositories, along with a lot of time to make
+ these actually work. Christian Reiniger was of invaluable help
+ with setting up Subversion.
+
+ * Heiko Herold--provided high-quality Windows builds and contributed
+ bug and build reports for many years.
+
+ * Shawn McHorse--bug reports and patches.
+
+ * Kaveh R. Ghazi--on-the-fly `ansi2knr'-ization. Lots of
+ portability fixes.
+
+ * Gordon Matzigkeit--`.netrc' support.
+
+ * Zlatko Calusic, Tomislav Vujec and Drazen Kacar--feature
+ suggestions and "philosophical" discussions.
+
+ * Darko Budor--initial port to Windows.
+
+ * Antonio Rosella--help and suggestions, plus the initial Italian
+ translation.
+
+ * Tomislav Petrovic, Mario Mikocevic--many bug reports and
+ suggestions.
+
+ * Francois Pinard--many thorough bug reports and discussions.
+
+ * Karl Eichwalder--lots of help with internationalization, Makefile
+ layout and many other things.
+
+ * Junio Hamano--donated support for Opie and HTTP `Digest'
+ authentication.
+
+ * Mauro Tortonesi--improved IPv6 support, adding support for dual
+     family systems.  Refactored and enhanced FTP IPv6 code.  Maintained
+     GNU Wget from 2004 to 2007.
+
+   * Christopher G. Lewis--maintenance of the Windows version of GNU
+     Wget.
+
+ * Gisle Vanem--many helpful patches and improvements, especially for
+ Windows and MS-DOS support.
+
+ * Ralf Wildenhues--contributed patches to convert Wget to use
+ Automake as part of its build process, and various bugfixes.
+
+ * Steven Schubiger--Many helpful patches, bugfixes and improvements.
+ Notably, conversion of Wget to use the Gnulib quotes and quoteargs
+ modules, and the addition of password prompts at the console, via
+ the Gnulib getpasswd-gnu module.
+
+ * Ted Mielczarek--donated support for CSS.
+
+ * Saint Xavier--Support for IRIs (RFC 3987).
+
+ * People who provided donations for development--including Brian
+ Gough.
+
+ The following people have provided patches, bug/build reports, useful
+suggestions, beta testing services, fan mail and all the other things
+that make maintenance so much fun:
+
+ Tim Adam, Adrian Aichner, Martin Baehr, Dieter Baron, Roger Beeman,
+Dan Berger, T. Bharath, Christian Biere, Paul Bludov, Daniel Bodea,
+Mark Boyns, John Burden, Julien Buty, Wanderlei Cavassin, Gilles Cedoc,
+Tim Charron, Noel Cragg, Kristijan Conkas, John Daily, Andreas Damm,
+Ahmon Dancy, Andrew Davison, Bertrand Demiddelaer, Alexander Dergachev,
+Andrew Deryabin, Ulrich Drepper, Marc Duponcheel, Damir Dzeko, Alan
+Eldridge, Hans-Andreas Engel, Aleksandar Erkalovic, Andy Eskilsson,
+Joao Ferreira, Christian Fraenkel, David Fritz, Mike Frysinger, Charles
+C. Fu, FUJISHIMA Satsuki, Masashi Fujita, Howard Gayle, Marcel Gerrits,
+Lemble Gregory, Hans Grobler, Alain Guibert, Mathieu Guillaume, Aaron
+Hawley, Jochen Hein, Karl Heuer, Madhusudan Hosaagrahara, HIROSE
+Masaaki, Ulf Harnhammar, Gregor Hoffleit, Erik Magnus Hulthen, Richard
+Huveneers, Jonas Jensen, Larry Jones, Simon Josefsson, Mario Juric,
+Hack Kampbjorn, Const Kaplinsky, Goran Kezunovic, Igor Khristophorov,
+Robert Kleine, KOJIMA Haime, Fila Kolodny, Alexander Kourakos, Martin
+Kraemer, Sami Krank, Jay Krell, Simos KSenitellis, Christian Lackas,
+Hrvoje Lacko, Daniel S. Lewart, Nicolas Lichtmeier, Dave Love,
+Alexander V. Lukyanov, Thomas Lussnig, Andre Majorel, Aurelien Marchand,
+Matthew J. Mellon, Jordan Mendelson, Ted Mielczarek, Robert Millan, Lin
+Zhe Min, Jan Minar, Tim Mooney, Keith Moore, Adam D. Moss, Simon Munton,
+Charlie Negyesi, R. K. Owen, Jim Paris, Kenny Parnell, Leonid Petrov,
+Simone Piunno, Andrew Pollock, Steve Pothier, Jan Prikryl, Marin Purgar,
+Csaba Raduly, Keith Refson, Bill Richardson, Tyler Riddle, Tobias
+Ringstrom, Jochen Roderburg, Juan Jose Rodriguez, Maciej W. Rozycki,
+Edward J. Sabol, Heinz Salzmann, Robert Schmidt, Nicolas Schodet, Benno
+Schulenberg, Andreas Schwab, Steven M. Schweda, Chris Seawood, Pranab
+Shenoy, Dennis Smit, Toomas Soome, Tage Stabell-Kulo, Philip Stadermann,
+Daniel Stenberg, Sven Sternberger, Markus Strasser, John Summerfield,
+Szakacsits Szabolcs, Mike Thomas, Philipp Thomas, Mauro Tortonesi, Dave
+Turner, Gisle Vanem, Rabin Vincent, Russell Vincent, Zeljko Vrba,
+Charles G Waldman, Douglas E. Wegscheid, Ralf Wildenhues, Joshua David
+Williams, Benjamin Wolsey, Saint Xavier, YAMAZAKI Makoto, Jasmin Zainul,
+Bojan Zdrnja, Kristijan Zimmer, Xin Zou.
+
+   Apologies to all whom I accidentally left out, and many thanks to all
+the subscribers of the Wget mailing list.
+
+
+File: wget.info, Node: Copying this manual, Next: Concept Index, Prev: Appendices, Up: Top
+
+Appendix A Copying this manual
+******************************
+
+* Menu:
+
+* GNU Free Documentation License::  License for copying this manual.
+
+
+File: wget.info, Node: GNU Free Documentation License, Prev: Copying this manual, Up: Copying this manual
+
+A.1 GNU Free Documentation License
+==================================
+
+ Version 1.3, 3 November 2008
+
+ Copyright (C) 2000, 2001, 2002, 2007, 2008, 2009, 2010, 2011
+ Free Software Foundation, Inc.
+ `http://fsf.org/'
+
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ 0. PREAMBLE
+
+ The purpose of this License is to make a manual, textbook, or other
+ functional and useful document "free" in the sense of freedom: to
+ assure everyone the effective freedom to copy and redistribute it,
+ with or without modifying it, either commercially or
+ noncommercially. Secondarily, this License preserves for the
+ author and publisher a way to get credit for their work, while not
+ being considered responsible for modifications made by others.
+
+ This License is a kind of "copyleft", which means that derivative
+ works of the document must themselves be free in the same sense.
+ It complements the GNU General Public License, which is a copyleft
+ license designed for free software.
+
+ We have designed this License in order to use it for manuals for
+ free software, because free software needs free documentation: a
+ free program should come with manuals providing the same freedoms
+ that the software does. But this License is not limited to
+ software manuals; it can be used for any textual work, regardless
+ of subject matter or whether it is published as a printed book.
+ We recommend this License principally for works whose purpose is
+ instruction or reference.
+
+ 1. APPLICABILITY AND DEFINITIONS
+
+ This License applies to any manual or other work, in any medium,
+ that contains a notice placed by the copyright holder saying it
+ can be distributed under the terms of this License. Such a notice
+ grants a world-wide, royalty-free license, unlimited in duration,
+ to use that work under the conditions stated herein. The
+ "Document", below, refers to any such manual or work. Any member
+ of the public is a licensee, and is addressed as "you". You
+ accept the license if you copy, modify or distribute the work in a
+ way requiring permission under copyright law.
+
+ A "Modified Version" of the Document means any work containing the
+ Document or a portion of it, either copied verbatim, or with
+ modifications and/or translated into another language.
+
+ A "Secondary Section" is a named appendix or a front-matter section
+ of the Document that deals exclusively with the relationship of the
+ publishers or authors of the Document to the Document's overall
+ subject (or to related matters) and contains nothing that could
+ fall directly within that overall subject. (Thus, if the Document
+ is in part a textbook of mathematics, a Secondary Section may not
+ explain any mathematics.) The relationship could be a matter of
+ historical connection with the subject or with related matters, or
+ of legal, commercial, philosophical, ethical or political position
+ regarding them.
+
+ The "Invariant Sections" are certain Secondary Sections whose
+ titles are designated, as being those of Invariant Sections, in
+ the notice that says that the Document is released under this
+ License. If a section does not fit the above definition of
+ Secondary then it is not allowed to be designated as Invariant.
+ The Document may contain zero Invariant Sections. If the Document
+ does not identify any Invariant Sections then there are none.
+
+ The "Cover Texts" are certain short passages of text that are
+ listed, as Front-Cover Texts or Back-Cover Texts, in the notice
+ that says that the Document is released under this License. A
+ Front-Cover Text may be at most 5 words, and a Back-Cover Text may
+ be at most 25 words.
+
+ A "Transparent" copy of the Document means a machine-readable copy,
+ represented in a format whose specification is available to the
+ general public, that is suitable for revising the document
+ straightforwardly with generic text editors or (for images
+ composed of pixels) generic paint programs or (for drawings) some
+ widely available drawing editor, and that is suitable for input to
+ text formatters or for automatic translation to a variety of
+ formats suitable for input to text formatters. A copy made in an
+ otherwise Transparent file format whose markup, or absence of
+ markup, has been arranged to thwart or discourage subsequent
+ modification by readers is not Transparent. An image format is
+ not Transparent if used for any substantial amount of text. A
+ copy that is not "Transparent" is called "Opaque".
+
+ Examples of suitable formats for Transparent copies include plain
+ ASCII without markup, Texinfo input format, LaTeX input format,
+ SGML or XML using a publicly available DTD, and
+ standard-conforming simple HTML, PostScript or PDF designed for
+ human modification. Examples of transparent image formats include
+ PNG, XCF and JPG. Opaque formats include proprietary formats that
+ can be read and edited only by proprietary word processors, SGML or
+ XML for which the DTD and/or processing tools are not generally
+ available, and the machine-generated HTML, PostScript or PDF
+ produced by some word processors for output purposes only.
+
+ The "Title Page" means, for a printed book, the title page itself,
+ plus such following pages as are needed to hold, legibly, the
+ material this License requires to appear in the title page. For
+ works in formats which do not have any title page as such, "Title
+ Page" means the text near the most prominent appearance of the
+ work's title, preceding the beginning of the body of the text.
+
+ The "publisher" means any person or entity that distributes copies
+ of the Document to the public.
+
+ A section "Entitled XYZ" means a named subunit of the Document
+ whose title either is precisely XYZ or contains XYZ in parentheses
+ following text that translates XYZ in another language. (Here XYZ
+ stands for a specific section name mentioned below, such as
+ "Acknowledgements", "Dedications", "Endorsements", or "History".)
+ To "Preserve the Title" of such a section when you modify the
+ Document means that it remains a section "Entitled XYZ" according
+ to this definition.
+
+ The Document may include Warranty Disclaimers next to the notice
+ which states that this License applies to the Document. These
+ Warranty Disclaimers are considered to be included by reference in
+ this License, but only as regards disclaiming warranties: any other
+ implication that these Warranty Disclaimers may have is void and
+ has no effect on the meaning of this License.
+
+ 2. VERBATIM COPYING
+
+ You may copy and distribute the Document in any medium, either
+ commercially or noncommercially, provided that this License, the
+ copyright notices, and the license notice saying this License
+ applies to the Document are reproduced in all copies, and that you
+ add no other conditions whatsoever to those of this License. You
+ may not use technical measures to obstruct or control the reading
+ or further copying of the copies you make or distribute. However,
+ you may accept compensation in exchange for copies. If you
+ distribute a large enough number of copies you must also follow
+ the conditions in section 3.
+
+ You may also lend copies, under the same conditions stated above,
+ and you may publicly display copies.
+
+ 3. COPYING IN QUANTITY
+
+ If you publish printed copies (or copies in media that commonly
+ have printed covers) of the Document, numbering more than 100, and
+ the Document's license notice requires Cover Texts, you must
+ enclose the copies in covers that carry, clearly and legibly, all
+ these Cover Texts: Front-Cover Texts on the front cover, and
+ Back-Cover Texts on the back cover. Both covers must also clearly
+ and legibly identify you as the publisher of these copies. The
+ front cover must present the full title with all words of the
+ title equally prominent and visible. You may add other material
+ on the covers in addition. Copying with changes limited to the
+ covers, as long as they preserve the title of the Document and
+ satisfy these conditions, can be treated as verbatim copying in
+ other respects.
+
+ If the required texts for either cover are too voluminous to fit
+ legibly, you should put the first ones listed (as many as fit
+ reasonably) on the actual cover, and continue the rest onto
+ adjacent pages.
+
+ If you publish or distribute Opaque copies of the Document
+ numbering more than 100, you must either include a
+ machine-readable Transparent copy along with each Opaque copy, or
+ state in or with each Opaque copy a computer-network location from
+ which the general network-using public has access to download
+ using public-standard network protocols a complete Transparent
+ copy of the Document, free of added material. If you use the
+ latter option, you must take reasonably prudent steps, when you
+ begin distribution of Opaque copies in quantity, to ensure that
+ this Transparent copy will remain thus accessible at the stated
+ location until at least one year after the last time you
+ distribute an Opaque copy (directly or through your agents or
+ retailers) of that edition to the public.
+
+ It is requested, but not required, that you contact the authors of
+ the Document well before redistributing any large number of
+ copies, to give them a chance to provide you with an updated
+ version of the Document.
+
+ 4. MODIFICATIONS
+
+ You may copy and distribute a Modified Version of the Document
+ under the conditions of sections 2 and 3 above, provided that you
+ release the Modified Version under precisely this License, with
+ the Modified Version filling the role of the Document, thus
+ licensing distribution and modification of the Modified Version to
+ whoever possesses a copy of it. In addition, you must do these
+ things in the Modified Version:
+
+ A. Use in the Title Page (and on the covers, if any) a title
+ distinct from that of the Document, and from those of
+ previous versions (which should, if there were any, be listed
+ in the History section of the Document). You may use the
+ same title as a previous version if the original publisher of
+ that version gives permission.
+
+ B. List on the Title Page, as authors, one or more persons or
+ entities responsible for authorship of the modifications in
+ the Modified Version, together with at least five of the
+ principal authors of the Document (all of its principal
+ authors, if it has fewer than five), unless they release you
+ from this requirement.
+
+ C. State on the Title page the name of the publisher of the
+ Modified Version, as the publisher.
+
+ D. Preserve all the copyright notices of the Document.
+
+ E. Add an appropriate copyright notice for your modifications
+ adjacent to the other copyright notices.
+
+ F. Include, immediately after the copyright notices, a license
+ notice giving the public permission to use the Modified
+ Version under the terms of this License, in the form shown in
+ the Addendum below.
+
+ G. Preserve in that license notice the full lists of Invariant
+ Sections and required Cover Texts given in the Document's
+ license notice.
+
+ H. Include an unaltered copy of this License.
+
+ I. Preserve the section Entitled "History", Preserve its Title,
+ and add to it an item stating at least the title, year, new
+ authors, and publisher of the Modified Version as given on
+ the Title Page. If there is no section Entitled "History" in
+ the Document, create one stating the title, year, authors,
+ and publisher of the Document as given on its Title Page,
+ then add an item describing the Modified Version as stated in
+ the previous sentence.
+
+ J. Preserve the network location, if any, given in the Document
+ for public access to a Transparent copy of the Document, and
+ likewise the network locations given in the Document for
+ previous versions it was based on. These may be placed in
+ the "History" section. You may omit a network location for a
+ work that was published at least four years before the
+ Document itself, or if the original publisher of the version
+ it refers to gives permission.
+
+ K. For any section Entitled "Acknowledgements" or "Dedications",
+ Preserve the Title of the section, and preserve in the
+ section all the substance and tone of each of the contributor
+ acknowledgements and/or dedications given therein.
+
+ L. Preserve all the Invariant Sections of the Document,
+ unaltered in their text and in their titles. Section numbers
+ or the equivalent are not considered part of the section
+ titles.
+
+ M. Delete any section Entitled "Endorsements". Such a section
+ may not be included in the Modified Version.
+
+ N. Do not retitle any existing section to be Entitled
+ "Endorsements" or to conflict in title with any Invariant
+ Section.
+
+ O. Preserve any Warranty Disclaimers.
+
+ If the Modified Version includes new front-matter sections or
+ appendices that qualify as Secondary Sections and contain no
+ material copied from the Document, you may at your option
+ designate some or all of these sections as invariant. To do this,
+ add their titles to the list of Invariant Sections in the Modified
+ Version's license notice. These titles must be distinct from any
+ other section titles.
+
+ You may add a section Entitled "Endorsements", provided it contains
+ nothing but endorsements of your Modified Version by various
+ parties--for example, statements of peer review or that the text
+ has been approved by an organization as the authoritative
+ definition of a standard.
+
+ You may add a passage of up to five words as a Front-Cover Text,
+ and a passage of up to 25 words as a Back-Cover Text, to the end
+ of the list of Cover Texts in the Modified Version. Only one
+ passage of Front-Cover Text and one of Back-Cover Text may be
+ added by (or through arrangements made by) any one entity. If the
+ Document already includes a cover text for the same cover,
+ previously added by you or by arrangement made by the same entity
+ you are acting on behalf of, you may not add another; but you may
+ replace the old one, on explicit permission from the previous
+ publisher that added the old one.
+
+ The author(s) and publisher(s) of the Document do not by this
+ License give permission to use their names for publicity for or to
+ assert or imply endorsement of any Modified Version.
+
+ 5. COMBINING DOCUMENTS
+
+ You may combine the Document with other documents released under
+ this License, under the terms defined in section 4 above for
+ modified versions, provided that you include in the combination
+ all of the Invariant Sections of all of the original documents,
+ unmodified, and list them all as Invariant Sections of your
+ combined work in its license notice, and that you preserve all
+ their Warranty Disclaimers.
+
+ The combined work need only contain one copy of this License, and
+ multiple identical Invariant Sections may be replaced with a single
+ copy. If there are multiple Invariant Sections with the same name
+ but different contents, make the title of each such section unique
+ by adding at the end of it, in parentheses, the name of the
+ original author or publisher of that section if known, or else a
+ unique number. Make the same adjustment to the section titles in
+ the list of Invariant Sections in the license notice of the
+ combined work.
+
+ In the combination, you must combine any sections Entitled
+ "History" in the various original documents, forming one section
+ Entitled "History"; likewise combine any sections Entitled
+ "Acknowledgements", and any sections Entitled "Dedications". You
+ must delete all sections Entitled "Endorsements."
+
+ 6. COLLECTIONS OF DOCUMENTS
+
+ You may make a collection consisting of the Document and other
+ documents released under this License, and replace the individual
+ copies of this License in the various documents with a single copy
+ that is included in the collection, provided that you follow the
+ rules of this License for verbatim copying of each of the
+ documents in all other respects.
+
+ You may extract a single document from such a collection, and
+ distribute it individually under this License, provided you insert
+ a copy of this License into the extracted document, and follow
+ this License in all other respects regarding verbatim copying of
+ that document.
+
+ 7. AGGREGATION WITH INDEPENDENT WORKS
+
+ A compilation of the Document or its derivatives with other
+ separate and independent documents or works, in or on a volume of
+ a storage or distribution medium, is called an "aggregate" if the
+ copyright resulting from the compilation is not used to limit the
+ legal rights of the compilation's users beyond what the individual
+ works permit. When the Document is included in an aggregate, this
+ License does not apply to the other works in the aggregate which
+ are not themselves derivative works of the Document.
+
+ If the Cover Text requirement of section 3 is applicable to these
+ copies of the Document, then if the Document is less than one half
+ of the entire aggregate, the Document's Cover Texts may be placed
+ on covers that bracket the Document within the aggregate, or the
+ electronic equivalent of covers if the Document is in electronic
+ form. Otherwise they must appear on printed covers that bracket
+ the whole aggregate.
+
+ 8. TRANSLATION
+
+ Translation is considered a kind of modification, so you may
+ distribute translations of the Document under the terms of section
+ 4. Replacing Invariant Sections with translations requires special
+ permission from their copyright holders, but you may include
+ translations of some or all Invariant Sections in addition to the
+ original versions of these Invariant Sections. You may include a
+ translation of this License, and all the license notices in the
+ Document, and any Warranty Disclaimers, provided that you also
+ include the original English version of this License and the
+ original versions of those notices and disclaimers. In case of a
+ disagreement between the translation and the original version of
+ this License or a notice or disclaimer, the original version will
+ prevail.
+
+ If a section in the Document is Entitled "Acknowledgements",
+ "Dedications", or "History", the requirement (section 4) to
+ Preserve its Title (section 1) will typically require changing the
+ actual title.
+
+ 9. TERMINATION
+
+ You may not copy, modify, sublicense, or distribute the Document
+ except as expressly provided under this License. Any attempt
+ otherwise to copy, modify, sublicense, or distribute it is void,
+ and will automatically terminate your rights under this License.
+
+ However, if you cease all violation of this License, then your
+ license from a particular copyright holder is reinstated (a)
+ provisionally, unless and until the copyright holder explicitly
+ and finally terminates your license, and (b) permanently, if the
+ copyright holder fails to notify you of the violation by some
+ reasonable means prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+ reinstated permanently if the copyright holder notifies you of the
+ violation by some reasonable means, this is the first time you have
+ received notice of violation of this License (for any work) from
+ that copyright holder, and you cure the violation prior to 30 days
+ after your receipt of the notice.
+
+ Termination of your rights under this section does not terminate
+ the licenses of parties who have received copies or rights from
+ you under this License. If your rights have been terminated and
+ not permanently reinstated, receipt of a copy of some or all of
+ the same material does not give you any rights to use it.
+
+ 10. FUTURE REVISIONS OF THIS LICENSE
+
+ The Free Software Foundation may publish new, revised versions of
+ the GNU Free Documentation License from time to time. Such new
+ versions will be similar in spirit to the present version, but may
+ differ in detail to address new problems or concerns. See
+ `http://www.gnu.org/copyleft/'.
+
+ Each version of the License is given a distinguishing version
+ number. If the Document specifies that a particular numbered
+ version of this License "or any later version" applies to it, you
+ have the option of following the terms and conditions either of
+ that specified version or of any later version that has been
+ published (not as a draft) by the Free Software Foundation. If
+ the Document does not specify a version number of this License,
+ you may choose any version ever published (not as a draft) by the
+ Free Software Foundation. If the Document specifies that a proxy
+ can decide which future versions of this License can be used, that
+ proxy's public statement of acceptance of a version permanently
+ authorizes you to choose that version for the Document.
+
+ 11. RELICENSING
+
+ "Massive Multiauthor Collaboration Site" (or "MMC Site") means any
+ World Wide Web server that publishes copyrightable works and also
+ provides prominent facilities for anybody to edit those works. A
+ public wiki that anybody can edit is an example of such a server.
+ A "Massive Multiauthor Collaboration" (or "MMC") contained in the
+ site means any set of copyrightable works thus published on the MMC
+ site.
+
+ "CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0
+ license published by Creative Commons Corporation, a not-for-profit
+ corporation with a principal place of business in San Francisco,
+ California, as well as future copyleft versions of that license
+ published by that same organization.
+
+ "Incorporate" means to publish or republish a Document, in whole or
+ in part, as part of another Document.
+
+ An MMC is "eligible for relicensing" if it is licensed under this
+ License, and if all works that were first published under this
+ License somewhere other than this MMC, and subsequently
+ incorporated in whole or in part into the MMC, (1) had no cover
+ texts or invariant sections, and (2) were thus incorporated prior
+ to November 1, 2008.
+
+ The operator of an MMC Site may republish an MMC contained in the
+ site under CC-BY-SA on the same site at any time before August 1,
+ 2009, provided the MMC is eligible for relicensing.
+
+
+ADDENDUM: How to use this License for your documents
+====================================================
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and license
+notices just after the title page:
+
+ Copyright (C) YEAR YOUR NAME.
+ Permission is granted to copy, distribute and/or modify this document
+ under the terms of the GNU Free Documentation License, Version 1.3
+ or any later version published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
+ Texts. A copy of the license is included in the section entitled ``GNU
+ Free Documentation License''.
+
+ If you have Invariant Sections, Front-Cover Texts and Back-Cover
+Texts, replace the "with...Texts." line with this:
+
+ with the Invariant Sections being LIST THEIR TITLES, with
+ the Front-Cover Texts being LIST, and with the Back-Cover Texts
+ being LIST.
+
+ If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+ If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License, to
+permit their use in free software.
+
+
+File: wget.info, Node: Concept Index, Prev: Copying this manual, Up: Top
+
+Concept Index
+*************
+
+
+* Menu:
+
+* #wget: Internet Relay Chat. (line 6)
+* .css extension: HTTP Options. (line 10)
+* .html extension: HTTP Options. (line 10)
+* .listing files, removing: FTP Options. (line 21)
+* .netrc: Startup File. (line 6)
+* .wgetrc: Startup File. (line 6)
+* accept directories: Directory-Based Limits.
+ (line 17)
+* accept suffixes: Types of Files. (line 15)
+* accept wildcards: Types of Files. (line 15)
+* append to log: Logging and Input File Options.
+ (line 11)
+* arguments: Invoking. (line 6)
+* authentication <1>: HTTP Options. (line 39)
+* authentication: Download Options. (line 458)
+* backing up converted files: Recursive Retrieval Options.
+ (line 71)
+* bandwidth, limit: Download Options. (line 249)
+* base for relative links in input file: Logging and Input File Options.
+ (line 73)
+* bind address: Download Options. (line 6)
+* bug reports: Reporting Bugs. (line 6)
+* bugs: Reporting Bugs. (line 6)
+* cache: HTTP Options. (line 67)
+* caching of DNS lookups: Download Options. (line 334)
+* case fold: Recursive Accept/Reject Options.
+ (line 51)
+* client IP address: Download Options. (line 6)
+* clobbering, file: Download Options. (line 51)
+* command line: Invoking. (line 6)
+* comments, HTML: Recursive Retrieval Options.
+ (line 149)
+* connect timeout: Download Options. (line 232)
+* Content-Disposition: HTTP Options. (line 296)
+* Content-Length, ignore: HTTP Options. (line 156)
+* continue retrieval: Download Options. (line 87)
+* contributors: Contributors. (line 6)
+* conversion of links: Recursive Retrieval Options.
+ (line 32)
+* cookies: HTTP Options. (line 76)
+* cookies, loading: HTTP Options. (line 86)
+* cookies, saving: HTTP Options. (line 134)
+* cookies, session: HTTP Options. (line 139)
+* cut directories: Directory Options. (line 32)
+* debug: Logging and Input File Options.
+ (line 17)
+* default page name: HTTP Options. (line 6)
+* delete after retrieval: Recursive Retrieval Options.
+ (line 16)
+* directories: Directory-Based Limits.
+ (line 6)
+* directories, exclude: Directory-Based Limits.
+ (line 30)
+* directories, include: Directory-Based Limits.
+ (line 17)
+* directory limits: Directory-Based Limits.
+ (line 6)
+* directory prefix: Directory Options. (line 60)
+* DNS cache: Download Options. (line 334)
+* DNS timeout: Download Options. (line 226)
+* dot style: Download Options. (line 148)
+* downloading multiple times: Download Options. (line 51)
+* EGD: HTTPS (SSL/TLS) Options.
+ (line 101)
+* entropy, specifying source of: HTTPS (SSL/TLS) Options.
+ (line 85)
+* examples: Examples. (line 6)
+* exclude directories: Directory-Based Limits.
+ (line 30)
+* execute wgetrc command: Basic Startup Options.
+ (line 19)
+* FDL, GNU Free Documentation License: GNU Free Documentation License.
+ (line 6)
+* features: Overview. (line 6)
+* file names, restrict: Download Options. (line 353)
+* filling proxy cache: Recursive Retrieval Options.
+ (line 16)
+* follow FTP links: Recursive Accept/Reject Options.
+ (line 23)
+* following ftp links: FTP Links. (line 6)
+* following links: Following Links. (line 6)
+* force html: Logging and Input File Options.
+ (line 66)
+* ftp authentication: FTP Options. (line 6)
+* ftp password: FTP Options. (line 6)
+* ftp time-stamping: FTP Time-Stamping Internals.
+ (line 6)
+* ftp user: FTP Options. (line 6)
+* globbing, toggle: FTP Options. (line 45)
+* hangup: Signals. (line 6)
+* header, add: HTTP Options. (line 167)
+* hosts, spanning: Spanning Hosts. (line 6)
+* HTML comments: Recursive Retrieval Options.
+ (line 149)
+* http password: HTTP Options. (line 39)
+* http referer: HTTP Options. (line 208)
+* http time-stamping: HTTP Time-Stamping Internals.
+ (line 6)
+* http user: HTTP Options. (line 39)
+* idn support: Download Options. (line 471)
+* ignore case: Recursive Accept/Reject Options.
+ (line 51)
+* ignore length: HTTP Options. (line 156)
+* include directories: Directory-Based Limits.
+ (line 17)
+* incomplete downloads: Download Options. (line 87)
+* incremental updating: Time-Stamping. (line 6)
+* index.html: HTTP Options. (line 6)
+* input-file: Logging and Input File Options.
+ (line 43)
+* Internet Relay Chat: Internet Relay Chat. (line 6)
+* invoking: Invoking. (line 6)
+* IP address, client: Download Options. (line 6)
+* IPv6: Download Options. (line 404)
+* IRC: Internet Relay Chat. (line 6)
+* iri support: Download Options. (line 471)
+* Keep-Alive, turning off: HTTP Options. (line 55)
+* latest version: Distribution. (line 6)
+* limit bandwidth: Download Options. (line 249)
+* link conversion: Recursive Retrieval Options.
+ (line 32)
+* links: Following Links. (line 6)
+* list: Mailing Lists. (line 6)
+* loading cookies: HTTP Options. (line 86)
+* local encoding: Download Options. (line 479)
+* location of wgetrc: Wgetrc Location. (line 6)
+* log file: Logging and Input File Options.
+ (line 6)
+* mailing list: Mailing Lists. (line 6)
+* mirroring: Very Advanced Usage. (line 6)
+* no parent: Directory-Based Limits.
+ (line 43)
+* no-clobber: Download Options. (line 51)
+* nohup: Invoking. (line 6)
+* number of retries: Download Options. (line 12)
+* operating systems: Portability. (line 6)
+* option syntax: Option Syntax. (line 6)
+* output file: Logging and Input File Options.
+ (line 6)
+* overview: Overview. (line 6)
+* page requisites: Recursive Retrieval Options.
+ (line 84)
+* passive ftp: FTP Options. (line 61)
+* password: Download Options. (line 458)
+* pause: Download Options. (line 269)
+* Persistent Connections, disabling: HTTP Options. (line 55)
+* portability: Portability. (line 6)
+* POST: HTTP Options. (line 241)
+* progress indicator: Download Options. (line 148)
+* proxies: Proxies. (line 6)
+* proxy <1>: HTTP Options. (line 67)
+* proxy: Download Options. (line 311)
+* proxy authentication: HTTP Options. (line 199)
+* proxy filling: Recursive Retrieval Options.
+ (line 16)
+* proxy password: HTTP Options. (line 199)
+* proxy user: HTTP Options. (line 199)
+* quiet: Logging and Input File Options.
+ (line 28)
+* quota: Download Options. (line 318)
+* random wait: Download Options. (line 293)
+* randomness, specifying source of: HTTPS (SSL/TLS) Options.
+ (line 85)
+* rate, limit: Download Options. (line 249)
+* read timeout: Download Options. (line 237)
+* recursion: Recursive Download. (line 6)
+* recursive download: Recursive Download. (line 6)
+* redirect: HTTP Options. (line 193)
+* redirecting output: Advanced Usage. (line 89)
+* referer, http: HTTP Options. (line 208)
+* reject directories: Directory-Based Limits.
+ (line 30)
+* reject suffixes: Types of Files. (line 34)
+* reject wildcards: Types of Files. (line 34)
+* relative links: Relative Links. (line 6)
+* remote encoding: Download Options. (line 491)
+* reporting bugs: Reporting Bugs. (line 6)
+* required images, downloading: Recursive Retrieval Options.
+ (line 84)
+* resume download: Download Options. (line 87)
+* retries: Download Options. (line 12)
+* retries, waiting between: Download Options. (line 283)
+* retrieving: Recursive Download. (line 6)
+* robot exclusion: Robot Exclusion. (line 6)
+* robots.txt: Robot Exclusion. (line 6)
+* sample wgetrc: Sample Wgetrc. (line 6)
+* saving cookies: HTTP Options. (line 134)
+* security: Security Considerations.
+ (line 6)
+* server maintenance: Robot Exclusion. (line 6)
+* server response, print: Download Options. (line 192)
+* server response, save: HTTP Options. (line 215)
+* session cookies: HTTP Options. (line 139)
+* signal handling: Signals. (line 6)
+* spanning hosts: Spanning Hosts. (line 6)
+* specify config: Logging and Input File Options.
+ (line 86)
+* spider: Download Options. (line 197)
+* SSL: HTTPS (SSL/TLS) Options.
+ (line 6)
+* SSL certificate: HTTPS (SSL/TLS) Options.
+ (line 47)
+* SSL certificate authority: HTTPS (SSL/TLS) Options.
+ (line 73)
+* SSL certificate type, specify: HTTPS (SSL/TLS) Options.
+ (line 53)
+* SSL certificate, check: HTTPS (SSL/TLS) Options.
+ (line 23)
+* SSL protocol, choose: HTTPS (SSL/TLS) Options.
+ (line 10)
+* startup: Startup File. (line 6)
+* startup file: Startup File. (line 6)
+* suffixes, accept: Types of Files. (line 15)
+* suffixes, reject: Types of Files. (line 34)
+* symbolic links, retrieving: FTP Options. (line 73)
+* syntax of options: Option Syntax. (line 6)
+* syntax of wgetrc: Wgetrc Syntax. (line 6)
+* tag-based recursive pruning: Recursive Accept/Reject Options.
+ (line 27)
+* time-stamping: Time-Stamping. (line 6)
+* time-stamping usage: Time-Stamping Usage. (line 6)
+* timeout: Download Options. (line 208)
+* timeout, connect: Download Options. (line 232)
+* timeout, DNS: Download Options. (line 226)
+* timeout, read: Download Options. (line 237)
+* timestamping: Time-Stamping. (line 6)
+* tries: Download Options. (line 12)
+* Trust server names: HTTP Options. (line 307)
+* types of files: Types of Files. (line 6)
+* unlink: Download Options. (line 505)
+* updating the archives: Time-Stamping. (line 6)
+* URL: URL Format. (line 6)
+* URL syntax: URL Format. (line 6)
+* usage, time-stamping: Time-Stamping Usage. (line 6)
+* user: Download Options. (line 458)
+* user-agent: HTTP Options. (line 219)
+* various: Various. (line 6)
+* verbose: Logging and Input File Options.
+ (line 32)
+* wait: Download Options. (line 269)
+* wait, random: Download Options. (line 293)
+* waiting between retries: Download Options. (line 283)
+* web site: Web Site. (line 6)
+* Wget as spider: Download Options. (line 197)
+* wgetrc: Startup File. (line 6)
+* wgetrc commands: Wgetrc Commands. (line 6)
+* wgetrc location: Wgetrc Location. (line 6)
+* wgetrc syntax: Wgetrc Syntax. (line 6)
+* wildcards, accept: Types of Files. (line 15)
+* wildcards, reject: Types of Files. (line 34)
+* Windows file names: Download Options. (line 353)
+
+
+
+Tag Table:
+Node: Top805
+Node: Overview2193
+Node: Invoking5697
+Node: URL Format6550
+Ref: URL Format-Footnote-19140
+Node: Option Syntax9242
+Node: Basic Startup Options11919
+Node: Logging and Input File Options12724
+Node: Download Options16190
+Node: Directory Options40391
+Node: HTTP Options43096
+Node: HTTPS (SSL/TLS) Options58219
+Node: FTP Options63894
+Node: Recursive Retrieval Options68360
+Node: Recursive Accept/Reject Options76228
+Node: Exit Status79706
+Node: Recursive Download80729
+Node: Following Links83902
+Node: Spanning Hosts84864
+Node: Types of Files87061
+Node: Directory-Based Limits91423
+Node: Relative Links94515
+Node: FTP Links95352
+Node: Time-Stamping96219
+Node: Time-Stamping Usage97867
+Node: HTTP Time-Stamping Internals99715
+Ref: HTTP Time-Stamping Internals-Footnote-1100991
+Node: FTP Time-Stamping Internals101190
+Node: Startup File102656
+Node: Wgetrc Location103570
+Node: Wgetrc Syntax104390
+Node: Wgetrc Commands105110
+Node: Sample Wgetrc120463
+Node: Examples125986
+Node: Simple Usage126347
+Node: Advanced Usage127768
+Node: Very Advanced Usage131480
+Node: Various132975
+Node: Proxies133680
+Node: Distribution136531
+Node: Web Site136873
+Node: Mailing Lists137168
+Node: Internet Relay Chat139234
+Node: Reporting Bugs139520
+Node: Portability142039
+Node: Signals143664
+Node: Appendices144347
+Node: Robot Exclusion144693
+Node: Security Considerations148488
+Node: Contributors149672
+Node: Copying this manual155246
+Node: GNU Free Documentation License155485
+Node: Concept Index180667
+
+End Tag Table
diff --git a/doc/wget.texi b/doc/wget.texi
new file mode 100644
index 0000000..87b8f2d
--- /dev/null
+++ b/doc/wget.texi
@@ -0,0 +1,4284 @@
+\input texinfo @c -*-texinfo-*-
+
+@c %**start of header
+@setfilename wget.info
+@include version.texi
+@settitle GNU Wget @value{VERSION} Manual
+@c Disable the monstrous rectangles beside overfull hbox-es.
+@finalout
+@c Use `odd' to print double-sided.
+@setchapternewpage on
+@c %**end of header
+
+@iftex
+@c Remove this if you don't use A4 paper.
+@afourpaper
+@end iftex
+
+@c Title for man page. The weird way texi2pod.pl is written requires
+@c the preceding @set.
+@set Wget Wget
+@c man title Wget The non-interactive network downloader.
+
+@dircategory Network Applications
+@direntry
+* Wget: (wget). The non-interactive network downloader.
+@end direntry
+
+@copying
+This file documents the GNU Wget utility for downloading network
+data.
+
+@c man begin COPYRIGHT
+Copyright @copyright{} 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003,
+2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation,
+Inc.
+
+@iftex
+Permission is granted to make and distribute verbatim copies of
+this manual provided the copyright notice and this permission notice
+are preserved on all copies.
+@end iftex
+
+@ignore
+Permission is granted to process this file through TeX and print the
+results, provided the printed document carries a copying permission
+notice identical to this one except for the removal of this paragraph
+(this paragraph not being relevant to the printed manual).
+@end ignore
+Permission is granted to copy, distribute and/or modify this document
+under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no
+Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A
+copy of the license is included in the section entitled ``GNU Free
+Documentation License''.
+@c man end
+@end copying
+
+@titlepage
+@title GNU Wget @value{VERSION}
+@subtitle The non-interactive download utility
+@subtitle Updated for Wget @value{VERSION}, @value{UPDATED}
+@author by Hrvoje Nik@v{s}i@'{c} and others
+
+@ignore
+@c man begin AUTHOR
+Originally written by Hrvoje Niksic <hniksic@xemacs.org>.
+@c man end
+@c man begin SEEALSO
+This is @strong{not} the complete manual for GNU Wget.
+For more complete information, including more detailed explanations of
+some of the options, and a number of commands available
+for use with @file{.wgetrc} files and the @samp{-e} option, see the GNU
+Info entry for @file{wget}.
+@c man end
+@end ignore
+
+@page
+@vskip 0pt plus 1filll
+@insertcopying
+@end titlepage
+
+@contents
+
+@ifnottex
+@node Top, Overview, (dir), (dir)
+@top Wget @value{VERSION}
+
+@insertcopying
+@end ifnottex
+
+@menu
+* Overview:: Features of Wget.
+* Invoking:: Wget command-line arguments.
+* Recursive Download:: Downloading interlinked pages.
+* Following Links:: The available methods of chasing links.
+* Time-Stamping:: Mirroring according to time-stamps.
+* Startup File:: Wget's initialization file.
+* Examples:: Examples of usage.
+* Various:: The stuff that doesn't fit anywhere else.
+* Appendices:: Some useful references.
+* Copying this manual:: You may give out copies of this manual.
+* Concept Index:: Topics covered by this manual.
+@end menu
+
+@node Overview, Invoking, Top, Top
+@chapter Overview
+@cindex overview
+@cindex features
+
+@c man begin DESCRIPTION
+GNU Wget is a free utility for non-interactive download of files from
+the Web. It supports @sc{http}, @sc{https}, and @sc{ftp} protocols, as
+well as retrieval through @sc{http} proxies.
+
+@c man end
+This chapter is a partial overview of Wget's features.
+
+@itemize @bullet
+@item
+@c man begin DESCRIPTION
+Wget is non-interactive, meaning that it can work in the background,
+while the user is not logged on. This allows you to start a retrieval
+and disconnect from the system, letting Wget finish the work. By
+contrast, most Web browsers require the user's constant presence,
+which can be a great hindrance when transferring a lot of data.
+@c man end
+
+@item
+@ignore
+@c man begin DESCRIPTION
+
+@c man end
+@end ignore
+@c man begin DESCRIPTION
+Wget can follow links in @sc{html}, @sc{xhtml}, and @sc{css} pages, to
+create local versions of remote web sites, fully recreating the
+directory structure of the original site. This is sometimes referred to
+as ``recursive downloading.'' While doing that, Wget respects the Robot
+Exclusion Standard (@file{/robots.txt}). Wget can be instructed to
+convert the links in downloaded files to point at the local files, for
+offline viewing.
+@c man end
+
+@item
+File name wildcard matching and recursive mirroring of directories are
+available when retrieving via @sc{ftp}. Wget can read the time-stamp
+information given by both @sc{http} and @sc{ftp} servers, and store it
+locally. Thus Wget can see if the remote file has changed since last
+retrieval, and automatically retrieve the new version if it has. This
+makes Wget suitable for mirroring of @sc{ftp} sites, as well as home
+pages.
+
+@item
+@ignore
+@c man begin DESCRIPTION
+
+@c man end
+@end ignore
+@c man begin DESCRIPTION
+Wget has been designed for robustness over slow or unstable network
+connections; if a download fails due to a network problem, it will
+keep retrying until the whole file has been retrieved. If the server
+supports regetting, it will instruct the server to continue the
+download from where it left off.
+@c man end
+
+@item
+Wget supports proxy servers, which can lighten the network load, speed
+up retrieval and provide access behind firewalls. Wget uses passive
+@sc{ftp} downloading by default, active @sc{ftp} being an option.
+
+@item
+Wget supports IP version 6, the next generation of IP. IPv6 is
+autodetected at compile-time, and can be disabled at either build or
+run time. Binaries built with IPv6 support work well in both
+IPv4-only and dual family environments.
+
+@item
+Built-in features offer mechanisms to tune which links you wish to follow
+(@pxref{Following Links}).
+
+@item
+The progress of individual downloads is traced using a progress gauge.
+Interactive downloads are tracked using a ``thermometer''-style gauge,
+whereas non-interactive ones are traced with dots, each dot
+representing a fixed amount of data received (1KB by default). Either
+gauge can be customized to your preferences.
+
+@item
+Most of the features are fully configurable, either through command line
+options, or via the initialization file @file{.wgetrc} (@pxref{Startup
+File}). Wget allows you to define @dfn{global} startup files
+(@file{/usr/local/etc/wgetrc} by default) for site settings. You can also
+specify the location of a startup file with the @samp{--config} option.
+
+
+@ignore
+@c man begin FILES
+@table @samp
+@item /usr/local/etc/wgetrc
+Default location of the @dfn{global} startup file.
+
+@item .wgetrc
+User startup file.
+@end table
+@c man end
+@end ignore
+
+@item
+Finally, GNU Wget is free software. This means that everyone may use
+it, redistribute it and/or modify it under the terms of the GNU General
+Public License, as published by the Free Software Foundation (see the
+file @file{COPYING} that came with GNU Wget, for details).
+@end itemize
+
+@node Invoking, Recursive Download, Overview, Top
+@chapter Invoking
+@cindex invoking
+@cindex command line
+@cindex arguments
+@cindex nohup
+
+By default, Wget is very simple to invoke. The basic syntax is:
+
+@example
+@c man begin SYNOPSIS
+wget [@var{option}]@dots{} [@var{URL}]@dots{}
+@c man end
+@end example
+
+Wget will simply download all the @sc{url}s specified on the command
+line. @var{URL} is a @dfn{Uniform Resource Locator}, as defined below.
+
+However, you may wish to change some of the default parameters of
+Wget. You can do it two ways: permanently, adding the appropriate
+command to @file{.wgetrc} (@pxref{Startup File}), or specifying it on
+the command line.
+
+@menu
+* URL Format::
+* Option Syntax::
+* Basic Startup Options::
+* Logging and Input File Options::
+* Download Options::
+* Directory Options::
+* HTTP Options::
+* HTTPS (SSL/TLS) Options::
+* FTP Options::
+* Recursive Retrieval Options::
+* Recursive Accept/Reject Options::
+* Exit Status::
+@end menu
+
+@node URL Format, Option Syntax, Invoking, Invoking
+@section URL Format
+@cindex URL
+@cindex URL syntax
+
+@dfn{URL} is an acronym for Uniform Resource Locator. A uniform
+resource locator is a compact string representation for a resource
+available via the Internet. Wget recognizes the @sc{url} syntax as per
+@sc{rfc1738}. This is the most widely used form (square brackets denote
+optional parts):
+
+@example
+http://host[:port]/directory/file
+ftp://host[:port]/directory/file
+@end example
+
+You can also encode your username and password within a @sc{url}:
+
+@example
+ftp://user:password@@host/path
+http://user:password@@host/path
+@end example
+
+Either @var{user} or @var{password}, or both, may be left out. If you
+leave out either the @sc{http} username or password, no authentication
+will be sent. If you leave out the @sc{ftp} username, @samp{anonymous}
+will be used. If you leave out the @sc{ftp} password, your email
+address will be supplied as a default password.@footnote{If you have a
+@file{.netrc} file in your home directory, the password will also be
+searched for there.}
+
+@strong{Important Note}: if you specify a password-containing @sc{url}
+on the command line, the username and password will be plainly visible
+to all users on the system, by way of @code{ps}. On multi-user systems,
+this is a big security risk. To work around it, use @code{wget -i -}
+and feed the @sc{url}s to Wget's standard input, each on a separate
+line, terminated by @kbd{C-d}.
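+
+For instance, a minimal sketch of this workaround, using a shell
+here-document in place of interactive typing (host and path are
+hypothetical):
+
+@example
+@group
+# @r{The URL never appears among the arguments visible to ps.}
+wget -i - <<'EOF'
+ftp://user:password@@host/path
+EOF
+@end group
+@end example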
+
+You can encode unsafe characters in a @sc{url} as @samp{%xy}, @code{xy}
+being the hexadecimal representation of the character's @sc{ascii}
+value. Some common unsafe characters include @samp{%} (quoted as
+@samp{%25}), @samp{:} (quoted as @samp{%3A}), and @samp{@@} (quoted as
+@samp{%40}). Refer to @sc{rfc1738} for a comprehensive list of unsafe
+characters.
+
+Wget also supports the @code{type} feature for @sc{ftp} @sc{url}s. By
+default, @sc{ftp} documents are retrieved in the binary mode (type
+@samp{i}), which means that they are downloaded unchanged. Another
+useful mode is the @samp{a} (@dfn{ASCII}) mode, which converts the line
+delimiters between the different operating systems, and is thus useful
+for text files. Here is an example:
+
+@example
+ftp://host/directory/file;type=a
+@end example
+
+Two alternative variants of @sc{url} specification are also supported,
+because of historical (hysterical?) reasons and their widespread use.
+
+@sc{ftp}-only syntax (supported by @code{NcFTP}):
+@example
+host:/dir/file
+@end example
+
+@sc{http}-only syntax (introduced by @code{Netscape}):
+@example
+host[:port]/dir/file
+@end example
+
+These two alternative forms are deprecated, and may cease being
+supported in the future.
+
+If you do not understand the difference between these notations, or do
+not know which one to use, just use the plain ordinary format you use
+with your favorite browser, like @code{Lynx} or @code{Netscape}.
+
+@c man begin OPTIONS
+
+@node Option Syntax, Basic Startup Options, URL Format, Invoking
+@section Option Syntax
+@cindex option syntax
+@cindex syntax of options
+
+Since Wget uses GNU getopt to process command-line arguments, every
+option has a long form along with the short one. Long options are
+more convenient to remember, but take time to type. You may freely
+mix different option styles, or specify options after the command-line
+arguments. Thus you may write:
+
+@example
+wget -r --tries=10 http://fly.srk.fer.hr/ -o log
+@end example
+
+The space between the option accepting an argument and the argument may
+be omitted. Instead of @samp{-o log} you can write @samp{-olog}.
+
+You may put several options that do not require arguments together,
+like:
+
+@example
+wget -drc @var{URL}
+@end example
+
+This is completely equivalent to:
+
+@example
+wget -d -r -c @var{URL}
+@end example
+
+Since the options can be specified after the arguments, you may
+terminate them with @samp{--}. So the following will try to download
+@sc{url} @samp{-x}, reporting failure to @file{log}:
+
+@example
+wget -o log -- -x
+@end example
+
+The options that accept comma-separated lists all respect the convention
+that specifying an empty list clears its value. This can be useful to
+clear the @file{.wgetrc} settings. For instance, if your @file{.wgetrc}
+sets @code{exclude_directories} to @file{/cgi-bin}, the following
+example will first reset it, and then set it to exclude @file{/~nobody}
+and @file{/~somebody}. You can also clear the lists in @file{.wgetrc}
+(@pxref{Wgetrc Syntax}).
+
+@example
+wget -X '' -X /~nobody,/~somebody
+@end example
+
+Most options that do not accept arguments are @dfn{boolean} options,
+so named because their state can be captured with a yes-or-no
+(``boolean'') variable. For example, @samp{--follow-ftp} tells Wget
+to follow FTP links from HTML files and, on the other hand,
+@samp{--no-glob} tells it not to perform file globbing on FTP URLs. A
+boolean option is either @dfn{affirmative} or @dfn{negative}
+(beginning with @samp{--no}). All such options share several
+properties.
+
+Unless stated otherwise, it is assumed that the default behavior is
+the opposite of what the option accomplishes. For example, the
+documented existence of @samp{--follow-ftp} assumes that the default
+is to @emph{not} follow FTP links from HTML pages.
+
+Affirmative options can be negated by prepending the @samp{--no-} to
+the option name; negative options can be negated by omitting the
+@samp{--no-} prefix. This might seem superfluous---if the default for
+an affirmative option is to not do something, then why provide a way
+to explicitly turn it off? But the startup file may in fact change
+the default. For instance, using @code{follow_ftp = on} in
+@file{.wgetrc} makes Wget @emph{follow} FTP links by default, and
+using @samp{--no-follow-ftp} is the only way to restore the factory
+default from the command line.
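+
+As a concrete sketch of that interaction:
+
+@example
+@group
+# @r{Suppose $HOME/.wgetrc contains:  follow_ftp = on}
+# @r{Restore the factory default for this run only:}
+wget --no-follow-ftp -r http://fly.srk.fer.hr/
+@end group
+@end example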
+
+@node Basic Startup Options, Logging and Input File Options, Option Syntax, Invoking
+@section Basic Startup Options
+
+@table @samp
+@item -V
+@itemx --version
+Display the version of Wget.
+
+@item -h
+@itemx --help
+Print a help message describing all of Wget's command-line options.
+
+@item -b
+@itemx --background
+Go to background immediately after startup. If no output file is
+specified via @samp{-o}, output is redirected to @file{wget-log}.
+
+@cindex execute wgetrc command
+@item -e @var{command}
+@itemx --execute @var{command}
+Execute @var{command} as if it were a part of @file{.wgetrc}
+(@pxref{Startup File}). A command thus invoked will be executed
+@emph{after} the commands in @file{.wgetrc}, thus taking precedence over
+them. If you need to specify more than one wgetrc command, use multiple
+instances of @samp{-e}.
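+
+For example, a quick sketch of supplying several wgetrc commands on
+the command line (@pxref{Wgetrc Commands}):
+
+@example
+wget -e robots=off -e wait=1 -r http://fly.srk.fer.hr/
+@end example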
+
+@end table
+
+@node Logging and Input File Options, Download Options, Basic Startup Options, Invoking
+@section Logging and Input File Options
+
+@table @samp
+@cindex output file
+@cindex log file
+@item -o @var{logfile}
+@itemx --output-file=@var{logfile}
+Log all messages to @var{logfile}. The messages are normally reported
+to standard error.
+
+@cindex append to log
+@item -a @var{logfile}
+@itemx --append-output=@var{logfile}
+Append to @var{logfile}. This is the same as @samp{-o}, only it appends
+to @var{logfile} instead of overwriting the old log file. If
+@var{logfile} does not exist, a new file is created.
+
+@cindex debug
+@item -d
+@itemx --debug
+Turn on debug output, meaning various information important to the
+developers of Wget if it does not work properly. Your system
+administrator may have chosen to compile Wget without debug support, in
+which case @samp{-d} will not work. Please note that compiling with
+debug support is always safe---Wget compiled with the debug support will
+@emph{not} print any debug info unless requested with @samp{-d}.
+@xref{Reporting Bugs}, for more information on how to use @samp{-d} for
+sending bug reports.
+
+@cindex quiet
+@item -q
+@itemx --quiet
+Turn off Wget's output.
+
+@cindex verbose
+@item -v
+@itemx --verbose
+Turn on verbose output, with all the available data. The default output
+is verbose.
+
+@item -nv
+@itemx --no-verbose
+Turn off verbose without being completely quiet (use @samp{-q} for
+that), which means that error messages and basic information still get
+printed.
+
+@cindex input-file
+@item -i @var{file}
+@itemx --input-file=@var{file}
+Read @sc{url}s from a local or external @var{file}. If @samp{-} is
+specified as @var{file}, @sc{url}s are read from the standard input.
+(Use @samp{./-} to read from a file literally named @samp{-}.)
+
+If this function is used, no @sc{url}s need be present on the command
+line. If there are @sc{url}s both on the command line and in an input
+file, those on the command line will be the first ones to be
+retrieved. If @samp{--force-html} is not specified, then @var{file}
+should consist of a series of URLs, one per line.
+
+However, if you specify @samp{--force-html}, the document will be
+regarded as @samp{html}. In that case you may have problems with
+relative links, which you can solve either by adding @code{<base
+href="@var{url}">} to the documents or by specifying
+@samp{--base=@var{url}} on the command line.
+
+If the @var{file} is an external one, the document will be automatically
+treated as @samp{html} if the Content-Type matches @samp{text/html}.
+Furthermore, the @var{file}'s location will be implicitly used as base
+href if none was specified.
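+
+A minimal sketch of typical @samp{-i} usage:
+
+@example
+# @r{urls.txt is a hypothetical file with one URL per line.}
+wget -i urls.txt
+@end example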
+
+@cindex force html
+@item -F
+@itemx --force-html
+When input is read from a file, force it to be treated as an @sc{html}
+file. This enables you to retrieve relative links from existing
+@sc{html} files on your local disk, by adding @code{<base
+href="@var{url}">} to @sc{html}, or using the @samp{--base} command-line
+option.
+
+@cindex base for relative links in input file
+@item -B @var{URL}
+@itemx --base=@var{URL}
+Resolves relative links using @var{URL} as the point of reference,
+when reading links from an HTML file specified via the
+@samp{-i}/@samp{--input-file} option (together with
+@samp{--force-html}, or when the input file was fetched remotely from
+a server describing it as @sc{html}). This is equivalent to the
+presence of a @code{BASE} tag in the @sc{html} input file, with
+@var{URL} as the value for the @code{href} attribute.
+
+For instance, if you specify @samp{http://foo/bar/a.html} for
+@var{URL}, and Wget reads @samp{../baz/b.html} from the input file, it
+would be resolved to @samp{http://foo/baz/b.html}.
+
+@cindex specify config
+@item --config=@var{FILE}
+Specify the location of a startup file you wish to use.
+@end table
+
+@node Download Options, Directory Options, Logging and Input File Options, Invoking
+@section Download Options
+
+@table @samp
+@cindex bind address
+@cindex client IP address
+@cindex IP address, client
+@item --bind-address=@var{ADDRESS}
+When making client TCP/IP connections, bind to @var{ADDRESS} on
+the local machine. @var{ADDRESS} may be specified as a hostname or IP
+address. This option can be useful if your machine is bound to multiple
+IPs.
+
+@cindex retries
+@cindex tries
+@cindex number of retries
+@item -t @var{number}
+@itemx --tries=@var{number}
+Set number of retries to @var{number}. Specify 0 or @samp{inf} for
+infinite retrying. The default is to retry 20 times, with the exception
+of fatal errors like ``connection refused'' or ``not found'' (404),
+which are not retried.
+
+@item -O @var{file}
+@itemx --output-document=@var{file}
+The documents will not be written to the appropriate files, but all
+will be concatenated together and written to @var{file}. If @samp{-}
+is used as @var{file}, documents will be printed to standard output,
+disabling link conversion. (Use @samp{./-} to print to a file
+literally named @samp{-}.)
+
+Use of @samp{-O} is @emph{not} intended to mean simply ``use the name
+@var{file} instead of the one in the URL;'' rather, it is
+analogous to shell redirection:
+@samp{wget -O file http://foo} is intended to work like
+@samp{wget -O - http://foo > file}; @file{file} will be truncated
+immediately, and @emph{all} downloaded content will be written there.
+
+For this reason, @samp{-N} (for timestamp-checking) is not supported
+in combination with @samp{-O}: since @var{file} is always newly
+created, it will always have a very new timestamp. A warning will be
+issued if this combination is used.
+
+Similarly, using @samp{-r} or @samp{-p} with @samp{-O} may not work as
+you expect: Wget won't just download the first file to @var{file} and
+then download the rest to their normal names: @emph{all} downloaded
+content will be placed in @var{file}. This was disabled in version
+1.11, but has been reinstated (with a warning) in 1.11.2, as there are
+some cases where this behavior can actually have some use.
+
+Note that a combination with @samp{-k} is only permitted when
+downloading a single document, as in that case it will just convert
+all relative URIs to external ones; @samp{-k} makes no sense for
+multiple URIs when they're all being downloaded to a single file;
+@samp{-k} can be used only when the output is a regular file.
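+
+To illustrate the redirection analogy, both of the following commands
+truncate a hypothetical @file{all.html} first and then write every
+retrieved document into it:
+
+@example
+@group
+# @r{File names here are hypothetical.}
+wget -O all.html http://fly.srk.fer.hr/a.html http://fly.srk.fer.hr/b.html
+wget -O - http://fly.srk.fer.hr/a.html http://fly.srk.fer.hr/b.html > all.html
+@end group
+@end example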
+
+@cindex clobbering, file
+@cindex downloading multiple times
+@cindex no-clobber
+@item -nc
+@itemx --no-clobber
+If a file is downloaded more than once in the same directory, Wget's
+behavior depends on a few options, including @samp{-nc}. In certain
+cases, the local file will be @dfn{clobbered}, or overwritten, upon
+repeated download. In other cases it will be preserved.
+
+When running Wget without @samp{-N}, @samp{-nc}, @samp{-r}, or
+@samp{-p}, downloading the same file in the same directory will result
+in the original copy of @var{file} being preserved and the second copy
+being named @samp{@var{file}.1}. If that file is downloaded yet
+again, the third copy will be named @samp{@var{file}.2}, and so on.
+(This is also the behavior with @samp{-nd}, even if @samp{-r} or
+@samp{-p} are in effect.) When @samp{-nc} is specified, this behavior
+is suppressed, and Wget will refuse to download newer copies of
+@samp{@var{file}}. Therefore, ``@code{no-clobber}'' is actually a
+misnomer in this mode---it's not clobbering that's prevented (as the
+numeric suffixes were already preventing clobbering), but rather the
+multiple version saving that's prevented.
+
+When running Wget with @samp{-r} or @samp{-p}, but without @samp{-N},
+@samp{-nd}, or @samp{-nc}, re-downloading a file will result in the
+new copy simply overwriting the old. Adding @samp{-nc} will prevent
+this behavior, instead causing the original version to be preserved
+and any newer copies on the server to be ignored.
+
+When running Wget with @samp{-N}, with or without @samp{-r} or
+@samp{-p}, the decision as to whether or not to download a newer copy
+of a file depends on the local and remote timestamp and size of the
+file (@pxref{Time-Stamping}). @samp{-nc} may not be specified at the
+same time as @samp{-N}.
+
+Note that when @samp{-nc} is specified, files with the suffixes
+@samp{.html} or @samp{.htm} will be loaded from the local disk and
+parsed as if they had been retrieved from the Web.
+
+@cindex continue retrieval
+@cindex incomplete downloads
+@cindex resume download
+@item -c
+@itemx --continue
+Continue getting a partially-downloaded file. This is useful when you
+want to finish up a download started by a previous instance of Wget, or
+by another program. For instance:
+
+@example
+wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
+@end example
+
+If there is a file named @file{ls-lR.Z} in the current directory, Wget
+will assume that it is the first portion of the remote file, and will
+ask the server to continue the retrieval from an offset equal to the
+length of the local file.
+
+Note that you don't need to specify this option if you just want the
+current invocation of Wget to retry downloading a file should the
+connection be lost midway through. This is the default behavior.
+@samp{-c} only affects resumption of downloads started @emph{prior} to
+this invocation of Wget, and whose local files are still sitting around.
+
+Without @samp{-c}, the previous example would just download the remote
+file to @file{ls-lR.Z.1}, leaving the truncated @file{ls-lR.Z} file
+alone.
+
+Beginning with Wget 1.7, if you use @samp{-c} on a non-empty file, and
+it turns out that the server does not support continued downloading,
+Wget will refuse to start the download from scratch, which would
+effectively ruin existing contents. If you really want the download to
+start from scratch, remove the file.
+
+Also beginning with Wget 1.7, if you use @samp{-c} on a file which is of
+equal size as the one on the server, Wget will refuse to download the
+file and print an explanatory message. The same happens when the file
+is smaller on the server than locally (presumably because it was changed
+on the server since your last download attempt)---because ``continuing''
+is not meaningful, no download occurs.
+
+On the other side of the coin, while using @samp{-c}, any file that's
+bigger on the server than locally will be considered an incomplete
+download and only @code{(length(remote) - length(local))} bytes will be
+downloaded and tacked onto the end of the local file. This behavior can
+be desirable in certain cases---for instance, you can use @samp{wget -c}
+to download just the new portion that's been appended to a data
+collection or log file.
+
+However, if the file is bigger on the server because it's been
+@emph{changed}, as opposed to just @emph{appended} to, you'll end up
+with a garbled file. Wget has no way of verifying that the local file
+is really a valid prefix of the remote file. You need to be especially
+careful of this when using @samp{-c} in conjunction with @samp{-r},
+since every file will be considered as an ``incomplete download'' candidate.
+
+Another instance where you'll get a garbled file if you try to use
+@samp{-c} is if you have a lame @sc{http} proxy that inserts a
+``transfer interrupted'' string into the local file. In the future a
+``rollback'' option may be added to deal with this case.
+
+Note that @samp{-c} only works with @sc{ftp} servers and with @sc{http}
+servers that support the @code{Range} header.
+
+@cindex progress indicator
+@cindex dot style
+@item --progress=@var{type}
+Select the type of the progress indicator you wish to use. Legal
+indicators are ``dot'' and ``bar''.
+
+The ``bar'' indicator is used by default. It draws an @sc{ascii}
+progress bar (a.k.a. ``thermometer'' display) indicating the status
+of retrieval. If the output is not a TTY, the ``dot'' indicator will
+be used by default.
+
+Use @samp{--progress=dot} to switch to the ``dot'' display. It traces
+the retrieval by printing dots on the screen, each dot representing a
+fixed amount of downloaded data.
+
+When using the dotted retrieval, you may also set the @dfn{style} by
+specifying the type as @samp{dot:@var{style}}. Different styles assign
+different meaning to one dot. With the @code{default} style each dot
+represents 1K, there are ten dots in a cluster and 50 dots in a line.
+The @code{binary} style has a more ``computer''-like orientation---8K
+dots, 16-dot clusters and 48 dots per line (so each line contains 384K).
+lines). The @code{mega} style is suitable for downloading very large
+files---each dot represents 64K retrieved, there are eight dots in a
+cluster, and 48 dots on each line (so each line contains 3M).
+
+Note that you can set the default style using the @code{progress}
+command in @file{.wgetrc}. That setting may be overridden from the
+command line. The exception is that, when the output is not a TTY, the
+``dot'' progress will be favored over ``bar''. To force the bar output,
+use @samp{--progress=bar:force}.
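+
+For instance, two sketches combining indicator and style:
+
+@example
+@group
+# @r{big.iso is a hypothetical large file.}
+wget --progress=dot:mega http://fly.srk.fer.hr/big.iso
+# @r{Force the bar even though output goes to a log file:}
+wget --progress=bar:force -o log http://fly.srk.fer.hr/big.iso
+@end group
+@end example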
+
+@item -N
+@itemx --timestamping
+Turn on time-stamping. @xref{Time-Stamping}, for details.
+
+@item --no-use-server-timestamps
+Don't set the local file's timestamp by the one on the server.
+
+By default, when a file is downloaded, its timestamps are set to
+match those from the remote file. This allows the use of
+@samp{--timestamping} on subsequent invocations of Wget. However, it
+is sometimes useful to base the local file's timestamp on when it was
+actually downloaded; for that purpose, the
+@samp{--no-use-server-timestamps} option has been provided.
+
+@cindex server response, print
+@item -S
+@itemx --server-response
+Print the headers sent by @sc{http} servers and responses sent by
+@sc{ftp} servers.
+
+@cindex Wget as spider
+@cindex spider
+@item --spider
+When invoked with this option, Wget will behave as a Web @dfn{spider},
+which means that it will not download the pages, just check that they
+are there. For example, you can use Wget to check your bookmarks:
+
+@example
+wget --spider --force-html -i bookmarks.html
+@end example
+
+This feature needs much more work for Wget to get close to the
+functionality of real web spiders.
+
+@cindex timeout
+@item -T seconds
+@itemx --timeout=@var{seconds}
+Set the network timeout to @var{seconds} seconds. This is equivalent
+to specifying @samp{--dns-timeout}, @samp{--connect-timeout}, and
+@samp{--read-timeout}, all at the same time.
+
+When interacting with the network, Wget can check for timeout and
+abort the operation if it takes too long. This prevents anomalies
+like hanging reads and infinite connects. The only timeout enabled by
+default is a 900-second read timeout. Setting a timeout to 0 disables
+it altogether. Unless you know what you are doing, it is best not to
+change the default timeout settings.
+
+All timeout-related options accept decimal values, as well as
+subsecond values. For example, @samp{0.1} seconds is a legal (though
+unwise) choice of timeout. Subsecond timeouts are useful for checking
+server response times or for testing network latency.
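+
+For instance, the following two invocations are equivalent:
+
+@example
+@group
+wget -T 10 http://fly.srk.fer.hr/
+wget --dns-timeout=10 --connect-timeout=10 --read-timeout=10 \
+     http://fly.srk.fer.hr/
+@end group
+@end example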
+
+@cindex DNS timeout
+@cindex timeout, DNS
+@item --dns-timeout=@var{seconds}
+Set the DNS lookup timeout to @var{seconds} seconds. DNS lookups that
+don't complete within the specified time will fail. By default, there
+is no timeout on DNS lookups, other than that implemented by system
+libraries.
+
+@cindex connect timeout
+@cindex timeout, connect
+@item --connect-timeout=@var{seconds}
+Set the connect timeout to @var{seconds} seconds. TCP connections that
+take longer to establish will be aborted. By default, there is no
+connect timeout, other than that implemented by system libraries.
+
+@cindex read timeout
+@cindex timeout, read
+@item --read-timeout=@var{seconds}
+Set the read (and write) timeout to @var{seconds} seconds. The
+``time'' of this timeout refers to @dfn{idle time}: if, at any point in
+the download, no data is received for more than the specified number
+of seconds, reading fails and the download is restarted. This option
+does not directly affect the duration of the entire download.
+
+Of course, the remote server may choose to terminate the connection
+sooner than this option requires. The default read timeout is 900
+seconds.
+
+@cindex bandwidth, limit
+@cindex rate, limit
+@cindex limit bandwidth
+@item --limit-rate=@var{amount}
+Limit the download speed to @var{amount} bytes per second. Amount may
+be expressed in bytes, kilobytes with the @samp{k} suffix, or megabytes
+with the @samp{m} suffix. For example, @samp{--limit-rate=20k} will
+limit the retrieval rate to 20KB/s. This is useful when, for whatever
+reason, you don't want Wget to consume the entire available bandwidth.
+
+This option allows the use of decimal numbers, usually in conjunction
+with power suffixes; for example, @samp{--limit-rate=2.5k} is a legal
+value.
+
+Note that Wget implements the limiting by sleeping the appropriate
+amount of time after a network read that took less time than specified
+by the rate. Eventually this strategy causes the TCP transfer to slow
+down to approximately the specified rate. However, it may take some
+time for this balance to be achieved, so don't be surprised if limiting
+the rate doesn't work well with very small files.
+
+@cindex pause
+@cindex wait
+@item -w @var{seconds}
+@itemx --wait=@var{seconds}
+Wait the specified number of seconds between the retrievals. Use of
+this option is recommended, as it lightens the server load by making the
+requests less frequent. Instead of in seconds, the time can be
+specified in minutes using the @code{m} suffix, in hours using @code{h}
+suffix, or in days using @code{d} suffix.
+
+Specifying a large value for this option is useful if the network or the
+destination host is down, so that Wget can wait long enough to
+reasonably expect the network error to be fixed before the retry. The
+waiting interval specified by this option is also influenced by
+@samp{--random-wait} (see below).
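+
+For example, a polite recursive retrieval that pauses 30 seconds
+between requests (@samp{-w 2m} would mean two minutes):
+
+@example
+wget -w 30 -r http://fly.srk.fer.hr/
+@end example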
+
+@cindex retries, waiting between
+@cindex waiting between retries
+@item --waitretry=@var{seconds}
+If you don't want Wget to wait between @emph{every} retrieval, but only
+between retries of failed downloads, you can use this option. Wget will
+use @dfn{linear backoff}, waiting 1 second after the first failure on a
+given file, then waiting 2 seconds after the second failure on that
+file, up to the maximum number of @var{seconds} you specify.
+
+By default, Wget will assume a value of 10 seconds.
+
+@cindex wait, random
+@cindex random wait
+@item --random-wait
+Some web sites may perform log analysis to identify retrieval programs
+such as Wget by looking for statistically significant similarities in
+the time between requests. This option causes the time between requests
+to vary between 0.5 and 1.5 * @var{wait} seconds, where @var{wait} was
+specified using the @samp{--wait} option, in order to mask Wget's
+presence from such analysis.
+
+A 2001 article in a publication devoted to development on a popular
+consumer platform provided code to perform this analysis on the fly.
+Its author suggested blocking at the class C address level to ensure
+automated retrieval programs were blocked despite changing DHCP-supplied
+addresses.
+
+The @samp{--random-wait} option was inspired by this ill-advised
+recommendation to block many unrelated users from a web site due to the
+actions of one.
+
+@cindex proxy
+@itemx --no-proxy
+Don't use proxies, even if the appropriate @code{*_proxy} environment
+variable is defined.
+
+@c man end
+@xref{Proxies}, for more information about the use of proxies with Wget.
+@c man begin OPTIONS
+
+@cindex quota
+@item -Q @var{quota}
+@itemx --quota=@var{quota}
+Specify download quota for automatic retrievals. The value can be
+specified in bytes (default), kilobytes (with @samp{k} suffix), or
+megabytes (with @samp{m} suffix).
+
+Note that quota will never affect downloading a single file. So if you
+specify @samp{wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz}, all of the
+@file{ls-lR.gz} will be downloaded. The same goes even when several
+@sc{url}s are specified on the command-line. However, quota is
+respected when retrieving either recursively, or from an input file.
+Thus you may safely type @samp{wget -Q2m -i sites}---download will be
+aborted when the quota is exceeded.
+
+Setting quota to 0 or to @samp{inf} unlimits the download quota.
+
+@cindex DNS cache
+@cindex caching of DNS lookups
+@item --no-dns-cache
+Turn off caching of DNS lookups. Normally, Wget remembers the IP
+addresses it looked up from DNS so it doesn't have to repeatedly
+contact the DNS server for the same (typically small) set of hosts it
+retrieves from. This cache exists in memory only; a new Wget run will
+contact DNS again.
+
+However, it has been reported that in some situations it is not
+desirable to cache host names, even for the duration of a
+short-running application like Wget. With this option Wget issues a
+new DNS lookup (more precisely, a new call to @code{gethostbyname} or
+@code{getaddrinfo}) each time it makes a new connection. Please note
+that this option will @emph{not} affect caching that might be
+performed by the resolving library or by an external caching layer,
+such as NSCD.
+
+If you don't understand exactly what this option does, you probably
+won't need it.
+
+@cindex file names, restrict
+@cindex Windows file names
+@item --restrict-file-names=@var{modes}
+Change which characters found in remote URLs must be escaped during
+generation of local filenames. Characters that are @dfn{restricted}
+by this option are escaped, i.e. replaced with @samp{%HH}, where
+@samp{HH} is the hexadecimal number that corresponds to the restricted
+character. This option may also be used to force all alphabetical
+cases to be either lower- or uppercase.
+
+By default, Wget escapes the characters that are not valid or safe as
+part of file names on your operating system, as well as control
+characters that are typically unprintable. This option is useful for
+changing these defaults, perhaps because you are downloading to a
+non-native partition, or because you want to disable escaping of the
+control characters, or you want to further restrict characters to only
+those in the @sc{ascii} range of values.
+
+The @var{modes} are a comma-separated set of text values. The
+acceptable values are @samp{unix}, @samp{windows}, @samp{nocontrol},
+@samp{ascii}, @samp{lowercase}, and @samp{uppercase}. The values
+@samp{unix} and @samp{windows} are mutually exclusive (one will
+override the other), as are @samp{lowercase} and
+@samp{uppercase}. Those last are special cases, as they do not change
+the set of characters that would be escaped, but rather force local
+file paths to be converted either to lower- or uppercase.
+
+When @samp{unix} is specified, Wget escapes the character @samp{/} and
+the control characters in the ranges 0--31 and 128--159. This is the
+default on Unix-like operating systems.
+
+When @samp{windows} is given, Wget escapes the characters @samp{\},
+@samp{|}, @samp{/}, @samp{:}, @samp{?}, @samp{"}, @samp{*}, @samp{<},
+@samp{>}, and the control characters in the ranges 0--31 and 128--159.
+In addition to this, Wget in Windows mode uses @samp{+} instead of
+@samp{:} to separate host and port in local file names, and uses
+@samp{@@} instead of @samp{?} to separate the query portion of the file
+name from the rest. Therefore, a URL that would be saved as
+@samp{www.xemacs.org:4300/search.pl?input=blah} in Unix mode would be
+saved as @samp{www.xemacs.org+4300/search.pl@@input=blah} in Windows
+mode. This mode is the default on Windows.
+
+If you specify @samp{nocontrol}, then the escaping of the control
+characters is also switched off. This option may make sense
+when you are downloading URLs whose names contain UTF-8 characters, on
+a system which can save and display filenames in UTF-8 (some possible
+byte values used in UTF-8 byte sequences fall in the range of values
+designated by Wget as ``controls'').
+
+The @samp{ascii} mode is used to specify that any bytes whose values
+are outside the range of @sc{ascii} characters (that is, greater than
+127) shall be escaped. This can be useful when saving filenames
+whose encoding does not match the one used locally.
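+
+As an illustration, a sketch that escapes Windows-unsafe characters
+and forces lowercase local file names:
+
+@example
+wget --restrict-file-names=windows,lowercase -r http://fly.srk.fer.hr/
+@end example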
+
+@cindex IPv6
+@itemx -4
+@itemx --inet4-only
+@itemx -6
+@itemx --inet6-only
+Force connecting to IPv4 or IPv6 addresses. With @samp{--inet4-only}
+or @samp{-4}, Wget will only connect to IPv4 hosts, ignoring AAAA
+records in DNS, and refusing to connect to IPv6 addresses specified in
+URLs. Conversely, with @samp{--inet6-only} or @samp{-6}, Wget will
+only connect to IPv6 hosts and ignore A records and IPv4 addresses.
+
+Neither option should be needed normally. By default, an IPv6-aware
+Wget will use the address family specified by the host's DNS record.
+If the DNS responds with both IPv4 and IPv6 addresses, Wget will try
+them in sequence until it finds one it can connect to. (Also see
+@code{--prefer-family} option described below.)
+
+These options can be used to deliberately force the use of IPv4 or
+IPv6 address families on dual family systems, usually to aid debugging
+or to deal with broken network configuration. Only one of
+@samp{--inet6-only} and @samp{--inet4-only} may be specified at the
+same time. Neither option is available in Wget compiled without IPv6
+support.
+
+@item --prefer-family=none/IPv4/IPv6
+When given a choice of several addresses, connect to the addresses
+with specified address family first. The address order returned by
+DNS is used without change by default.
+
+This avoids spurious errors and connect attempts when accessing hosts
+that resolve to both IPv6 and IPv4 addresses from IPv4 networks. For
+example, @samp{www.kame.net} resolves to
+@samp{2001:200:0:8002:203:47ff:fea5:3085} and to
+@samp{203.178.141.194}. When the preferred family is @code{IPv4}, the
+IPv4 address is used first; when the preferred family is @code{IPv6},
+the IPv6 address is used first; if the specified value is @code{none},
+the address order returned by DNS is used without change.
+
+Unlike @samp{-4} and @samp{-6}, this option doesn't inhibit access to
+any address family; it only changes the @emph{order} in which the
+addresses are accessed. Also note that the reordering performed by
+this option is @dfn{stable}---it doesn't affect order of addresses of
+the same family. That is, the relative order of all IPv4 addresses
+and of all IPv6 addresses remains intact in all cases.
+
+@item --retry-connrefused
+Consider ``connection refused'' a transient error and try again.
+Normally Wget gives up on a URL when it is unable to connect to the
+site because failure to connect is taken as a sign that the server is
+not running at all and that retries would not help. This option is
+for mirroring unreliable sites whose servers tend to disappear for
+short periods of time.
+
+@cindex user
+@cindex password
+@cindex authentication
+@item --user=@var{user}
+@itemx --password=@var{password}
+Specify the username @var{user} and password @var{password} for both
+@sc{ftp} and @sc{http} file retrieval. These parameters can be overridden
+using the @samp{--ftp-user} and @samp{--ftp-password} options for
+@sc{ftp} connections and the @samp{--http-user} and @samp{--http-password}
+options for @sc{http} connections.
+
+@item --ask-password
+Prompt for a password for each connection established. Cannot be specified
+when @samp{--password} is being used, because they are mutually exclusive.
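+
+For example, a sketch that keeps the password off the command line
+altogether:
+
+@example
+# @r{`dan' is a hypothetical username; the password is prompted for.}
+wget --user=dan --ask-password http://fly.srk.fer.hr/private/
+@end example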
+
+@cindex iri support
+@cindex idn support
+@item --no-iri
+
+Turn off internationalized URI (IRI) support. Use @samp{--iri} to
+turn it on. IRI support is activated by default.
+
+You can set the default state of IRI support using the @code{iri}
+command in @file{.wgetrc}. That setting may be overridden from the
+command line.
+
+@cindex local encoding
+@item --local-encoding=@var{encoding}
+
+Force Wget to use @var{encoding} as the default system encoding. That affects
+how Wget converts URLs specified as arguments from locale to @sc{utf-8} for
+IRI support.
+
+Wget uses the function @code{nl_langinfo()} and then the @code{CHARSET}
+environment variable to get the locale. If it fails, @sc{ascii} is used.
+
+You can set the default local encoding using the @code{local_encoding}
+command in @file{.wgetrc}. That setting may be overridden from the
+command line.
+
+@cindex remote encoding
+@item --remote-encoding=@var{encoding}
+
+Force Wget to use @var{encoding} as the default remote server encoding.
+That affects how Wget converts URIs found in files from remote encoding
+to @sc{utf-8} during a recursive fetch. This option is only useful for
+IRI support, for the interpretation of non-@sc{ascii} characters.
+
+For HTTP, the remote encoding can be found in the HTTP
+@code{Content-Type} header and in the HTML @code{Content-Type
+http-equiv} meta tag.
+
+You can set the default encoding using the @code{remoteencoding}
+command in @file{.wgetrc}. That setting may be overridden from the
+command line.
+
+@cindex unlink
+@item --unlink
+
+Force Wget to unlink a file instead of clobbering the existing file.
+This option is useful for downloading to a directory that contains
+hardlinks.
+
+@end table
+
+@node Directory Options, HTTP Options, Download Options, Invoking
+@section Directory Options
+
+@table @samp
+@item -nd
+@itemx --no-directories
+Do not create a hierarchy of directories when retrieving recursively.
+With this option turned on, all files will get saved to the current
+directory, without clobbering (if a name shows up more than once, the
+filenames will get extensions @samp{.n}).
+
+@item -x
+@itemx --force-directories
+The opposite of @samp{-nd}---create a hierarchy of directories, even if
+one would not have been created otherwise. E.g. @samp{wget -x
+http://fly.srk.fer.hr/robots.txt} will save the downloaded file to
+@file{fly.srk.fer.hr/robots.txt}.
+
+@item -nH
+@itemx --no-host-directories
+Disable generation of host-prefixed directories. By default, invoking
+Wget with @samp{-r http://fly.srk.fer.hr/} will create a structure of
+directories beginning with @file{fly.srk.fer.hr/}. This option disables
+such behavior.
+
+@item --protocol-directories
+Use the protocol name as a directory component of local file names. For
+example, with this option, @samp{wget -r http://@var{host}} will save to
+@samp{http/@var{host}/...} rather than just to @samp{@var{host}/...}.
+
+@cindex cut directories
+@item --cut-dirs=@var{number}
+Ignore @var{number} directory components. This is useful for getting a
+fine-grained control over the directory where recursive retrieval will
+be saved.
+
+Take, for example, the directory at
+@samp{ftp://ftp.xemacs.org/pub/xemacs/}. If you retrieve it with
+@samp{-r}, it will be saved locally under
+@file{ftp.xemacs.org/pub/xemacs/}. While the @samp{-nH} option can
+remove the @file{ftp.xemacs.org/} part, you are still stuck with
+@file{pub/xemacs}. This is where @samp{--cut-dirs} comes in handy; it
+makes Wget not ``see'' @var{number} remote directory components. Here
+are several examples of how @samp{--cut-dirs} option works.
+
+@example
+@group
+No options -> ftp.xemacs.org/pub/xemacs/
+-nH -> pub/xemacs/
+-nH --cut-dirs=1 -> xemacs/
+-nH --cut-dirs=2 -> .
+
+--cut-dirs=1 -> ftp.xemacs.org/xemacs/
+...
+@end group
+@end example
+
+If you just want to get rid of the directory structure, this option is
+similar to a combination of @samp{-nd} and @samp{-P}. However, unlike
+@samp{-nd}, @samp{--cut-dirs} does not lose subdirectories---for
+instance, with @samp{-nH --cut-dirs=1}, a @file{beta/} subdirectory will
+be placed in @file{xemacs/beta}, as one would expect.
+
+@cindex directory prefix
+@item -P @var{prefix}
+@itemx --directory-prefix=@var{prefix}
+Set directory prefix to @var{prefix}. The @dfn{directory prefix} is the
+directory where all other files and subdirectories will be saved to,
+i.e. the top of the retrieval tree. The default is @samp{.} (the
+current directory).
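+
+For example, a sketch that places the whole retrieval tree under a
+hypothetical @file{downloads/} directory:
+
+@example
+wget -r -P downloads/ http://fly.srk.fer.hr/
+@end example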
+@end table
+
+@node HTTP Options, HTTPS (SSL/TLS) Options, Directory Options, Invoking
+@section HTTP Options
+
+@table @samp
+@cindex default page name
+@cindex index.html
+@item --default-page=@var{name}
+Use @var{name} as the default file name when it isn't known (i.e., for
+URLs that end in a slash), instead of @file{index.html}.
+
+@cindex .html extension
+@cindex .css extension
+@item -E
+@itemx --adjust-extension
+If a file of type @samp{application/xhtml+xml} or @samp{text/html} is
+downloaded and the URL does not end with the regexp
+@samp{\.[Hh][Tt][Mm][Ll]?}, this option will cause the suffix @samp{.html}
+to be appended to the local filename. This is useful, for instance, when
+you're mirroring a remote site that uses @samp{.asp} pages, but you want
+the mirrored pages to be viewable on your stock Apache server. Another
+good use for this is when you're downloading CGI-generated materials. A URL
+like @samp{http://site.com/article.cgi?25} will be saved as
+@file{article.cgi?25.html}.
+
+Note that filenames changed in this way will be re-downloaded every time
+you re-mirror a site, because Wget can't tell that the local
+@file{@var{X}.html} file corresponds to remote URL @samp{@var{X}} (since
+it doesn't yet know that the URL produces output of type
+@samp{text/html} or @samp{application/xhtml+xml}).
+
+As of version 1.12, Wget will also ensure that any downloaded files of
+type @samp{text/css} end in the suffix @samp{.css}, and the option was
+renamed from @samp{--html-extension}, to better reflect its new
+behavior. The old option name is still acceptable, but should now be
+considered deprecated.
+
+At some point in the future, this option may well be expanded to
+include suffixes for other types of content, including content types
+that are not parsed by Wget.
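+
+For instance, the CGI example mentioned above would be saved under a
+browsable name like this (note the quoting of @samp{?} for the shell):
+
+@example
+wget -E 'http://site.com/article.cgi?25'
+@end example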
+
+@cindex http user
+@cindex http password
+@cindex authentication
+@item --http-user=@var{user}
+@itemx --http-password=@var{password}
+Specify the username @var{user} and password @var{password} on an
+@sc{http} server. According to the type of the challenge, Wget will
+encode them using either the @code{basic} (insecure),
+the @code{digest}, or the Windows @code{NTLM} authentication scheme.
+
+Another way to specify username and password is in the @sc{url} itself
+(@pxref{URL Format}). Either method reveals your password to anyone who
+bothers to run @code{ps}. To prevent the passwords from being seen,
+store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
+those files from other users with @code{chmod}. If the passwords are
+really important, do not leave them lying in those files either---edit
+the files and delete them after Wget has started the download.
+
+@iftex
+@xref{Security Considerations}, for more information about security
+issues with Wget.
+@end iftex
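+
+A minimal sketch of the safer approach described above, keeping the
+credentials out of the process list (names hypothetical):
+
+@example
+@group
+# @r{Suppose $HOME/.wgetrc (protected with chmod 600) contains:}
+# @r{  http_user = foo}
+# @r{  http_password = secret}
+wget http://fly.srk.fer.hr/private/index.html
+@end group
+@end example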
+
+@cindex Keep-Alive, turning off
+@cindex Persistent Connections, disabling
+@item --no-http-keep-alive
+Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget
+asks the server to keep the connection open so that, when you download
+more than one document from the same server, they get transferred over
+the same TCP connection. This saves time and at the same time reduces
+the load on the server.
+
+This option is useful when, for some reason, persistent (keep-alive)
+connections don't work for you, for example due to a server bug or due
+to the inability of server-side scripts to cope with the connections.
+
+@cindex proxy
+@cindex cache
+@item --no-cache
+Disable server-side cache. In this case, Wget will send the remote
+server an appropriate directive (@samp{Pragma: no-cache}) to get the
+file from the remote service, rather than returning the cached version.
+This is especially useful for retrieving and flushing out-of-date
+documents on proxy servers.
+
+Caching is allowed by default.
+
+@cindex cookies
+@item --no-cookies
+Disable the use of cookies. Cookies are a mechanism for maintaining
+server-side state. The server sends the client a cookie using the
+@code{Set-Cookie} header, and the client responds with the same cookie
+upon further requests. Since cookies allow the server owners to keep
+track of visitors and for sites to exchange this information, some
+consider them a breach of privacy. The default is to use cookies;
+however, @emph{storing} cookies is not on by default.
+
+@cindex loading cookies
+@cindex cookies, loading
+@item --load-cookies @var{file}
+Load cookies from @var{file} before the first HTTP retrieval.
+@var{file} is a textual file in the format originally used by Netscape's
+@file{cookies.txt} file.
+
+You will typically use this option when mirroring sites that require
+that you be logged in to access some or all of their content. The login
+process typically works by the web server issuing an @sc{http} cookie
+upon receiving and verifying your credentials. The cookie is then
+resent by the browser when accessing that part of the site, and so
+proves your identity.
+
+Mirroring such a site requires Wget to send the same cookies your
+browser sends when communicating with the site. This is achieved by
+@samp{--load-cookies}---simply point Wget to the location of the
+@file{cookies.txt} file, and it will send the same cookies your browser
+would send in the same situation. Different browsers keep textual
+cookie files in different locations:
+
+@table @asis
+@item Netscape 4.x.
+The cookies are in @file{~/.netscape/cookies.txt}.
+
+@item Mozilla and Netscape 6.x.
+Mozilla's cookie file is also named @file{cookies.txt}, located
+somewhere under @file{~/.mozilla}, in the directory of your profile.
+The full path usually ends up looking somewhat like
+@file{~/.mozilla/default/@var{some-weird-string}/cookies.txt}.
+
+@item Internet Explorer.
+You can produce a cookie file Wget can use by using the File menu,
+Import and Export, Export Cookies. This has been tested with Internet
+Explorer 5; it is not guaranteed to work with earlier versions.
+
+@item Other browsers.
+If you are using a different browser to create your cookies,
+@samp{--load-cookies} will only work if you can locate or produce a
+cookie file in the Netscape format that Wget expects.
+@end table
+
+If you cannot use @samp{--load-cookies}, there might still be an
+alternative. If your browser supports a ``cookie manager'', you can use
+it to view the cookies used when accessing the site you're mirroring.
+Write down the name and value of the cookie, and manually instruct Wget
+to send those cookies, bypassing the ``official'' cookie support:
+
+@example
+wget --no-cookies --header "Cookie: @var{name}=@var{value}"
+@end example
+
+@cindex saving cookies
+@cindex cookies, saving
+@item --save-cookies @var{file}
+Save cookies to @var{file} before exiting. This will not save cookies
+that have expired or that have no expiry time (so-called ``session
+cookies''), but also see @samp{--keep-session-cookies}.
+
+@cindex cookies, session
+@cindex session cookies
+@item --keep-session-cookies
+When specified, causes @samp{--save-cookies} to also save session
+cookies. Session cookies are normally not saved because they are
+meant to be kept in memory and forgotten when you exit the browser.
+Saving them is useful on sites that require you to log in or to visit
+the home page before you can access some pages. With this option,
+multiple Wget runs are considered a single browser session as far as
+the site is concerned.
+
+Since the cookie file format does not normally carry session cookies,
+Wget marks them with an expiry timestamp of 0. Wget's
+@samp{--load-cookies} recognizes those as session cookies, but it might
+confuse other browsers. Also note that cookies so loaded will be
+treated as other session cookies, which means that if you want
+@samp{--save-cookies} to preserve them again, you must use
+@samp{--keep-session-cookies} again.
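+
+A sketch of the typical pattern, reusing the login example shown later
+in this section (host and form fields hypothetical):
+
+@example
+@group
+# @r{Log in once, keeping the session cookie:}
+wget --save-cookies cookies.txt --keep-session-cookies \
+     --post-data 'user=foo&password=bar' http://server.com/auth.php
+
+# @r{Later runs present the same session:}
+wget --load-cookies cookies.txt http://server.com/members/
+@end group
+@end example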
+
+@cindex Content-Length, ignore
+@cindex ignore length
+@item --ignore-length
+Unfortunately, some @sc{http} servers (@sc{cgi} programs, to be more
+precise) send out bogus @code{Content-Length} headers, which makes Wget
+go wild, as it thinks not all the document was retrieved. You can spot
+this syndrome if Wget retries getting the same document again and again,
+each time claiming that the (otherwise normal) connection has closed on
+the very same byte.
+
+With this option, Wget will ignore the @code{Content-Length} header---as
+if it never existed.
+
+@cindex header, add
+@item --header=@var{header-line}
+Send @var{header-line} along with the rest of the headers in each
+@sc{http} request. The supplied header is sent as-is, which means it
+must contain name and value separated by colon, and must not contain
+newlines.
+
+You may define more than one additional header by specifying
+@samp{--header} more than once.
+
+@example
+@group
+wget --header='Accept-Charset: iso-8859-2' \
+ --header='Accept-Language: hr' \
+ http://fly.srk.fer.hr/
+@end group
+@end example
+
+Specification of an empty string as the header value will clear all
+previous user-defined headers.
+
+As of Wget 1.10, this option can be used to override headers otherwise
+generated automatically. This example instructs Wget to connect to
+localhost, but to specify @samp{foo.bar} in the @code{Host} header:
+
+@example
+wget --header="Host: foo.bar" http://localhost/
+@end example
+
+In versions of Wget prior to 1.10 such use of @samp{--header} caused
+sending of duplicate headers.
+
+@cindex redirect
+@item --max-redirect=@var{number}
+Specifies the maximum number of redirections to follow for a resource.
+The default is 20, which is usually far more than necessary. However, on
+those occasions where you want to allow more (or fewer), this is the
+option to use.
+
+@cindex proxy user
+@cindex proxy password
+@cindex proxy authentication
+@item --proxy-user=@var{user}
+@itemx --proxy-password=@var{password}
+Specify the username @var{user} and password @var{password} for
+authentication on a proxy server. Wget will encode them using the
+@code{basic} authentication scheme.
+
+Security considerations similar to those with @samp{--http-password}
+pertain here as well.
+
+@cindex http referer
+@cindex referer, http
+@item --referer=@var{url}
+Include the @samp{Referer: @var{url}} header in the HTTP request. Useful for
+retrieving documents with server-side processing that assume they are
+always being retrieved by interactive web browsers and only come out
+properly when Referer is set to one of the pages that point to them.
+
+@cindex server response, save
+@item --save-headers
+Save the headers sent by the @sc{http} server to the file, preceding the
+actual contents, with an empty line as the separator.
+
+@cindex user-agent
+@item -U @var{agent-string}
+@itemx --user-agent=@var{agent-string}
+Identify as @var{agent-string} to the @sc{http} server.
+
+The @sc{http} protocol allows clients to identify themselves using a
+@code{User-Agent} header field. This enables distinguishing the
+@sc{www} software, usually for statistical purposes or for tracing of
+protocol violations. Wget normally identifies as
+@samp{Wget/@var{version}}, @var{version} being the current version
+number of Wget.
+
+However, some sites have been known to impose the policy of tailoring
+the output according to the @code{User-Agent}-supplied information.
+While this is not such a bad idea in theory, it has been abused by
+servers denying information to clients other than (historically)
+Netscape or, more frequently, Microsoft Internet Explorer. This
+option allows you to change the @code{User-Agent} line issued by Wget.
+Use of this option is discouraged, unless you really know what you are
+doing.
+
+Specifying an empty user agent with @samp{--user-agent=""} instructs Wget
+not to send the @code{User-Agent} header in @sc{http} requests.
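+
+For instance, to identify as a generic browser (a sketch; the agent
+string is arbitrary):
+
+@example
+wget --user-agent='Mozilla/5.0 (compatible)' http://fly.srk.fer.hr/
+@end example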
+
+@cindex POST
+@item --post-data=@var{string}
+@itemx --post-file=@var{file}
+Use POST as the method for all HTTP requests and send the specified
+data in the request body. @samp{--post-data} sends @var{string} as
+data, whereas @samp{--post-file} sends the contents of @var{file}.
+Other than that, they work in exactly the same way. In particular,
+they @emph{both} expect content of the form @code{key1=value1&key2=value2},
+with percent-encoding for special characters; the only difference is
+that one expects its content as a command-line parameter and the other
+accepts its content from a file. In particular, @samp{--post-file} is
+@emph{not} for transmitting files as form attachments: those must
+appear as @code{key=value} data (with appropriate percent-coding) just
+like everything else. Wget does not currently support
+@code{multipart/form-data} for transmitting POST data; only
+@code{application/x-www-form-urlencoded}. Only one of
+@samp{--post-data} and @samp{--post-file} should be specified.
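+
+For example, a form field whose value contains spaces or other special
+characters must be percent-encoded by hand (a sketch, using a
+hypothetical form URL):
+
+@example
+wget --post-data 'query=gnu%20wget&lang=en' \
+     http://server.com/search.php
+@end example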
+
+Please be aware that Wget needs to know the size of the POST data in
+advance. Therefore the argument to @code{--post-file} must be a regular
+file; specifying a FIFO or something like @file{/dev/stdin} won't work.
+It's not quite clear how to work around this limitation inherent in
+HTTP/1.0. Although HTTP/1.1 introduces @dfn{chunked} transfer that
+doesn't require knowing the request length in advance, a client can't
+use chunked unless it knows it's talking to an HTTP/1.1 server. And it
+can't know that until it receives a response, which in turn requires the
+request to have been completed -- a chicken-and-egg problem.
+
+Note: if Wget is redirected after the POST request is completed, it
+will not send the POST data to the redirected URL. This is because
+URLs that process POST often respond with a redirection to a regular
+page, which does not desire or accept POST. It is not completely
+clear that this behavior is optimal; if it doesn't work out, it might
+be changed in the future.
+
+This example shows how to log to a server using POST and then proceed to
+download the desired pages, presumably only accessible to authorized
+users:
+
+@example
+@group
+# @r{Log in to the server. This can be done only once.}
+wget --save-cookies cookies.txt \
+ --post-data 'user=foo&password=bar' \
+ http://server.com/auth.php
+
+# @r{Now grab the page or pages we care about.}
+wget --load-cookies cookies.txt \
+ -p http://server.com/interesting/article.php
+@end group
+@end example
+
+If the server is using session cookies to track user authentication,
+the above will not work because @samp{--save-cookies} will not save
+them (and neither will browsers) and the @file{cookies.txt} file will
+be empty. In that case use @samp{--keep-session-cookies} along with
+@samp{--save-cookies} to force saving of session cookies.
+
+@cindex Content-Disposition
+@item --content-disposition
+
+If this is set to on, experimental (not fully-functional) support for
+@code{Content-Disposition} headers is enabled. This can currently result in
+extra round-trips to the server for a @code{HEAD} request, and is known
+to suffer from a few bugs, which is why it is not currently enabled by default.
+
+This option is useful for some file-downloading CGI programs that use
+@code{Content-Disposition} headers to describe what the name of a
+downloaded file should be.
+
+@cindex Trust server names
+@item --trust-server-names
+
+If this is set to on, on a redirect the last component of the
+redirection URL will be used as the local file name. By default the
+last component of the original URL is used.
+
+@cindex authentication
+@item --auth-no-challenge
+
+If this option is given, Wget will send Basic HTTP authentication
+information (plaintext username and password) for all requests, just
+like Wget 1.10.2 and prior did by default.
+
+Use of this option is not recommended; it is intended only to support
+a few obscure servers that never send HTTP authentication challenges,
+but accept unsolicited auth info, say, in addition to form-based
+authentication.
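+
+A sketch of its use, with placeholder credentials and a hypothetical
+host:
+
+@example
+wget --auth-no-challenge --user=foo --password=bar \
+     http://server.com/protected/
+@end example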
+
+@end table
+
+@node HTTPS (SSL/TLS) Options, FTP Options, HTTP Options, Invoking
+@section HTTPS (SSL/TLS) Options
+
+@cindex SSL
+To support encrypted HTTP (HTTPS) downloads, Wget must be compiled
+with an external SSL library, currently OpenSSL. If Wget is compiled
+without SSL support, none of these options are available.
+
+@table @samp
+@cindex SSL protocol, choose
+@item --secure-protocol=@var{protocol}
+Choose the secure protocol to be used. Legal values are @samp{auto},
+@samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. If @samp{auto} is used,
+the SSL library is given the liberty of choosing the appropriate
+protocol automatically, which is achieved by sending an SSLv2 greeting
+and announcing support for SSLv3 and TLSv1. This is the default.
+
+Specifying @samp{SSLv2}, @samp{SSLv3}, or @samp{TLSv1} forces the use
+of the corresponding protocol. This is useful when talking to old and
+buggy SSL server implementations that make it hard for OpenSSL to
+choose the correct protocol version. Fortunately, such servers are
+quite rare.
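+
+For example, to force TLSv1 against such a server (a sketch, with a
+hypothetical host):
+
+@example
+wget --secure-protocol=TLSv1 https://old.server.com/
+@end example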
+
+@cindex SSL certificate, check
+@item --no-check-certificate
+Don't check the server certificate against the available certificate
+authorities. Also don't require the URL host name to match the common
+name presented by the certificate.
+
+As of Wget 1.10, the default is to verify the server's certificate
+against the recognized certificate authorities, breaking the SSL
+handshake and aborting the download if the verification fails.
+Although this provides more secure downloads, it does break
+interoperability with some sites that worked with previous Wget
+versions, particularly those using self-signed, expired, or otherwise
+invalid certificates. This option forces an ``insecure'' mode of
+operation that turns the certificate verification errors into warnings
+and allows you to proceed.
+
+If you encounter ``certificate verification'' errors or ones saying
+that ``common name doesn't match requested host name'', you can use
+this option to bypass the verification and proceed with the download.
+@emph{Only use this option if you are otherwise convinced of the
+site's authenticity, or if you really don't care about the validity of
+its certificate.} It is almost always a bad idea not to check the
+certificates when transmitting confidential or important data.
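+
+For example, to fetch from a host with a self-signed certificate that
+you have verified by other means (a sketch, with a hypothetical host):
+
+@example
+wget --no-check-certificate https://self-signed.server.com/
+@end example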
+
+@cindex SSL certificate
+@item --certificate=@var{file}
+Use the client certificate stored in @var{file}. This is needed for
+servers that are configured to require certificates from the clients
+that connect to them. Normally a certificate is not required and this
+switch is optional.
+
+@cindex SSL certificate type, specify
+@item --certificate-type=@var{type}
+Specify the type of the client certificate. Legal values are
+@samp{PEM} (assumed by default) and @samp{DER}, also known as
+@samp{ASN1}.
+
+@item --private-key=@var{file}
+Read the private key from @var{file}. This allows you to provide the
+private key in a file separate from the certificate.
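+
+A sketch combining the two options, with hypothetical file names:
+
+@example
+wget --certificate=client.pem --private-key=client.key \
+     https://secure.server.com/
+@end example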
+
+@item --private-key-type=@var{type}
+Specify the type of the private key. Accepted values are @samp{PEM}
+(the default) and @samp{DER}.
+
+@item --ca-certificate=@var{file}
+Use @var{file} as the file with the bundle of certificate authorities
+(``CA'') to verify the peers. The certificates must be in PEM format.
+
+Without this option Wget looks for CA certificates at the
+system-specified locations, chosen at OpenSSL installation time.
+
+@cindex SSL certificate authority
+@item --ca-directory=@var{directory}
+Specifies the directory containing CA certificates in PEM format. Each
+file contains one CA certificate, and the file name is based on a hash
+value derived from the certificate. This is achieved by processing a
+certificate directory with the @code{c_rehash} utility supplied with
+OpenSSL. Using @samp{--ca-directory} is more efficient than
+@samp{--ca-certificate} when many certificates are installed because
+it allows Wget to fetch certificates on demand.
+
+Without this option Wget looks for CA certificates at the
+system-specified locations, chosen at OpenSSL installation time.
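+
+A sketch of preparing and using such a directory (the path and host
+are hypothetical):
+
+@example
+c_rehash /etc/ssl/my-cas
+wget --ca-directory=/etc/ssl/my-cas https://server.com/
+@end example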
+
+@cindex entropy, specifying source of
+@cindex randomness, specifying source of
+@item --random-file=@var{file}
+Use @var{file} as the source of random data for seeding the
+pseudo-random number generator on systems without @file{/dev/random}.
+
+On such systems the SSL library needs an external source of randomness
+to initialize. Randomness may be provided by EGD (see
+@samp{--egd-file} below) or read from an external source specified by
+the user. If this option is not specified, Wget looks for random data
+in @code{$RANDFILE} or, if that is unset, in @file{$HOME/.rnd}. If
+none of those are available, it is likely that SSL encryption will not
+be usable.
+
+If you're getting the ``Could not seed OpenSSL PRNG; disabling SSL.''
+error, you should provide random data using some of the methods
+described above.
+
+@cindex EGD
+@item --egd-file=@var{file}
+Use @var{file} as the EGD socket. EGD stands for @dfn{Entropy
+Gathering Daemon}, a user-space program that collects data from
+various unpredictable system sources and makes it available to other
+programs that might need it. Encryption software, such as the SSL
+library, needs sources of non-repeating randomness to seed the random
+number generator used to produce cryptographically strong keys.
+
+OpenSSL allows the user to specify his own source of entropy using the
+@code{RAND_FILE} environment variable. If this variable is unset, or
+if the specified file does not produce enough randomness, OpenSSL will
+read random data from the EGD socket specified using this option.
+
+If this option is not specified (and the equivalent startup command is
+not used), EGD is never contacted. EGD is not needed on modern Unix
+systems that support @file{/dev/random}.
+@end table
+
+@node FTP Options, Recursive Retrieval Options, HTTPS (SSL/TLS) Options, Invoking
+@section FTP Options
+
+@table @samp
+@cindex ftp user
+@cindex ftp password
+@cindex ftp authentication
+@item --ftp-user=@var{user}
+@itemx --ftp-password=@var{password}
+Specify the username @var{user} and password @var{password} on an
+@sc{ftp} server. Without this, or the corresponding startup option,
+the password defaults to @samp{-wget@@}, normally used for anonymous
+FTP.
+
+Another way to specify username and password is in the @sc{url} itself
+(@pxref{URL Format}). Either method reveals your password to anyone who
+bothers to run @code{ps}. To prevent the passwords from being seen,
+store them in @file{.wgetrc} or @file{.netrc}, and make sure to protect
+those files from other users with @code{chmod}. If the passwords are
+really important, do not leave them lying in those files either---edit
+the files and delete them after Wget has started the download.
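+
+For example, a @file{.netrc} entry looks like this (hypothetical host
+and credentials; remember to @code{chmod 600 ~/.netrc}):
+
+@example
+machine ftp.server.com  login joe  password secret
+@end example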
+
+@iftex
+For more information about security issues with Wget, see
+@ref{Security Considerations}.
+@end iftex
+
+@cindex .listing files, removing
+@item --no-remove-listing
+Don't remove the temporary @file{.listing} files generated by @sc{ftp}
+retrievals. Normally, these files contain the raw directory listings
+received from @sc{ftp} servers. Not removing them can be useful for
+debugging purposes, or when you want to be able to easily check on the
+contents of remote server directories (e.g. to verify that a mirror
+you're running is complete).
+
+Note that even though Wget writes to a known filename for this file,
+this is not a security hole in the scenario of a user making
+@file{.listing} a symbolic link to @file{/etc/passwd} or something and
+asking @code{root} to run Wget in his or her directory. Depending on
+the options used, either Wget will refuse to write to @file{.listing},
+making the globbing/recursion/time-stamping operation fail, or the
+symbolic link will be deleted and replaced with the actual
+@file{.listing} file, or the listing will be written to a
+@file{.listing.@var{number}} file.
+
+Even though this situation isn't a problem, @code{root} should
+never run Wget in a non-trusted user's directory. A user could do
+something as simple as linking @file{index.html} to @file{/etc/passwd}
+and asking @code{root} to run Wget with @samp{-N} or @samp{-r} so the file
+will be overwritten.
+
+@cindex globbing, toggle
+@item --no-glob
+Turn off @sc{ftp} globbing. Globbing refers to the use of shell-like
+special characters (@dfn{wildcards}), like @samp{*}, @samp{?}, @samp{[}
+and @samp{]} to retrieve more than one file from the same directory at
+once, like:
+
+@example
+wget ftp://gnjilux.srk.fer.hr/*.msg
+@end example
+
+By default, globbing will be turned on if the @sc{url} contains a
+globbing character. This option may be used to turn globbing on or off
+permanently.
+
+You may have to quote the @sc{url} to protect it from being expanded by
+your shell. Globbing makes Wget look for a directory listing, which is
+system-specific. This is why it currently works only with Unix @sc{ftp}
+servers (and the ones emulating Unix @code{ls} output).
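+
+For example, quoting the wildcard keeps the shell from expanding it:
+
+@example
+wget "ftp://gnjilux.srk.fer.hr/*.msg"
+@end example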
+
+@cindex passive ftp
+@item --no-passive-ftp
+Disable the use of the @dfn{passive} FTP transfer mode. Passive FTP
+mandates that the client connect to the server to establish the data
+connection rather than the other way around.
+
+If the machine is connected to the Internet directly, both passive and
+active FTP should work equally well. Behind most firewall and NAT
+configurations passive FTP has a better chance of working. However,
+in some rare firewall configurations, active FTP actually works when
+passive FTP doesn't. If you suspect this to be the case, use this
+option, or set @code{passive_ftp=off} in your init file.
+
+@cindex symbolic links, retrieving
+@item --retr-symlinks
+Usually, when retrieving @sc{ftp} directories recursively and a symbolic
+link is encountered, the linked-to file is not downloaded. Instead, a
+matching symbolic link is created on the local filesystem. The
+pointed-to file will not be downloaded unless this recursive retrieval
+would have encountered it separately and downloaded it anyway.
+
+When @samp{--retr-symlinks} is specified, however, symbolic links are
+traversed and the pointed-to files are retrieved. At this time, this
+option does not cause Wget to traverse symlinks to directories and
+recurse through them, but in the future it should be enhanced to do
+this.
+
+Note that when retrieving a file (not a directory) because it was
+specified on the command-line, rather than because it was recursed to,
+this option has no effect. Symbolic links are always traversed in this
+case.
+@end table
+
+@node Recursive Retrieval Options, Recursive Accept/Reject Options, FTP Options, Invoking
+@section Recursive Retrieval Options
+
+@table @samp
+@item -r
+@itemx --recursive
+Turn on recursive retrieving. @xref{Recursive Download}, for more
+details. The default maximum depth is 5.
+
+@item -l @var{depth}
+@itemx --level=@var{depth}
+Specify recursion maximum depth level @var{depth} (@pxref{Recursive
+Download}).
+
+@cindex proxy filling
+@cindex delete after retrieval
+@cindex filling proxy cache
+@item --delete-after
+This option tells Wget to delete every single file it downloads,
+@emph{after} having done so. It is useful for pre-fetching popular
+pages through a proxy, e.g.:
+
+@example
+wget -r -nd --delete-after http://whatever.com/~popular/page/
+@end example
+
+The @samp{-r} option is to retrieve recursively, and @samp{-nd} to not
+create directories.
+
+Note that @samp{--delete-after} deletes files on the local machine. It
+does not issue the @samp{DELE} command to remote FTP sites, for
+instance. Also note that when @samp{--delete-after} is specified,
+@samp{--convert-links} is ignored, so @samp{.orig} files are simply not
+created in the first place.
+
+@cindex conversion of links
+@cindex link conversion
+@item -k
+@itemx --convert-links
+After the download is complete, convert the links in the document to
+make them suitable for local viewing. This affects not only the visible
+hyperlinks, but any part of the document that links to external content,
+such as embedded images, links to style sheets, hyperlinks to non-@sc{html}
+content, etc.
+
+Each link will be changed in one of two ways:
+
+@itemize @bullet
+@item
+The links to files that have been downloaded by Wget will be changed to
+refer to the file they point to as a relative link.
+
+Example: if the downloaded file @file{/foo/doc.html} links to
+@file{/bar/img.gif}, also downloaded, then the link in @file{doc.html}
+will be modified to point to @samp{../bar/img.gif}. This kind of
+transformation works reliably for arbitrary combinations of directories.
+
+@item
+The links to files that have not been downloaded by Wget will be changed
+to include host name and absolute path of the location they point to.
+
+Example: if the downloaded file @file{/foo/doc.html} links to
+@file{/bar/img.gif} (or to @file{../bar/img.gif}), then the link in
+@file{doc.html} will be modified to point to
+@file{http://@var{hostname}/bar/img.gif}.
+@end itemize
+
+Because of this, local browsing works reliably: if a linked file was
+downloaded, the link will refer to its local name; if it was not
+downloaded, the link will refer to its full Internet address rather than
+presenting a broken link. The fact that the former links are converted
+to relative links ensures that you can move the downloaded hierarchy to
+another directory.
+
+Note that only at the end of the download can Wget know which links have
+been downloaded. Because of that, the work done by @samp{-k} will be
+performed at the end of all the downloads.
+
+@cindex backing up converted files
+@item -K
+@itemx --backup-converted
+When converting a file, back up the original version with a @samp{.orig}
+suffix. Affects the behavior of @samp{-N} (@pxref{HTTP Time-Stamping
+Internals}).
+
+@item -m
+@itemx --mirror
+Turn on options suitable for mirroring. This option turns on recursion
+and time-stamping, sets infinite recursion depth and keeps @sc{ftp}
+directory listings. It is currently equivalent to
+@samp{-r -N -l inf --no-remove-listing}.
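+
+In other words, these two commands do the same thing:
+
+@example
+wget -m http://fly.srk.fer.hr/
+wget -r -N -l inf --no-remove-listing http://fly.srk.fer.hr/
+@end example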
+
+@cindex page requisites
+@cindex required images, downloading
+@item -p
+@itemx --page-requisites
+This option causes Wget to download all the files that are necessary to
+properly display a given @sc{html} page. This includes such things as
+inlined images, sounds, and referenced stylesheets.
+
+Ordinarily, when downloading a single @sc{html} page, any requisite documents
+that may be needed to display it properly are not downloaded. Using
+@samp{-r} together with @samp{-l} can help, but since Wget does not
+ordinarily distinguish between external and inlined documents, one is
+generally left with ``leaf documents'' that are missing their
+requisites.
+
+For instance, say document @file{1.html} contains an @code{<IMG>} tag
+referencing @file{1.gif} and an @code{<A>} tag pointing to external
+document @file{2.html}. Say that @file{2.html} is similar but that its
+image is @file{2.gif} and it links to @file{3.html}. Say this
+continues up to some arbitrarily high number.
+
+If one executes the command:
+
+@example
+wget -r -l 2 http://@var{site}/1.html
+@end example
+
+then @file{1.html}, @file{1.gif}, @file{2.html}, @file{2.gif}, and
+@file{3.html} will be downloaded. As you can see, @file{3.html} is
+without its requisite @file{3.gif} because Wget is simply counting the
+number of hops (up to 2) away from @file{1.html} in order to determine
+where to stop the recursion. However, with this command:
+
+@example
+wget -r -l 2 -p http://@var{site}/1.html
+@end example
+
+all the above files @emph{and} @file{3.html}'s requisite @file{3.gif}
+will be downloaded. Similarly,
+
+@example
+wget -r -l 1 -p http://@var{site}/1.html
+@end example
+
+will cause @file{1.html}, @file{1.gif}, @file{2.html}, and @file{2.gif}
+to be downloaded. One might think that:
+
+@example
+wget -r -l 0 -p http://@var{site}/1.html
+@end example
+
+would download just @file{1.html} and @file{1.gif}, but unfortunately
+this is not the case, because @samp{-l 0} is equivalent to
+@samp{-l inf}---that is, infinite recursion. To download a single @sc{html}
+page (or a handful of them, all specified on the command-line or in a
+@samp{-i} @sc{url} input file) and its (or their) requisites, simply leave off
+@samp{-r} and @samp{-l}:
+
+@example
+wget -p http://@var{site}/1.html
+@end example
+
+Note that Wget will behave as if @samp{-r} had been specified, but only
+that single page and its requisites will be downloaded. Links from that
+page to external documents will not be followed. Actually, to download
+a single page and all its requisites (even if they exist on separate
+websites), and make sure the lot displays properly locally, this author
+likes to use a few options in addition to @samp{-p}:
+
+@example
+wget -E -H -k -K -p http://@var{site}/@var{document}
+@end example
+
+To finish off this topic, it's worth knowing that Wget's idea of an
+external document link is any URL specified in an @code{<A>} tag, an
+@code{<AREA>} tag, or a @code{<LINK>} tag other than @code{<LINK
+REL="stylesheet">}.
+
+@cindex @sc{html} comments
+@cindex comments, @sc{html}
+@item --strict-comments
+Turn on strict parsing of @sc{html} comments. The default is to terminate
+comments at the first occurrence of @samp{-->}.
+
+According to specifications, @sc{html} comments are expressed as @sc{sgml}
+@dfn{declarations}. A declaration is special markup that begins with
+@samp{<!} and ends with @samp{>}, such as @samp{<!DOCTYPE ...>}, and
+may contain comments between a pair of @samp{--} delimiters. @sc{html}
+comments are ``empty declarations'', @sc{sgml} declarations without any
+non-comment text. Therefore, @samp{<!--foo-->} is a valid comment, and
+so is @samp{<!--one-- --two-->}, but @samp{<!--1--2-->} is not.
+
+On the other hand, most @sc{html} writers don't perceive comments as anything
+other than text delimited with @samp{<!--} and @samp{-->}, which is not
+quite the same. For example, something like @samp{<!------------>}
+works as a valid comment as long as the number of dashes is a multiple
+of four (!). If not, the comment technically lasts until the next
+@samp{--}, which may be at the other end of the document. Because of
+this, many popular browsers completely ignore the specification and
+implement what users have come to expect: comments delimited with
+@samp{<!--} and @samp{-->}.
+
+Until version 1.9, Wget interpreted comments strictly, which resulted in
+missing links in many web pages that displayed fine in browsers, but had
+the misfortune of containing non-compliant comments. Beginning with
+version 1.9, Wget has joined the ranks of clients that implement
+``naive'' comments, terminating each comment at the first occurrence of
+@samp{-->}.
+
+If, for whatever reason, you want strict comment parsing, use this
+option to turn it on.
+@end table
+
+@node Recursive Accept/Reject Options, Exit Status, Recursive Retrieval Options, Invoking
+@section Recursive Accept/Reject Options
+
+@table @samp
+@item -A @var{acclist} --accept @var{acclist}
+@itemx -R @var{rejlist} --reject @var{rejlist}
+Specify comma-separated lists of file name suffixes or patterns to
+accept or reject (@pxref{Types of Files}). Note that if
+any of the wildcard characters, @samp{*}, @samp{?}, @samp{[} or
+@samp{]}, appear in an element of @var{acclist} or @var{rejlist},
+it will be treated as a pattern, rather than a suffix.
+
+@item -D @var{domain-list}
+@itemx --domains=@var{domain-list}
+Set domains to be followed. @var{domain-list} is a comma-separated list
+of domains. Note that it does @emph{not} turn on @samp{-H}.
+
+@item --exclude-domains @var{domain-list}
+Specify the domains that are @emph{not} to be followed
+(@pxref{Spanning Hosts}).
+
+@cindex follow FTP links
+@item --follow-ftp
+Follow @sc{ftp} links from @sc{html} documents. Without this option,
+Wget will ignore all the @sc{ftp} links.
+
+@cindex tag-based recursive pruning
+@item --follow-tags=@var{list}
+Wget has an internal table of @sc{html} tag / attribute pairs that it
+considers when looking for linked documents during a recursive
+retrieval. If a user wants only a subset of those tags to be
+considered, however, he or she should specify such tags in a
+comma-separated @var{list} with this option.
+
+@item --ignore-tags=@var{list}
+This is the opposite of the @samp{--follow-tags} option. To skip
+certain @sc{html} tags when recursively looking for documents to download,
+specify them in a comma-separated @var{list}.
+
+In the past, this option was the best bet for downloading a single page
+and its requisites, using a command-line like:
+
+@example
+wget --ignore-tags=a,area -H -k -K -r http://@var{site}/@var{document}
+@end example
+
+However, the author of this option came across a page with tags like
+@code{<LINK REL="home" HREF="/">} and came to the realization that
+specifying tags to ignore was not enough. One can't just tell Wget to
+ignore @code{<LINK>}, because then stylesheets will not be downloaded.
+Now the best bet for downloading a single page and its requisites is the
+dedicated @samp{--page-requisites} option.
+
+@cindex case fold
+@cindex ignore case
+@item --ignore-case
+Ignore case when matching files and directories. This influences the
+behavior of the @samp{-R}, @samp{-A}, @samp{-I}, and @samp{-X} options,
+as well as globbing implemented when downloading from @sc{ftp} sites.
+For example, with this
+option, @samp{-A *.txt} will match @samp{file1.txt}, but also
+@samp{file2.TXT}, @samp{file3.TxT}, and so on.
+
+@item -H
+@itemx --span-hosts
+Enable spanning across hosts when doing recursive retrieving
+(@pxref{Spanning Hosts}).
+
+@item -L
+@itemx --relative
+Follow relative links only. Useful for retrieving a specific home page
+without any distractions, not even those from the same hosts
+(@pxref{Relative Links}).
+
+@item -I @var{list}
+@itemx --include-directories=@var{list}
+Specify a comma-separated list of directories you wish to follow when
+downloading (@pxref{Directory-Based Limits}). Elements
+of @var{list} may contain wildcards.
+
+@item -X @var{list}
+@itemx --exclude-directories=@var{list}
+Specify a comma-separated list of directories you wish to exclude from
+download (@pxref{Directory-Based Limits}). Elements of
+@var{list} may contain wildcards.
+
+@item -np
+@itemx --no-parent
+Do not ever ascend to the parent directory when retrieving recursively.
+This is a useful option, since it guarantees that only the files
+@emph{below} a certain hierarchy will be downloaded.
+@xref{Directory-Based Limits}, for more details.
+@end table
+
+@c man end
+
+@node Exit Status, , Recursive Accept/Reject Options, Invoking
+@section Exit Status
+
+@c man begin EXITSTATUS
+
+Wget may return one of several error codes if it encounters problems.
+
+
+@table @asis
+@item 0
+No problems occurred.
+
+@item 1
+Generic error code.
+
+@item 2
+Parse error---for instance, when parsing command-line options, the
+@samp{.wgetrc} or @samp{.netrc}...
+
+@item 3
+File I/O error.
+
+@item 4
+Network failure.
+
+@item 5
+SSL verification failure.
+
+@item 6
+Username/password authentication failure.
+
+@item 7
+Protocol errors.
+
+@item 8
+Server issued an error response.
+@end table
+
+
+With the exceptions of 0 and 1, the lower-numbered exit codes take
+precedence over higher-numbered ones, when multiple types of errors
+are encountered.
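+
+These codes make it easy to act on failures from a shell script. A
+minimal sketch:
+
+@example
+wget -q http://fly.srk.fer.hr/ || echo "wget failed with status $?"
+@end example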
+
+In versions of Wget prior to 1.12, Wget's exit status tended to be
+unhelpful and inconsistent. Recursive downloads would virtually always
+return 0 (success), regardless of any issues encountered, and
+non-recursive fetches only returned the status corresponding to the
+most recently-attempted download.
+
+@c man end
+
+@node Recursive Download, Following Links, Invoking, Top
+@chapter Recursive Download
+@cindex recursion
+@cindex retrieving
+@cindex recursive download
+
+GNU Wget is capable of traversing parts of the Web (or a single
+@sc{http} or @sc{ftp} server), following links and directory structure.
+We refer to this as @dfn{recursive retrieval}, or @dfn{recursion}.
+
+With @sc{http} @sc{url}s, Wget retrieves and parses the @sc{html} or
+@sc{css} from the given @sc{url}, retrieving the files the document
+refers to, through markup like @code{href} or @code{src}, or @sc{css}
+@sc{uri} values specified using the @samp{url()} functional notation.
+If the freshly downloaded file is also of type @code{text/html},
+@code{application/xhtml+xml}, or @code{text/css}, it will be parsed
+and followed further.
+
+Recursive retrieval of @sc{http} and @sc{html}/@sc{css} content is
+@dfn{breadth-first}. This means that Wget first downloads the requested
+document, then the documents linked from that document, then the
+documents linked by them, and so on. In other words, Wget first
+downloads the documents at depth 1, then those at depth 2, and so on
+until the specified maximum depth.
+
+The maximum @dfn{depth} to which the retrieval may descend is specified
+with the @samp{-l} option. The default maximum depth is five layers.
+
+When retrieving an @sc{ftp} @sc{url} recursively, Wget will retrieve all
+the data from the given directory tree (including the subdirectories up
+to the specified depth) on the remote server, creating its mirror image
+locally. @sc{ftp} retrieval is also limited by the @code{depth}
+parameter. Unlike @sc{http} recursion, @sc{ftp} recursion is performed
+depth-first.
+
+By default, Wget will create a local directory tree, corresponding to
+the one found on the remote server.
+
+Recursive retrieval has a number of applications, the most
+important of which is mirroring. It is also useful for @sc{www}
+presentations, and any other situations where slow network
+connections should be bypassed by storing the files locally.
+
+You should be warned that recursive downloads can overload the remote
+servers. Because of that, many administrators frown upon them and may
+ban access from your site if they detect very fast downloads of big
+amounts of content. When downloading from Internet servers, consider
+using the @samp{-w} option to introduce a delay between accesses to the
+server. The download will take a while longer, but the server
+administrator will not be alarmed by your rudeness.
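+
+For example, to wait two seconds between retrievals:
+
+@example
+wget -r -w 2 http://fly.srk.fer.hr/
+@end example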
+
+Of course, recursive download may cause problems on your machine. If
+left to run unchecked, it can easily fill up the disk. If downloading
+from a local network, it can also take up bandwidth on the system, as
+well as consume memory and CPU.
+
+Try to specify the criteria that match the kind of download you are
+trying to achieve. If you want to download only one page, use
+@samp{--page-requisites} without any additional recursion. If you want
+to download things under one directory, use @samp{-np} to avoid
+downloading things from other directories. If you want to download all
+the files from one directory, use @samp{-l 1} to make sure the recursion
+depth never exceeds one. @xref{Following Links}, for more information
+about this.
+
+Recursive retrieval should be used with care. Don't say you were not
+warned.
+
+@node Following Links, Time-Stamping, Recursive Download, Top
+@chapter Following Links
+@cindex links
+@cindex following links
+
+When retrieving recursively, one does not wish to retrieve loads of
+unnecessary data. Most of the time users know exactly what
+they want to download, and want Wget to follow only specific links.
+
+For example, if you wish to download the music archive from
+@samp{fly.srk.fer.hr}, you will not want to download all the home pages
+that happen to be referenced by an obscure part of the archive.
+
+Wget possesses several mechanisms that allow you to fine-tune which
+links it will follow.
+
+@menu
+* Spanning Hosts:: (Un)limiting retrieval based on host name.
+* Types of Files:: Getting only certain files.
+* Directory-Based Limits:: Getting only certain directories.
+* Relative Links:: Follow relative links only.
+* FTP Links:: Following FTP links.
+@end menu
+
+@node Spanning Hosts, Types of Files, Following Links, Following Links
+@section Spanning Hosts
+@cindex spanning hosts
+@cindex hosts, spanning
+
+Wget's recursive retrieval normally refuses to visit hosts different
+from the one you specified on the command line. This is a reasonable
+default; without it, every retrieval would have the potential to turn
+your Wget into a small version of Google.
+
+However, visiting different hosts, or @dfn{host spanning}, is sometimes
+a useful option. Maybe the images are served from a different server.
+Maybe you're mirroring a site that consists of pages interlinked between
+three servers. Maybe the server has two equivalent names, and the @sc{html}
+pages refer to both interchangeably.
+
+@table @asis
+@item Span to any host---@samp{-H}
+
+The @samp{-H} option turns on host spanning, thus allowing Wget's
+recursive run to visit any host referenced by a link. Unless sufficient
+recursion-limiting criteria are applied, these foreign hosts will
+typically link to yet more hosts, and so on, until Wget ends up sucking
+up much more data than you have intended.
+
+@item Limit spanning to certain domains---@samp{-D}
+
+The @samp{-D} option allows you to specify the domains that will be
+followed, thus limiting the recursion only to the hosts that belong to
+these domains. Obviously, this makes sense only in conjunction with
+@samp{-H}. A typical example would be downloading the contents of
+@samp{www.server.com}, but allowing downloads from
+@samp{images.server.com}, etc.:
+
+@example
+wget -rH -Dserver.com http://www.server.com/
+@end example
+
+You can specify more than one address by separating them with a comma,
+e.g. @samp{-Ddomain1.com,domain2.com}.
+
+@item Keep download off certain domains---@samp{--exclude-domains}
+
+If there are domains you want to exclude specifically, you can do it
+with @samp{--exclude-domains}, which accepts the same type of arguments
+as @samp{-D}, but will @emph{exclude} all the listed domains. For
+example, if you want to download all the hosts from @samp{foo.edu}
+domain, with the exception of @samp{sunsite.foo.edu}, you can do it like
+this:
+
+@example
+wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
+ http://www.foo.edu/
+@end example
+
+@end table
+
+@node Types of Files, Directory-Based Limits, Spanning Hosts, Following Links
+@section Types of Files
+@cindex types of files
+
+When downloading material from the web, you will often want to restrict
+the retrieval to only certain file types. For example, if you are
+interested in downloading @sc{gif}s, you will not be overjoyed to get
+loads of PostScript documents, and vice versa.
+
+Wget offers two options to deal with this problem. Each option
+description lists a short name, a long name, and the equivalent command
+in @file{.wgetrc}.
+
+@cindex accept wildcards
+@cindex accept suffixes
+@cindex wildcards, accept
+@cindex suffixes, accept
+@table @samp
+@item -A @var{acclist}
+@itemx --accept @var{acclist}
+@itemx accept = @var{acclist}
+The argument to @samp{--accept} option is a list of file suffixes or
+patterns that Wget will download during recursive retrieval. A suffix
+is the ending part of a file name, and consists of ``normal'' letters,
+e.g. @samp{gif} or @samp{.jpg}. A matching pattern contains shell-like
+wildcards, e.g. @samp{books*} or @samp{zelazny*196[0-9]*}.
+
+So, specifying @samp{wget -A gif,jpg} will make Wget download only the
+files ending with @samp{gif} or @samp{jpg}, i.e. @sc{gif}s and
+@sc{jpeg}s. On the other hand, @samp{wget -A "zelazny*196[0-9]*"} will
+download only files beginning with @samp{zelazny} and containing numbers
+from 1960 to 1969 anywhere within. Look up the manual of your shell for
+a description of how pattern matching works.
+
+Of course, any number of suffixes and patterns can be combined into a
+comma-separated list, and given as an argument to @samp{-A}.
+
+@cindex reject wildcards
+@cindex reject suffixes
+@cindex wildcards, reject
+@cindex suffixes, reject
+@item -R @var{rejlist}
+@itemx --reject @var{rejlist}
+@itemx reject = @var{rejlist}
+The @samp{--reject} option works the same way as @samp{--accept}, only
+its logic is the reverse; Wget will download all files @emph{except} the
+ones matching the suffixes (or patterns) in the list.
+
+So, if you want to download a whole page except for the cumbersome
+@sc{mpeg}s and @sc{.au} files, you can use @samp{wget -R mpg,mpeg,au}.
+Analogously, to download all files except the ones beginning with
+@samp{bjork}, use @samp{wget -R "bjork*"}. The quotes are to prevent
+expansion by the shell.
+@end table
+
+@noindent
+The @samp{-A} and @samp{-R} options may be combined to achieve even
+better fine-tuning of which files to retrieve. E.g. @samp{wget -A
+"*zelazny*" -R .ps} will download all the files having @samp{zelazny} as
+a part of their name, but @emph{not} the PostScript files.
+
+Note that these two options do not affect the downloading of @sc{html}
+files (as determined by a @samp{.htm} or @samp{.html} filename
+suffix). This behavior may not be desirable for all users, and may be
+changed for future versions of Wget.
+
+Note, too, that query strings (strings at the end of a URL beginning
+with a question mark, @samp{?}) are not included as part of the
+filename for accept/reject rules, even though these will actually
+contribute to the name chosen for the local file. It is expected that
+a future version of Wget will provide an option to allow matching
+against query strings.
+
+Finally, it's worth noting that the accept/reject lists are matched
+@emph{twice} against downloaded files: once against the URL's filename
+portion, to determine if the file should be downloaded in the first
+place; then, after it has been accepted and successfully downloaded,
+the local file's name is also checked against the accept/reject lists
+to see if it should be removed. The rationale was that, since
+@samp{.htm} and @samp{.html} files are always downloaded regardless of
+accept/reject rules, they should be removed @emph{after} being
+downloaded and scanned for links, if they did match the accept/reject
+lists. However, this can lead to unexpected results, since the local
+filenames can differ from the original URL filenames in the following
+ways, all of which can change whether an accept/reject rule matches:
+
+@itemize @bullet
+@item
+If the local file already exists and @samp{--no-directories} was
+specified, a numeric suffix will be appended to the original name.
+@item
+If @samp{--adjust-extension} was specified, the local filename might have
+@samp{.html} appended to it. If Wget is invoked with @samp{-E -A.php},
+a filename such as @samp{index.php} will match and be accepted, but upon
+download will be named @samp{index.php.html}, which no longer matches,
+and so the file will be deleted.
+@item
+Query strings do not contribute to URL matching, but are included in
+local filenames, and so @emph{do} contribute to filename matching.
+@end itemize
+
+@noindent
+This behavior, too, is considered less-than-desirable, and may change
+in a future version of Wget.
+
+@node Directory-Based Limits, Relative Links, Types of Files, Following Links
+@section Directory-Based Limits
+@cindex directories
+@cindex directory limits
+
+Regardless of other link-following facilities, it is often useful to
+restrict which files to retrieve based on the directories
+those files are placed in. There can be many reasons for this---the
+home pages may be organized in a reasonable directory structure; or some
+directories may contain useless information, e.g. @file{/cgi-bin} or
+@file{/dev} directories.
+
+Wget offers three different options to deal with this requirement. Each
+option description lists a short name, a long name, and the equivalent
+command in @file{.wgetrc}.
+
+@cindex directories, include
+@cindex include directories
+@cindex accept directories
+@table @samp
+@item -I @var{list}
+@itemx --include @var{list}
+@itemx include_directories = @var{list}
+The @samp{-I} option accepts a comma-separated list of directories included
+in the retrieval. Any other directories will simply be ignored. The
+directories are absolute paths.
+
+So, if you wish to download from @samp{http://host/people/bozo/}
+following only links to bozo's colleagues in the @file{/people}
+directory and the bogus scripts in @file{/cgi-bin}, you can specify:
+
+@example
+wget -I /people,/cgi-bin http://host/people/bozo/
+@end example
+
+@cindex directories, exclude
+@cindex exclude directories
+@cindex reject directories
+@item -X @var{list}
+@itemx --exclude @var{list}
+@itemx exclude_directories = @var{list}
+The @samp{-X} option is exactly the reverse of @samp{-I}---this is a list of
+directories @emph{excluded} from the download. E.g. if you do not want
+Wget to download things from @file{/cgi-bin} directory, specify @samp{-X
+/cgi-bin} on the command line.
+
+The same as with @samp{-A}/@samp{-R}, these two options can be combined
+to get a better fine-tuning of downloading subdirectories. E.g. if you
+want to load all the files from @file{/pub} hierarchy except for
+@file{/pub/worthless}, specify @samp{-I/pub -X/pub/worthless}.
+
+@cindex no parent
+@item -np
+@itemx --no-parent
+@itemx no_parent = on
+The simplest, and often very useful way of limiting directories is
+disallowing retrieval of the links that refer to the hierarchy
+@dfn{above} the beginning directory, i.e. disallowing ascent to the
+parent directory/directories.
+
+The @samp{--no-parent} option (short @samp{-np}) is useful in this case.
+Using it guarantees that you will never leave the existing hierarchy.
+Supposing you issue Wget with:
+
+@example
+wget -r --no-parent http://somehost/~luzer/my-archive/
+@end example
+
+You may rest assured that none of the references to
+@file{/~his-girls-homepage/} or @file{/~luzer/all-my-mpegs/} will be
+followed. Only the archive you are interested in will be downloaded.
+Essentially, @samp{--no-parent} is similar to
+@samp{-I/~luzer/my-archive}, only it handles redirections in a more
+intelligent fashion.
+
+@strong{Note} that, for HTTP (and HTTPS), the trailing slash is very
+important to @samp{--no-parent}. HTTP has no concept of a ``directory''---Wget
+relies on you to indicate what's a directory and what isn't. In
+@samp{http://foo/bar/}, Wget will consider @samp{bar} to be a
+directory, while in @samp{http://foo/bar} (no trailing slash),
+@samp{bar} will be considered a filename (so @samp{--no-parent} would be
+meaningless, as its parent is @samp{/}).
+@end table
+
+@node Relative Links, FTP Links, Directory-Based Limits, Following Links
+@section Relative Links
+@cindex relative links
+
+When @samp{-L} is turned on, only the relative links are ever followed.
+Relative links are here defined as those that do not refer to the web
+server root. For example, these links are relative:
+
+@example
+<a href="foo.gif">
+<a href="foo/bar.gif">
+<a href="../foo/bar.gif">
+@end example
+
+These links are not relative:
+
+@example
+<a href="/foo.gif">
+<a href="/foo/bar.gif">
+<a href="http://www.server.com/foo/bar.gif">
+@end example
+
+Using this option guarantees that recursive retrieval will not span
+hosts, even without @samp{-H}. In simple cases it also allows downloads
+to ``just work'' without having to convert links.
+
+This option is probably not very useful and might be removed in a future
+release.
+
+@node FTP Links, , Relative Links, Following Links
+@section Following FTP Links
+@cindex following ftp links
+
+The rules for @sc{ftp} are somewhat specific, as it is necessary for
+them to be. @sc{ftp} links in @sc{html} documents are often included
+for purposes of reference, and it is often inconvenient to download them
+by default.
+
+To have @sc{ftp} links followed from @sc{html} documents, you need to
+specify the @samp{--follow-ftp} option. Having done that, @sc{ftp}
+links will span hosts regardless of @samp{-H} setting. This is logical,
+as @sc{ftp} links rarely point to the same host where the @sc{http}
+server resides. For similar reasons, the @samp{-L} option has no
+effect on such downloads. On the other hand, domain acceptance
+(@samp{-D}) and suffix rules (@samp{-A} and @samp{-R}) apply normally.
+
+Also note that followed links to @sc{ftp} directories will not be
+recursed into any further.
+
+@node Time-Stamping, Startup File, Following Links, Top
+@chapter Time-Stamping
+@cindex time-stamping
+@cindex timestamping
+@cindex updating the archives
+@cindex incremental updating
+
+One of the most important aspects of mirroring information from the
+Internet is updating your archives.
+
+Downloading the whole archive again and again, just to replace a few
+changed files, is expensive in terms of wasted bandwidth and money, as
+well as the time needed for the update. This is why all the mirroring tools
+offer the option of incremental updating.
+
+Such an updating mechanism means that the remote server is scanned in
+search of @dfn{new} files. Only those new files will be downloaded in
+the place of the old ones.
+
+A file is considered new if one of these two conditions is met:
+
+@enumerate
+@item
+A file of that name does not already exist locally.
+
+@item
+A file of that name does exist, but the remote file was modified more
+recently than the local file.
+@end enumerate
+
+To implement this, the program needs to be aware of the time of last
+modification of both local and remote files. We call this information the
+@dfn{time-stamp} of a file.
+
+The time-stamping in GNU Wget is turned on using the @samp{--timestamping}
+(@samp{-N}) option, or through the @code{timestamping = on} directive in
+@file{.wgetrc}. With this option, for each file it intends to download,
+Wget will check whether a local file of the same name exists. If it
+does, and the remote file is not newer, Wget will not download it.
+
+If the local file does not exist, or the sizes of the files do not
+match, Wget will download the remote file no matter what the time-stamps
+say.
+
+@menu
+* Time-Stamping Usage::
+* HTTP Time-Stamping Internals::
+* FTP Time-Stamping Internals::
+@end menu
+
+@node Time-Stamping Usage, HTTP Time-Stamping Internals, Time-Stamping, Time-Stamping
+@section Time-Stamping Usage
+@cindex time-stamping usage
+@cindex usage, time-stamping
+
+The usage of time-stamping is simple. Say you would like to download a
+file so that it keeps its date of modification.
+
+@example
+wget -S http://www.gnu.ai.mit.edu/
+@end example
+
+A simple @code{ls -l} shows that the time stamp on the local file
+matches the @code{Last-Modified} header, as returned by the server.
+As you can see, the time-stamping info is preserved locally, even
+without @samp{-N} (at least for @sc{http}).
+
+Several days later, you would like Wget to check if the remote file has
+changed, and download it if it has.
+
+@example
+wget -N http://www.gnu.ai.mit.edu/
+@end example
+
+Wget will ask the server for the last-modified date. If the local file
+has the same timestamp as the server, or a newer one, the remote file
+will not be re-fetched. However, if the remote file is more recent,
+Wget will proceed to fetch it.
+
+The same goes for @sc{ftp}. For example:
+
+@example
+wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
+@end example
+
+(The quotes around that URL are to prevent the shell from trying to
+interpret the @samp{*}.)
+
+After download, a local directory listing will show that the timestamps
+match those on the remote server. Reissuing the command with @samp{-N}
+will make Wget re-fetch @emph{only} the files that have been modified
+since the last download.
+
+If you wished to mirror the GNU archive every week, you would use a
+command like the following, weekly:
+
+@example
+wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
+@end example
+
+Note that time-stamping will only work for files for which the server
+gives a timestamp. For @sc{http}, this depends on getting a
+@code{Last-Modified} header. For @sc{ftp}, this depends on getting a
+directory listing with dates in a format that Wget can parse
+(@pxref{FTP Time-Stamping Internals}).
+
+@node HTTP Time-Stamping Internals, FTP Time-Stamping Internals, Time-Stamping Usage, Time-Stamping
+@section HTTP Time-Stamping Internals
+@cindex http time-stamping
+
+Time-stamping in @sc{http} is implemented by checking the
+@code{Last-Modified} header. If you wish to retrieve the file
+@file{foo.html} through @sc{http}, Wget will check whether
+@file{foo.html} exists locally. If it doesn't, @file{foo.html} will be
+retrieved unconditionally.
+
+If the file does exist locally, Wget will first check its local
+time-stamp (similar to the way @code{ls -l} checks it), and then send a
+@code{HEAD} request to the remote server, asking for information about
+the remote file.
+
+The @code{Last-Modified} header is examined to find which file was
+modified more recently (which makes it ``newer''). If the remote file
+is newer, it will be downloaded; if it is older, Wget will give
+up.@footnote{As an additional check, Wget will look at the
+@code{Content-Length} header, and compare the sizes; if they are not the
+same, the remote file will be downloaded no matter what the time-stamp
+says.}
+
+When @samp{--backup-converted} (@samp{-K}) is specified in conjunction
+with @samp{-N}, server file @samp{@var{X}} is compared to local file
+@samp{@var{X}.orig}, if extant, rather than being compared to local file
+@samp{@var{X}}, which will always differ if it's been converted by
+@samp{--convert-links} (@samp{-k}).
+
+Arguably, @sc{http} time-stamping should be implemented using the
+@code{If-Modified-Since} request.
+
+@node FTP Time-Stamping Internals, , HTTP Time-Stamping Internals, Time-Stamping
+@section FTP Time-Stamping Internals
+@cindex ftp time-stamping
+
+In theory, @sc{ftp} time-stamping works much the same as @sc{http}, only
+@sc{ftp} has no headers---time-stamps must be ferreted out of directory
+listings.
+
+If an @sc{ftp} download is recursive or uses globbing, Wget will use the
+@sc{ftp} @code{LIST} command to get a file listing for the directory
+containing the desired file(s). It will try to analyze the listing,
+treating it like Unix @code{ls -l} output, extracting the time-stamps.
+The rest is exactly the same as for @sc{http}. Note that when
+retrieving individual files from an @sc{ftp} server without using
+globbing or recursion, listing files will not be downloaded (and thus
+files will not be time-stamped) unless @samp{-N} is specified.
+
+The assumption that every directory listing is a Unix-style listing may
+sound extremely constraining, but in practice it is not, as many
+non-Unix @sc{ftp} servers use the Unixoid listing format because most
+(all?) of the clients understand it. Bear in mind that @sc{rfc959}
+defines no standard way to get a file list, let alone the time-stamps.
+We can only hope that a future standard will define this.
+
+Another non-standard solution is the @code{MDTM} command,
+supported by some @sc{ftp} servers (including the popular
+@code{wu-ftpd}), which returns the exact time of the specified file.
+Wget may support this command in the future.
+
+@node Startup File, Examples, Time-Stamping, Top
+@chapter Startup File
+@cindex startup file
+@cindex wgetrc
+@cindex .wgetrc
+@cindex startup
+@cindex .netrc
+
+Once you know how to change default settings of Wget through command
+line arguments, you may wish to make some of those settings permanent.
+You can do that in a convenient way by creating the Wget startup
+file---@file{.wgetrc}.
+
+While @file{.wgetrc} is the ``main'' initialization file, it is
+convenient to have a special facility for storing passwords. Thus Wget
+reads and interprets the contents of @file{$HOME/.netrc}, if it finds
+it. You can find the @file{.netrc} format in your system manuals.
+
+Wget reads @file{.wgetrc} upon startup, recognizing a limited set of
+commands.
+
+@menu
+* Wgetrc Location:: Location of various wgetrc files.
+* Wgetrc Syntax:: Syntax of wgetrc.
+* Wgetrc Commands:: List of available commands.
+* Sample Wgetrc:: A wgetrc example.
+@end menu
+
+@node Wgetrc Location, Wgetrc Syntax, Startup File, Startup File
+@section Wgetrc Location
+@cindex wgetrc location
+@cindex location of wgetrc
+
+When initializing, Wget will look for a @dfn{global} startup file,
+@file{/usr/local/etc/wgetrc} by default (or some prefix other than
+@file{/usr/local}, if Wget was not installed there) and read commands
+from there, if it exists.
+
+Then it will look for the user's file. If the environment variable
+@code{WGETRC} is set, Wget will try to load that file. Failing that, no
+further attempts will be made.
+
+If @code{WGETRC} is not set, Wget will try to load @file{$HOME/.wgetrc}.
+
+The fact that user's settings are loaded after the system-wide ones
+means that in case of collision user's wgetrc @emph{overrides} the
+system-wide wgetrc (in @file{/usr/local/etc/wgetrc} by default).
+Fascist admins, away!
+
+@node Wgetrc Syntax, Wgetrc Commands, Wgetrc Location, Startup File
+@section Wgetrc Syntax
+@cindex wgetrc syntax
+@cindex syntax of wgetrc
+
+The syntax of a wgetrc command is simple:
+
+@example
+variable = value
+@end example
+
+The @dfn{variable} is also called a @dfn{command}. Valid
+@dfn{values} are different for different commands.
+
+The commands are case-insensitive and underscore-insensitive. Thus
+@samp{DIr__PrefiX} is the same as @samp{dirprefix}. Empty lines, lines
+beginning with @samp{#} and lines containing white-space only are
+discarded.
+
+Commands that expect a comma-separated list will clear the list on an
+empty command. So, if you wish to reset the rejection list specified in
+global @file{wgetrc}, you can do it with:
+
+@example
+reject =
+@end example
+
+@node Wgetrc Commands, Sample Wgetrc, Wgetrc Syntax, Startup File
+@section Wgetrc Commands
+@cindex wgetrc commands
+
+The complete set of commands is listed below. Legal values are listed
+after the @samp{=}. Simple Boolean values can be set or unset using
+@samp{on} and @samp{off} or @samp{1} and @samp{0}.
+
+Some commands take pseudo-arbitrary values. @var{address} values can be
+hostnames or dotted-quad IP addresses. @var{n} can be any positive
+integer, or @samp{inf} for infinity, where appropriate. @var{string}
+values can be any non-empty string.
+
+Most of these commands have direct command-line equivalents. Also, any
+wgetrc command can be specified on the command line using the
+@samp{--execute} switch (@pxref{Basic Startup Options}).
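+
+For instance, a wgetrc command can be supplied for a single run like
+this (a sketch):
+
+@example
+wget -e waitretry=10 http://fly.srk.fer.hr/
+@end example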
+
+@table @asis
+@item accept/reject = @var{string}
+Same as @samp{-A}/@samp{-R} (@pxref{Types of Files}).
+
+@item add_hostdir = on/off
+Enable/disable host-prefixed file names. @samp{-nH} disables it.
+
+@item ask_password = on/off
+Prompt for a password for each connection established. Cannot be specified
+when @samp{--password} is being used, because they are mutually
+exclusive. Equivalent to @samp{--ask-password}.
+
+@item auth_no_challenge = on/off
+If this option is given, Wget will send Basic HTTP authentication
+information (plaintext username and password) for all requests. See
+@samp{--auth-no-challenge}.
+
+@item background = on/off
+Enable/disable going to background---the same as @samp{-b} (which
+enables it).
+
+@item backup_converted = on/off
+Enable/disable saving pre-converted files with the suffix
+@samp{.orig}---the same as @samp{-K} (which enables it).
+
+@c @item backups = @var{number}
+@c #### Document me!
+@c
+@item base = @var{string}
+Consider relative @sc{url}s in input files (specified via the
+@samp{input} command or the @samp{--input-file}/@samp{-i} option,
+together with @samp{force_html} or @samp{--force-html})
+as being relative to @var{string}---the same as @samp{--base=@var{string}}.
+
+@item bind_address = @var{address}
+Bind to @var{address}---the same as @samp{--bind-address=@var{address}}.
+
+@item ca_certificate = @var{file}
+Set the certificate authority bundle file to @var{file}. The same
+as @samp{--ca-certificate=@var{file}}.
+
+@item ca_directory = @var{directory}
+Set the directory used for certificate authorities. The same as
+@samp{--ca-directory=@var{directory}}.
+
+@item cache = on/off
+When set to off, disallow server-caching. See the @samp{--no-cache}
+option.
+
+@item certificate = @var{file}
+Set the client certificate file name to @var{file}. The same as
+@samp{--certificate=@var{file}}.
+
+@item certificate_type = @var{string}
+Specify the type of the client certificate, legal values being
+@samp{PEM} (the default) and @samp{DER} (aka ASN1). The same as
+@samp{--certificate-type=@var{string}}.
+
+@item check_certificate = on/off
+If this is set to off, the server certificate is not checked against
+the specified client authorities. The default is ``on''. The same as
+@samp{--check-certificate}.
+
+@item connect_timeout = @var{n}
+Set the connect timeout---the same as @samp{--connect-timeout}.
+
+@item content_disposition = on/off
+Turn on recognition of the (non-standard) @samp{Content-Disposition}
+HTTP header---if set to @samp{on}, the same as @samp{--content-disposition}.
+
+@item trust_server_names = on/off
+If set to on, use the last component of a redirection URL for the local
+file name.
+
+@item continue = on/off
+If set to on, force continuation of preexisting, partially retrieved
+files. See @samp{-c} before setting it.
+
+@item convert_links = on/off
+Convert non-relative links locally. The same as @samp{-k}.
+
+@item cookies = on/off
+When set to off, disallow cookies. See the @samp{--no-cookies} option.
+
+@item cut_dirs = @var{n}
+Ignore @var{n} remote directory components. Equivalent to
+@samp{--cut-dirs=@var{n}}.
+
+@item debug = on/off
+Debug mode, same as @samp{-d}.
+
+@item default_page = @var{string}
+Default page name---the same as @samp{--default-page=@var{string}}.
+
+@item delete_after = on/off
+Delete after download---the same as @samp{--delete-after}.
+
+@item dir_prefix = @var{string}
+Top of directory tree---the same as @samp{-P @var{string}}.
+
+@item dirstruct = on/off
+Turning dirstruct on or off---the same as @samp{-x} or @samp{-nd},
+respectively.
+
+@item dns_cache = on/off
+Turn DNS caching on/off. Since DNS caching is on by default, this
+option is normally used to turn it off and is equivalent to
+@samp{--no-dns-cache}.
+
+@item dns_timeout = @var{n}
+Set the DNS timeout---the same as @samp{--dns-timeout}.
+
+@item domains = @var{string}
+Same as @samp{-D} (@pxref{Spanning Hosts}).
+
+@item dot_bytes = @var{n}
+Specify the number of bytes ``contained'' in a dot, as seen throughout
+the retrieval (1024 by default). You can postfix the value with
+@samp{k} or @samp{m}, representing kilobytes and megabytes,
+respectively. With dot settings you can tailor the dot retrieval to
+suit your needs, or you can use the predefined @dfn{styles}
+(@pxref{Download Options}).
+
+@item dot_spacing = @var{n}
+Specify the number of dots in a single cluster (10 by default).
+
+@item dots_in_line = @var{n}
+Specify the number of dots that will be printed in each line throughout
+the retrieval (50 by default).
+
+@item egd_file = @var{file}
+Use @var{file} as the EGD socket file name. The same as
+@samp{--egd-file=@var{file}}.
+
+@item exclude_directories = @var{string}
+Specify a comma-separated list of directories you wish to exclude from
+download---the same as @samp{-X @var{string}} (@pxref{Directory-Based
+Limits}).
+
+@item exclude_domains = @var{string}
+Same as @samp{--exclude-domains=@var{string}} (@pxref{Spanning
+Hosts}).
+
+@item follow_ftp = on/off
+Follow @sc{ftp} links from @sc{html} documents---the same as
+@samp{--follow-ftp}.
+
+@item follow_tags = @var{string}
+Only follow certain @sc{html} tags when doing a recursive retrieval,
+just like @samp{--follow-tags=@var{string}}.
+
+@item force_html = on/off
+If set to on, force the input filename to be regarded as an @sc{html}
+document---the same as @samp{-F}.
+
+@item ftp_password = @var{string}
+Set your @sc{ftp} password to @var{string}. Without this setting, the
+password defaults to @samp{-wget@@}, which is a useful default for
+anonymous @sc{ftp} access.
+
+This command used to be named @code{passwd} prior to Wget 1.10.
+
+@item ftp_proxy = @var{string}
+Use @var{string} as @sc{ftp} proxy, instead of the one specified in
+the environment.
+
+@item ftp_user = @var{string}
+Set @sc{ftp} user to @var{string}.
+
+This command used to be named @code{login} prior to Wget 1.10.
+
+@item glob = on/off
+Turn globbing on/off---the same as @samp{--glob} and @samp{--no-glob}.
+
+@item header = @var{string}
+Define a header for HTTP downloads, like using
+@samp{--header=@var{string}}.
+
+@item adjust_extension = on/off
+Add a @samp{.html} extension to @samp{text/html} or
+@samp{application/xhtml+xml} files that lack one, or a @samp{.css}
+extension to @samp{text/css} files that lack one, like
+@samp{-E}. Previously named @samp{html_extension} (still acceptable,
+but deprecated).
+
+@item http_keep_alive = on/off
+Turn the keep-alive feature on or off (defaults to on). Turning it
+off is equivalent to @samp{--no-http-keep-alive}.
+
+@item http_password = @var{string}
+Set @sc{http} password, equivalent to
+@samp{--http-password=@var{string}}.
+
+@item http_proxy = @var{string}
+Use @var{string} as @sc{http} proxy, instead of the one specified in
+the environment.
+
+@item http_user = @var{string}
+Set @sc{http} user to @var{string}, equivalent to
+@samp{--http-user=@var{string}}.
+
+@item https_proxy = @var{string}
+Use @var{string} as @sc{https} proxy, instead of the one specified in
+the environment.
+
+@item ignore_case = on/off
+When set to on, match files and directories case insensitively; the
+same as @samp{--ignore-case}.
+
+@item ignore_length = on/off
+When set to on, ignore the @code{Content-Length} header; the same as
+@samp{--ignore-length}.
+
+@item ignore_tags = @var{string}
+Ignore certain @sc{html} tags when doing a recursive retrieval, like
+@samp{--ignore-tags=@var{string}}.
+
+@item include_directories = @var{string}
+Specify a comma-separated list of directories you wish to follow when
+downloading---the same as @samp{-I @var{string}}.
+
+@item iri = on/off
+When set to on, enable internationalized URI (IRI) support; the same as
+@samp{--iri}.
+
+@item inet4_only = on/off
+Force connecting to IPv4 addresses, off by default. You can put this
+in the global init file to disable Wget's attempts to resolve and
+connect to IPv6 hosts. Available only if Wget was compiled with IPv6
+support. The same as @samp{--inet4-only} or @samp{-4}.
+
+@item inet6_only = on/off
+Force connecting to IPv6 addresses, off by default. Available only if
+Wget was compiled with IPv6 support. The same as @samp{--inet6-only}
+or @samp{-6}.
+
+@item input = @var{file}
+Read the @sc{url}s from @var{file}---the same as @samp{-i @var{file}}.
+
+@item keep_session_cookies = on/off
+When specified, causes @samp{save_cookies = on} to also save session
+cookies. See @samp{--keep-session-cookies}.
+
+@item limit_rate = @var{rate}
+Limit the download speed to no more than @var{rate} bytes per second.
+The same as @samp{--limit-rate=@var{rate}}.
+
+@item load_cookies = @var{file}
+Load cookies from @var{file}. See @samp{--load-cookies @var{file}}.
+
+@item local_encoding = @var{encoding}
+Force Wget to use @var{encoding} as the default system encoding. See
+@samp{--local-encoding}.
+
+@item logfile = @var{file}
+Set logfile to @var{file}, the same as @samp{-o @var{file}}.
+
+@item max_redirect = @var{number}
+Specifies the maximum number of redirections to follow for a resource.
+See @samp{--max-redirect=@var{number}}.
+
+@item mirror = on/off
+Turn mirroring on/off. The same as @samp{-m}.
+
+@item netrc = on/off
+Turn reading netrc on or off.
+
+@item no_clobber = on/off
+Same as @samp{-nc}.
+
+@item no_parent = on/off
+Disallow retrieving outside the directory hierarchy, like
+@samp{--no-parent} (@pxref{Directory-Based Limits}).
+
+@item no_proxy = @var{string}
+Use @var{string} as the comma-separated list of domains for which the
+proxy should be avoided, instead of the one specified in the environment.
+
+@item output_document = @var{file}
+Set the output filename---the same as @samp{-O @var{file}}.
+
+@item page_requisites = on/off
+Download all ancillary documents necessary for a single @sc{html} page to
+display properly---the same as @samp{-p}.
+
+@item passive_ftp = on/off
+Change setting of passive @sc{ftp}, equivalent to the
+@samp{--passive-ftp} option.
+
+@item password = @var{string}
+Specify password @var{string} for both @sc{ftp} and @sc{http} file retrieval.
+This command can be overridden using the @samp{ftp_password} and
+@samp{http_password} commands for @sc{ftp} and @sc{http} respectively.
+
+@item post_data = @var{string}
+Use POST as the method for all HTTP requests and send @var{string} in
+the request body. The same as @samp{--post-data=@var{string}}.
+
+@item post_file = @var{file}
+Use POST as the method for all HTTP requests and send the contents of
+@var{file} in the request body. The same as
+@samp{--post-file=@var{file}}.
+
+@item prefer_family = none/IPv4/IPv6
+When given a choice of several addresses, connect to the addresses
+with specified address family first. The address order returned by
+DNS is used without change by default. The same as
+@samp{--prefer-family}; see that option for a detailed discussion of
+why this is useful.
+
+@item private_key = @var{file}
+Set the private key file to @var{file}. The same as
+@samp{--private-key=@var{file}}.
+
+@item private_key_type = @var{string}
+Specify the type of the private key, legal values being @samp{PEM}
+(the default) and @samp{DER} (aka ASN1). The same as
+@samp{--private-key-type=@var{string}}.
+
+@item progress = @var{string}
+Set the type of the progress indicator. Legal types are @samp{dot}
+and @samp{bar}. Equivalent to @samp{--progress=@var{string}}.
+
+@item protocol_directories = on/off
+When set, use the protocol name as a directory component of local file
+names. The same as @samp{--protocol-directories}.
+
+@item proxy_password = @var{string}
+Set proxy authentication password to @var{string}, like
+@samp{--proxy-password=@var{string}}.
+
+@item proxy_user = @var{string}
+Set proxy authentication user name to @var{string}, like
+@samp{--proxy-user=@var{string}}.
+
+@item quiet = on/off
+Quiet mode---the same as @samp{-q}.
+
+@item quota = @var{quota}
+Specify the download quota, which is useful to put in the global
+@file{wgetrc}. When a download quota is specified, Wget will stop
+retrieving after the download sum has become greater than the quota.
+The quota can be specified in bytes (default), kbytes (@samp{k} appended)
+or mbytes (@samp{m} appended). Thus @samp{quota = 5m} will set the quota
+to 5 megabytes. Note that the user's startup file overrides system
+settings.
+
+@item random_file = @var{file}
+Use @var{file} as a source of randomness on systems lacking
+@file{/dev/random}.
+
+@item random_wait = on/off
+Turn random between-request wait times on or off. The same as
+@samp{--random-wait}.
+
+@item read_timeout = @var{n}
+Set the read (and write) timeout---the same as
+@samp{--read-timeout=@var{n}}.
+
+@item reclevel = @var{n}
+Recursion level (depth)---the same as @samp{-l @var{n}}.
+
+@item recursive = on/off
+Recursive on/off---the same as @samp{-r}.
+
+@item referer = @var{string}
+Set HTTP @samp{Referer:} header just like
+@samp{--referer=@var{string}}. (Note that it was the folks who wrote
+the @sc{http} spec who got the spelling of ``referrer'' wrong.)
+
+@item relative_only = on/off
+Follow only relative links---the same as @samp{-L} (@pxref{Relative
+Links}).
+
+@item remote_encoding = @var{encoding}
+Force Wget to use @var{encoding} as the default remote server encoding.
+See @samp{--remote-encoding}.
+
+@item remove_listing = on/off
+If set to on, remove @sc{ftp} listings downloaded by Wget. Setting it
+to off is the same as @samp{--no-remove-listing}.
+
+@item restrict_file_names = unix/windows
+Restrict the file names generated by Wget from URLs. See
+@samp{--restrict-file-names} for a more detailed description.
+
+@item retr_symlinks = on/off
+When set to on, retrieve symbolic links as if they were plain files; the
+same as @samp{--retr-symlinks}.
+
+@item retry_connrefused = on/off
+When set to on, consider ``connection refused'' a transient
+error---the same as @samp{--retry-connrefused}.
+
+@item robots = on/off
+Specify whether the norobots convention is respected by Wget, ``on'' by
+default. This switch controls both the @file{/robots.txt} and the
+@samp{nofollow} aspect of the spec. @xref{Robot Exclusion}, for more
+details about this. Be sure you know what you are doing before turning
+this off.
+
+@item save_cookies = @var{file}
+Save cookies to @var{file}. The same as @samp{--save-cookies
+@var{file}}.
+
+@item save_headers = on/off
+Same as @samp{--save-headers}.
+
+@item secure_protocol = @var{string}
+Choose the secure protocol to be used. Legal values are @samp{auto}
+(the default), @samp{SSLv2}, @samp{SSLv3}, and @samp{TLSv1}. The same
+as @samp{--secure-protocol=@var{string}}.
+
+@item server_response = on/off
+Choose whether or not to print the @sc{http} and @sc{ftp} server
+responses---the same as @samp{-S}.
+
+@item show_all_dns_entries = on/off
+When a DNS name is resolved, show all the IP addresses, not just the first
+three.
+
+@item span_hosts = on/off
+Same as @samp{-H}.
+
+@item spider = on/off
+Same as @samp{--spider}.
+
+@item strict_comments = on/off
+Same as @samp{--strict-comments}.
+
+@item timeout = @var{n}
+Set all applicable timeout values to @var{n}, the same as @samp{-T
+@var{n}}.
+
+@item timestamping = on/off
+Turn timestamping on/off. The same as @samp{-N} (@pxref{Time-Stamping}).
+
+@item use_server_timestamps = on/off
+If set to @samp{off}, Wget won't set the local file's timestamp to the
+one on the server (same as @samp{--no-use-server-timestamps}).
+
+@item tries = @var{n}
+Set number of retries per @sc{url}---the same as @samp{-t @var{n}}.
+
+@item use_proxy = on/off
+When set to off, don't use a proxy, even when proxy-related environment
+variables are set. In that case it is the same as using
+@samp{--no-proxy}.
+
+@item user = @var{string}
+Specify username @var{string} for both @sc{ftp} and @sc{http} file retrieval.
+This command can be overridden using the @samp{ftp_user} and
+@samp{http_user} command for @sc{ftp} and @sc{http} respectively.
+
+@item user_agent = @var{string}
+User agent identification sent to the HTTP Server---the same as
+@samp{--user-agent=@var{string}}.
+
+@item verbose = on/off
+Turn verbose on/off---the same as @samp{-v}/@samp{-nv}.
+
+@item wait = @var{n}
+Wait @var{n} seconds between retrievals---the same as @samp{-w
+@var{n}}.
+
+@item wait_retry = @var{n}
+Wait up to @var{n} seconds between retries of failed retrievals
+only---the same as @samp{--waitretry=@var{n}}. Note that this is
+turned on by default in the global @file{wgetrc}.
+@end table
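+
+As a rough illustration, a user's @file{.wgetrc} might combine several
+of the commands above like this (all values are merely examples):
+
+@example
+tries = 10
+wait = 2
+limit_rate = 100k
+dir_prefix = download
+@end example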
+
+@node Sample Wgetrc, , Wgetrc Commands, Startup File
+@section Sample Wgetrc
+@cindex sample wgetrc
+
+This is the sample initialization file, as given in the distribution.
+It is divided into two sections---one for global usage (suitable for the
+global startup file), and one for local usage (suitable for
+@file{$HOME/.wgetrc}). Be careful about the things you change.
+
+Note that almost all the lines are commented out. For a command to have
+any effect, you must remove the @samp{#} character at the beginning of
+its line.
+
+@example
+@include sample.wgetrc.munged_for_texi_inclusion
+@end example
+
+@node Examples, Various, Startup File, Top
+@chapter Examples
+@cindex examples
+
+@c man begin EXAMPLES
+The examples are divided into three sections loosely based on their
+complexity.
+
+@menu
+* Simple Usage:: Simple, basic usage of the program.
+* Advanced Usage:: Advanced tips.
+* Very Advanced Usage:: The hairy stuff.
+@end menu
+
+@node Simple Usage, Advanced Usage, Examples, Examples
+@section Simple Usage
+
+@itemize @bullet
+@item
+Say you want to download a @sc{url}. Just type:
+
+@example
+wget http://fly.srk.fer.hr/
+@end example
+
+@item
+But what will happen if the connection is slow, and the file is lengthy?
+The connection will probably fail before the whole file is
+retrieved---possibly more than once. In this case, Wget will keep
+trying to get the file until it either retrieves the whole of it or
+exceeds the default number of retries (20). It is easy to change the
+number of tries to 45, to ensure that the whole file will arrive safely:
+
+@example
+wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
+@end example
+
+@item
+Now let's leave Wget to work in the background, and write its progress
+to log file @file{log}. It is tiring to type @samp{--tries}, so we
+shall use @samp{-t}.
+
+@example
+wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
+@end example
+
+The ampersand at the end of the line makes sure that Wget works in the
+background. To remove the limit on the number of retries, use @samp{-t inf}.
+
+@item
+Using @sc{ftp} is just as simple. Wget will take care of the login and
+password.
+
+@example
+wget ftp://gnjilux.srk.fer.hr/welcome.msg
+@end example
+
+@item
+If you specify a directory, Wget will retrieve the directory listing,
+parse it and convert it to @sc{html}. Try:
+
+@example
+wget ftp://ftp.gnu.org/pub/gnu/
+links index.html
+@end example
+@end itemize
+
+@node Advanced Usage, Very Advanced Usage, Simple Usage, Examples
+@section Advanced Usage
+
+@itemize @bullet
+@item
+You have a file that contains the URLs you want to download? Use the
+@samp{-i} switch:
+
+@example
+wget -i @var{file}
+@end example
+
+If you specify @samp{-} as the file name, the @sc{url}s will be read from
+standard input.
+
+@item
+Create a five-level-deep mirror image of the GNU web site (five levels
+is Wget's default recursion depth), with the same directory structure
+the original has, with only one try per document, saving the log of
+the activities to @file{gnulog}:
+
+@example
+wget -r http://www.gnu.org/ -o gnulog
+@end example
+
+@item
+The same as the above, but convert the links in the downloaded files to
+point to local files, so you can view the documents off-line:
+
+@example
+wget --convert-links -r http://www.gnu.org/ -o gnulog
+@end example
+
+@item
+Retrieve only one @sc{html} page, but make sure that all the elements needed
+for the page to be displayed, such as inline images and external style
+sheets, are also downloaded. Also make sure the downloaded page
+references the downloaded links.
+
+@example
+wget -p --convert-links http://www.server.com/dir/page.html
+@end example
+
+The @sc{html} page will be saved to @file{www.server.com/dir/page.html}, and
+the images, stylesheets, etc., somewhere under @file{www.server.com/},
+depending on where they were on the remote server.
+
+@item
+The same as the above, but without the @file{www.server.com/} directory.
+In fact, I don't want to have all those random server directories
+anyway---just save @emph{all} those files under a @file{download/}
+subdirectory of the current directory.
+
+@example
+wget -p --convert-links -nH -nd -Pdownload \
+ http://www.server.com/dir/page.html
+@end example
+
+@item
+Retrieve the index.html of @samp{www.lycos.com}, showing the original
+server headers:
+
+@example
+wget -S http://www.lycos.com/
+@end example
+
+@item
+Save the server headers with the file, perhaps for post-processing.
+
+@example
+wget --save-headers http://www.lycos.com/
+more index.html
+@end example
+
+@item
+Retrieve the first two levels of @samp{wuarchive.wustl.edu}, saving them
+to @file{/tmp}.
+
+@example
+wget -r -l2 -P/tmp ftp://wuarchive.wustl.edu/
+@end example
+
+@item
+You want to download all the @sc{gif}s from a directory on an @sc{http}
+server. You tried @samp{wget http://www.server.com/dir/*.gif}, but that
+didn't work because @sc{http} retrieval does not support globbing. In
+that case, use:
+
+@example
+wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
+@end example
+
+More verbose, but the effect is the same. @samp{-r -l1} means to
+retrieve recursively (@pxref{Recursive Download}), with maximum depth
+of 1. @samp{--no-parent} means that references to the parent directory
+are ignored (@pxref{Directory-Based Limits}), and @samp{-A.gif} means to
+download only the @sc{gif} files. @samp{-A "*.gif"} would have worked
+too.
+
+@item
+Suppose you were in the middle of downloading when Wget was
+interrupted. Now you do not want to clobber the files already present.
+The command would be:
+
+@example
+wget -nc -r http://www.gnu.org/
+@end example
+
+@item
+If you want to encode your own username and password to @sc{http} or
+@sc{ftp}, use the appropriate @sc{url} syntax (@pxref{URL Format}).
+
+@example
+wget ftp://hniksic:mypassword@@unix.server.com/.emacs
+@end example
+
+Note, however, that this usage is not advisable on multi-user systems
+because it reveals your password to anyone who looks at the output of
+@code{ps}.
+
+@cindex redirecting output
+@item
+You would like the output documents to go to standard output instead of
+to files?
+
+@example
+wget -O - http://jagor.srce.hr/ http://www.srce.hr/
+@end example
+
+You can also combine the two options and make pipelines to retrieve the
+documents from remote hotlists:
+
+@example
+wget -O - http://cool.list.com/ | wget --force-html -i -
+@end example
+@end itemize
+
+@node Very Advanced Usage, , Advanced Usage, Examples
+@section Very Advanced Usage
+
+@cindex mirroring
+@itemize @bullet
+@item
+If you wish Wget to keep a mirror of a page (or @sc{ftp}
+subdirectories), use @samp{--mirror} (@samp{-m}), which is the shorthand
+for @samp{-r -l inf -N}. You can put Wget in the crontab file asking it
+to recheck a site each Sunday:
+
+@example
+crontab
+0 0 * * 0 wget --mirror http://www.gnu.org/ -o /home/me/weeklog
+@end example
+
+@item
+In addition to the above, you want the links to be converted for local
+viewing. But, after having read this manual, you know that link
+conversion doesn't play well with timestamping, so you also want Wget to
+back up the original @sc{html} files before the conversion. The Wget
+invocation would look like this:
+
+@example
+wget --mirror --convert-links --backup-converted \
+ http://www.gnu.org/ -o /home/me/weeklog
+@end example
+
+@item
+But you've also noticed that local viewing doesn't work all that well
+when @sc{html} files are saved under extensions other than @samp{.html},
+perhaps because they were served as @file{index.cgi}. So you'd like
+Wget to rename all the files served with content-type @samp{text/html}
+or @samp{application/xhtml+xml} to @file{@var{name}.html}.
+
+@example
+wget --mirror --convert-links --backup-converted \
+ --html-extension -o /home/me/weeklog \
+ http://www.gnu.org/
+@end example
+
+Or, with less typing:
+
+@example
+wget -m -k -K -E http://www.gnu.org/ -o /home/me/weeklog
+@end example
+@end itemize
+@c man end
+
+@node Various, Appendices, Examples, Top
+@chapter Various
+@cindex various
+
+This chapter contains all the stuff that could not fit anywhere else.
+
+@menu
+* Proxies:: Support for proxy servers.
+* Distribution:: Getting the latest version.
+* Web Site:: GNU Wget's presence on the World Wide Web.
+* Mailing Lists:: Wget mailing list for announcements and discussion.
+* Internet Relay Chat:: Wget's presence on IRC.
+* Reporting Bugs:: How and where to report bugs.
+* Portability:: The systems Wget works on.
+* Signals:: Signal-handling performed by Wget.
+@end menu
+
+@node Proxies, Distribution, Various, Various
+@section Proxies
+@cindex proxies
+
+@dfn{Proxies} are special-purpose @sc{http} servers designed to transfer
+data from remote servers to local clients. One typical use of proxies
+is lightening network load for users behind a slow connection. This is
+achieved by channeling all @sc{http} and @sc{ftp} requests through the
+proxy, which caches the transferred data. When a cached resource is
+requested again, the proxy will return the data from its cache. Another
+use for proxies is for companies that separate (for security reasons)
+their internal networks from the rest of the Internet. In order to
+obtain information from the Web, their users connect and retrieve remote
+data using an authorized proxy.
+
+Wget supports proxies for both @sc{http} and @sc{ftp} retrievals. The
+standard way to specify proxy location, which Wget recognizes, is using
+the following environment variables:
+
+@table @code
+@item http_proxy
+@itemx https_proxy
+If set, the @code{http_proxy} and @code{https_proxy} variables should
+contain the @sc{url}s of the proxies for @sc{http} and @sc{https}
+connections respectively.
+
+@item ftp_proxy
+This variable should contain the @sc{url} of the proxy for @sc{ftp}
+connections. It is quite common that @code{http_proxy} and
+@code{ftp_proxy} are set to the same @sc{url}.
+
+@item no_proxy
+This variable should contain a comma-separated list of domain extensions
+for which the proxy should @emph{not} be used. For instance, if the
+value of @code{no_proxy} is @samp{.mit.edu}, the proxy will not be used
+to retrieve documents from MIT.
+@end table
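+
+For example, in a Bourne-style shell one might set (the proxy @sc{url}
+is illustrative):
+
+@example
+export http_proxy=http://proxy.company.com:8001/
+export ftp_proxy=$http_proxy
+export no_proxy=.mit.edu
+@end example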
+
+In addition to the environment variables, proxy location and settings
+may be specified from within Wget itself.
+
+@table @samp
+@item --no-proxy
+@itemx proxy = on/off
+This option and the corresponding command may be used to suppress the
+use of proxy, even if the appropriate environment variables are set.
+
+@item http_proxy = @var{URL}
+@itemx https_proxy = @var{URL}
+@itemx ftp_proxy = @var{URL}
+@itemx no_proxy = @var{string}
+These startup file variables allow you to override the proxy settings
+specified by the environment.
+@end table
+
+Some proxy servers require authorization to enable you to use them. The
+authorization consists of @dfn{username} and @dfn{password}, which must
+be sent by Wget. As with @sc{http} authorization, several
+authentication schemes exist. For proxy authorization only the
+@code{Basic} authentication scheme is currently implemented.
+
+You may specify your username and password either through the proxy
+@sc{url} or through the command-line options. Assuming that the
+company's proxy is located at @samp{proxy.company.com} at port 8001, a
+proxy @sc{url} location containing authorization data might look like
+this:
+
+@example
+http://hniksic:mypassword@@proxy.company.com:8001/
+@end example
+
+Alternatively, you may use the @samp{proxy-user} and
+@samp{proxy-password} options, and the equivalent @file{.wgetrc}
+settings @code{proxy_user} and @code{proxy_password} to set the proxy
+username and password.
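+
+For instance, using the placeholder credentials from the example above:
+
+@example
+wget --proxy-user=hniksic --proxy-password=mypassword \
+     http://www.server.com/
+@end example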
+
+@node Distribution, Web Site, Proxies, Various
+@section Distribution
+@cindex latest version
+
+Like all GNU utilities, the latest version of Wget can be found at the
+master GNU archive site ftp.gnu.org, and its mirrors. For example,
+Wget @value{VERSION} can be found at
+@url{ftp://ftp.gnu.org/pub/gnu/wget/wget-@value{VERSION}.tar.gz}.
+
+@node Web Site, Mailing Lists, Distribution, Various
+@section Web Site
+@cindex web site
+
+The official web site for GNU Wget is at
+@url{http://www.gnu.org/software/wget/}. However, most useful
+information resides at ``The Wget Wgiki'',
+@url{http://wget.addictivecode.org/}.
+
+@node Mailing Lists, Internet Relay Chat, Web Site, Various
+@section Mailing Lists
+@cindex mailing list
+@cindex list
+
+@unnumberedsubsec Primary List
+
+The primary mailing list for discussion, bug reports, or questions
+about GNU Wget is at @email{bug-wget@@gnu.org}. To subscribe, send an
+email to @email{bug-wget-join@@gnu.org}, or visit
+@url{http://lists.gnu.org/mailman/listinfo/bug-wget}.
+
+You do not need to subscribe to send a message to the list; however,
+please note that unsubscribed messages are moderated, and may take a
+while before they hit the list---@strong{usually around a day}. If
+you want your message to show up immediately, please subscribe to the
+list before posting. Archives for the list may be found at
+@url{http://lists.gnu.org/pipermail/bug-wget/}.
+
+An NNTP/Usenettish gateway is also available via
+@uref{http://gmane.org/about.php,Gmane}. You can see the Gmane
+archives at
+@url{http://news.gmane.org/gmane.comp.web.wget.general}. Note that the
+Gmane archives conveniently include messages from both the current
+list, and the previous one. Messages also show up in the Gmane
+archives sooner than they do at @url{lists.gnu.org}.
+
+@unnumberedsubsec Bug Notices List
+
+Additionally, there is the @email{wget-notify@@addictivecode.org} mailing
+list. This is a non-discussion list that receives bug report
+notifications from the bug-tracker. To subscribe to this list,
+send an email to @email{wget-notify-join@@addictivecode.org},
+or visit @url{http://addictivecode.org/mailman/listinfo/wget-notify}.
+
+@unnumberedsubsec Obsolete Lists
+
+Previously, the mailing list @email{wget@@sunsite.dk} was used as the
+main discussion list, and another list,
+@email{wget-patches@@sunsite.dk}, was used for submitting and
+discussing patches to GNU Wget.
+
+Messages from @email{wget@@sunsite.dk} are archived at
+@itemize @tie{}
+@item
+@url{http://www.mail-archive.com/wget%40sunsite.dk/} and at
+@item
+@url{http://news.gmane.org/gmane.comp.web.wget.general} (which also
+continues to archive the current list, @email{bug-wget@@gnu.org}).
+@end itemize
+
+Messages from @email{wget-patches@@sunsite.dk} are archived at
+@itemize @tie{}
+@item
+@url{http://news.gmane.org/gmane.comp.web.wget.patches}.
+@end itemize
+
+@node Internet Relay Chat, Reporting Bugs, Mailing Lists, Various
+@section Internet Relay Chat
+@cindex Internet Relay Chat
+@cindex IRC
+@cindex #wget
+
+In addition to the mailing lists, we also have a support channel set up
+via IRC at @code{irc.freenode.org}, @code{#wget}. Come check it out!
+
+@node Reporting Bugs, Portability, Internet Relay Chat, Various
+@section Reporting Bugs
+@cindex bugs
+@cindex reporting bugs
+@cindex bug reports
+
+@c man begin BUGS
+You are welcome to submit bug reports via the GNU Wget bug tracker (see
+@url{http://wget.addictivecode.org/BugTracker}).
+
+Before actually submitting a bug report, please try to follow a few
+simple guidelines.
+
+@enumerate
+@item
+Please try to ascertain that the behavior you see really is a bug. If
+Wget crashes, it's a bug. If Wget does not behave as documented,
+it's a bug. If things work strangely, but you are not sure about the way
+they are supposed to work, it might well be a bug, but you might want to
+double-check the documentation and the mailing lists (@pxref{Mailing
+Lists}).
+
+@item
+Try to repeat the bug in as simple circumstances as possible. E.g., if
+Wget crashes while downloading @samp{wget -rl0 -kKE -t5 --no-proxy
+http://yoyodyne.com -o /tmp/log}, you should try to see if the crash is
+repeatable, and if it will occur with a simpler set of options. You might
+even try to start the download at the page where the crash occurred to
+see if that page somehow triggered the crash.
+
+Also, while I will probably be interested to know the contents of your
+@file{.wgetrc} file, just dumping it into the debug message is probably
+a bad idea. Instead, you should first try to see if the bug repeats
+with @file{.wgetrc} moved out of the way. Only if it turns out that
+@file{.wgetrc} settings affect the bug, mail me the relevant parts of
+the file.
+
+@item
+Please start Wget with @samp{-d} option and send us the resulting
+output (or relevant parts thereof). If Wget was compiled without
+debug support, recompile it---it is @emph{much} easier to trace bugs
+with debug support on.
+
+Note: please make sure to remove any potentially sensitive information
+from the debug log before sending it to the bug address. The
+@code{-d} won't go out of its way to collect sensitive information,
+but the log @emph{will} contain a fairly complete transcript of Wget's
+communication with the server, which may include passwords and pieces
+of downloaded data. Since the bug address is publicly archived, you
+may assume that all bug reports are visible to the public.
+
+@item
+If Wget has crashed, try to run it in a debugger, e.g. @code{gdb `which
+wget` core} and type @code{where} to get the backtrace. This may not
+work if the system administrator has disabled core files, but it is
+safe to try.
+@end enumerate
+@c man end
+
+@node Portability, Signals, Reporting Bugs, Various
+@section Portability
+@cindex portability
+@cindex operating systems
+
+Like all GNU software, Wget works on the GNU system. However, since it
+uses GNU Autoconf for building and configuring, and mostly avoids using
+``special'' features of any particular Unix, it should compile (and
+work) on all common Unix flavors.
+
+Various Wget versions have been compiled and tested under many kinds of
+Unix systems, including GNU/Linux, Solaris, SunOS 4.x, Mac OS X, OSF
+(aka Digital Unix or Tru64), Ultrix, *BSD, IRIX, AIX, and others. Some
+of those systems are no longer in widespread use and may not be able to
+support recent versions of Wget. If Wget fails to compile on your
+system, we would like to know about it.
+
+Thanks to kind contributors, this version of Wget compiles and works
+on 32-bit Microsoft Windows platforms. It has been compiled
+successfully using MS Visual C++ 6.0, Watcom, Borland C, and GCC
+compilers. Naturally, it lacks some features available on
+Unix, but it should work as a substitute for people stuck with
+Windows. Note that Windows-specific portions of Wget are not
+guaranteed to be supported in the future, although this has been the
+case in practice for many years now. All questions and problems in
+Windows usage should be reported to the Wget mailing list at
+@email{wget@@sunsite.dk} where the volunteers who maintain the
+Windows-related features might look at them.
+
+Support for building on MS-DOS via DJGPP has been contributed by Gisle
+Vanem; a port to VMS is maintained by Steven Schweda, and is available
+at @url{http://antinode.org/}.
+
+@node Signals, , Portability, Various
+@section Signals
+@cindex signal handling
+@cindex hangup
+
+Since the purpose of Wget is background work, it catches the hangup
+signal (@code{SIGHUP}) rather than dying. If the output was going to
+standard output, it will be redirected to a file named
+@file{wget-log}; otherwise, @code{SIGHUP} is simply ignored. This is
+convenient when you wish to redirect the output of Wget after having
+started it.
+
+@example
+$ wget http://www.gnus.org/dist/gnus.tar.gz &
+...
+$ kill -HUP %%
+SIGHUP received, redirecting output to `wget-log'.
+@end example
+
+Other than that, Wget will not try to interfere with signals in any way.
+@kbd{C-c}, @code{kill -TERM} and @code{kill -KILL} should kill it alike.
+
+@node Appendices, Copying this manual, Various, Top
+@chapter Appendices
+
+This chapter contains some references I consider useful.
+
+@menu
+* Robot Exclusion:: Wget's support for RES.
+* Security Considerations:: Security with Wget.
+* Contributors:: People who helped.
+@end menu
+
+@node Robot Exclusion, Security Considerations, Appendices, Appendices
+@section Robot Exclusion
+@cindex robot exclusion
+@cindex robots.txt
+@cindex server maintenance
+
+It is extremely easy to make Wget wander aimlessly around a web site,
+sucking all the available data in the process. @samp{wget -r @var{site}},
+and you're set. Great? Not for the server admin.
+
+As long as Wget is only retrieving static pages, and doing it at a
+reasonable rate (see the @samp{--wait} option), there's not much of a
+problem. The trouble is that Wget can't tell the difference between the
+smallest static page and the most demanding CGI. A site I know has a
+section handled by a CGI Perl script that converts Info files to @sc{html} on
+the fly. The script is slow, but works well enough for human users
+viewing an occasional Info file. However, when someone's recursive Wget
+download stumbles upon the index page that links to all the Info files
+through the script, the system is brought to its knees without providing
+anything useful to the user. (This task of converting Info files could
+be done locally; access to Info documentation for all installed GNU
+software on a system is available from the @code{info} command.)
+
+To avoid this kind of accident, as well as to preserve privacy for
+documents that need to be protected from well-behaved robots, the
+concept of @dfn{robot exclusion} was invented. The idea is that
+the server administrators and document authors can specify which
+portions of the site they wish to protect from robots and which
+they will permit robots to access.
+
+The most popular mechanism, and the @i{de facto} standard supported by
+all the major robots, is the ``Robots Exclusion Standard'' (RES) written
+by Martijn Koster et al. in 1994. It specifies the format of a text
+file containing directives that instruct the robots which URL paths to
+avoid. To be found by the robots, the specifications must be placed in
+@file{/robots.txt} in the server root, which the robots are expected to
+download and parse.
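+
+As a minimal illustration, a @file{/robots.txt} asking all robots to
+stay away from a @file{/cgi-bin/} directory (the path is only an
+example) would look like this:
+
+@example
+User-agent: *
+Disallow: /cgi-bin/
+@end example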
+
+Although Wget is not a web robot in the strictest sense of the word, it
+can download large parts of a site without the user having to request
+each individual page. Because of that, Wget honors RES when
+downloading recursively. For instance, when you issue:
+
+@example
+wget -r http://www.server.com/
+@end example
+
+First, the index of @samp{www.server.com} will be downloaded. If Wget
+finds that it wants to download more documents from that server, it will
+request @samp{http://www.server.com/robots.txt} and, if found, use it
+for further downloads. @file{robots.txt} is loaded only once per
+server.
+
+Until version 1.8, Wget supported the first version of the standard,
+written by Martijn Koster in 1994 and available at
+@url{http://www.robotstxt.org/wc/norobots.html}. As of version 1.8,
+Wget has supported the additional directives specified in the internet
+draft @samp{<draft-koster-robots-00.txt>} titled ``A Method for Web
+Robots Control''. The draft, which as far as I know never made it to
+an @sc{rfc}, is available at
+@url{http://www.robotstxt.org/wc/norobots-rfc.txt}.
+
+This manual no longer includes the text of the Robot Exclusion Standard.
+
+The second, lesser-known mechanism enables the author of an individual
+document to specify whether they want the links from the file to be
+followed by a robot. This is achieved using the @code{META} tag, like
+this:
+
+@example
+<meta name="robots" content="nofollow">
+@end example
+
+This is explained in some detail at
+@url{http://www.robotstxt.org/wc/meta-user.html}. Wget supports this
+method of robot exclusion in addition to the usual @file{/robots.txt}
+exclusion.
+
+If you know what you are doing and really really wish to turn off the
+robot exclusion, set the @code{robots} variable to @samp{off} in your
+@file{.wgetrc}. You can achieve the same effect from the command line
+using the @code{-e} switch, e.g. @samp{wget -e robots=off @var{url}...}.
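+
+For example (reusing a @sc{url} from earlier in this section):
+
+@example
+wget -e robots=off -r http://www.server.com/
+@end example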
+
+@node Security Considerations, Contributors, Robot Exclusion, Appendices
+@section Security Considerations
+@cindex security
+
+When using Wget, you must be aware that it sends unencrypted passwords
+through the network, which may present a security problem. Here are the
+main issues, and some solutions.
+
+@enumerate
+@item
+The passwords on the command line are visible using @code{ps}. The best
+way around it is to use @code{wget -i -} and feed the @sc{url}s to
+Wget's standard input, each on a separate line, terminated by @kbd{C-d}
+(see the example after this list).
+Another workaround is to use @file{.netrc} to store passwords; however,
+storing unencrypted passwords is also considered a security risk.
+
+@item
+Using the insecure @dfn{basic} authentication scheme, unencrypted
+passwords are transmitted through the network routers and gateways.
+
+@item
+The @sc{ftp} passwords are also in no way encrypted. There is no good
+solution for this at the moment.
+
+@item
+Although the ``normal'' output of Wget tries to hide the passwords,
+debugging logs show them, in all forms. This problem is avoided by
+being careful when you send debug logs (yes, even when you send them to
+me).
+@end enumerate
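+
+For example, the workaround from the first item looks like this in
+practice (the @sc{url} is a placeholder; @kbd{C-d} ends the input):
+
+@example
+$ wget -i -
+ftp://hniksic:mypassword@@unix.server.com/.emacs
+@end example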
+
+@node Contributors, , Security Considerations, Appendices
+@section Contributors
+@cindex contributors
+
+@iftex
+GNU Wget was written by Hrvoje Nik@v{s}i@'{c} @email{hniksic@@xemacs.org},
+@end iftex
+@ifnottex
+GNU Wget was written by Hrvoje Niksic @email{hniksic@@xemacs.org}.
+@end ifnottex
+
+However, the development of Wget could never have gone as far as it has, were
+it not for the help of many people, either with bug reports, feature proposals,
+patches, or letters saying ``Thanks!''.
+
+Special thanks goes to the following people (no particular order):
+
+@itemize @bullet
+@item Dan Harkless---contributed a lot of code and documentation of
+extremely high quality, as well as the @code{--page-requisites} and
+related options. He was the principal maintainer for some time and
+released Wget 1.6.
+
+@item Ian Abbott---contributed bug fixes, Windows-related fixes, and
+provided a prototype implementation of the breadth-first recursive
+download. Co-maintained Wget during the 1.8 release cycle.
+
+@item
+The dotsrc.org crew, in particular Karsten Thygesen---donated system
+resources such as the mailing list, web space, @sc{ftp} space, and
+version control repositories, along with a lot of time to make these
+actually work. Christian Reiniger was of invaluable help with setting
+up Subversion.
+
+@item
+Heiko Herold---provided high-quality Windows builds and contributed
+bug and build reports for many years.
+
+@item
+Shawn McHorse---bug reports and patches.
+
+@item
+Kaveh R. Ghazi---on-the-fly @code{ansi2knr}-ization. Lots of
+portability fixes.
+
+@item
+Gordon Matzigkeit---@file{.netrc} support.
+
+@item
+@iftex
+Zlatko @v{C}alu@v{s}i@'{c}, Tomislav Vujec and Dra@v{z}en
+Ka@v{c}ar---feature suggestions and ``philosophical'' discussions.
+@end iftex
+@ifnottex
+Zlatko Calusic, Tomislav Vujec and Drazen Kacar---feature suggestions
+and ``philosophical'' discussions.
+@end ifnottex
+
+@item
+Darko Budor---initial port to Windows.
+
+@item
+Antonio Rosella---help and suggestions, plus the initial Italian
+translation.
+
+@item
+@iftex
+Tomislav Petrovi@'{c}, Mario Miko@v{c}evi@'{c}---many bug reports and
+suggestions.
+@end iftex
+@ifnottex
+Tomislav Petrovic, Mario Mikocevic---many bug reports and suggestions.
+@end ifnottex
+
+@item
+@iftex
+Fran@,{c}ois Pinard---many thorough bug reports and discussions.
+@end iftex
+@ifnottex
+Francois Pinard---many thorough bug reports and discussions.
+@end ifnottex
+
+@item
+Karl Eichwalder---lots of help with internationalization, Makefile
+layout and many other things.
+
+@item
+Junio Hamano---donated support for Opie and @sc{http} @code{Digest}
+authentication.
+
+@item
+Mauro Tortonesi---improved IPv6 support, adding support for dual
+family systems. Refactored and enhanced FTP IPv6 code. Maintained GNU
+Wget from 2004--2007.
+
+@item
+Christopher G.@: Lewis---maintenance of the Windows version of GNU Wget.
+
+@item
+Gisle Vanem---many helpful patches and improvements, especially for
+Windows and MS-DOS support.
+
+@item
+Ralf Wildenhues---contributed patches to convert Wget to use Automake as
+part of its build process, and various bugfixes.
+
+@item
+Steven Schubiger---Many helpful patches, bugfixes and improvements.
+Notably, conversion of Wget to use the Gnulib quotes and quoteargs
+modules, and the addition of password prompts at the console, via the
+Gnulib getpasswd-gnu module.
+
+@item
+Ted Mielczarek---donated support for CSS.
+
+@item
+Saint Xavier---Support for IRIs (RFC 3987).
+
+@item
+People who provided donations for development---including Brian Gough.
+@end itemize
+
+The following people have provided patches, bug/build reports, useful
+suggestions, beta testing services, fan mail and all the other things
+that make maintenance so much fun:
+
+Tim Adam,
+Adrian Aichner,
+Martin Baehr,
+Dieter Baron,
+Roger Beeman,
+Dan Berger,
+T.@: Bharath,
+Christian Biere,
+Paul Bludov,
+Daniel Bodea,
+Mark Boyns,
+John Burden,
+Julien Buty,
+Wanderlei Cavassin,
+Gilles Cedoc,
+Tim Charron,
+Noel Cragg,
+@iftex
+Kristijan @v{C}onka@v{s},
+@end iftex
+@ifnottex
+Kristijan Conkas,
+@end ifnottex
+John Daily,
+Andreas Damm,
+Ahmon Dancy,
+Andrew Davison,
+Bertrand Demiddelaer,
+Alexander Dergachev,
+Andrew Deryabin,
+Ulrich Drepper,
+Marc Duponcheel,
+@iftex
+Damir D@v{z}eko,
+@end iftex
+@ifnottex
+Damir Dzeko,
+@end ifnottex
+Alan Eldridge,
+Hans-Andreas Engel,
+@iftex
+Aleksandar Erkalovi@'{c},
+@end iftex
+@ifnottex
+Aleksandar Erkalovic,
+@end ifnottex
+Andy Eskilsson,
+@iftex
+Jo@~{a}o Ferreira,
+@end iftex
+@ifnottex
+Joao Ferreira,
+@end ifnottex
+Christian Fraenkel,
+David Fritz,
+Mike Frysinger,
+Charles C.@: Fu,
+FUJISHIMA Satsuki,
+Masashi Fujita,
+Howard Gayle,
+Marcel Gerrits,
+Lemble Gregory,
+Hans Grobler,
+Alain Guibert,
+Mathieu Guillaume,
+Aaron Hawley,
+Jochen Hein,
+Karl Heuer,
+Madhusudan Hosaagrahara,
+HIROSE Masaaki,
+Ulf Harnhammar,
+Gregor Hoffleit,
+Erik Magnus Hulthen,
+Richard Huveneers,
+Jonas Jensen,
+Larry Jones,
+Simon Josefsson,
+@iftex
+Mario Juri@'{c},
+@end iftex
+@ifnottex
+Mario Juric,
+@end ifnottex
+@iftex
+Hack Kampbj@o rn,
+@end iftex
+@ifnottex
+Hack Kampbjorn,
+@end ifnottex
+Const Kaplinsky,
+@iftex
+Goran Kezunovi@'{c},
+@end iftex
+@ifnottex
+Goran Kezunovic,
+@end ifnottex
+Igor Khristophorov,
+Robert Kleine,
+KOJIMA Haime,
+Fila Kolodny,
+Alexander Kourakos,
+Martin Kraemer,
+Sami Krank,
+Jay Krell,
+@tex
+$\Sigma\acute{\iota}\mu o\varsigma\;
+\Xi\varepsilon\nu\iota\tau\acute{\epsilon}\lambda\lambda\eta\varsigma$
+(Simos KSenitellis),
+@end tex
+@ifnottex
+Simos KSenitellis,
+@end ifnottex
+Christian Lackas,
+Hrvoje Lacko,
+Daniel S.@: Lewart,
+@iftex
+Nicol@'{a}s Lichtmeier,
+@end iftex
+@ifnottex
+Nicolas Lichtmeier,
+@end ifnottex
+Dave Love,
+Alexander V.@: Lukyanov,
+@iftex
+Thomas Lu@ss{}nig,
+@end iftex
+@ifnottex
+Thomas Lussnig,
+@end ifnottex
+Andre Majorel,
+Aurelien Marchand,
+Matthew J.@: Mellon,
+Jordan Mendelson,
+Ted Mielczarek,
+Robert Millan,
+Lin Zhe Min,
+Jan Minar,
+Tim Mooney,
+Keith Moore,
+Adam D.@: Moss,
+Simon Munton,
+Charlie Negyesi,
+R.@: K.@: Owen,
+Jim Paris,
+Kenny Parnell,
+Leonid Petrov,
+Simone Piunno,
+Andrew Pollock,
+Steve Pothier,
+@iftex
+Jan P@v{r}ikryl,
+@end iftex
+@ifnottex
+Jan Prikryl,
+@end ifnottex
+Marin Purgar,
+@iftex
+Csaba R@'{a}duly,
+@end iftex
+@ifnottex
+Csaba Raduly,
+@end ifnottex
+Keith Refson,
+Bill Richardson,
+Tyler Riddle,
+Tobias Ringstrom,
+Jochen Roderburg,
+@c Texinfo doesn't grok @'{@i}, so we have to use TeX itself.
+@tex
+Juan Jos\'{e} Rodr\'{\i}guez,
+@end tex
+@ifnottex
+Juan Jose Rodriguez,
+@end ifnottex
+Maciej W.@: Rozycki,
+Edward J.@: Sabol,
+Heinz Salzmann,
+Robert Schmidt,
+Nicolas Schodet,
+Benno Schulenberg,
+Andreas Schwab,
+Steven M.@: Schweda,
+Chris Seawood,
+Pranab Shenoy,
+Dennis Smit,
+Toomas Soome,
+Tage Stabell-Kulo,
+Philip Stadermann,
+Daniel Stenberg,
+Sven Sternberger,
+Markus Strasser,
+John Summerfield,
+Szakacsits Szabolcs,
+Mike Thomas,
+Philipp Thomas,
+Mauro Tortonesi,
+Dave Turner,
+Gisle Vanem,
+Rabin Vincent,
+Russell Vincent,
+@iftex
+@v{Z}eljko Vrba,
+@end iftex
+@ifnottex
+Zeljko Vrba,
+@end ifnottex
+Charles G Waldman,
+Douglas E.@: Wegscheid,
+Ralf Wildenhues,
+Joshua David Williams,
+Benjamin Wolsey,
+Saint Xavier,
+YAMAZAKI Makoto,
+Jasmin Zainul,
+@iftex
+Bojan @v{Z}drnja,
+@end iftex
+@ifnottex
+Bojan Zdrnja,
+@end ifnottex
+Kristijan Zimmer,
+Xin Zou.
+
+Apologies to all whom I accidentally left out, and many thanks to all the
+subscribers of the Wget mailing list.
+
+@node Copying this manual, Concept Index, Appendices, Top
+@appendix Copying this manual
+
+@menu
+* GNU Free Documentation License:: License for copying this manual.
+@end menu
+
+@node GNU Free Documentation License, , Copying this manual, Copying this manual
+@appendixsec GNU Free Documentation License
+@cindex FDL, GNU Free Documentation License
+
+@include fdl.texi
+
+
+@node Concept Index, , Copying this manual, Top
+@unnumbered Concept Index
+@printindex cp
+
+@contents
+
+@bye