
Wget Command in Linux with Examples


Wget is a non-interactive network downloader that can download files from a server even if the user isn't signed in and can function in the background without interfering with the current process.

GNU wget is a free program that allows you to download files from the Internet without having to interact with them. It can retrieve data through HTTP, HTTPS, and FTP protocols, as well as HTTP proxies.

wget is a non-interactive program that can run in the background even if the user is not signed in. This enables you to begin a retrieval and then detach from the system, allowing wget to complete the task. Most Web browsers, on the other hand, need continual human presence, which may be a significant stumbling block when sending large amounts of data.

wget can construct local replicas of external web sites by following links in HTML and XHTML pages and completely duplicating the original site's directory structure. Recursive downloading is a term used to describe this process. Wget follows the Robot Exclusion Standard (/robots.txt) when doing so. wget may be told to convert links in downloaded HTML pages to local files so they can be viewed offline.
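As a sketch of how recursive downloading is typically invoked (the URL below is only a placeholder), the relevant options can be combined like this:

```shell
# Mirror part of a site for offline viewing (example.com is a placeholder).
# -r  : recursive retrieval (follow links)
# -k  : convert links in downloaded pages so they work locally
# -p  : also fetch images and stylesheets needed to display each page
# -np : never ascend to the parent directory
wget -r -k -p -np https://example.com/docs/
```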

Wget was built to perform reliably across sluggish or unreliable network connections; if a download fails due to a network fault, it will keep retrying until the whole file is downloaded. It will ask the server to resume the download from where it left off if the host allows restarting.

The syntax is as follows:

wget [option] [URL]

1. To download a single web page or file:

wget [URL]

2. To download a file in the background:

wget -b

3. To write log messages to a file:

wget -o /path/filename.txt

4. To resume a partially downloaded file:

wget -c

5. To attempt a download a given number of times:

wget --tries=10


1. -V / --version: This shows which version of wget is installed on your system. (Note that lowercase -v is --verbose, which turns on verbose output.)


$wget -V

2. -h / --help:

This prints a help message describing all of the command-line options that wget accepts.


$wget -h

3. -o logfile: This option directs all of wget's messages to the logfile supplied with the option; after the procedure is finished, all messages generated during the download are available in that log file. If no log file is supplied, the messages go to the default log file, wget-log.


$wget -o logfile [URL]

4. -b / --background: This option sends the process to the background as soon as it starts, allowing other commands to run in the meantime. If the -o option is not supplied, output is automatically written to wget-log.
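As an illustration of combining -b with -o (the URL and filenames here are placeholders), you can start a background download with a named log and then follow its progress:

```shell
# Start a large download in the background, writing messages to download.log
wget -b -o download.log https://example.com/large.iso

# Watch the log while the download runs
tail -f download.log
```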


5. -a: This option appends the output messages to an existing log file instead of overwriting it. With the -o option the log file is overwritten, but with this option the previous command's log is preserved and the current log is written after it.


$wget -a logfile [URL]

6. -i: This option reads URLs from a file. If - is specified as the file, URLs are read from the standard input. No URLs are required on the command line if this option is used. If there are URLs both on the command line and in an input file, those on the command line are fetched first. The file does not need to be an HTML document; a plain list of URLs, one per line, is enough.


$wget -i inputfile
$wget -i inputfile [URL]
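For illustration, the input file for -i is just a plain list of URLs, one per line. A minimal sketch (the URLs are placeholders):

```shell
# Build a URL list, one address per line (placeholders shown)
cat > inputfile <<'EOF'
https://example.com/file1.iso
https://example.com/file2.tar.gz
EOF

# Fetch every URL listed in the file
wget -i inputfile
```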

7. -t / --tries: This option limits the number of retries to the given count. Use 0 or inf for unlimited retrying. The default is to retry 20 times, with the exception of fatal errors such as "connection refused" or "not found" (404), which are not retried.


$wget -t number [URL]

8. -c: This option continues a partially downloaded file. Resuming only works if the server supports it; if it does not, the download starts over from the beginning.


$wget -c [URL]

9. -w: The system will wait the given number of seconds between retrievals when this option is applied. This option is recommended, as it lightens the server load by spacing out the requests. Instead of seconds, you may specify the time in minutes with the m suffix, hours with the h suffix, or days with the d suffix. If the network or the target host is down, specifying a large value lets wget wait long enough for the fault to be fixed before retrying.


$wget -w seconds [URL]
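The time suffixes mentioned above could be used like this (the input file and retry count are illustrative):

```shell
# Wait 30 seconds between successive retrievals
wget -w 30 -i inputfile

# Wait 2 minutes between retrievals and retry indefinitely
wget -w 2m -t 0 -i inputfile
```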

10. -r: This option turns on recursive retrieval: wget follows the links found in the supplied page and downloads the linked documents as well, by default up to five levels deep.


$wget -r [URL]