NAME

varnishd - HTTP accelerator daemon

SYNOPSIS

varnishd [-a address[:port]] [-b host[:port]] [-C] [-d] [-f config]
[-F] [-g group] [-h type[,options]] [-i identity] [-l shl[,free[,fill]]] [-M address:port] [-n name] [-P file] [-p param=value] [-r param[,param...]] [-s [name=]type[,options]] [-S secret-file] [-T address[:port]] [-t ttl] [-u user] [-V]

DESCRIPTION

The varnishd daemon accepts HTTP requests from clients, passes them on to a backend server and caches the returned documents to better satisfy future requests for the same document.

OPTIONS

-a address[:port][,address[:port]...]
Listen for client requests on the specified address and port. The address can be a host name (“localhost”), an IPv4 dotted-quad (“127.0.0.1”), or an IPv6 address enclosed in square brackets (“[::1]”). If address is not specified, varnishd will listen on all available IPv4 and IPv6 interfaces. If port is not specified, the default HTTP port as listed in /etc/services is used. Multiple listening addresses and ports can be specified as a whitespace- or comma-separated list.
-b host[:port]
Use the specified host as backend server. If port is not specified, the default is 8080.
-C
Print VCL code compiled to C language and exit. Specify the VCL file to compile with the -f option.
-d
Enables debugging mode: The parent process runs in the foreground with a CLI connection on stdin/stdout, and the child process must be started explicitly with a CLI command. Terminating the parent process will also terminate the child.
-f config
Use the specified VCL configuration file instead of the builtin default. See vcl(7) for details on VCL syntax. When no configuration is supplied varnishd will not start the cache process.
-F
Run in the foreground.
-g group
Specifies the name of an unprivileged group to which the child process should switch before it starts accepting connections. This is a shortcut for specifying the group run-time parameter.
-h type[,options]
Specifies the hash algorithm. See Hash Algorithms for a list of supported algorithms.
-i identity
Specify the identity of the Varnish server. This can be accessed using server.identity from VCL.
-l shl[,free[,fill]]
Specifies the size of the shmlog file. shl is the store for the shared memory log records [80M], free is the store for other allocations [1M] and fill determines whether the log file is prefilled when it is created [+]. Scaling suffixes like 'k' and 'M' can be used up to (E)xabytes. Default is 80 Megabytes.
-M address:port
Connect to this port and offer the command-line interface. Think of it as a reverse shell. When running with -M and no backend is defined, the child process (the cache) will not start initially.
-n name
Specify the name for this instance. Amongst other things, this name is used to construct the name of the directory in which varnishd keeps temporary files and persistent state. If the specified name begins with a forward slash, it is interpreted as the absolute path to the directory which should be used for this purpose.
-P file
Write the process's PID to the specified file.
-p param=value
Set the parameter specified by param to the specified value. See Run-Time Parameters for a list of parameters. This option can be used multiple times to specify multiple parameters.
-r param[,param...]
Make the listed parameters read only. This gives the system administrator a way to limit what the Varnish CLI can do. Consider making parameters such as user, group, cc_command and vcc_allow_inline_c read only, as these can potentially be used to escalate privileges from the CLI. Protecting listen_address may also be a good idea.
-s [name=]type[,options]
Use the specified storage backend. The storage backends can be one of the following:
malloc[,size]
file[,path[,size[,granularity]]]
persistent,path,size

See Storage Types in the Users Guide for more information on the various storage backends. This option can be used multiple times to specify multiple storage files. Names are referenced in logs, VCL, statistics, etc.

-S file
Path to a file containing a secret used for authorizing access to the management port.
-T address[:port]
Offer a management interface on the specified address and port. See Management Interface for a list of management commands.
-t ttl
Specifies the default time to live (TTL) for cached objects. This is a shortcut for specifying the default_ttl run-time parameter.
-u user
Specifies the name of an unprivileged user to which the child process should switch before it starts accepting connections. This is a shortcut for specifying the user run-time parameter.

If specifying both a user and a group, the user should be specified first.

-V
Display the version number and exit.
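
As an illustration of how the options above combine (the paths, port numbers and sizes below are placeholders, not defaults), a typical invocation might look like:

    varnishd -a :6081 -T localhost:6082 -S /etc/varnish/secret \
        -f /etc/varnish/default.vcl -s malloc,256m \
        -u varnish -g varnish -P /var/run/varnishd.pid

This listens for client traffic on port 6081, offers the management interface on localhost:6082 protected by the -S secret file, loads the given VCL file, uses a 256 megabyte malloc storage backend, drops privileges to the varnish user and group (user specified first, as noted above), and writes a PID file.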

Hash Algorithms

The following hash algorithms are available:

critbit
A self-scaling tree structure. The default hash algorithm in Varnish Cache 2.1 and onwards. In comparison to a more traditional B tree, the critbit tree is almost completely lockless. Do not change this unless you are certain what you're doing.
simple_list
A simple doubly-linked list. Not recommended for production use.
classic[,buckets]
A standard hash table. The hash key is the CRC32 of the object's URL modulo the size of the hash table. Each table entry points to a list of elements which share the same hash key. The buckets parameter specifies the number of entries in the hash table. The default is 16383.
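
For example (an illustration of the syntax above, not a recommendation), the hash algorithm could be selected with:

    varnishd -h critbit ...
    varnishd -h classic,65536 ...

The first form simply names the default; the second uses the classic hash table with 65536 buckets instead of the default 16383.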

Storage Types

The following storage types are available:

malloc

syntax: malloc[,size]

malloc is a memory based backend.

file

syntax: file[,path[,size[,granularity]]]

The file backend stores data in a file on disk. The file will be accessed using mmap.

persistent (experimental)

syntax: persistent,path,size

Persistent storage. Varnish will store objects in a file in a manner that will secure the survival of most of the objects in the event of a planned or unplanned shutdown of Varnish. The persistent storage backend has multiple issues with it and will likely be removed from a future version of Varnish.
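
To illustrate the syntax above (sizes and paths are placeholders), storage backends are selected with one or more -s options, optionally named:

    varnishd -s malloc,1g ...
    varnishd -s cache1=file,/var/lib/varnish/storage.bin,10g ...

The second form gives the file-backed store the name cache1, which is how it will appear in logs, VCL and statistics.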

Management Interface

If the -T option was specified, varnishd will offer a command-line management interface on the specified address and port. The recommended way of connecting to the command-line management interface is through varnishadm(1).

The commands available are documented in varnish-cli(7).
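
Assuming varnishd was started with -T localhost:6082 and -S /etc/varnish/secret (both placeholders), a management session could be opened with varnishadm(1) like this:

    varnishadm -T localhost:6082 -S /etc/varnish/secret
    varnishadm -T localhost:6082 -S /etc/varnish/secret param.show default_ttl

The first form gives an interactive CLI prompt; the second runs a single CLI command and exits.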

Run-Time Parameters

Runtime parameters are marked with shorthand flags to avoid repeating the same text over and over in the table below. The meanings of the flags are:

experimental
We have no solid information about good/bad/optimal values for this parameter. Feedback with experience and observations is most welcome.
delayed
This parameter can be changed on the fly, but will not take effect immediately.
restart
The worker process must be stopped and restarted before this parameter takes effect.
reload
The VCL programs must be reloaded for this parameter to take effect.
wizard
Do not touch unless you really know what you're doing.
only_root
Only works if varnishd is running as root.

Here is a list of all parameters, current as of last time we remembered to update the manual page. This text is produced from the same text you will find in the CLI if you use the param.show command, so should there be a new parameter which is not listed here, you can find the description using the CLI commands.

Be aware that on 32 bit systems, certain default values, such as workspace_client (=16k), thread_pool_workspace (=16k), http_resp_size (=8k), http_req_size (=12k), gzip_stack_buffer (=4k) and thread_pool_stack (=64k) are reduced relative to the values listed here, in order to conserve VM space.
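
Parameters can be set at startup with one or more -p options, or inspected and changed at run time through the CLI (subject to any -r restrictions). For instance (values chosen purely for illustration):

    varnishd -p default_ttl=3600 -p thread_pools=4 ...
    varnishadm param.show default_ttl
    varnishadm param.set default_ttl 600

Parameters flagged must_restart only take effect after the child process is restarted, and must_reload ones only after the VCL is reloaded.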

acceptor_sleep_decay

Default: 0.9
Minimum: 0
Maximum: 1
Flags: experimental

If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter (multiplicatively) reduces the sleep duration for each successful accept (e.g. 0.9 = reduce by 10%).

acceptor_sleep_incr

Units: seconds
Default: 0.001
Minimum: 0.000
Maximum: 1.000
Flags: experimental

If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter controls how much longer we sleep each time we fail to accept a new connection.

acceptor_sleep_max

Units: seconds
Default: 0.050
Minimum: 0.000
Maximum: 10.000
Flags: experimental

If we run out of resources, such as file descriptors or worker threads, the acceptor will sleep between accepts. This parameter limits how long it can sleep between attempts to accept new connections.

auto_restart

Units: bool
Default: on

Restart child process automatically if it dies.

ban_dups

Units: bool
Default: on

Eliminate older identical bans when new bans are created. This test is CPU intensive and scales with the number and complexity of active (non-Gone) bans. If identical bans are frequent, the amount of CPU needed to actually test the bans will be similarly reduced.

ban_lurker_age

Units: seconds
Default: 60.000
Minimum: 0.000

The ban lurker does not process bans until they are this old. Right when a ban is added, the most frequently hit objects will get tested against it as part of object lookup. This parameter prevents the ban-lurker from kicking in, until the rush is over.

ban_lurker_batch

Default: 1000
Minimum: 1

How many objects the ban lurker examines before taking a ban_lurker_sleep. Use this to pace the ban lurker so it does not eat too much CPU.

ban_lurker_sleep

Units: seconds
Default: 0.010
Minimum: 0.000

The ban lurker thread sleeps between work batches, in order to not monopolize CPU power. When nothing is done, it sleeps a fraction of a second before looking for new work to do. A value of zero disables the ban lurker.

between_bytes_timeout

Units: seconds
Default: 60.000
Minimum: 0.000

Default timeout between bytes when receiving data from the backend. We only wait for this many seconds between bytes before giving up. A value of 0 means it will never time out. VCL can override this default value for each backend and backend request. This parameter does not apply to pipe.

busyobj_worker_cache

Units: bool
Default: off

Cache free busyobj per worker thread. Disable this if you have very high hitrates and want to save the memory of one busyobj per worker thread.

cc_command

Default: "exec gcc -std=gnu99 -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic -Wall -Werror -Wno-error=unused-result -pthread -fpic -shared -Wl,-x -o %o %s"
Flags: must_reload

Command used for compiling the C source code to a dlopen(3) loadable object. Any occurrence of %s in the string will be replaced with the source file name, and %o will be replaced with the output file name.

cli_buffer

Units: bytes
Default: 8k
Minimum: 4k

Size of buffer for CLI command input. You may need to increase this if you have big VCL files and use the vcl.inline CLI command. NB: Must be specified with -p to have effect.

cli_limit

Units: bytes
Default: 48k
Minimum: 128b
Maximum: 99999999b

Maximum size of CLI response. If the response exceeds this limit, the response code will be 201 instead of 200 and the last line will indicate the truncation.

cli_timeout

Units: seconds
Default: 60.000
Minimum: 0.000

Timeout for the child's replies to CLI requests from the management process.

clock_skew

Units: seconds
Default: 10
Minimum: 0

How much clock skew we are willing to accept between the backend and our own clock.

connect_timeout

Units: seconds
Default: 3.500
Minimum: 0.000

Default connection timeout for backend connections. We only try to connect to the backend for this many seconds before giving up. VCL can override this default value for each backend and backend request.

critbit_cooloff

Units: seconds
Default: 180.000
Minimum: 60.000
Maximum: 254.000
Flags: wizard

How long the critbit hasher keeps deleted objheads on the cooloff list.

debug

Default: none

Enable/Disable various kinds of debugging.

none
Disable all debugging

Use +/- prefix to set/reset individual bits:

req_state
VSL Request state engine
workspace
VSL Workspace operations
waiter
VSL Waiter internals
waitinglist
VSL Waitinglist events
syncvsl
Make VSL synchronous
hashedge
Edge cases in Hash
vclrel
Rapid VCL release
lurker
VSL Ban lurker
esi_chop
Chop ESI fetch to bits
flush_head
Flush after http1 head
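
As a purely illustrative example of the +/- prefix syntax, a single debug bit could be switched on at startup or at run time with:

    varnishd -p debug=+syncvsl ...
    varnishadm param.set debug +syncvsl

and switched off again with param.set debug -syncvsl.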

default_grace

Units: seconds
Default: 10.000
Minimum: 0.000

Default grace period. We will deliver an object this long after it has expired, provided another thread is attempting to get a new copy.

default_keep

Units: seconds
Default: 0.000
Minimum: 0.000

Default keep period. We will keep a useless object around this long, making it available for conditional backend fetches. That means that the object will be removed from the cache at the end of ttl+grace+keep.

default_ttl

Units: seconds
Default: 120.000
Minimum: 0.000

The TTL assigned to objects if neither the backend nor the VCL code assigns one.

feature

Default: none

Enable/Disable various minor features.

none
Disable all features.

Use +/- prefix to enable/disable individual features:

short_panic
Short panic message.
wait_silo
Wait for persistent silo.
no_coredump
No coredumps.
esi_ignore_https
Treat HTTPS as HTTP in ESI:includes
esi_disable_xml_check
Don't check if body looks like XML
esi_ignore_other_elements
Ignore non-esi XML-elements
esi_remove_bom
Remove UTF-8 BOM
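
Feature bits use the same +/- prefix convention; for example (illustration only):

    varnishd -p feature=+esi_ignore_https ...
    varnishadm param.set feature +no_coredump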

fetch_chunksize

Units: bytes
Default: 16k
Minimum: 4k
Flags: experimental

The default chunksize used by the fetcher. This should be bigger than the majority of objects with short TTLs. Internal limits in the storage_file module make increases above 128kb a dubious idea.

fetch_maxchunksize

Units: bytes
Default: 0.25G
Minimum: 64k
Flags: experimental

The maximum chunksize we attempt to allocate from storage. Making this too large may cause delays and storage fragmentation.

first_byte_timeout

Units: seconds
Default: 60.000
Minimum: 0.000

Default timeout for receiving first byte from backend. We only wait for this many seconds for the first byte before giving up. A value of 0 means it will never time out. VCL can override this default value for each backend and backend request. This parameter does not apply to pipe.

group

Default: GID 425
Flags: must_restart, only_root

The unprivileged group to run as.

group_cc

Default: <not set>
Flags: only_root

On some systems the C-compiler is restricted so not everybody can run it. This parameter makes it possible to add an extra group to the sandbox process which runs the cc_command, in order to gain access to such a restricted C-compiler.

gzip_buffer

Units: bytes
Default: 32k
Minimum: 2k
Flags: experimental

Size of malloc buffer used for gzip processing. These buffers are used for in-transit data, for instance gunzip'ed data being sent to a client. Making this space too small results in more overhead (writes to sockets etc.); making it too big is probably just a waste of memory.

gzip_level

Default: 6
Minimum: 0
Maximum: 9

Gzip compression level: 0=debug, 1=fast, 9=best

gzip_memlevel

Default: 8
Minimum: 1
Maximum: 9

Gzip memory level 1=slow/least, 9=fast/most compression. Memory impact is 1=1k, 2=2k, ... 9=256k.

http_gzip_support

Units: bool
Default: on

Enable gzip support. When enabled, Varnish requests compressed objects from the backend and stores them compressed. If a client does not support gzip encoding, Varnish will uncompress compressed objects on demand. Varnish will also rewrite the Accept-Encoding header of clients indicating support for gzip to:

Accept-Encoding: gzip

Clients that do not support gzip will have their Accept-Encoding header removed. For more information on how gzip is implemented please see the chapter on gzip in the Varnish reference.

http_max_hdr

Units: header lines
Default: 64
Minimum: 32
Maximum: 65535

Maximum number of HTTP header lines we allow in {req|resp|bereq|beresp}.http (obj.http is autosized to the exact number of headers). Cheap, ~20 bytes, in terms of workspace memory. Note that the first line occupies five header lines.

http_range_support

Units: bool
Default: on

Enable support for HTTP Range headers.

http_req_hdr_len

Units: bytes
Default: 8k
Minimum: 40b

Maximum length of any HTTP client request header we will allow. The limit includes any continuation lines.

http_req_size

Units: bytes
Default: 32k
Minimum: 0.25k

Maximum number of bytes of HTTP client request we will deal with. This is a limit on all bytes up to the double blank line which ends the HTTP request. The memory for the request is allocated from the client workspace (param: workspace_client) and this parameter limits how much of that the request is allowed to take up.

http_resp_hdr_len

Units: bytes
Default: 8k
Minimum: 40b

Maximum length of any HTTP backend response header we will allow. The limit includes any continuation lines.

http_resp_size

Units: bytes
Default: 32k
Minimum: 0.25k

Maximum number of bytes of HTTP backend response we will deal with. This is a limit on all bytes up to the double blank line which ends the HTTP response headers. The memory for the response is allocated from the worker workspace (param: thread_pool_workspace) and this parameter limits how much of that the response is allowed to take up.

idle_send_timeout

Units: seconds
Default: 60.000
Minimum: 0.000
Flags: delayed

Time to wait with no data sent. If no data has been transmitted in this many seconds the session is closed. See setsockopt(2) under SO_SNDTIMEO for more information.

listen_address

Default: :80
Flags: must_restart

Whitespace separated list of network endpoints where Varnish will accept requests. Possible formats: host, host:port, :port

listen_depth

Units: connections
Default: 1024
Minimum: 0
Flags: must_restart

Listen queue depth.

lru_interval

Units: seconds
Default: 2.000
Minimum: 0.000
Flags: experimental

Grace period before object moves on LRU list. Objects are only moved to the front of the LRU list if they have not been moved there already inside this timeout period. This reduces the amount of lock operations necessary for LRU list access.

max_esi_depth

Units: levels
Default: 5
Minimum: 0

Maximum depth of esi:include processing.

max_restarts

Units: restarts
Default: 4
Minimum: 0

Upper limit on how many times a request can restart. Be aware that restarts are likely to cause a hit against the backend, so don't increase thoughtlessly.

max_retries

Units: retries
Default: 4
Minimum: 0

Upper limit on how many times a backend fetch can retry.

nuke_limit

Units: allocations
Default: 50
Minimum: 0
Flags: experimental

Maximum number of objects we attempt to nuke in order to make space for an object body.

pcre_match_limit

Default: 10000
Minimum: 1

The limit for the number of internal matching function calls in a pcre_exec() execution.

pcre_match_limit_recursion

Default: 10000
Minimum: 1

The limit for the number of internal matching function recursions in a pcre_exec() execution.

ping_interval

Units: seconds
Default: 3
Minimum: 0
Flags: must_restart

Interval between pings from parent to child. Zero will disable pinging entirely, which makes it possible to attach a debugger to the child.

pipe_timeout

Units: seconds
Default: 60.000
Minimum: 0.000

Idle timeout for PIPE sessions. If nothing has been received in either direction for this many seconds, the session is closed.

pool_req

Default: 10,100,10

Parameters for per worker pool request memory pool. The three numbers are:

min_pool
minimum size of free pool.
max_pool
maximum size of free pool.
max_age
max age of free element.

pool_sess

Default: 10,100,10

Parameters for per worker pool session memory pool. The three numbers are:

min_pool
minimum size of free pool.
max_pool
maximum size of free pool.
max_age
max age of free element.

pool_vbc

Default: 10,100,10

Parameters for backend connection memory pool. The three numbers are:

min_pool
minimum size of free pool.
max_pool
maximum size of free pool.
max_age
max age of free element.

pool_vbo

Default: 10,100,10

Parameters for backend object fetch memory pool. The three numbers are:

min_pool
minimum size of free pool.
max_pool
maximum size of free pool.
max_age
max age of free element.

prefer_ipv6

Units: bool
Default: off

Prefer IPv6 address when connecting to backends which have both IPv4 and IPv6 addresses.

rush_exponent

Units: requests per request
Default: 3
Minimum: 2
Flags: experimental

How many parked requests we start for each completed request on the object. NB: Even with the implicit delay of delivery, this parameter controls an exponential increase in the number of worker threads.

send_timeout

Units: seconds
Default: 600.000
Minimum: 0.000
Flags: delayed

Send timeout for client connections. If the HTTP response hasn't been transmitted in this many seconds the session is closed. See setsockopt(2) under SO_SNDTIMEO for more information.

session_max

Units: sessions
Default: 100000
Minimum: 1000

Maximum number of sessions we will allocate from one pool before just dropping connections. This is mostly an anti-DoS measure, and setting it plenty high should not hurt, as long as you have the memory for it.

shm_reclen

Units: bytes
Default: 255b
Minimum: 16b
Maximum: 4084b

Old name for vsl_reclen, use that instead.

shortlived

Units: seconds
Default: 10.000
Minimum: 0.000

Objects created with (ttl+grace+keep) shorter than this are always put in transient storage.

sigsegv_handler

Units: bool
Default: off
Flags: must_restart

Install a signal handler which tries to dump debug information on segmentation faults.

syslog_cli_traffic

Units: bool
Default: on

Log all CLI traffic to syslog(LOG_INFO).

tcp_keepalive_intvl

Units: seconds
Default: 75.000
Minimum: 1.000
Maximum: 100.000
Flags: experimental

The number of seconds between TCP keep-alive probes.

tcp_keepalive_probes

Units: probes
Default: 9
Minimum: 1
Maximum: 100
Flags: experimental

The maximum number of TCP keep-alive probes to send before giving up and killing the connection if no response is obtained from the other end.

tcp_keepalive_time

Units: seconds
Default: 7200.000
Minimum: 1.000
Maximum: 7200.000
Flags: experimental

The number of seconds a connection needs to be idle before TCP begins sending out keep-alive probes.

thread_pool_add_delay

Units: seconds
Default: 0.000
Minimum: 0.000
Flags: experimental

Wait at least this long after creating a thread.

Some (buggy) systems may need a short (sub-second) delay between creating threads. Set this to a few milliseconds if you see the 'threads_failed' counter grow too much.

Setting this too high results in insufficient worker threads.

thread_pool_destroy_delay

Units: seconds
Default: 1.000
Minimum: 0.010
Flags: delayed, experimental

Wait this long after destroying a thread.

This controls the decay of thread pools when idle(-ish).

Minimum is 0.01 seconds.

thread_pool_fail_delay

Units: seconds
Default: 0.200
Minimum: 0.010
Flags: experimental

Wait at least this long after a failed thread creation before trying to create another thread.

Failure to create a worker thread is often a sign that the end is near, because the process is running out of some resource. This delay tries to not rush the end on needlessly.

If thread creation failures are a problem, check that thread_pool_max is not too high.

It may also help to increase thread_pool_timeout and thread_pool_min, to reduce the rate at which threads are destroyed and later recreated.

thread_pool_max

Units: threads
Default: 5000
Minimum: 100
Flags: delayed

The maximum number of worker threads in each pool.

Do not set this higher than you have to, since excess worker threads soak up RAM and CPU and generally just get in the way of getting work done.

Minimum is 10 threads.

thread_pool_min

Units: threads
Default: 100
Maximum: 5000
Flags: delayed

The minimum number of worker threads in each pool.

Increasing this may help ramp up faster from low load situations or when threads have expired.

Minimum is 10 threads.

thread_pool_stack

Units: bytes
Default: 48k
Minimum: 16k
Flags: experimental

Worker thread stack size. This will likely be rounded up to a multiple of 4k (or whatever the page_size might be) by the kernel.

thread_pool_timeout

Units: seconds
Default: 300.000
Minimum: 10.000
Flags: delayed, experimental

Thread idle threshold.

Threads in excess of thread_pool_min, which have been idle for at least this long, will be destroyed.

Minimum is 10 seconds.

thread_pools

Units: pools
Default: 2
Minimum: 1
Flags: delayed, experimental

Number of worker thread pools.

Increasing number of worker pools decreases lock contention.

Too many pools waste CPU and RAM resources, and more than one pool for each CPU is probably detrimental to performance.

Can be increased on the fly, but decreases require a restart to take effect.

thread_queue_limit

Default: 20
Minimum: 0
Flags: experimental

Permitted queue length per thread-pool.

This sets the number of requests we will queue, waiting for an available thread. Above this limit sessions will be dropped instead of queued.

thread_stats_rate

Units: requests
Default: 10
Minimum: 0
Flags: experimental

Worker threads accumulate statistics, and dump these into the global stats counters if the lock is free when they finish a job (request/fetch etc.). This parameter defines the maximum number of jobs a worker thread may handle before it is forced to dump its accumulated stats into the global counters.

timeout_idle

Units: seconds
Default: 5.000
Minimum: 0.000

Idle timeout for client connections. A connection is considered idle until we receive a non-white-space character on it.

timeout_linger

Units: seconds
Default: 0.050
Minimum: 0.000
Flags: experimental

How long the worker thread lingers on an idle session before handing it over to the waiter. When sessions are reused, as much as half of all reuses happen within the first 100 msec of the previous request completing. Setting this too high results in worker threads not doing anything for their keep, setting it too low just means that more sessions take a detour around the waiter.

timeout_req

Units: seconds
Default: 2.000
Minimum: 0.000

Max time to receive the client's request headers, measured from the first non-white-space character to the double CRNL.

user

Default: nobody (99)
Flags: must_restart, only_root

The unprivileged user to run as.

vcc_allow_inline_c

Units: bool
Default: off

Allow inline C code in VCL.

vcc_err_unref

Units: bool
Default: on

Unreferenced VCL objects result in error.

vcc_unsafe_path

Units: bool
Default: on

Allow '/' in vmod & include paths. Allow 'import ... from ...'.

vcl_dir

Default: /etc/varnish

Directory from which relative VCL filenames (vcl.load and include) are opened.

vmod_dir

Default: /usr/lib64/varnish/vmods

Directory where VCL modules are to be found.

vsl_buffer

Units: bytes
Default: 4k
Minimum: 267

Bytes of (req-/backend-)workspace dedicated to buffering VSL records. Setting this too high costs memory, setting it too low will cause more VSL flushes and likely increase lock-contention on the VSL mutex.

The minimum tracks the vsl_reclen parameter + 12 bytes.

vsl_mask

Default: -VCL_trace,-WorkThread,-Hash

Mask individual VSL messages from being logged.

default
Set default value

Use a +/- prefix in front of a VSL tag name to mask/unmask individual VSL messages.
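
For instance (illustration only), the Hash records masked by the default value could be re-enabled with:

    varnishadm param.set vsl_mask +Hash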

vsl_reclen

Units: bytes
Default: 255b
Minimum: 16b
Maximum: 4084b

Maximum number of bytes in SHM log record.

The maximum tracks the vsl_buffer parameter - 12 bytes.

vsl_space

Units: bytes
Default: 80M
Minimum: 1M
Flags: must_restart

The amount of space to allocate for the VSL fifo buffer in the VSM memory segment. If you make this too small, varnish{ncsa|log} etc will not be able to keep up. Making it too large just costs memory resources.

vsm_space

Units: bytes
Default: 1M
Minimum: 1M
Flags: must_restart

The amount of space to allocate for stats counters in the VSM memory segment. If you make this too small, some counters will be invisible. Making it too large just costs memory resources.

waiter

Default: epoll (possible values: epoll, poll)
Flags: must_restart, wizard

Select the waiter kernel interface.

workspace_backend

Units: bytes
Default: 64k
Minimum: 1k
Flags: delayed

Bytes of HTTP protocol workspace for backend HTTP req/resp. If larger than 4k, use a multiple of 4k for VM efficiency.

workspace_client

Units: bytes
Default: 64k
Minimum: 9k
Flags: delayed

Bytes of HTTP protocol workspace for client HTTP req/resp. If larger than 4k, use a multiple of 4k for VM efficiency.

workspace_session

Units: bytes
Default: 384b
Minimum: 0.25k
Flags: delayed

Bytes of workspace for session and TCP connection addresses. If larger than 4k, use a multiple of 4k for VM efficiency.

workspace_thread

Units: bytes
Default: 2k
Minimum: 0.25k
Maximum: 8k
Flags: delayed

Bytes of auxiliary workspace per thread. This workspace is used for certain temporary data structures during the operation of a worker thread. One use is for the io-vectors for writing requests and responses to sockets, having too little space will result in more writev(2) system calls, having too much just wastes the space.

EXIT CODES

Varnish and bundled tools will, in most cases, exit with one of the following codes:

0 OK
1 Some error which could be system-dependent and/or transient
2 Serious configuration / parameter error - retrying with the same configuration / parameters is most likely useless

The varnishd master process may also OR its exit code:

with 0x20 when the varnishd child process died,
with 0x40 when the varnishd child process was terminated by a signal and
with 0x80 when a core was dumped.
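
A minimal sketch of how a wrapper script might decode these bits (the status value is hypothetical):

    status=66            # e.g. 0x42 = 0x02 | 0x40
    (( status & 0x20 )) && echo "child died"
    (( status & 0x40 )) && echo "child terminated by signal"
    (( status & 0x80 )) && echo "core dumped"
    echo "base exit code: $(( status & 0x1f ))"

Here 66 decodes to a serious configuration/parameter error (2) combined with the child process being terminated by a signal (0x40).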

HISTORY

The varnishd daemon was developed by Poul-Henning Kamp in cooperation with Verdens Gang AS and Varnish Software.

This manual page was written by Dag-Erling Smørgrav with updates by Stig Sandbeck Mathisen <ssm [at] debian.org>.

COPYRIGHT

This document is licensed under the same licence as Varnish itself. See LICENCE for details.

Copyright (c) 2007-2014 Varnish Software AS

SEE ALSO

varnish-cli(7)
varnishlog(1)
varnishhist(1)
varnishncsa(1)
varnishstat(1)
varnishtop(1)
vcl(7)