I am running a load test against my application, which runs on a Netty server (4.0.28.Final).
On OSX
Without the setting below, it does not support 1024 concurrent client connections:
option(ChannelOption.SO_BACKLOG, 2048)
After adding this setting I am able to make thousands of concurrent client connections.
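For context, the option is set on the server bootstrap roughly as follows (a minimal sketch, assuming the NIO transport; the event-loop groups and the empty channel initializer are placeholders, not my actual code):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 // SO_BACKLOG applies to the parent (server) channel: it is the
 // backlog argument passed to the kernel's listen() call.
 .option(ChannelOption.SO_BACKLOG, 2048)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         // application handlers go here (placeholder)
     }
 });
```

Note that `option(...)` (not `childOption(...)`) is used, since the backlog belongs to the listening channel rather than to accepted child channels.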
Q1: Having ulimit set to 2048 and somaxconn set to 2048 didn't help. Why?
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 10000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
$ sysctl -a | grep files
kern.maxfiles = 10000
kern.maxfilesperproc = 10000
kern.maxfiles: 10000
kern.maxfilesperproc: 10000
kern.num_files: 2098
$ sysctl -a | grep somax
kern.ipc.somaxconn: 2048
Reference: http://stackoverflow.com/questions/5377450/maximum-number-of-open-filehandles-per-process-on-osx-and-how-to-increase
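For reference, the limits shown above would be raised roughly like this on OSX (a sketch using the values from the output above; sysctl changes made this way do not survive a reboot):

```shell
# Raise the system-wide and per-process open-file limits
sudo sysctl -w kern.maxfiles=10000
sudo sysctl -w kern.maxfilesperproc=10000

# Raise the per-shell open-file limit for the current session
ulimit -n 10000

# Raise the maximum listen-queue (backlog) size
sudo sysctl -w kern.ipc.somaxconn=2048
```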
Q2: netstat on the port shows the expected count of connections (1024), but lsof reports a much lower value (175). Why?
On Centos
The following setting works for a huge number of client connections:
option(ChannelOption.SO_BACKLOG, 128)
At the OS level:
ulimit = 65336
# sysctl -a | grep somax
net.core.somaxconn = 128
Q3: I am able to connect multiple clients. Why/how? Note: all values except ulimit are high.
Q4: What does setting the backlog do? Why is it required on OSX but has no effect on CentOS?
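To make Q4 concrete, here is a minimal, self-contained illustration of the listen backlog using the JDK's `ServerSocket` (the same OS-level `listen()` backlog that Netty's `SO_BACKLOG` configures). My understanding, which I'd like confirmed: the backlog caps how many fully established connections the kernel will queue while the application has not yet called `accept()`.

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        // Bind an ephemeral port with a backlog of 2; the kernel may round
        // this up, and clamps it to somaxconn on both Linux and OSX.
        ServerSocket server = new ServerSocket(0, 2, InetAddress.getLoopbackAddress());
        int port = server.getLocalPort();

        // A client connect completes as soon as the kernel finishes the
        // TCP handshake -- before the server ever calls accept().
        Socket client = new Socket();
        client.connect(new InetSocketAddress(InetAddress.getLoopbackAddress(), port), 1000);
        System.out.println("client connected: " + client.isConnected());

        // accept() merely dequeues the already-established connection
        // from the kernel's backlog queue.
        Socket accepted = server.accept();
        System.out.println("server accepted: " + accepted.isConnected());

        accepted.close();
        client.close();
        server.close();
    }
}
```

Once the queue is full, further handshakes are dropped or refused until `accept()` drains it, which is why a backlog smaller than the burst of simultaneous connects can reject clients even though the file-descriptor limits are high.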
Q5: On OSX, Netty logs this message:
23:17:09.060 [main] DEBUG io.netty.util.NetUtil - /proc/sys/net/core/somaxconn: 128 (non-existent)
I created /proc/sys/net/core/somaxconn. The log then showed the new value, but it had no effect on the number of concurrent connections.