Eric Auge
2005-05-10 20:32:14 UTC
Hello,
I'm currently trying to gather performance information and tuning parameters
for UDP/TCP, and in particular a way to ensure a constant receive rate
without the system itself dropping packets.
With a quick UDP server (poll() on one socket / read() of ~60 bytes / an XOR
packet check) and a matching client, after a few trial runs I get an average
of 60000 UDP datagrams/s.
First of all, does this number make sense?
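For reference, here is a minimal sketch of the receive path I described;
this is my own reconstruction rather than the exact test code, and it assumes
~60-byte datagrams whose last byte is an XOR of the preceding bytes plus a
crude once-per-second rate printout (error handling mostly omitted):

/*
 * Sketch of the poll()/read()/XOR-check loop, with assumed packet
 * format (last byte = XOR of the rest) and an arbitrary test port.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static int xor_ok(const unsigned char *p, ssize_t n)
{
    unsigned char x = 0;
    ssize_t i;

    for (i = 0; i < n - 1; i++)
        x ^= p[i];
    return n > 0 && x == p[n - 1];
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in sin;
    struct pollfd pfd;
    unsigned char buf[128];
    unsigned long pps = 0;
    time_t last = time(NULL);

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    sin.sin_port = htons(9999);          /* arbitrary test port */
    if (s == -1 || bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        return 1;

    pfd.fd = s;
    pfd.events = POLLIN;

    for (;;) {
        ssize_t n;

        if (poll(&pfd, 1, -1) <= 0)      /* single socket, block forever */
            continue;
        n = read(s, buf, sizeof(buf));   /* ~60 bytes per datagram */
        if (n > 0 && xor_ok(buf, n))
            pps++;
        if (time(NULL) != last) {        /* crude once-a-second report */
            printf("%lu packets/s\n", pps);
            pps = 0;
            last = time(NULL);
        }
    }
    /* NOTREACHED */
}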
After plotting the numbers with gnuplot, I noticed strange downward spikes at
regular intervals: the rate drops to 20000 packets/s and then comes back
almost immediately to the 60000 p/s average. Comparing this with the packets
missing during the test, the spikes occur at almost the same period, and the
count of missing packets matches the "full socket buffer" drop counter
exactly.
To reduce those "full socket buffer" drops, I have tried increasing:
sysctls:
net.inet.udp.recvspace=65K
kern.sbmax=1M
kernel options:
options NMBCLUSTERS=16384
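If I understand it right, recvspace only changes the default buffer size, so
the process can also ask for a bigger buffer itself (up to the sbmax ceiling).
A sketch of that, where the 1 MB request is an arbitrary figure of mine, not
a recommendation:

/*
 * Request a larger per-socket receive buffer from the application.
 * The kernel may refuse or trim requests above its socket-buffer
 * limit, so check the result and read the value back.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

int bump_rcvbuf(int s)
{
    int want = 1024 * 1024;              /* 1 MB, arbitrary */
    int got = 0;
    socklen_t len = sizeof(got);

    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want)) == -1)
        return -1;
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &got, &len) == -1)
        return -1;
    printf("SO_RCVBUF: asked for %d, got %d\n", want, got);
    return got;
}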
Are there any other tuning parameters I can tweak, or is it simply too much
traffic?
Will kqueue(2) provide more reliability or better performance under high UDP
or TCP load? Should I prefer it to poll(2)? The quick test I made (replacing
poll() with kevent()) doesn't show much difference.
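The kevent() variant I tried looks roughly like this (a sketch from memory,
not the exact test code): the socket is registered once and the blocking
kevent() call replaces poll() in the loop. With only one descriptor, both
interfaces do about the same work per wakeup, which may be why the numbers
come out similar.

/*
 * kqueue(2) variant: register the UDP socket once for EVFILT_READ,
 * then call kq_wait() in the loop where poll() used to be.
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

int kq_setup(int s)
{
    struct kevent kev;
    int kq = kqueue();

    if (kq == -1)
        return -1;
    EV_SET(&kev, s, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, 0);
    if (kevent(kq, &kev, 1, NULL, 0, NULL) == -1) {  /* register once */
        close(kq);
        return -1;
    }
    return kq;
}

/* Drop-in for the poll() call: blocks until the socket is readable. */
int kq_wait(int kq)
{
    struct kevent ev;

    return kevent(kq, NULL, 0, &ev, 1, NULL);        /* 1 = readable */
}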
I hope I haven't misunderstood things or got the numbers wrong. I'm still
benchmarking and plotting; I can release the results plus the client/server
tools once they are more readable/usable, if that interests people.
Regards,
Eric.