Discussion:
UDP(/TCP?) bench/tuning ?
Eric Auge
2005-05-10 20:32:14 UTC
Hello,

I'm trying to gather performance information and tuning parameters for
UDP/TCP, and in particular a way to ensure a constant receiving rate
without the system itself dropping packets.

With a quick UDP server (poll(2) on one socket / read(2) of ~60 bytes /
XOR packet check) and a matching client, after some trial runs I
currently get an average of 60000 UDP datagrams/s.
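
In outline, the receive path looks something like the sketch below
(simplified; the "XOR of the payload checked against the last byte"
scheme is just how I illustrate the check here, and the statistics
code is left out):

/*
 * Simplified sketch of the benchmark receiver: poll one UDP socket,
 * read a ~60 byte datagram, XOR-check the payload. Error handling
 * and the rate statistics are omitted.
 */
#include <poll.h>
#include <unistd.h>

#define PKTLEN 60

static void
recv_loop(int s)
{
    struct pollfd pfd;
    unsigned char buf[PKTLEN], x;
    static unsigned long bad;
    ssize_t n, i;

    pfd.fd = s;
    pfd.events = POLLIN;

    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)
            continue;
        n = read(s, buf, sizeof(buf));
        if (n <= 0)
            continue;
        /* assumption here: last byte carries the XOR of the others */
        x = 0;
        for (i = 0; i < n - 1; i++)
            x ^= buf[i];
        if (x != buf[n - 1])
            bad++;      /* count a corrupted packet */
    }
}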

First of all, does this number make sense?

After some "(GNU)plotting" around numbers, I saw some strange down pikes
regulary, the rate just drop down to 20000 packet/s and then comes back
almost immediately to 60000 p/s (average), compared to "missing" packets
during my test, spikes are happening at almost the same period, and
numbers between "missing" packets and "full socket buffer" drops are
identical.

I have tried to increase:

sysctls:
net.inet.udp.recvspace=65K
kern.sbmax=1M

kernel options:
options NMBCLUSTERS=16384

to reduce those "full socket buffer" drops.

Are there any other tuning parameters that I can tweak,
or is it simply too much traffic?

Will kqueue(2) provide more reliability or better performance under
high UDP or TCP load? Should I prefer it to poll(2)? The quick test
I made (replacing poll() with kevent()) doesn't show much difference.
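
For reference, a kevent() variant of the same loop would look roughly
like this (sketch only; with a single socket it is still one syscall
to wait plus one read() per datagram, which probably explains the lack
of difference):

/*
 * Same loop with kqueue(2)/kevent(2) instead of poll(2) (sketch;
 * error handling and the packet check are as in the poll() version).
 */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>

static void
recv_loop_kq(int s)
{
    struct kevent ev;
    unsigned char buf[60];
    int kq;

    if ((kq = kqueue()) == -1)
        return;
    EV_SET(&ev, s, EVFILT_READ, EV_ADD, 0, 0, 0);
    (void)kevent(kq, &ev, 1, NULL, 0, NULL);    /* register the socket */

    for (;;) {
        if (kevent(kq, NULL, 0, &ev, 1, NULL) <= 0)   /* wait for data */
            continue;
        (void)read(s, buf, sizeof(buf));
        /* ... same XOR check as in the poll() version ... */
    }
}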

I hope I haven't misunderstood things or gotten the numbers wrong.
I'm still benchmarking and plotting; I can publish the results plus
the client/server tools once they are more readable/usable, if that
interests people.

Regards,
Eric.
Hubert Feyrer
2005-05-11 01:38:45 UTC
Post by Eric Auge
Are there any other tuning parameters that I can tweak?
Sorry for not having this documented in the proper place
(http://www.netbsd.org/guide/en/chap-tuning.html), but you may find the
following link useful: http://proj.sunet.se/LSR2/


- Hubert
--
You, not I! -> http://spreeblick.com/blog/index.php?p=841
d***@aol.com
2005-05-11 06:34:04 UTC
I'm seeing an occasional uvm_fault/trap when 'quickly' pulling and
re-inserting a USB cable from/to a SuperMicro Xeon motherboard.
The USB cable is a Prolific 2302 host-to-host bridge.

The panic is in usbd_alloc_xfer(), at the line:

xfer = dev->bus->methods->allocx(dev->bus);

'bus' itself is a valid pointer, but it appears to point to an invalid
usbd_bus structure, so dereferencing 'methods' generates a trap.

The UPL driver is used as the device driver for the Prolific 2302
bridge cable, and it actually appears in the backtrace
(upl_init() -> upl_tx_list_init() -> usbd_alloc_xfer()).

For some reason, 'quickly' pulling and re-inserting the USB cable
causes the USB subsystem to repeatedly detach and re-attach the UPL
driver (and the USB transport, of course).

Can anybody help me with this issue?

thanks,
Dave
Martin Husemann
2005-05-11 06:49:03 UTC
Post by d***@aol.com
I'm seeing an occasional uvm_fault/trap when 'quickly' pulling and
re-inserting a USB cable from/to a SuperMicro Xeon motherboard.
Can you please file a PR with this information?

Thanks,

Martin
Eric Auge
2005-05-11 16:01:45 UTC
Post by Hubert Feyrer
Post by Eric Auge
Are there any other tuning parameters that I can tweak?
Sorry for not having this documented in the proper place
(http://www.netbsd.org/guide/en/chap-tuning.html), but you may find the
following link useful: http://proj.sunet.se/LSR2/
Thanks for the link. After reading that information, I have been able
to "stabilize" the rate (~30 000 UDP datagrams/s) and reduce the
"full socket buffer" errors to 0.

Sender:
10 000 000 UDP datagrams sent at a rate of ~40 000 p/s.

Receiver:
receives (poll/read/quick check) all of them in 330 seconds,
which gives an average of 30303 UDP p/s.
Every 10 000 packets received, I also record the average receiving
rate over the previous 10 000 packets, for plotting at the end of
the test.
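
The sampling is nothing fancy; roughly something like this (sketch,
assuming gettimeofday(2) resolution is sufficient at these rates):

#include <sys/time.h>
#include <stdio.h>

#define STEP    10000

/*
 * Called once per received packet; every STEP packets it prints the
 * average rate over the previous STEP packets (the real tool stores
 * the samples for plotting at the end of the run).
 */
static void
sample_rate(unsigned long count)
{
    static struct timeval last;
    struct timeval now;
    double dt;

    if (count == 0 || count % STEP != 0)
        return;
    gettimeofday(&now, NULL);
    if (last.tv_sec != 0) {
        dt = (now.tv_sec - last.tv_sec) +
            (now.tv_usec - last.tv_usec) / 1e6;
        printf("%lu %.0f\n", count, STEP / dt);  /* packets/s */
    }
    last = now;
}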

There are no more errors, but I still see these spikes: the rate goes
down to ~14000 p/s and then comes back to full throughput (~30000 p/s).

Where could these spikes come from?

I reached this by increasing the following much further, as done in
http://proj.sunet.se/LSR2/:

net.inet.ip.ifq.maxlen=500
kern.sbmax=67108864 (67M)
kern.somaxkva=67108864 (67M)
net.inet.udp.recvspace=4194304 (4M)
(for the last one I would rather use setsockopt(SO_RCVBUF) than a
system-wide setting)
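
i.e. something like the following on the receiving socket (sketch; the
4M value is the same figure as the recvspace sysctl above and is still
capped by kern.sbmax):

#include <sys/socket.h>
#include <stdio.h>

/*
 * Ask for a 4 MB receive buffer on this socket only, instead of
 * raising the system-wide net.inet.udp.recvspace default (sketch;
 * the kernel still limits this by kern.sbmax).
 */
static int
set_rcvbuf(int s)
{
    int rcvbuf = 4 * 1024 * 1024;

    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF,
        &rcvbuf, sizeof(rcvbuf)) == -1) {
        perror("setsockopt SO_RCVBUF");
        return -1;
    }
    return 0;
}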

What is the relationship between those numbers?
I want to avoid any out-of-memory problems and do/understand the
right calculation (w/ vm.*{min|max}, etc.).

net.inet.udp.recvspace:
From what I've gathered, there are mbuf pools (NMBCLUSTERS sets their
number, right?), and *.{recv|send}space defines the default
system-wide number of bytes used in *each* socket's "socket buffer"
(which draws from the mbuf pools) for receiving or sending. Is that
right?

kern.sbmax:
as the description says:
Maximum socket buffer size (one socket)

kern.somaxkva:
Maximum amount of kernel memory to be used for socket buffers
(all sockets)

net.inet.ip.ifq.maxlen:
Maximum allowed input queue length
(is that the number of mbufs that can be used for the input queue?)
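
If I read those descriptions correctly (this is only my own
back-of-the-envelope reasoning, please correct me): each UDP socket
gets a recvspace-sized receive buffer by default, no single socket
buffer may grow beyond kern.sbmax, and the total across all sockets is
bounded by kern.somaxkva, so with the values above at most
67108864 / 4194304 = 16 sockets could sit with a completely full 4M
receive buffer at the same time.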

I hope you (or somebody) can enlighten me :)
Eric.
