Yep, already did that search, and a search for NFS limitations.
And I already knew about setting the block size.
But ..... I had not set the block size, even though I knew about it.
(Maybe I'm a little ignorant.) Transferring files over NFS on one client individually gives the full 100Mbps (roughly 10MB/sec) to that client.
And transferring files over NFS on two clients simultaneously still gives 100Mbps to each client.
So I automatically concluded that because the clients were already receiving the maximum throughput, there was no need to specify the block size.
DOH! How wrong was I?
Anyways, I searched for some benchmarks for network cards and came up with this article: Testing Gigabit Network Adapters on platform TYAN Trinity GC-SL.
I'm not sure exactly what the default block size is for NFS on FC3, but I have an inkling it may only be 1024.
Which corresponds roughly with the performance the graphs at digit-life show.
So, anyways, after studying the charts I immediately tried a block size of 8192.
And hey presto - all clients are receiving 100Mbps each - simultaneously.
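For anyone wanting to try the same thing, this is roughly how you set it at mount time (the server and mount point names here are just placeholders for your own setup):

```shell
# Remount the NFS export with explicit 8 KB read/write block sizes.
# "server:/export" and "/mnt/nfs" are placeholders - substitute your own.
mount -t nfs -o rsize=8192,wsize=8192 server:/export /mnt/nfs

# Or make it permanent with an /etc/fstab entry like:
# server:/export  /mnt/nfs  nfs  rsize=8192,wsize=8192  0  0
```

Note that rsize and wsize are set per-mount on the client, not on the server, so each client has to remount to pick up the new sizes.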
Unfortunately I can't set the MTU any higher on the server, as the clients' 10/100 NICs don't support jumbo frames.
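If the clients did support jumbo frames, bumping the MTU on the server would look something like this (eth0 is an assumption; use whichever interface faces the clients, and remember every NIC and switch on the path has to support the larger frames):

```shell
# Raise the MTU on the server's gigabit interface to a typical jumbo-frame size.
# eth0 is a placeholder for your actual interface name.
ip link set dev eth0 mtu 9000

# On older FC3-era tooling the equivalent would be:
# ifconfig eth0 mtu 9000
```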
So, .... I've learnt a lesson here!
Though, out of interest, does anyone know what the default block size is for NFS?
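In the meantime, you can at least check what block sizes a Linux client actually negotiated for its existing mounts, which would show what the default came out as:

```shell
# List NFS mounts with their active options, including rsize/wsize.
grep nfs /proc/mounts

# On many distros, nfsstat -m also prints the mount options per NFS mount.
nfsstat -m
```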