Just a follow-on -
If you think about it, each file access takes an nfsd to process it. This is why finds saturate so fast. It also depends on what the find is doing - just identifying files by name pattern is not so bad, but a find based on date causes each file's inode to be accessed, and that takes another request (and another nfsd) for each inode... Then there are the cache manipulations involved on the server after returning the data...
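A quick illustration of the difference (the temp tree here is a throwaway, not anything from this thread). A name-only match can be answered from directory entries, but a date-based match forces a stat() of every file, which over NFS turns into an extra attribute fetch per inode:

```shell
# Build a tiny test tree.
dir=$(mktemp -d)
touch "$dir/a.c" "$dir/b.txt"

# Name-only match: directory entries alone are enough.
find "$dir" -name '*.c'

# Date-based match: find must stat() each file, which over NFS
# means an extra GETATTR round trip (and an nfsd slot) per inode.
find "$dir" -mtime -1

rm -r "$dir"
```

Run both against a large NFS-mounted tree and the second form generates far more server traffic for the same set of names.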
I'm not sure why the nfsd processes multiply, unless there is some overload feature I hadn't run into before. Maybe there is another limit (something like a maximum nfsd count).
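On Linux servers there is indeed a configurable thread count; a sketch of where to look (standard Linux knobs - the exact config location varies by distro):

```shell
# Show the current number of nfsd threads, if the server is running.
if [ -r /proc/fs/nfsd/threads ]; then
    cat /proc/fs/nfsd/threads
else
    echo "nfsd not running (or not Linux)"
fi

# To change it at runtime:        rpc.nfsd 32
# To make it persistent, set threads= under [nfsd] in /etc/nfs.conf
# (older Debian-style setups used RPCNFSDCOUNT in /etc/default/nfs-kernel-server),
# then restart the NFS server service.
```

If all the threads are busy servicing a find, new requests queue behind them, which would look a lot like the lockup described.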
I do understand that a build could access a lot of files relatively quickly (and possibly in parallel too), and that would lock the thing up.
BTW, the client locks should be interruptible (unless a non-interruptible option is used), allowing them to continue, though that would abort any builds going on.
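For reference, the interruptibility being described maps to the classic Linux NFS mount options (server and export names here are placeholders):

```shell
# Blocked I/O on a hard mount can be killed by the user:
#   mount -o hard,intr   server:/export /mnt
# Waits cannot be interrupted at all:
#   mount -o hard,nointr server:/export /mnt
```

Note that on newer Linux kernels (2.6.25 and later) intr/nointr are deprecated and ignored; a fatal signal such as SIGKILL can always interrupt a hung NFS wait, so the non-interruptible case mainly applies to older clients.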