In reaction to an SC_ACCEPTED (HTTP 202) response, new HTTP clients and streams are continuously created without being closed. The trusted HTTP client handling should be made more robust in order to avoid this kind of resource leak.
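A minimal sketch of the fix being suggested: close each client (and its streams) before retrying on SC_ACCEPTED, e.g. with try-with-resources. The class and method names here are hypothetical; a `FakeClient` stands in for the real HTTP client (such as Apache's `CloseableHttpClient`) so the sketch is self-contained.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TrustedClientPollSketch {
    static final int SC_ACCEPTED = 202;
    static final int SC_OK = 200;

    // Stand-in for an HTTP client; real code would use a CloseableHttpClient.
    static class FakeClient implements AutoCloseable {
        static final AtomicInteger open = new AtomicInteger();
        FakeClient() { open.incrementAndGet(); }
        // Simulate a server that answers 202 three times, then 200.
        int execute(int attempt) { return attempt < 3 ? SC_ACCEPTED : SC_OK; }
        @Override public void close() { open.decrementAndGet(); }
    }

    /** Polls until the server stops answering 202; each client is closed before retrying. */
    static int pollUntilDone() {
        for (int attempt = 0; ; attempt++) {
            try (FakeClient client = new FakeClient()) { // closed even on SC_ACCEPTED
                int status = client.execute(attempt);
                if (status != SC_ACCEPTED) {
                    return status;
                }
            }
            // The reported bug: a new client was created on each retry
            // without the previous one ever being closed.
        }
    }

    public static void main(String[] args) {
        System.out.println("final status: " + pollUntilDone());
        System.out.println("clients still open: " + FakeClient.open.get());
    }
}
```

The point is that the close happens on every loop iteration, including the SC_ACCEPTED case, so repeated polling no longer accumulates open clients.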
We have been facing the following problem in distributed installations multiple times: a file is written to the workspace, the write call returns, and then another service tries to access the newly written file but gets a "not found" error. Some time later, the file appears.
Since similar issues can be observed in other projects such as Sakai, we assume this is somehow caused by the underlying NFS, which has latency problems due to its architecture.
The idea is to ask the working file repository whether it can see the file after it has been written. This approach is not completely safe, but it is a first step; strictly speaking, all workspaces on all nodes would have to be asked whether they can see the modification.
Which issues? No one can review this unless they know what it is fixing.
The NFS latency issues should be solved at the NFS level. Specifically, there are two mount options that prevent NFS from caching the existence or absence of a file. Disabling these caches will increase the traffic, but it avoids false positives and negatives. Those are the ones that just came to my mind; NFS has many options to fine-tune the trade-off between file integrity and access time.
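For illustration, two client-side mount options from the Linux nfs(5) man page match this description: `lookupcache` controls caching of directory lookups (i.e. whether a file exists), and `actimeo`/`noac` control attribute caching. The server name and mount point below are made up; this is a sketch, not the project's actual configuration.

```shell
# Hypothetical /etc/fstab entry; option names are real Linux NFS mount options.
# lookupcache=none - do not cache the existence or absence of directory entries
# actimeo=0        - do not cache file attributes (similar in effect to noac)
nfsserver:/export/shared  /mnt/shared  nfs  lookupcache=none,actimeo=0  0  0
```

Both options trade extra round-trips to the server for up-to-date metadata, which is exactly the integrity-versus-access-time trade-off mentioned above.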
Not that the Workspace / WorkingFileRepository doesn't need refactoring, but not for this specific reason.
Merged PR 605 as 5983852.