Steps to reproduce:
1. Ingest a 2-3 hour screen recording containing many slides (>240); with a "standard" workflow this creates the same number of player slides
2. Wait for processing to reach the "distribute-downloads" operation
(Tested with a 5.5 GB media.zip containing Camera.mpg, Screen.mpg, and Audio.mp2; 4 cores, 6 GB memory, 3 GB Java heap)
Actual results:
The distribution suddenly stops and Java sits at 100% CPU.

Expected results:
Memory consumption does not suddenly peak (is this related to the Jetty "memory leak" error?) and the distribution finishes.
Workaround (if any):
Increasing the Java heap space at startup seems to help a bit (it absorbs some of the peaks before they become critical).
Decreasing the number of player slides detected, via the videosegmenter configuration options, was also tried (without success).
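The heap-increase workaround can be applied at startup. A minimal sketch, assuming the Matterhorn start script honors a MAVEN_OPTS-style environment variable (the variable name and the 4 GB value are illustrative, not taken from this report):

```shell
# Illustrative only: raise the JVM heap before starting Matterhorn.
# The exact variable name depends on the start script in use.
export MAVEN_OPTS="-Xms1024m -Xmx4096m -XX:MaxPermSize=256m"
```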
Josh Holtzmann wrote on this:
>Matterhorn runs all distribution jobs for a single recording/workflow in
>parallel. With 250+ objects to distribute, I could see this being a problem
>without a sufficiently large cluster of servers capable of running
>If there is no configuration option available to limit the number of
>concurrent distribution jobs, there probably should be. At least there
>should be a sensible default limit. Apparently, infinite is not a sensible
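The limit suggested in the quote above could be sketched with a bounded thread pool: instead of launching all 250+ distribution jobs at once, submit them to a fixed-size executor so only a few run concurrently. This is a hypothetical illustration, not Matterhorn code; the class and method names are invented, and the sleep merely stands in for real distribution work:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: cap concurrent distribution jobs with a fixed-size pool.
public class BoundedDistribution {

    // Runs jobCount dummy "distribution" tasks, at most maxConcurrent at a time,
    // and returns the peak number of tasks that were actually in flight.
    public static int distributeAll(int jobCount, int maxConcurrent)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(maxConcurrent);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        for (int i = 0; i < jobCount; i++) {
            pool.submit(() -> {
                int now = inFlight.incrementAndGet();
                peak.accumulateAndGet(now, Math::max);
                try {
                    Thread.sleep(10); // stand-in for the actual distribution work
                } catch (InterruptedException ignored) {
                    Thread.currentThread().interrupt();
                }
                inFlight.decrementAndGet();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return peak.get();
    }

    public static void main(String[] args) throws Exception {
        // 250 jobs, but never more than 4 running at once.
        System.out.println("peak concurrency: " + distributeAll(250, 4));
    }
}
```

With a cap like this, memory pressure grows with the limit rather than with the number of slides in the recording.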
I don't know enough to say for sure, but this looks like a duplicate of http://opencast.jira.com/browse/MH-8205, since memory consumption rises above the limits and Java stays busy at 100% after the call to distribution.
This was a big pain.
It is (and was) some sort of duplicate of the "memory leak" issue.
Anyway, it is resolved with MH 1.3.1, thanks a lot.