For cases like this one I'd recommend using dataflow tasks started from within a sequential for loop over a sorted collection of "features", instead of parallel collections:

```groovy
PGroup group = ...
for (f in features) {
    group.task { runtest(f) }
}
```

Note that the loop variable `f` (not the closure's implicit `it`, which would be null inside the task body) must be passed to `runtest`. This would guarantee the startup order that you intend....
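The same idea can be sketched outside GPars as well: submit work from a plain sequential loop so tasks are queued, and hence started, in exactly the order of the sorted collection. This is a rough Java analogue using an `ExecutorService` (single-threaded here, purely to keep the demonstration deterministic); `runAll`, `runtest`, and the feature names are illustrative placeholders, not part of the original answer:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OrderedStartup {
    // Submitting from a sequential loop enqueues tasks in collection order,
    // so they also begin executing in that order.
    static List<String> runAll(List<String> features) throws InterruptedException {
        List<String> started = new CopyOnWriteArrayList<>();
        ExecutorService group = Executors.newSingleThreadExecutor();
        for (String f : features) {
            group.submit(() -> started.add(f));  // stands in for runtest(f)
        }
        group.shutdown();
        group.awaitTermination(5, TimeUnit.SECONDS);
        return started;
    }

    public static void main(String[] args) throws InterruptedException {
        // Prints the features in submission order: [featA, featB, featC]
        System.out.println(runAll(List.of("featA", "featB", "featC")));
    }
}
```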
multithreading,groovy,actor,gpars
To make it run as you expect, there are a few changes to make:

```groovy
import groovyx.gpars.group.DefaultPGroup
import groovyx.gpars.scheduler.DefaultPool

def poolGroup = new DefaultPGroup(new DefaultPool(true, 5))

def closure = {
    when { Integer msg ->
        println("I'm number ${msg} on thread ${Thread.currentThread().name}")
        Thread.sleep(1000)
        stop()
    }
}

def integers = [1, 2, 3, 4, 5,...
```
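For readers without GPars at hand, the pool setup above can be approximated in plain Java. This is only a sketch under the assumption that `DefaultPool(true, 5)` behaves like a fixed pool of 5 daemon threads; the class and method names here are invented for illustration:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    // Rough analogue of new DefaultPool(true, 5): 5 daemon threads.
    static Set<String> threadNames(List<Integer> messages) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(5, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);  // mirrors the 'true' daemon flag
            return t;
        });
        Set<String> names = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(messages.size());
        for (int msg : messages) {
            pool.submit(() -> {
                names.add(Thread.currentThread().getName());
                System.out.println("I'm number " + msg + " on thread "
                        + Thread.currentThread().getName());
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return names;
    }

    public static void main(String[] args) throws InterruptedException {
        // Up to 5 messages run concurrently, each on its own pool thread.
        System.out.println(threadNames(List.of(1, 2, 3, 4, 5)).size() + " distinct threads");
    }
}
```

Because a `ThreadPoolExecutor` spawns a new worker for each task until the core size is reached, the five messages land on five distinct threads, much like the five actors above.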
Here is the current solution we have to our issue. It should be noted that we followed this route due to our requirements:

- Work is grouped by some context
- Work within a given context is ordered
- Work within a given context is synchronous
- Additional work for a context should execute...
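Those requirements describe the classic "serial executor per key" pattern: one single-threaded lane per context, so work within a context runs synchronously in submission order while different contexts proceed independently. A minimal Java sketch, assuming contexts are identified by string keys (all names here are illustrative, not the poster's actual code):

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerContextSerialExecutor {
    // One single-threaded executor ("lane") per context key.
    private final ConcurrentMap<String, ExecutorService> lanes = new ConcurrentHashMap<>();

    public Future<?> submit(String context, Runnable work) {
        ExecutorService lane = lanes.computeIfAbsent(
                context, k -> Executors.newSingleThreadExecutor());
        return lane.submit(work);  // additional work queues behind earlier work
    }

    public void shutdown() {
        lanes.values().forEach(ExecutorService::shutdown);
    }

    public static void main(String[] args) throws Exception {
        PerContextSerialExecutor exec = new PerContextSerialExecutor();
        List<Integer> seen = new CopyOnWriteArrayList<>();
        Future<?> last = null;
        for (int i = 1; i <= 3; i++) {
            int n = i;
            last = exec.submit("ctx-A", () -> seen.add(n));  // same context => ordered
        }
        last.get();
        System.out.println(seen);  // [1, 2, 3]
        exec.shutdown();
    }
}
```

Work for the same key can never overlap or reorder, which covers the grouping, ordering, and synchrony requirements in one structure.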