[Scons-dev] multi-thread tasks

James Corey jc-scons at neniam.net
Fri Sep 16 02:47:26 EDT 2016


OK, I've finished the code.  The user controls the behavior by setting a
threads_needed attribute on the Action; the value is clipped at num_jobs.
Tasks continue to run in order--no attempt is made to reorder or to
schedule smaller jobs ahead while waiting for slots to become available,
since we don't know how long they would take.  This is fair, and we can't
do better without complicated packing and prediction.  I don't think it
will be an issue in practice.
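To illustrate the policy above, here is a standalone simulation (not SCons
code) of the dispatch rule: tasks are considered strictly in order, each
task's threads_needed is clipped to [1, num_jobs], and a task that does not
fit waits rather than letting a smaller later task jump ahead.  It is
simplified in that real slots free up one task at a time, not in whole
batches; the function name and shape are illustrative only.

```python
# Simulation of in-order, thread-weighted dispatch (illustrative, not Job.py).
def schedule_weighted(threads_needed, num_jobs):
    """Return batches of task indices that can be in flight together
    without the sum of their charged slots exceeding num_jobs."""
    batches, i = [], 0
    while i < len(threads_needed):
        batch, used = [], 0
        while i < len(threads_needed):
            need = min(max(threads_needed[i], 1), num_jobs)  # clip to [1, num_jobs]
            if used + need > num_jobs:
                break              # no reordering: later tasks simply wait
            batch.append(i)
            used += need
            i += 1
        batches.append(batch)
    return batches
```

For example, with num_jobs=4, tasks needing [1, 2, 4, 1] threads batch as
[[0, 1], [2], [3]]: the 4-thread task waits for a full machine, and the
trailing 1-thread task is not moved ahead of it.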
The patch includes two tests, updated documentation, and a CHANGES.txt entry.

Pull request is https://bitbucket.org/scons/scons/pull-requests/355
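For reference, the slot accounting in the patch quoted below amounts to
charging threads_used slots on dispatch and releasing them on completion.
A standalone model of that bookkeeping (not SCons's actual Job.py; the
durations are hypothetical inputs so the timeline stays deterministic):

```python
import heapq

# Model of the jobs counter from the quoted patch: each dispatched task
# charges threads_used slots against num_jobs and releases them on finish.
def peak_slots(tasks, num_jobs):
    """tasks: list of (duration, threads_needed), dispatched strictly in
    order.  Return the maximum number of slots ever charged at once."""
    running = []            # min-heap of (finish_time, slots_used)
    now, jobs, peak = 0, 0, 0
    for duration, need in tasks:
        used = min(max(need, 1), num_jobs)   # clip to [1, num_jobs]
        while jobs + used > num_jobs:        # block until slots free up
            now, freed = heapq.heappop(running)
            jobs -= freed
        jobs += used
        peak = max(peak, jobs)
        heapq.heappush(running, (now + duration, used))
    return peak
```

With num_jobs=4, three tasks each needing 2 threads charge at most 4 slots:
the third task blocks until one of the first two finishes.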

Thanks for the feedback.

On Mon, Sep 12, 2016 at 9:22 PM, William Blevins <wblevins001 at gmail.com> wrote:
> James,
>
> As a fellow user, I think SCons could use something like this. I have had
> similar issues in the past, but I generally just let the O/S handle the
> additional threads. To my knowledge, jobs and builder/action invocations are
> 1:1.
>
> I would think that a simple property is appropriate, but there may be some
> edge cases to consider.
>
> What happens if the action requires more threads than are available? Does it
> never execute?
> How will SCons optimize executing a task that requires fewer threads versus
> waiting for X threads to become available?
>
> V/R,
> William
>
> On Mon, Sep 12, 2016 at 6:06 PM, James Corey <jc-scons at neniam.net> wrote:
>>
>> I have a need to allow some external process actions to be scheduled which
>> themselves run with multiple threads, and I would like to adjust SCons
>> to take this into account when running jobs in parallel (e.g. num_jobs=4).
>>
>> Initial tests suggest that the following crude hack is effective at
>> limiting
>> the load:
>>
>> diff -r 6b54eedc08ac src/engine/SCons/Job.py
>> --- a/src/engine/SCons/Job.py   Sun Mar 02 13:52:56 2014 -0500
>> +++ b/src/engine/SCons/Job.py   Mon Sep 12 14:39:50 2016 -0700
>> @@ -391,9 +391,14 @@
>>                          task.postprocess()
>>                      else:
>>                          if task.needs_execute():
>> +                            try:
>> +                                nthreads = task.node.threads_needed
>> +                            except (NameError, AttributeError):
>> +                                nthreads = 1
>> +                            task.threads_used = nthreads
>>                              # dispatch task
>>                              self.tp.put(task)
>> -                            jobs = jobs + 1
>> +                            jobs = jobs + task.threads_used
>>                          else:
>>                              task.executed()
>>                              task.postprocess()
>> @@ -404,7 +409,7 @@
>>                  # back and put the next batch of tasks on the queue.
>>                  while True:
>>                      task, ok = self.tp.get()
>> -                    jobs = jobs - 1
>> +                    jobs = jobs - task.threads_used
>>
>>                      if ok:
>>                          task.executed()
>>
>> I would like to clean this up, limit threads_used to the value of
>> maxjobs, and optionally wait if necessary for the given number of job
>> slots to become available, as well as establish a proper method for an
>> action to set this value.  Naturally I'd base it on the tip and write
>> tests, with the intention that the quality of the patch be sufficient
>> for upstream consideration, should there be interest.
>>
>> Since I am a beginner with SCons, I ask the following specific
>> questions:
>>
>> * Would this type of functionality be welcome?
>> * Should I generalize it to allow for other resource metering (e.g.
>> memory)?
>> * Is a simple attribute appropriate, or should it be a callback?
>> * Is it more proper to put such on the node or the task?
>> * How do the tasks of Job.py relate to the Actions of the SConscript? 1:1?
>> * How should a user override the default of 1 thread needed, from a
>> SConscript?
>> * Any other advice?
>>
>> Assuming there is any interest, should I put together a speculative
>> patch to a fork and submit a pull request?  Or should I try to design it
>> via discussion here first, and then submit a pull request?
>>
>> Thanks in advance.
>> _______________________________________________
>> Scons-dev mailing list
>> Scons-dev at scons.org
>> https://pairlist2.pair.net/mailman/listinfo/scons-dev
>
>
>
>