# suitescript
c
Does anyone have a suggestion on how to queue up a scheduled task? I recently learned that if a scheduled script is running, you cannot queue up another instance via N/task. Use case: when one of our custom record types is created, after submit we need to run a script in the background (execution time can exceed 1 minute, so we can't do it in the user event script itself). I set it up so the user event uses N/task to fire off a scheduled script, thinking that if another instance were already running it would queue up this second instance and run once the first completes. Instead the user gets an error that the script is already IN PROGRESS. Is there any way to queue up the task to execute once the scheduled script is available?
m
You can create a number of deployments and then submit the task without specifying a deployment; NetSuite will select a free one.
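Roughly like this, as a sketch; the script id and parameter name below are placeholders, not anything from this thread:
```typescript
import * as task from 'N/task';

// Sketch only: fire a scheduled script for one record without naming a
// deployment, so NetSuite picks any deployment of that script that is free.
// 'customscript_my_processor' and 'custscript_record_id' are made-up ids.
export function queueProcessing(recordId: number): string {
  const scheduledTask = task.create({
    taskType: task.TaskType.SCHEDULED_SCRIPT,
    scriptId: 'customscript_my_processor',
    // deploymentId intentionally omitted
    params: { custscript_record_id: recordId }
  });
  return scheduledTask.submit(); // throws if every deployment is busy
}
```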
c
True, but that still seems like a poor plan for scalability. For example, a CSV upload may create 100 records; that would require 100 deployments to handle queuing up the tasks.
t
make the creation of the deployment dynamic. I usually handle the message returned by the call to submit the task: it will return 'NO_DEPLOYMENTS_AVAILABLE' if you don't specify the deployment id and all deployments are running. If that value is returned, do a record.copy of your initial deployment, then resubmit the task using that newly created deployment.
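if I follow, that retry would look something like this sketch; the script id, the template deployment internal id (123), the scriptid prefix, and the ability to copy 'scriptdeployment' records via N/record are all assumptions to verify in your own account:
```typescript
import * as task from 'N/task';
import * as record from 'N/record';

// Rough sketch of the retry-with-a-new-deployment idea described above.
// All ids and field names here are placeholders.
function submitWithFallback(recordId: number): string {
  const buildTask = (deploymentId?: string) => task.create({
    taskType: task.TaskType.SCHEDULED_SCRIPT,
    scriptId: 'customscript_my_processor',
    deploymentId,
    params: { custscript_record_id: recordId }
  });

  try {
    return buildTask().submit(); // let NetSuite pick any free deployment
  } catch (e: any) {
    if (e.name !== 'NO_DEPLOYMENTS_AVAILABLE') throw e;
    // copy a "template" deployment and give the copy a unique scriptid
    const copy = record.copy({ type: 'scriptdeployment', id: 123 });
    const newDeployId = 'customdeploy_proc_' + Date.now();
    copy.setValue({ fieldId: 'scriptid', value: newDeployId });
    copy.save();
    return buildTask(newDeployId).submit();
  }
}
```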
s
maybe process that CSV via M/R script instead and adjust concurrency as needed.
m
Another option is to use a workflow with a workflow action script instead of a scheduled script. Your user event can trigger the WF to run asynchronously which will automatically queue executions
b
you can use N/task in the scheduled script to put itself back in the queue
make sure to have a guard to not reschedule infinitely if there are no more records to process
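a minimal sketch of that guard, assuming some "is there work left" check you supply yourself:
```typescript
import * as task from 'N/task';
import * as runtime from 'N/runtime';
import * as log from 'N/log';

// Sketch: resubmit the currently running script/deployment only when work
// remains. hasUnprocessedRecords is a placeholder for whatever check fits
// your data (e.g. the eligibility search still returning rows).
export function rescheduleIfNeeded(hasUnprocessedRecords: () => boolean): void {
  if (!hasUnprocessedRecords()) return; // guard against rescheduling forever
  const script = runtime.getCurrentScript();
  const taskId = task.create({
    taskType: task.TaskType.SCHEDULED_SCRIPT,
    scriptId: script.id,
    deploymentId: script.deploymentId
  }).submit();
  log.audit('rescheduled self', taskId);
}
```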
s
FWIW what @battk recommends is so common for us that we have 'auto rescheduling itself' built into our standard scheduled script template.
c
So when you schedule the task, do you add a record to some kind of intermediary processing table? How would the scheduled script know that it needs to kick off another instance and the parameters?
s
our 80% use case is one where a search provides the data for a scheduled script to process. The script is done when there are no search results left, so the only requirement for subsequent runs is that results fall off the list after they've been processed, i.e. the search criteria should filter out records already processed.
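a tiny illustration of that kind of eligibility search; the record type and the "processed" checkbox are made-up names:
```typescript
import * as search from 'N/search';

// Illustrative only: the criteria exclude anything already handled, so each
// rerun naturally drains whatever is left in the queue.
function getEligibleRecords(): search.Search {
  return search.create({
    type: 'customrecord_my_queue',                    // placeholder custom record
    filters: [['custrecord_processed', 'is', 'F']],   // placeholder checkbox field
    columns: ['internalid', 'name']
  });
}
```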
for reference, most of our scheduled scripts are as simple as this:
```typescript
/**
 * main script entrypoint
 */
export function execute () {
  getEligibleTransactions() // start with search of arbitrary result length
    .map(nsSearchResult2obj<SearchResult>()) // convert results into nice strong-typed objects
    .takeWhile(autoReschedule()) // automatically reschedule the current task if governance is exhausted
    .forEach(result => {
      const finalResult = _.attempt(X.mainLogic, result) // execute the main logic for each result, capturing exceptions
      if (_.isError(finalResult)) {
        log.warn('unexpected failure while processing', { recordid: result!.id, finalResult })
      } else log.info('final result', finalResult)
    })
}
```
whoah, looks like slack has taken a step backwards on code formatting
this was more clicks to add to slack than I'd like.
c
Interesting. I'm not sure that would work for my use case, as I need it to run specifically on record create, right after the record is created. I suppose I could wire it up to check at the end whether any other records have been created and remain unprocessed. For the example you gave I'd normally use a map reduce script, but that's a pretty clever solution.
s
Aye, the simplicity of our scheduled script template negates some of the benefits of M/R. We typically only use M/R when it's the right fit (e.g. embarrassingly parallel processing friendly and high volume)
with the template, often we only implement a simple `mainLogic` function and we're done. It's hard to argue for a multi-stage M/R script by comparison
sorry for the digression.
b
probably want to expand your definition of "immediately after"
if you receive 100 records to process at the same time and each takes a minute, it's going to take 100 minutes to process them all if you use a scheduled script with 1 processor
you should have at least 2 processors, so you can probably get that down to 50 minutes of processing if you figure out how to partition your records
it tends to be more brainless to use a map/reduce at that point
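for completeness, a bare-bones map/reduce sketch of the same work (names are placeholders; the parallelism comes from the concurrency limit on the deployment record, not from the code):
```typescript
import * as search from 'N/search';
import * as log from 'N/log';

// Placeholder record type and field names.
export function getInputData() {
  return search.create({
    type: 'customrecord_my_queue',
    filters: [['custrecord_processed', 'is', 'F']],
    columns: ['internalid']
  });
}

export function map(ctx: { value: string }) {
  const recordId = JSON.parse(ctx.value).id;
  // the ~1 minute of per-record work goes here; NetSuite fans map() calls
  // out across however many processors the deployment allows
  log.debug('processing', recordId);
}

export function summarize(summary: any) {
  summary.mapSummary.errors.iterator().each((key: string, err: string) => {
    log.error('map failed for record ' + key, err);
    return true; // keep iterating over remaining errors
  });
}
```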