# suitescript
e
The `map` and `reduce` stages are parallelized across all the processors you set on the Deployment. Your `map` calls must be independent of other `map` calls.
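To make the independence requirement concrete, here is a hedged sketch in plain JavaScript (no NetSuite modules, and the driver is a hypothetical stand-in for the framework's fan-out): `getInputData` emits rows, and each `map` invocation receives exactly one row via `context.value`, possibly concurrently with the others, with no visibility into what other `map` calls wrote.

```javascript
// Hypothetical sketch of the Map/Reduce shape. A real SuiteScript 2.x script
// would wrap these in define(['N/search', ...]) and return them as entry points.
function getInputData() {
  // In a real script this would return an N/search.Search or array of results.
  return [{ id: 101, qty: 5 }, { id: 102, qty: 3 }];
}

function map(context) {
  // context.value is the JSON-serialized row for this invocation only;
  // nothing written by another map call is visible here.
  var row = JSON.parse(context.value);
  context.write(String(row.id), row.qty);
}

// Minimal driver simulating the framework's fan-out, for illustration only.
function runMapStage() {
  var written = {};
  getInputData().forEach(function (row) {
    map({
      value: JSON.stringify(row),
      write: function (key, value) { written[key] = value; }
    });
  });
  return written;
}
```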
a
When you say independent: my `map` calls aren't dependent on each other per se, but they do depend on updates the other ones make to the database. So would this be an issue?
e
Depends on how you've structured the `map`, but yes, it would be an issue you have to design for.
If you are, for instance, just reading data from a search result retrieved by `getInputData` in your `map`, that data would not reflect database updates made by other `map` invocations. But if you were instead loading records or running a new search inside the `map`, you would see the updated data. Either way, the `map` stages are still running in parallel.
If you need every single invocation to be sequential, then M/R is probably not the right solution.
s
note that a map reduce script isn't the best tool for every job
a
So what would be an alternative? Mass Update? Essentially, what I am trying to do is find all non-built work orders and see whether they are still unbuildable or are now buildable. I get all the unbuilt work order ids in `getInputData`, and in the `map` I load each work order dynamically by the id from the search and check, based on the quantities available, whether it is now buildable. My concern is that if they run in parallel, then between the time the data is read and the time a work order is actually saved, the next work order might use the same items, and together they wouldn't both be buildable.
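The race described here can be illustrated with a hedged, plain-JavaScript sketch (the helper name and data shapes are hypothetical, not from the original script): two work orders share a component, and availability only covers one. Serial processing reserves quantity between checks, so the second check sees the reduced availability; two parallel checks would each read the same starting availability and both would appear buildable.

```javascript
// Hypothetical serial buildability check: each work order's check sees the
// quantity reserved by the ones processed before it.
function processSerially(workOrders, available) {
  var buildable = [];
  workOrders.forEach(function (wo) {
    var remaining = available[wo.item] || 0;
    if (remaining >= wo.qty) {
      buildable.push(wo.id);
      available[wo.item] = remaining - wo.qty; // reserve before the next check
    }
  });
  return buildable;
}
```

With 10 units of a component on hand and two work orders each needing 6, a serial pass marks only the first as buildable; parallel `map` calls could mark both.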
e
Scheduled Script or Mass Update.
a
I thought Scheduled Scripts were obsolete now that Map/Reduce exists? And that would solve this issue?
e
They are not obsolete, no. M/R are just an additional tool.
Scheduled Script would do the same logic in serial instead of parallel.
a
Interesting. I suppose I could do all the work manually in the `getInputData` stage, as long as the result set isn't too large for the governance limit.
If I were to use a map reduce
e
If you're going to leave it in a M/R, you can just reduce it to use 1 processor, but then there is no point in using the M/R
a
Except that I already wrote the code in a map reduce....
By one processor, do you mean the buffer size?
e
No, the concurrency limit
a
So if I set that to 1, then it would solve this issue, it just might take a long time?
e
> Except that I already wrote the code in a map reduce....
A good case for always maintaining separate modules for your business logic and keeping your entry points slim
Correct, if your M/R only has 1 queue available, it cannot process anything in parallel
a
Okay. Thank you
e
I just want to add that I do see a point in using a M/R with a single processor, since it has a lot more governance than a Scheduled Script. @erictgrubaugh, out of curiosity, how do you handle the governance limit in a Scheduled Script in 2.0, since they cannot be yielded as in 1.0?
e
You have to keep track of the records you've processed - perhaps with a script parameter or a value on the record - monitor the governance usage with `N/runtime`, then reschedule whenever appropriate.
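A hedged sketch of that reschedule pattern, with the NetSuite-specific pieces injected so the control flow can run standalone (in a real Scheduled Script, `getRemainingUsage` would be `runtime.getCurrentScript().getRemainingUsage()` from `N/runtime`, and `reschedule` would submit an `N/task` scheduled-script task; the function and parameter names here are hypothetical):

```javascript
// Process ids until governance runs low, then hand the remainder to the
// next run. Returns the ids handled in this run.
function processWithGovernance(ids, processOne, getRemainingUsage, reschedule, threshold) {
  for (var i = 0; i < ids.length; i++) {
    if (getRemainingUsage() < threshold) {
      reschedule(ids.slice(i)); // e.g. stash in a script parameter, then task.submit()
      return ids.slice(0, i);
    }
    processOne(ids[i]);
  }
  return ids;
}
```

The threshold should be at least the worst-case governance cost of processing one record, so the script never starts a record it cannot finish.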
s
we have a pattern for scheduled scripts which is about 10 lines of code total and handles almost all use cases (including error handling, automatic governance tracking and rescheduling, etc.)
e
I see, thanks to you both for sharing your approaches. Since I switched to 2.0, I have only used a Scheduled Script for testing purposes; the M/R is my go-to approach.