Is there a way for a map reduce script to redeploy...
# suitescript
j
Is there a way for a map reduce script to redeploy once it hits its limit? Or based on line count queue a second deployment?
a
not really? due to the concurrency model, the map and reduce stages run across multiple threads, and they didn't ALL hit a limit, so you can't/shouldn't terminate and retrigger. it's NOT a scheduled script
depending what you're actually doing, if you're only using map and not reduce, you can skip the map and do things in the reduce instead. each reduce instance has 5k governance vs. 1k in the map.
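For reference, the Map/Reduce entry-point shape being described looks roughly like this. This is a sketch only: the search id and field work are hypothetical, and the one-line `define` stub exists solely so the module shape can be exercised outside NetSuite (a real script file would use NetSuite's own AMD `define`):

```javascript
// Sketch of a SuiteScript 2.x Map/Reduce that keeps the heavy record
// work in reduce (5k units per invocation) instead of map (1k units)
// or summarize. The `define` stub below is only so this runs outside
// NetSuite; it is NOT part of a real deployment.
function define(deps, factory) { return factory(); }

var script = define(['N/search', 'N/record'], function (search, record) {
  return {
    getInputData: function () {
      // Return the saved search; each result becomes one map invocation.
      // e.g. return search.load({ id: 'customsearch_example' }); // hypothetical id
    },
    map: function (context) {
      // ~1k units here: do cheap parsing only, then group work by key
      // so the record writes happen in reduce.
      var result = JSON.parse(context.value);
      context.write({ key: result.id, value: context.value });
    },
    reduce: function (context) {
      // Up to 5k units per invocation: load/modify/save records here.
      // context.values.forEach(function (v) { /* record work */ });
    },
    summarize: function (summary) {
      // Logging and error handling only; no record work.
    }
  };
});
```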
j
I have the summarize stage loading records and saving them, we hit the limit in that stage.
The getInputData stage gets records from a saved search then loops through a sublist in them to get a list of all records to be modified. The map stage just parses the data and passes it to the summarize to be worked on. Is there a better approach for this?
a
.... yeah, you shouldn't be doing any work in the summarize stage, really
j
The map/reduce stages have a much lower limit; that's why I put the work in the summarize stage
a
yeah each search result is processed in its OWN map stage
so each search result gets 1k governance
j
Ohhh, that is where I misunderstood
a
👍
yeah, so a properly implemented MR rarely has governance issues
(it's actually awesome)
this 1
j
Got it, I will rewrite the script to utilize the map stage properly, thank you I appreciate the insight.
👍 1
s
NS M/R scripts are far from awesome
a
... compared to scheduled scripts for large amounts of data processing. idk I think they're pretty awesome compared to what was available in SS1.0
s
that's actually a good point - I often see people using MR scripts where a Scheduled script would have been much simpler and perform just as well.
I fear there is this misguided sense in the community that MR is intended to replace scheduled scripts.
a
I'm guilty of that, I don't really write scheduled scripts anymore. I'd argue MRs ARE always more performant though, just that it's often not really required to get that extra performance in many use cases... but you never know if what you're handling today is going to be the same in 12 months, so just make it an MR from the get-go
📢 1
💯 1
s
that is, unfortunate.
a
I'm not sure why that is though? I don't feel like MRs are particularly complicated, especially when you're using them for what's essentially a scheduled script use case; you're just leveraging the concurrency model for performance and future-proofing
j
Updated the script and it works great, we can process over 2000 records now (I tested with 5000 and it worked fine). Is there a way to speed it up somehow? It took 74 minutes for 5000 records.
👍 1
a
umm well you can throw more concurrency at the problem by buying more SuiteCloud Plus licenses from NetSuite, or you can try to eke out better performance from your code. reducing the # of db reads/writes is almost always the best place to start
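On the "reduce the # of db reads/writes" point: one common trick is to group the map-stage output by record id so each record gets a single write in reduce, rather than one write per search row. A plain-JS sketch of that grouping (the record/field ids here are hypothetical):

```javascript
// Groups map-stage output rows by record id so the reduce stage can
// issue one write (e.g. one record.submitFields call) per record
// instead of one per row. Plain JavaScript; no NetSuite modules are
// needed for the grouping itself.
function groupByRecordId(rows) {
  // rows: [{ recId: '12', fieldId: 'custrecord_x', value: 1 }, ...]
  var byId = {};
  rows.forEach(function (r) {
    byId[r.recId] = byId[r.recId] || {};
    byId[r.recId][r.fieldId] = r.value;
  });
  return byId; // { '12': { custrecord_x: 1, ... }, ... }
}
```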
j
Is it better to have a userevent script run on a custom record being modified to modify an item record or just have the m/r script modify both the custom record and item record?
Right now I have a UE script that runs when the custom record is modified by the M/R script, then changes some fields on the item record related to the custom record.
a
what you're doing should be the more performant way of doing it, the UE aftersubmit actions shouldn't be slowing down the MR at all, so that's good... they ARE in the aftersubmit, right?
j
Yes, it is an afterSubmit script. So afterSubmit for sure doesn't prolong the process that triggers it? I have always wondered that.
a
nope, the submit is happening in the MR, and then it's done; the afterSubmit is triggered but the MR isn't waiting on it
j
Great to hear, thank you.
s
I think after submit is executed before control is handed back to the calling script.
also, if 'after submit' needs to load the edited record then YES it slows the system down because you have to reload the record in aftersubmit (contrast with beforesubmit which already has the record reference from the context)
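To illustrate that beforeSubmit/afterSubmit difference, here's a sketch of a user event script shape. The `define` stub is only so this parses and runs outside NetSuite, and the field ids are made up:

```javascript
// Sketch of a SuiteScript 2.x user event script. In beforeSubmit the
// record being saved is already in memory via context.newRecord; in
// afterSubmit, editing it means an extra record.load (more time and
// governance). The `define` stub is only for running outside NetSuite.
function define(deps, factory) { return factory(); }

var ue = define(['N/record'], function (record) {
  return {
    beforeSubmit: function (context) {
      // No extra load: the in-flight record is already on the context.
      var rec = context.newRecord;
      // rec.setValue({ fieldId: 'custrecord_flag', value: true }); // hypothetical field
      return rec;
    },
    afterSubmit: function (context) {
      // To edit the saved record here you'd reload it first, e.g.:
      // var rec = record.load({ type: context.newRecord.type,
      //                         id: context.newRecord.id });
    }
  };
});
```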
j
I tried 2 concurrency rather than 1 and did not notice any performance improvement.
a
... generally you would expect to take 1/2 the time
unless there's another process that's using the other concurrency? so even though you set it to 2, it can't actually utilize both?
j
I do not think another process was running, but I will have to test again and double check.