How can we stop a running map/reduce?
# suitescript
e
My understanding is that you cannot. It should time out after 60 minutes, as long as that was left at the default.
a
Thanks
s
unfortunately it doesn't necessarily time out after 60 minutes either. 😞
e
@stalbert Do you have to get NetSuite support involved at that point? What has been your experience?
s
You can delete and re-create/deploy; that usually kills it, I believe.
👍 1
My advice would be to not run an MR that has a chance to go on forever.
Build them with small data inputs, and put the larger input in place once you are confident the script is doing what it should.
s
I've tried deleting - NS will not allow deleting the deployment or record in my case since it was already executing.
k
Remove everything from the map/reduce entry points in the script and undeploy it. It will complete soon after.
That's how I deal with it.
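A minimal sketch of that "empty the entry points" approach, assuming SuiteScript 2.1. (The `define()` shim at the top is only so the snippet loads outside NetSuite, where the platform supplies `define()`; the real file you upload would start at the JSDoc block.)

```javascript
// Shim: NetSuite normally supplies define(); this stub lets the file load standalone.
if (typeof define === 'undefined') {
    globalThis.define = (deps, factory) => { globalThis.__mrScript = factory(); };
}

/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define([], () => ({
    // Every entry point is a no-op. When NetSuite restarts the runaway
    // deployment with this file in place, each stage finishes almost
    // immediately, after which the deployment can be undeployed/deleted.
    getInputData: () => [],   // empty input: nothing is ever handed to map()
    map: () => {},
    reduce: () => {},
    summarize: () => {}
}));
```

Uploading this over the runaway script's file keeps the script record and deployment intact (which NetSuite may refuse to let you delete mid-run), while guaranteeing the next restart has nothing to do.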
s
This sort of thing is part of why I still like scheduled scripts 🙂
@eblackey I waited several hours and it eventually gave up. In my case it was just a search that was taking too long:
getInputData()
ran for 3 or 4 hours.
👍 1
c
"runaway MRs" are the worst.
If it's governance-heavy, you can upload a version that does nothing; when the script restarts, it'll stop almost instantly.
💯 1
s
undeploy I think results in similar stopping behavior (once it decides to restart/check)
c
I never tried that one, so I'll have to take your word on it. Sounds reasonable.
s
From experience I have found that if you exceed 40k - 50k search results, the getInputData time starts to increase disproportionately. 60k results might take twice as long as 40k, for example. I try to limit mine to around 25-30k max per execution and just run the M/R multiple times if needed.
It also helps keep the getInputData step from running for hours.
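One way to sketch that batching idea, as a plain helper with hypothetical names (the 25k cap comes from the advice above, and the `N/search` usage mentioned in the trailing comment is an assumption about how you would feed it):

```javascript
// Hypothetical cap on how many rows a single M/R execution takes on.
// ~25k per run per the advice above; tune the figure for your account.
const MAX_ROWS = 25000;

function capResults(results, max = MAX_ROWS) {
    // Hand back at most `max` rows; the remainder is left for the next
    // scheduled run of the same map/reduce to pick up.
    return results.slice(0, max);
}

// Inside getInputData() you might collect rows page by page via
// search.runPaged() (N/search module, not shown) and then
// return capResults(allRows);
```

The trade-off is that the script must be re-run (or scheduled on a loop) until the backlog is drained, but each individual execution stays well clear of the range where getInputData times balloon.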
c
getInputData is single-threaded, so that makes sense. You don't get the real benefits until it starts processing in parallel.
s
what I think is strange is it seems like getInputData() must pull ALL the search results before it sends ANY to map(). I presume that's a limitation of the underlying technology?
s
I think it does get all the data at once, and that's probably why performance falls off sharply when the result set gets too large. It's also capped at 200MB in size.
c
Yes, it gets all the data then moves on to map/reduce