# suitescript
r
Can anyone suggest how to handle search results in a Map/Reduce script when I have a large amount of data? Thanks in advance
b
you probably need to describe more
for example, a large number of search results from the search used for the getInputData stage is not a problem
a large number of search results in map probably means a redesign
r
I use the getInputData stage, and after that I use the map stage
b
and the search occurs where?
r
In getInputData stage
r
No, I execute the search in the getInputData stage and pass the results to the map
b
then you have 10,000 governance units to do it
if that's not enough, then you are onto a redesign
r
How do I redesign? Can you provide some samples?
b
the only information I've been given is that you need to run a search in getInputData and can't simply return the search
my personal guess is that you need to manipulate the search results
in which case my recommendation is to do the manipulation in the map stage and do the actual work in the reduce
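That split (return the search from getInputData, manipulate in map, do the real work in reduce) might look like the sketch below. This is a rough SuiteScript 2.1 Map/Reduce outline, not a tested implementation; the record type, filters, and field names are hypothetical, and it only runs inside NetSuite:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/search'], (search) => {
    // getInputData: return the search *definition* only. The framework
    // executes it and pages through all results for you.
    const getInputData = () => search.create({
        type: search.Type.SALES_ORDER,        // hypothetical record type
        filters: [['mainline', 'is', 'T']],   // hypothetical filter
        columns: ['entity', 'total']          // hypothetical columns
    });

    // map: light manipulation only -- parse one result, reshape it,
    // and key it for reduce. One invocation per search result.
    const map = (context) => {
        const result = JSON.parse(context.value);
        context.write({
            key: result.values.entity.value,  // group by customer
            value: result.values.total
        });
    };

    // reduce: do the actual (expensive) work here, once per key,
    // with all of that key's values collected into context.values.
    const reduce = (context) => {
        const grandTotal = context.values
            .reduce((sum, v) => sum + parseFloat(v), 0);
        log.audit(`Customer ${context.key}`, `total: ${grandTotal}`);
    };

    return { getInputData, map, reduce };
});
```

The point of the split is governance: each map and reduce invocation gets its own usage allowance, so expensive record operations in reduce don't count against a single shared budget the way they would if crammed into getInputData.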
r
Thank u so much @battk
d
When working with large datasets, always use the return search object reference method in getInputData in the Netsuite docs battk linked. I've had another developer's script which ran the search within getInputData run away and keep throwing errors in a permanent loop. NS support couldn't stop it and I spent a very stressful weekend trying to stop it blocking script queues needed for other critical business processes.
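The contrast Dominic is drawing, sketched as two getInputData variants (the saved search id is a hypothetical placeholder, and this only runs inside NetSuite):

```javascript
// Preferred: return the search reference. NetSuite runs and pages the
// search itself after getInputData completes, so this stage stays cheap.
const getInputData = () => search.load({
    id: 'customsearch_my_orders'   // hypothetical saved search id
});

// Risky: materializing the results inside getInputData. This burns
// governance in this stage, and each() stops after 4,000 results --
// and a failure here can leave the deployment retrying in a loop.
const getInputDataRisky = () => {
    const results = [];
    search.load({ id: 'customsearch_my_orders' })
        .run()
        .each((result) => {
            results.push(result);
            return true;           // keep iterating (up to the 4,000 cap)
        });
    return results;
};
```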
r
@Dominic B you couldn't delete the deployment(s)?
a
@Dominic B For next time: • Delete Script Record. • Delete Script Deployment. • Delete Script File. One of those will work and is going to stop the Map Reduce.
d
@alien4u all of those, plus introduction of a deliberate error into the script on Netsuite's recommendation. If it fails to load a search in the getInputData stage, it seems to keep restarting without re-fetching the script source or deployment info. I suspect the 10k governance limit eventually stopped it, but when all you're doing in getInputData is consuming 5 units, on a search which takes minutes to run as a persist result, that equates to a long time before it fails. In our case, it was something like 3+ days.