# suitescript
a
Is there any limitation in a Sandbox instance with respect to how many lines you can add to a Journal Entry using a Map/Reduce script?
I am getting a usage limit exceeded error with 7,000 lines in Sandbox, and the same thing works in PROD
e
Interesting. As per the Oracle docs, "For journal entry transactions submitted through CSV import, the limit is 10,000 lines per transaction." Same for asynchronous SOAP web services. No mention of map/reduce.
👍 1
c
What's the exact error?
a
"type":"error.SuiteScriptError","name":"SSS_USAGE_LIMIT_EXCEEDED","message":"Script Execution Usage Limit Exceeded","
c
Looks like you've consumed the governance limit.
Are you breaking each journal entry into its own key?
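For anyone following along, a minimal sketch of what "its own key" means here: each context.write in the map stage groups data under a key, and every distinct key gets its own reduce invocation with a fresh governance allowance (5,000 units per reduce invocation). The line shape and field names below are hypothetical:

```javascript
// Map stage: write each line under the journal it belongs to, so
// each journal is assembled in its own reduce invocation rather
// than all at once in a single stage.
const map = (context) => {
    const line = JSON.parse(context.value); // hypothetical line shape
    context.write({
        key: String(line.journalId), // one key per journal entry
        value: JSON.stringify(line)
    });
};
```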
a
I did not understand the question
but the script runs to create just the one JE
c
If you've designed the M/R well, you won't hit a governance limit.
Feel free to post the script.
a
yes, you are right
though it is working in PROD now
c
I bet it's close to breaking in prod too - there are always differences between SB and Prod
a
true, I will try to explain the script flow, since the script is very long
c
1. Cool - how're you getting the data to populate the journal? 2. How are you passing that data to the reduce function?
a
this is an async process: I receive the request through a RESTlet and store that request in a custom record as a file
from the RESTlet, I call the map/reduce script with the custom record ID
getInputData - reads the parameter and passes it to map
Map - loads the file and passes the processing lines to reduce
Reduce - passes them on to summarize
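Roughly, the flow described is something like the following sketch (the script parameter, record type, and field IDs are all hypothetical). Note how the actual journal creation all lands in one summarize invocation, which is the single stage that has to absorb all the work:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/file', 'N/record', 'N/runtime'], (file, record, runtime) => {

    // getInputData: hand the custom record ID (from the RESTlet) to map
    const getInputData = () => [
        runtime.getCurrentScript().getParameter({
            name: 'custscript_je_request_id' // hypothetical parameter
        })
    ];

    // map: load the JSON payload file attached to the custom record
    const map = (context) => {
        const req = record.load({
            type: 'customrecord_je_request', // hypothetical record type
            id: context.value
        });
        const payload = file.load({
            id: req.getValue({ fieldId: 'custrecord_payload_file' }) // hypothetical field
        }).getContents();
        context.write({ key: context.value, value: payload });
    };

    // reduce: forward the lines unchanged to summarize
    const reduce = (context) => {
        context.write({ key: context.key, value: context.values[0] });
    };

    // summarize: build and save the entire 7,000-line JE in one invocation
    const summarize = (summary) => {
        summary.output.iterator().each((key, value) => {
            const lines = JSON.parse(value);
            // ... record.create the journal, add every line, save once ...
            return true;
        });
    };

    return { getInputData, map, reduce, summarize };
});
```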
c
the input stage would ideally return an iterable data structure
Are you trying to create the entire 7,000-line journal in the reduce stage?
and where are you getting the journal data from?
a
In summarize
the journal data is in a file, in JSON format
c
Where is the data coming from?
A saved search?
a
It is stored in a custom record
the custom record is generated in the RESTlet,
the RESTlet actually has the payload
from the RESTlet I call the map/reduce - so this is an async process
since the data is huge I have already segregated the load using map/reduce, and it seems I have to further optimize the code
c
Without seeing the code it's hard to say
How is the JSON held in the custom record? One large text field of JSON?
a
It is stored as a file, since it is in the MBs
Do you have any similar situation where you had to create a JE with this number of lines? I also have 4 saved searches to validate some of the operations
I could maybe try to distribute the searches across different stages instead of doing all that in the summarize section
since I have the file, I guess I cannot use the getInputData stage as an iterable
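For what it's worth, having the data in a file doesn't rule out an iterable input: getInputData can load and parse the file itself and return the array, so each element becomes its own map invocation. A minimal sketch, assuming the file ID arrives as a script parameter (the parameter name is made up):

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/file', 'N/runtime'], (file, runtime) => {
    const getInputData = () => {
        const fileId = runtime.getCurrentScript().getParameter({
            name: 'custscript_je_payload_file' // hypothetical parameter
        });
        // Returning the parsed array makes the input iterable:
        // the framework feeds one element to each map invocation.
        return JSON.parse(file.load({ id: fileId }).getContents());
    };

    const map = (context) => {
        // context.value is one JSON line from the array above
        context.write({ key: 'je', value: context.value });
    };

    return { getInputData, map };
});
```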
c
Which governance-consuming function are you calling in a loop?
a
from the log it seems all the lines are being processed; while submitting / saving the record to NetSuite, it throws this error
c
M/R really excels when you can break the data down into smaller chunks - if you have to create the entire journal in one go, a scheduled script might be better assuming you can keep it under 10,000 gov points
How many times are you saving the record? That only takes about 20 points
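If it helps narrow it down, you can log remaining governance around the suspect calls with runtime.getCurrentScript().getRemainingUsage(). A quick sketch, where `journal` stands in for whatever record object the script saves:

```javascript
// inside whichever entry point does the save; runtime comes from
// the script's define(['N/runtime'], ...) and log is global
const script = runtime.getCurrentScript();
log.debug('usage before save', script.getRemainingUsage());
const jeId = journal.save(); // c's point: a save only costs ~20 units
log.debug('usage after save', script.getRemainingUsage());
```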
a
only once
c
That's not where your script is running into performance issues then.
Paste the script somewhere.
You could also just break this into separate journals - consolidated journals are nice, but I doubt it matters; it's all the same on the reports.
👍 1
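A sketch of that splitting idea: chunk the parsed lines in getInputData so each reduce invocation builds one smaller journal with its own governance allowance. The chunk size, parameter name, and line shape are all assumptions:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/file', 'N/record', 'N/runtime'], (file, record, runtime) => {
    const CHUNK = 1000; // illustrative lines-per-journal

    const getInputData = () => {
        const fileId = runtime.getCurrentScript().getParameter({
            name: 'custscript_je_payload_file' // hypothetical parameter
        });
        const lines = JSON.parse(file.load({ id: fileId }).getContents());
        const chunks = [];
        for (let i = 0; i < lines.length; i += CHUNK) {
            // caveat: each chunk must balance (debits = credits)
            // or the journal will not save
            chunks.push(lines.slice(i, i + CHUNK));
        }
        return chunks; // one element per journal to create
    };

    // key each chunk so it gets its own reduce invocation
    const map = (context) => {
        context.write({ key: context.key, value: context.value });
    };

    // build and save one smaller journal per chunk
    const reduce = (context) => {
        const lines = JSON.parse(context.values[0]);
        const je = record.create({ type: record.Type.JOURNAL_ENTRY, isDynamic: true });
        je.setValue({ fieldId: 'subsidiary', value: lines[0].subsidiary }); // assumed field
        lines.forEach((l) => {
            je.selectNewLine({ sublistId: 'line' });
            je.setCurrentSublistValue({ sublistId: 'line', fieldId: 'account', value: l.account });
            je.setCurrentSublistValue({
                sublistId: 'line',
                fieldId: l.debit ? 'debit' : 'credit',
                value: l.debit || l.credit
            });
            je.commitLine({ sublistId: 'line' });
        });
        je.save();
    };

    return { getInputData, map, reduce };
});
```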
a
thanks Craig, great input, I will explore the idea of splitting the JEs