# suitescript
m
Hi guys, how can I update 10K items fast? I am using a RESTlet and it's taking 10 seconds for 10 items. Any idea how I can get better performance?
d
If you're updating the items with default values, use a CSV import
n
If CSV doesn't work for your use case, such as when you need to apply logic, you should be doing it with a map/reduce script, likely using record.submitFields
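A minimal sketch of that map/reduce approach, assuming a hypothetical saved search (custsearch_items_to_update) that returns the items and the values to set; the record type and field ID are illustrative and would need adjusting to your data:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/record', 'N/search'], (record, search) => {

    // Feed the items to update into the map stage.
    // custsearch_items_to_update is a hypothetical saved search.
    const getInputData = () => search.load({ id: 'custsearch_items_to_update' });

    // One search result per map invocation; update without loading the record.
    const map = (context) => {
        const result = JSON.parse(context.value);
        record.submitFields({
            type: record.Type.INVENTORY_ITEM, // assumption: use your actual item type
            id: result.id,
            values: {
                // illustrative field; set whatever your search supplies
                custitem_my_field: result.values.custitem_my_field
            }
        });
    };

    return { getInputData, map };
});
```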
s
If you have one or more SC+ licenses, you can use multiple CSV import threads (up to 5) and/or multi-thread the imports to improve speed. If you don't have SC+, you still get a second Map/Reduce queue for free, so you can at least split the work into two parallel queues. However, the simplest thing, if you already have a working RESTlet, is just to make multiple calls to it in parallel, making sure you don't exceed your max concurrent call limit.
👍 2
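A rough client-side sketch of those parallel RESTlet calls, where callRestlet is a hypothetical helper that POSTs one batch to the RESTlet (OAuth signing omitted) and MAX_CONCURRENT is an assumed cap you'd set below your account's concurrency limit:

```javascript
// Cap in-flight calls so you stay under the account's concurrent request limit.
const MAX_CONCURRENT = 4; // assumption: set this from your account's governance limit

async function updateAll(batches) {
    const results = [];
    for (let i = 0; i < batches.length; i += MAX_CONCURRENT) {
        // Send up to MAX_CONCURRENT batches at once, then wait for all of them.
        const slice = batches.slice(i, i + MAX_CONCURRENT);
        const settled = await Promise.all(slice.map((batch) => callRestlet(batch)));
        results.push(...settled);
    }
    return results;
}
```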
Also, record.submitFields can sometimes be faster than record.load/edit/save, if you aren't already using it. It's not always a huge improvement, but over tens of thousands of records, it might make a noticeable difference
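For reference, the two patterns side by side; submitFields skips loading the full record, which is where the time savings come from (record type and field ID here are illustrative):

```javascript
// Slower pattern: load the whole record, edit it, save it.
var rec = record.load({ type: record.Type.INVENTORY_ITEM, id: 123 });
rec.setValue({ fieldId: 'custitem_my_field', value: 'new value' });
rec.save();

// Often faster: write the body fields directly, with no full load.
// Note: submitFields only works for body fields, not sublist lines.
record.submitFields({
    type: record.Type.INVENTORY_ITEM,
    id: 123,
    values: { custitem_my_field: 'new value' }
});
```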
Also, is your restlet handling single requests (one update per call) or a batch of updates?
p
Even if you are using a little logic, it's sometimes faster to do a few quick formulas in Excel/Sheets and then CSV it
✔️ 1
b
the desperate answer is to split up your items and use mass updates, CSV import, map/reduce, and the RESTlet all at the same time
might also be able to throw in a scheduled workflow, I'm not sure if that shares a queue with the others
🤭 1
m
@scottvonduhn this is what I am doing
```javascript
/**
 * @NApiVersion 2.x
 * @NScriptType Restlet
 */
define(['N/record', 'N/log'], function (record, log) {

    // context is expected to be an array of { recordId, type, values, options }
    function updateRecord(context) {
        var response = [];
        for (var i = 0; i < context.length; i++) {
            try {
                var id = context[i].recordId;
                var type = context[i].type;
                var values = context[i].values;
                var options = context[i].options;

                // Update body fields directly without loading the full record
                var _internalId = record.submitFields({
                    id: id,
                    type: type,
                    values: values,
                    options: options || {}
                });

                response.push({ status: { isSuccess: true }, internalId: _internalId });

            } catch (err) {
                log.debug({
                    title: 'PUT',
                    details: JSON.stringify(err)
                });
                var _statusDetail = err.prop && err.prop.constructor === Array ? err : [err];
                response.push({ status: { isSuccess: false, statusDetail: _statusDetail }, internalId: context[i].recordId });
            }
        }
        return response;
    }

    return { put: updateRecord };
});
```
s
That's good. What is the size of your batches? If it's only 10 at a time, try increasing to 20, 40, or even up to 100. Just be careful not to put too many items in a single request, as RESTlets will time out after 5 minutes and stop processing
There is some overhead in handling the requests and starting the RESTlet script, so slightly larger batches can help minimize that overhead.
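A small sketch of that batching, assuming allItems is a hypothetical array holding the full set of update payloads; a batch size of 50 is just a starting point to tune against the 5-minute timeout:

```javascript
// Split the full item list into batches before calling the RESTlet.
function toBatches(items, batchSize) {
    var batches = [];
    for (var i = 0; i < items.length; i += batchSize) {
        batches.push(items.slice(i, i + batchSize));
    }
    return batches;
}

var batches = toBatches(allItems, 50); // allItems: your ~10K update payloads
```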
b
you will also want to send more requests at the same time
your integration governance will tell you how many requests your account can handle at a time.
it will probably allow at least 5 concurrent requests
more unusual speed tips would be to make sure user events and workflows are set up not to run for your RESTlet/user/role
s
Definitely send parallel requests. However, be aware that at a certain point, resource contention becomes a problem. We have found that at around 6-7 simultaneous inserts/updates to the same record type is where we reach max throughput. That is obviously affected by a lot of factors, such as your service tier, time of day and system load, and many others.
s
I do find it interesting how many people completely disregard the contention issues, i.e. that NS is a shared system, at least with regard to ALL the things going on for your account at any given moment.
s
Yeah, it's worth measuring throughput under different conditions and adjusting the approach based on that. We had a data load process using 4 threads and increased it to 8, but noticed that the throughput of the individual scripts fell. We experimented with different numbers of threads and payload sizes until we found our sweet spot. You can never just assume that throughput will scale linearly; there are real limits.
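A quick way to measure that, building on the hypothetical updateAll helper sketched above: time one run at a given concurrency and batch size, then compare records per second across configurations:

```javascript
// Rough throughput check for one configuration.
async function measureThroughput(batches, recordCount) {
    const start = Date.now();
    await updateAll(batches); // the capped-concurrency helper sketched earlier
    const seconds = (Date.now() - start) / 1000;
    console.log((recordCount / seconds).toFixed(1) + ' records/sec over ' + seconds + 's');
}
```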
b
desperate option is desperate
m
You can use an external Suitelet and get 50 concurrent requests for free. https://ursuscode.com/netsuite-tips/epic-battle-concurrent-map-reduce-vs-concurrent-suitelet/
🤯 1
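A sketch of what that externally available Suitelet could look like, accepting the same batch payload as the RESTlet above; per the linked article, you would deploy it as Available Without Login and POST to its external URL:

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType Suitelet
 */
define(['N/record'], (record) => {
    const onRequest = (context) => {
        if (context.request.method !== 'POST') return;

        // Same payload shape as the RESTlet: [{ recordId, type, values }, ...]
        const updates = JSON.parse(context.request.body);
        const response = updates.map((u) => {
            try {
                const id = record.submitFields({ type: u.type, id: u.recordId, values: u.values });
                return { isSuccess: true, internalId: id };
            } catch (err) {
                return { isSuccess: false, internalId: u.recordId, message: err.message };
            }
        });

        context.response.write({ output: JSON.stringify(response) });
    };

    return { onRequest };
});
```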