Is the N/Cache data available across users/sessions?
# suitescript
a
Is the N/Cache data available across users/sessions?
• User A via Suitelet A creates a cache object A
• User B via Suitelet A could access cache object A?
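(A minimal sketch of the scenario being asked about, with hypothetical cache/key names. A cache created with `cache.Scope.PUBLIC` is visible to every script in the account, regardless of which user or session triggered it:)

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType Suitelet
 */
define(['N/cache'], function (cache) {
    function onRequest(context) {
        // Scope.PUBLIC: readable/writable by all scripts in the account,
        // no matter which user's session is executing.
        var shared = cache.getCache({
            name: 'shared_work_cache', // hypothetical name
            scope: cache.Scope.PUBLIC
        });

        if (context.request.method === 'POST') {
            // User A writes... (ttl is in seconds; 300 is the minimum)
            shared.put({ key: 'objectA', value: JSON.stringify({ lines: [1, 2, 3] }), ttl: 300 });
            context.response.write('stored');
        } else {
            // ...and User B, in a different session, can read it back.
            var raw = shared.get({ key: 'objectA' });
            context.response.write(raw || 'cache miss');
        }
        // Caveat (raised later in the thread): entries can be evicted at any time.
    }
    return { onRequest: onRequest };
});
```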
s
It is available across sessions, but the cache can’t be relied upon to perfectly avoid concurrency issues. You could add a custom field to the lines (assuming they represent a record type), and in the scheduled or map/reduce script check that field right before each line is processed: skip the line if it has already been processed, otherwise set the field and begin processing. There’s still a race condition with that approach, though. If you are creating new records as a result of processing, and you can use information in the record to build a unique string, then you can set that as an external id on the records. NS will enforce that no two records get the same external id, so the save/submit will fail for any potential duplicate. That’s the best option.
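(A minimal sketch of the external id approach, with hypothetical record/field names: the id is derived deterministically from the work itself, so whichever process saves second fails at the database level:)

```javascript
define(['N/record', 'N/log'], function (record, log) {
    // Build a deterministic external id from whatever uniquely identifies the work,
    // e.g. source transaction + line number.
    function createLineRecord(sourceTranId, lineNum, payload) {
        var extId = 'lineproc_' + sourceTranId + '_' + lineNum; // hypothetical convention

        var rec = record.create({ type: 'customrecord_line_action' }); // hypothetical type
        rec.setValue({ fieldId: 'externalid', value: extId });
        rec.setValue({ fieldId: 'custrecord_payload', value: JSON.stringify(payload) }); // hypothetical field

        try {
            return rec.save();
        } catch (e) {
            // The database enforces external id uniqueness, so the losing process
            // lands here. Treat the collision as "already handled" and move on.
            log.audit('Duplicate line skipped', extId + ': ' + e.message);
            return null;
        }
    }
    return { createLineRecord: createLineRecord };
});
```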
c
Alternatively, you can pass parameters to an M/R when you're scheduling it: `MapReduceScriptTask.params`. Lots of options here: put a custom field on whatever your work list is (custom transaction line field, custom record field, etc.), pass its value in as a parameter, and then make your M/R's getInputData only pick up the rows that are meant for it, based on the param. You can get creative with this.
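(A sketch of that parameter handoff, with hypothetical script/deployment/parameter ids; the two `define` blocks represent two separate script files:)

```javascript
// Scheduling side (e.g. called from the Suitelet):
define(['N/task'], function (task) {
    function scheduleBatch(batchId) {
        return task.create({
            taskType: task.TaskType.MAP_REDUCE,
            scriptId: 'customscript_line_mr',        // hypothetical ids
            deploymentId: 'customdeploy_line_mr',
            params: { custscript_batch_id: batchId } // hypothetical script parameter
        }).submit();
    }
    return { scheduleBatch: scheduleBatch };
});

// M/R side: getInputData picks up only the rows tagged for this batch.
define(['N/runtime', 'N/query'], function (runtime, query) {
    function getInputData() {
        var batchId = runtime.getCurrentScript().getParameter({ name: 'custscript_batch_id' });
        return query.runSuiteQL({
            query: 'SELECT id FROM customrecord_line_action WHERE custrecord_batch = ?', // hypothetical
            params: [batchId]
        }).asMappedResults();
    }
    function map(context) {
        // ...process one tagged row...
    }
    return { getInputData: getInputData, map: map };
});
```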
If your business case doesn't mind a bit of lag, you can also just rely on N/task only allowing a deployment to run one instance at a time. Then have a mop-up M/R on a 15-minute schedule that is also called by the Suitelet. It'll only ever run as a single instance, so you don't have to care about races.
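(A sketch of that "nudge it, and let the schedule mop up" pattern, reusing the hypothetical ids above; exact error codes vary by situation, so the catch is deliberately broad:)

```javascript
define(['N/task', 'N/log'], function (task, log) {
    function nudgeMopUp() {
        try {
            task.create({
                taskType: task.TaskType.MAP_REDUCE,
                scriptId: 'customscript_line_mr',    // hypothetical ids, as above
                deploymentId: 'customdeploy_line_mr'
            }).submit();
        } catch (e) {
            // submit() throws when that deployment is already queued or running.
            // That's fine here: the every-15-minutes schedule will pick up the backlog.
            log.debug('M/R busy, deferring to the schedule', e.message);
        }
    }
    return { nudgeMopUp: nudgeMopUp };
});
```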
n
I've done something similar to what @CD said: basically, create a process queue and let a map/reduce run every 15 minutes. *edit: exactly what CD was saying. I should read better*
a
@CD Using a Map/Reduce script parameter would not solve my problem (I realize now I'm not using a Map/Reduce). I can have multiple users clicking the Suitelet submit button with overlapping selected lines at the same time; those selected lines would be sent to a helper (middleman) Suitelet as a JSON payload. I'm going to play with `N/cache` or the `externalid` idea, @scottvonduhn. But even with the `externalid` approach, because it is a helper Suitelet (meaning it's fast), if the helper Suitelet is called twice with, let's say, 100 lines at almost the same time, it could potentially create some records from the first user and others from the second user, and there would be a fight between external ids and errors all over the place.
s
Yes, you’d need to gracefully detect (and ignore/skip) the duplicate-line collision errors, since they are essentially just noise and a byproduct of a busy multi-user system.
But you'd probably still want to handle or propagate other, legitimate errors.
s
Another thought is to try to make the line logic idempotent, if possible. Otherwise, the 'action queue' approach described here is a thing, though I'm not particularly fond of NS as a queue, because it's not built for that purpose.
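(One way to read "idempotent" here, sketched with hypothetical names: write the absolute target state derived from the line itself, rather than deltas, so applying the same line twice leaves the record unchanged:)

```javascript
define(['N/record'], function (record) {
    // A non-idempotent (racy) version would read-modify-write a running total:
    //   rec.setValue({ fieldId: 'custrecord_qty', value: oldQty + line.qty });
    // Replaying the same line, or two processes racing, would double-count.

    // Idempotent version: set the state the record should end up in.
    function applyLine(lineRecordId, line) {
        record.submitFields({
            type: 'customrecord_line_action',    // hypothetical record type
            id: lineRecordId,
            values: {
                custrecord_status: 'PROCESSED',  // hypothetical fields
                custrecord_applied_qty: line.qty // set, don't add
            }
        });
    }
    return { applyLine: applyLine };
});
```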
s
The lack of real atomic operations, or of other methods to keep critical sections of code from being executed multiple times at once, makes this hard. The benefit of external ids is that the database enforces uniqueness at the data level. I haven’t yet found a good way to stop code from being run by multiple processes in NetSuite without relying on some kind of data-backed approach.
c
If the lines can be sent one-by-one, create each line as a custom record with the external id that was sent. Have a user event script on the custom record do whatever needs doing. The external id will stop duplicates.
I feel like we’re not getting the whole story here so a decent design can be done.
But I guess this isn’t my consulting gig :)
b
The SAFE guide describes two methods of handling race conditions in section 4.7, "SuiteApp Designs and Concurrency Issues".
s
@battk I’ve read through it before. To summarize: the first of the two methods described, a critical-section scheduled script, can still be circumvented if another script or user manually edits a custom record or creates a duplicate record. The second method relies upon built-in locking on standard records and optimistic locking being enabled on custom records, but to my understanding that only helps when editing existing records; for the problem of creating new records, it doesn’t help, and it can also be circumvented by inline edits. The last solution suggested is to use external ids on custom records to create a semaphore, but again, that relies upon the database enforcing external id uniqueness, which I believe is, fundamentally, the only guaranteed method of preventing race conditions.
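(A sketch of that external-id semaphore, with hypothetical names: the lock is "acquired" by creating a custom record whose external id encodes the resource, and released by deleting it. The database's uniqueness enforcement is what makes the acquire safe; a crashed process leaving a stale lock behind is the known weakness:)

```javascript
define(['N/record'], function (record) {
    var LOCK_TYPE = 'customrecord_mutex'; // hypothetical custom record type

    // Returns the lock record's internal id if acquired, or null if another
    // process already holds it (its create won the external id race).
    function acquireLock(resourceKey) {
        try {
            var lock = record.create({ type: LOCK_TYPE });
            lock.setValue({ fieldId: 'externalid', value: 'lock_' + resourceKey });
            lock.setValue({ fieldId: 'name', value: resourceKey });
            return lock.save();
        } catch (e) {
            return null; // duplicate external id: lock is held elsewhere
        }
    }

    function releaseLock(lockId) {
        record.delete({ type: LOCK_TYPE, id: lockId });
    }

    return { acquireLock: acquireLock, releaseLock: releaseLock };
});
```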
a
I was able to get what I needed with the cache module, thank you all...
m
The cache module in NetSuite is designed for performance enhancement, not as a general-purpose key-value store. It's a pet peeve of mine when people use it for other purposes, as it's not reliable: NetSuite does not guarantee that the data will persist.
s
Anyone using the cache should be aware that entries can be invalidated at any time, forcing the loader function to be called. You have to assume the loader can run on any cache get.
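(In code, that contract looks like this, with hypothetical names: the loader is the source of truth, and the cached entry is only a disposable copy of whatever it returns:)

```javascript
define(['N/cache', 'N/query'], function (cache, query) {
    // The loader must be able to rebuild the value from scratch, because NetSuite
    // may evict the entry at any moment and invoke the loader on any get().
    function getConfigValue(key) {
        var c = cache.getCache({ name: 'config_cache', scope: cache.Scope.PROTECTED }); // hypothetical
        return c.get({
            key: key,
            loader: function (context) {
                // context.key is the key being loaded; recompute the value here.
                var rows = query.runSuiteQL({
                    query: 'SELECT custrecord_value AS val FROM customrecord_config WHERE name = ?', // hypothetical
                    params: [context.key]
                }).asMappedResults();
                return rows.length ? String(rows[0].val) : '';
            },
            ttl: 3600 // seconds; eviction can still happen sooner than this
        });
    }
    return { getConfigValue: getConfigValue };
});
```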
s
Has anyone measured the performance of N/cache vs., say, SuiteQL queries (which I presume also have some behind-the-scenes 'caching')?
s
The speed benefits of the cache are more apparent the more the cache is hit. For example, I created a cache for an M/R script where files are created and saved to one of two folders. Over the course of thousands of map contexts running, the cache is a vast improvement over making the same search or query call repeatedly. But if you'll be hitting it only rarely, and will get mostly cache misses, it may not improve performance very much.
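(A sketch of what that folder-id cache might look like inside the map stage, with hypothetical folder and field names; only cache misses pay for the SuiteQL call:)

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType MapReduceScript
 */
define(['N/cache', 'N/query', 'N/file'], function (cache, query, file) {
    function getFolderId(folderName) {
        var c = cache.getCache({ name: 'folder_ids', scope: cache.Scope.PRIVATE }); // hypothetical
        var id = c.get({
            key: folderName,
            loader: function (ctx) {
                var rows = query.runSuiteQL({
                    query: 'SELECT id FROM mediaitemfolder WHERE name = ?',
                    params: [ctx.key]
                }).asMappedResults();
                return String(rows[0].id); // cache values are strings
            },
            ttl: 3600
        });
        return parseInt(id, 10);
    }

    function getInputData() {
        // Stub: the real script would return the file payloads to write.
        return [{ fileName: 'a.csv', csv: 'x,y', isArchive: false }];
    }

    function map(context) {
        var data = JSON.parse(context.value);
        var f = file.create({ name: data.fileName, fileType: file.Type.CSV, contents: data.csv });
        // One of two destination folders, resolved via the cache instead of a
        // fresh query in every single map invocation.
        f.folder = getFolderId(data.isArchive ? 'Archive' : 'Outbound'); // hypothetical names
        f.save();
    }

    return { getInputData: getInputData, map: map };
});
```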
s
I have no idea what technology that cache is using, so I also don't assume it's faster than other approaches. It would be nice to have a somewhat objective comparison; something to add to my never-ending list of things.
s
I can’t recall the exact numbers for the script I mentioned above, but the savings were substantial. It was much closer to the speed seen with hard-coded folder ids than to the speed where a search was made every time, and it shaved at least 10 minutes off a script that was running for ~45 minutes, bringing it down to ~35. But there are too many other variables to be accurate. I feel like an M/R script could be created to test the performance of the cache vs. a direct search and/or query.
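(A rough harness for that comparison, hypothetical throughout; iteration counts are kept low because each runSuiteQL call costs governance units:)

```javascript
define(['N/cache', 'N/query', 'N/log'], function (cache, query, log) {
    function timeIt(label, iterations, fn) {
        var start = Date.now();
        for (var i = 0; i < iterations; i++) fn();
        log.audit(label, (Date.now() - start) + ' ms for ' + iterations + ' iterations');
    }

    function compare() {
        var sql = 'SELECT id FROM mediaitemfolder WHERE name = ?';
        var c = cache.getCache({ name: 'perf_test', scope: cache.Scope.PRIVATE }); // hypothetical

        timeIt('direct SuiteQL', 200, function () {
            query.runSuiteQL({ query: sql, params: ['Outbound'] }).asMappedResults();
        });

        timeIt('via N/cache', 200, function () {
            c.get({
                key: 'Outbound',
                loader: function (ctx) { // should run once; later gets are cache hits
                    var rows = query.runSuiteQL({ query: sql, params: [ctx.key] }).asMappedResults();
                    return String(rows.length ? rows[0].id : '');
                },
                ttl: 300
            });
        });
    }
    return { compare: compare };
});
```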
s
This is actually one reason I still prefer scheduled scripts over M/R: scheduled scripts let you cache things in RAM naturally.
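(What that looks like in practice: a scheduled script is one long-lived execution, so a plain object works as the cache; hypothetical lookup shown:)

```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType ScheduledScript
 */
define(['N/query'], function (query) {
    function execute(scriptContext) {
        // Plain in-memory cache: it lives for the whole run, because a single
        // scheduled script execution processes the entire work list itself.
        var folderIds = {};

        function getFolderId(name) {
            if (!(name in folderIds)) {
                var rows = query.runSuiteQL({
                    query: 'SELECT id FROM mediaitemfolder WHERE name = ?',
                    params: [name]
                }).asMappedResults();
                folderIds[name] = rows.length ? rows[0].id : null;
            }
            return folderIds[name];
        }

        // ...loop over the work list, calling getFolderId() freely; only the
        // first call per folder name pays for the query.
    }
    return { execute: execute };
});
```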
s
True, though in the case of the script above, it would have delayed our billing process by more than 8 hours (essentially a whole business day) vs. the less than one hour we can achieve with the M/R script, so I appreciate having the choice and the flexibility. Yes, parallel programming is more work and less simple, but sometimes the speed is essential. I do realize that multiple scheduled scripts can be run in parallel as well, but then additional work must go into dividing and assigning work to the separate deployments, and at that point the benefit of the scheduled script’s simplicity is lost.
s
Yup, both script types have their own sweet spots. Glad we have both in our toolbox.