# suitescript
b
Hello All, I'm stumped on this one. I have a large saved search that cannot run in the UI that is being used to send an inventory feed to an FTP. The process uses the N/task module to run the search. The search runs and creates a file over 100MB. Because it's over 100MB, I cannot move it to the FTP. So, I broke the search up into chunks by using the internal ID (number) criteria option to create ranges so the files created are under 100MB. However, now when I run the chunked searches, one of them never returns data and leaves me with an empty file. The ONLY change I made was to add internal ID number ranges to the search, which in theory should make it easier to execute. Of the 3 chunks I created, 2 of the 3 run in the UI and provide results; the 3rd doesn't, it times out. It's very odd to me that my main search, which had no internal ID restrictions, will not run in the UI but runs via the N/task module, BUT one of my restricted searches runs neither in the UI nor via the N/task module. Does anyone have any idea why this might be? Or ideas on how to solve it? I've tried making smaller chunks, but it seems I have to get too small to make it worth it.
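for reference, the task submission looks roughly like this (a minimal sketch; the search id and file id are placeholders, not the real ones):
```javascript
// Minimal sketch: submit the saved search as an async task (SuiteScript 2.x).
define(['N/task'], function (task) {
    function submitInventoryFeed() {
        var searchTask = task.create({
            taskType: task.TaskType.SEARCH,
            savedSearchId: 'customsearch_inventory_feed', // placeholder id
            fileId: 1234 // placeholder: file cabinet id of the output CSV
        });
        return searchTask.submit(); // async; returns a task id for checkStatus
    }
    return { execute: submitInventoryFeed };
});
```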
b
what do the searches look like
b
oh duh, that'd help... one sec
message has been deleted
goal was to get all items, regardless of serialization and binning, in one search result without dups. it works great; we just have too many items and locations now that we merged an acquired company into our ecosystem.
and the results
i've narrowed it down to serialized items that use bins as the biggest issue. we loaded items from the acquired company in segments, and it bombs out when it gets to the chunk of those serialized and binned items. however, like i said before, removing the last criterion (the id range filter) lets the task produce a file that contains all items. it's so bizarre to me. i'd expect the large one to fail if the smaller one does, but that's not the case.
b
try the searches without that giant formula filter in the middle
b
i can't do that because it totally messes up the results. that formula is what keeps the results from being a cartesian product of locations, bins, and serial numbers
b
do it anyways, the internal id filter may not actually be the problem
b
i mean i get the premise, but do you have a way to rewrite the formula that keeps this in one search, or no? I ask because I have 3 chunks of this search: the first chunk goes from id 1 - 500k, the next is 500001 - 542k, and then i run into the issue with ids 542001 - 547999, since 548k+ renders in the UI. the only difference between the chunks is an extra filter, something like the sketch below (placeholder ids; the modified search would need to be saved, or kept as separate saved copies, since the search task wants a saved search id).
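```javascript
// Sketch: same base search, constrained to one internal id chunk.
define(['N/search'], function (search) {
    function buildChunk(rangeStart, rangeEnd) {
        var s = search.load({ id: 'customsearch_inventory_feed' }); // placeholder id
        s.filters.push(search.createFilter({
            name: 'internalidnumber',
            operator: search.Operator.BETWEEN,
            values: [rangeStart, rangeEnd]
        }));
        return s; // would need s.save() before handing its id to a SearchTask
    }
    return { buildChunk: buildChunk };
});
```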
b
i dont really know what the formula does
b
the other option i have, which i have not done, is to create searches that match the combinations of criteria i have so the formula becomes mostly moot: serialized and binned, serialized and not binned, not serialized and binned, not serialized and not binned. but i may still have file size issues there, which would require chunking, so i thought the ids were the best approach.
b
id only really put the effort into it if it was causing problems
b
makes sense
the formula allows the results to not have to be summarized, and it also makes rows show up only when an item's location matches its inventory number location (and bin location, if binned), resulting in the correct inventory number and bin showing in the results without any dups. it also allows non-binned items to show their results properly. if i had used the inventorybinonhand join in the results, that would instantly drop all the non-binned items because they don't have a bin. it was a PITA to create but, once done, it seems to work well, at least until our SKU and location count exploded due to an acquisition.
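to give a feel for the shape of it (a hypothetical illustration only, NOT the actual formula; the field references are guesses):
```javascript
// Hypothetical illustration, not the real formula from the search.
// Idea: a formulanumeric filter that evaluates to 1 only when the item's
// location lines up with the inventory number / bin rows, so the mismatched
// cross-join rows drop out while non-binned items still survive.
var dedupFilter = search.createFilter({
    name: 'formulanumeric',
    operator: search.Operator.EQUALTO,
    values: 1,
    formula: "CASE WHEN {inventorynumber.location} = {inventorylocation} " +
             "AND ({inventorybinonhand.binnumber} IS NULL " +
             "OR {inventorybinonhand.location} = {inventorylocation}) " +
             "THEN 1 ELSE 0 END"
});
```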
example of serialized and binned results
b
doesnt really matter
run the 3 searches without the formula filter
if the searches work, then the effort is put on the formula
if not, then other approaches
b
okay, i guess i thought i was past that stage because with the formula and without the id filter, it ran and gave me a 120+MB file. with the formula and the id filter, it completed for two of the chunks, giving me one file that was 65MB and another that was around 42MB, but it failed to complete on the third.
perhaps a separate question is warranted: how would I log the result of an async task when I don't know when it will complete? running a checkStatus call right away just returns PENDING. do you put in a delay loop (sounds like a bad idea), or just run it in the console after some time? I'm not getting an error message or anything as a result of this not generating, and I want to know exactly what the response is.
thanks for your help by the way!
c
Dunno how much time you have, but it may be worth doing what @battk said and breaking up the searches, but also taking the heavy lifting out of the search, doing it in code, and seeing how that works out. If you handle the search and file creation yourself, you may also be able to reduce the file size, depending on what you're doing, since it's not a raw data dump. How are you creating the file now? Are you writing CSV results to a file or what?
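rolling it yourself would look roughly like this (a sketch; the search id, folder id, and columns are placeholders, and at your volume you'd probably want a map/reduce instead of a scheduled script for governance):
```javascript
// Sketch: build the CSV in code with a paged search instead of a search task.
define(['N/search', 'N/file'], function (search, file) {
    function execute(context) {
        var csv = file.create({
            name: 'inventory_feed_chunk.csv',
            fileType: file.Type.CSV,
            folder: 123, // placeholder folder id
            contents: 'item,location,bin,serial\n'
        });
        var paged = search.load({ id: 'customsearch_inventory_feed' })
            .runPaged({ pageSize: 1000 });
        paged.pageRanges.forEach(function (range) {
            paged.fetch({ index: range.index }).data.forEach(function (result) {
                // placeholder column; you'd pull whatever fields the feed needs
                csv.appendLine({ value: result.getValue({ name: 'itemid' }) });
            });
        });
        csv.save();
    }
    return { execute: execute };
});
```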
b
options for checking status usually involve a browser using setInterval
or another scheduled script to check the status on a schedule
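something like this on a schedule (sketch; you'd stash the task id in a script parameter or custom record when you submit the task):
```javascript
// Sketch: scheduled script polling the search task and logging the outcome.
define(['N/task', 'N/log'], function (task, log) {
    function execute(context) {
        // placeholder: the id returned by searchTask.submit()
        var taskId = 'ASYNCSEARCH_0001';
        var status = task.checkStatus({ taskId: taskId });
        log.audit('search task status', status.status); // PENDING/PROCESSING/COMPLETE/FAILED
        if (status.status === task.TaskStatus.FAILED) {
            log.error('search task failed', JSON.stringify(status));
        }
    }
    return { execute: execute };
});
```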
b
gotcha, thanks!
thanks @creece. to answer your questions, the file is being created by the N/task module, which creates an async search task. i've added an inbound dependency so that when the file is downloaded, it gets sent to the FTP. the N/task module does all the file creation work for me, and yes, i have it created as a csv. roughly like this (sketch; the script and deployment ids are placeholders):
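```javascript
// Sketch: chain the FTP push off the search task via an inbound dependency.
define(['N/task'], function (task) {
    function submitFeed() {
        var searchTask = task.create({
            taskType: task.TaskType.SEARCH,
            savedSearchId: 'customsearch_inventory_feed_chunk1', // placeholder
            fileId: 1234 // placeholder output file
        });
        // dependent script that runs after the search task finishes writing
        // the file; it reads the CSV and uploads it via N/sftp
        var sendTask = task.create({
            taskType: task.TaskType.MAP_REDUCE,
            scriptId: 'customscript_send_feed_to_ftp', // placeholder
            deploymentId: 'customdeploy_send_feed_to_ftp'
        });
        searchTask.addInboundDependency(sendTask);
        return searchTask.submit();
    }
    return { submitFeed: submitFeed };
});
```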