# integrations
c
One for the Celigo fans. When you see this many errors and Celigo's own scripts are timing out, you know it's time to just quit your job and find something new šŸ˜„ I estimate it will take at least 6 hours to process the backlog, assuming I can find the root cause.
b
my guess is that you murdered your item fulfillments with user event scripts / workflows and now it takes too long to save records
the easy change is lowering the import's page size
the hard change is fixing the user event scripts/workflows
c
@battk Maybe, but the scripts running on the IF record haven't changed in over 2 years.
I think those scripts are all AfterSubmit too so I wouldn't expect an issue with saving a fulfilment from Celigo.
b
a restlet is doing the importing
c
Yeah, so that would do the IF .save() then any other scripts that run on that IF would execute once the Celigo restlet terminates right?
b
that's not how afterSubmit works
they aren't asynchronous, they are part of the save
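roughly what one of those looks like (the script and the search in it are made up, just to show the shape); it runs inside the same save call the restlet makes:
```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType UserEventScript
 */
define(['N/search', 'N/log'], function (search, log) {
    // afterSubmit runs synchronously as part of the record save,
    // so the restlet that triggered the save waits for it to finish
    function afterSubmit(context) {
        if (context.type !== context.UserEventType.CREATE) {
            return;
        }

        // hypothetical example of the kind of work that makes
        // fulfillment saves slow: a broad search on every save
        var related = search.create({
            type: search.Type.SALES_ORDER,
            filters: [['mainline', 'is', 'T']],
            columns: ['tranid']
        }).run().getRange({ start: 0, end: 1000 });

        log.debug('related orders scanned', related.length);
    }

    return { afterSubmit: afterSubmit };
});
```
if the restlet is saving a whole page of fulfillments per request and each save picks up a few seconds of that, the request blows past the timeout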
c
I can see four afterSubmit functions deployed against the IF, so if any one of them takes too long, that would cause the timeout error to be thrown in the restlet. Nothing has changed in these scripts for a couple of years though, so I'm not sure why they're timing out now.
b
it's more likely that the restlet is timing out
you can go through the logs and find the script with the error
if it's in the user event, your user event is taking too long, in which case changing the page size won't help you
if it's the restlet, then lowering the page size will help, but your performance will still suck
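if you control the restlet you can also wrap each save with a timer so the execution log tells you which part is slow; something like this (a sketch, not Celigo's actual restlet, and the payload shape is made up):
```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType Restlet
 */
define(['N/record', 'N/log'], function (record, log) {
    function post(body) {
        // body.fulfillments is a hypothetical payload shape
        var results = [];
        (body.fulfillments || []).forEach(function (row) {
            var started = Date.now();
            try {
                var fulfillment = record.transform({
                    fromType: record.Type.SALES_ORDER,
                    fromId: row.salesOrderId,
                    toType: record.Type.ITEM_FULFILLMENT
                });
                // this save runs every user event deployed on the IF,
                // so a slow afterSubmit shows up as a slow save here
                var id = fulfillment.save();
                log.audit('IF saved', id + ' in ' + (Date.now() - started) + 'ms');
                results.push({ id: id, success: true });
            } catch (e) {
                log.error('IF save failed after ' + (Date.now() - started) + 'ms', e);
                results.push({ success: false, error: e.name });
            }
        });
        return { results: results };
    }

    return { post: post };
});
```
the audit lines give you per-save timings, and the catch shows which saves actually threw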
c
Not particularly useful logs for the restlet
b
it's the user events you want to check
an error thrown in the user event doesn't look any different in the log
m
I have fixed over a year of Celigo errors. Talk about fun! Can you retry just a SINGLE fulfillment error and see if that processes first?
b
any error thrown in the user event is ultimately thrown in the restlet
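e.g. a user event that does something like this (made-up error, just to show the path):
```javascript
/**
 * @NApiVersion 2.1
 * @NScriptType UserEventScript
 */
define(['N/error'], function (error) {
    function afterSubmit(context) {
        // hypothetical failure: because afterSubmit runs inside the save,
        // this throw surfaces in the restlet's save() call and comes back
        // to Celigo as a restlet error
        throw error.create({
            name: 'HYPOTHETICAL_UE_FAILURE',
            message: 'something in the user event blew up',
            notifyOff: true
        });
    }

    return { afterSubmit: afterSubmit };
});
```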
c
I'll look through all the scripts and see which one threw the errors. It looks like the issue has resolved itself as fulfilments are flowing through now; I just have to clear the backlog.
I've just worked with one client that integrated NetSuite with Kafka, and it's literally the nicest integration I've ever seen. I don't think the cost is feasible for many NS customers though.
t
You may need to decrease the batch size that you send per request, but we can still send requests in parallel as long as you have more than 1 concurrency set on the NetSuite connection within Celigo. Here are a few good articles on concurrency, governance, and flow optimization:
• https://docs.celigo.com/hc/en-us/articles/360034522472-Govern-concurrency-of-NetSuite-connections
• https://docs.celigo.com/hc/en-us/articles/360043926372-Configure-connections-to-optimize-throughput-and-governance
• https://docs.celigo.com/hc/en-us/articles/360043927292-Fine-tune-integrator-io-for-optimal-performance-and-data-throughput
c
Batch size is 10; it took a long time to find that optimum number.
Some of these have been waiting for over 4 hours.
t
What is your concurrency setting on the NetSuite connection?
c
I assume that's this?
t
That is the max concurrency allowed for that integration setup in NetSuite, but in Celigo you can also specify it here
So you could have multiple connections in Celigo going against the same NetSuite integration
c
That is set to 1
Let's see what happens at 8.
t
Then that means Celigo is only sending one page of records at a time across all your flows using that connection
c
Looks like it requires tokens to change that setting
I'll do the necessary requests to get new tokens generated, then give a higher number a try. Thanks.
t
Sounds good. So I think the combination of decreasing batch size and increasing concurrency would help a lot
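Rough arithmetic with a made-up backlog, just to show the effect: 1,000 queued fulfillments at a page size of 10 is 100 requests. At concurrency 1 they run strictly one after another; at concurrency 8 roughly 8 run at once, so wall-clock time drops close to 8x, as long as the account's governance limit and those user event scripts keep up. Halving the page size doubles the request count, but each request is far less likely to hit the RESTlet timeout.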
c
Concurrency is now 12 and batch size is 5.
I'll log some stats and see if we get some performance improvements.
šŸ‘ 1