Avy Faingezicht
10/18/2023, 7:11 PM
CustInvc type rows, which makes me think the query is pulling in draft objects that the transaction detail hides. How can I exclude those?

Avy Faingezicht
10/18/2023, 7:11 PM
SELECT
tl.id as transaction_line_id,
tal.amount,
tal.account,
BUILTIN.DF(tal.account) as account_name,
t.id,
type,
trandate,
tranid,
t.memo as memo,
trandisplayname as name,
currency,
tal.exchangerate,
t.postingperiod,
tl.subsidiary,
BUILTIN.DF( t.status ) AS status_description
FROM transactionLine tl
LEFT JOIN transaction t
ON tl.transaction = t.id
LEFT JOIN transactionAccountingLine tal
ON (tl.id = tal.transactionline AND tl.transaction = tal.transaction)
where t.posting = 'T' and postingperiod = '152'
order by transaction_line_id
Clay Roper
10/18/2023, 7:17 PM

Avy Faingezicht
10/18/2023, 7:19 PM
t.id are expected, I actually am not counting `tal.id`s (you can see I am not selecting them) but I could.
By draft I mean transactions that have not actually posted

Clay Roper
10/18/2023, 7:20 PM
tal.id: you're getting back results from multiple tal lines, correct?

Avy Faingezicht
10/18/2023, 7:20 PM

Clay Roper
10/18/2023, 7:21 PM

Clay Roper
10/18/2023, 7:23 PM

Avy Faingezicht
10/18/2023, 7:23 PM

Clay Roper
10/18/2023, 7:26 PM

Clay Roper
10/18/2023, 7:26 PM

Avy Faingezicht
10/18/2023, 7:27 PM
Bill 2213
Bill Credit 50
Bill Payment 1610
Credit Memo 16471
Currency Revaluation 166
Customer Refund 2080
Deposit 103188
Invoice 279945
Journal 14831
Payment 182136
Transfer 20
Grand Total 602711
vs from the API:
CustCred 22206
CustInvc 374884
CustPymt 182134
CustRfnd 2082
Deposit 103188
FxReval 166
Journal 14829
Transfer 18
VendBill 2038
VendCred 32
VendPymt 805
Grand Total 702382
Avy Faingezicht
10/18/2023, 7:30 PM

tdietrich
10/18/2023, 9:21 PM
AND ( TAL.Posting = 'T' )
Avy Faingezicht
10/18/2023, 10:45 PM
603422, though still not there… I am now running another batch with tl.posting = 'T' to see if that gets rid of the remaining ones, since ultimately this could come from simple join explosions.

Avy Faingezicht
10/18/2023, 10:50 PM
Adding that to the where clause made it go from ~25s/1k records to ~35s/1k.
Are CTEs and their behavior in SuiteQL well documented anywhere?

tdietrich
10/18/2023, 10:50 PM
AND ( T.Voided = 'F' )
AND ( T.Void = 'F' )
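(Taken together with the earlier TAL.Posting suggestion, the filters would sit in the original query roughly like this. This is only a sketch: the trimmed column list is illustrative, tal.posting / t.voided / t.void come from the suggestions above, and note that filtering on tal in the WHERE clause effectively turns that LEFT JOIN into an inner join.)

SELECT tl.id AS transaction_line_id, tal.amount, tal.account, t.tranid
FROM transactionLine tl
LEFT JOIN transaction t ON tl.transaction = t.id
LEFT JOIN transactionAccountingLine tal
    ON tl.id = tal.transactionline AND tl.transaction = tal.transaction
WHERE t.posting = 'T'
  AND tal.posting = 'T'   -- only accounting lines that actually post
  AND t.voided = 'F'
  AND t.void = 'F'
  AND t.postingperiod = '152'
ORDER BY tl.id
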
tdietrich
10/18/2023, 10:58 PM
Grab the transactions where postingperiod = '152' first.
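(A sketch of that pre-filtering approach, using an inline subquery; if SuiteQL accepts a WITH clause in this context, the same shape could be written as a CTE. The trimmed column list and the alias pt are illustrative.)

SELECT tl.id AS transaction_line_id, tal.amount, tal.account
FROM (
    -- pre-filter: only posted transactions in the target period
    SELECT id
    FROM transaction
    WHERE posting = 'T' AND postingperiod = '152'
) pt
JOIN transactionLine tl ON tl.transaction = pt.id
LEFT JOIN transactionAccountingLine tal
    ON tl.id = tal.transactionline AND tl.transaction = tal.transaction
ORDER BY tl.id
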
tdietrich
10/18/2023, 10:59 PM

Avy Faingezicht
10/18/2023, 11:00 PM

Avy Faingezicht
10/18/2023, 11:00 PM
tl.posting actually didn’t make any difference, by the way

tdietrich
10/18/2023, 11:01 PM

Avy Faingezicht
10/18/2023, 11:04 PM

Avy Faingezicht
10/20/2023, 3:33 PM
I added a DISTINCT (I do want to get to the bottom of it, but wanted to get the ball rolling…) and interestingly I got fewer results than without the distinct, but I still got some dupes. It looks like there are missing values on both sides.
Is it possible that I’m hitting some kind of internal cache limit? Maybe it’s because I’m doing an order by transaction_line_id and those are not granular enough? I am using your rownum pagination trick, so maybe there’s something going on with the cursor at the boundaries? I spun up another one ordering by transaction id, but that didn’t seem to fix it either.

Avy Faingezicht
10/25/2023, 9:51 PM
Some rows come back with 2023-03-10 and others with 2023-3-10, which I then have to dedupe… Adding to_char(trandate, 'YYYY-MM-DD') as transaction_date dropped the error rate quite a bit, because those rows now get deduped correctly.
Now I’m down to the last 0.1% of errors, which are mostly AR lines missing on payment records… All of these are a single line missing from a two-line transaction. Feels almost like an off-by-one.
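(A minimal sketch of that date normalization, so every row serializes trandate the same way before the downstream dedupe; the trimmed column list is illustrative.)

SELECT
    t.id,
    t.tranid,
    TO_CHAR(t.trandate, 'YYYY-MM-DD') AS transaction_date
FROM transaction t
WHERE t.posting = 'T' AND t.postingperiod = '152'
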
Avy Faingezicht
10/27/2023, 6:27 AM
The last of the errors turned out to be in how I was applying the ROWNUM pagination recommendation.
I was iterating 1k records at a time, going BETWEEN 0 AND 1000, then BETWEEN 1000 AND 2000, etc. That causes an off-by-one when the 1000th row is the first line of a two-leg transaction, which then gets skipped by the cursor. The solution is to paginate from 0 to 999 and from 1000 to 1999 instead, ensuring you pull 1k records on each run with no duplication.
Also, FWIW it seems like this trick breaks once you get past 1M records in a query; after that you have to introduce additional partitioning.
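(For reference, a sketch of the corrected, non-overlapping windows, assuming the usual nested-subquery ROWNUM wrapper for SuiteQL pagination; the alias names rn and q, the trimmed column list, and the exact wrapper shape are illustrative rather than taken from the thread.)

SELECT *
FROM (
    SELECT ROWNUM AS rn, q.*
    FROM (
        -- the line-level query, with a stable ORDER BY so rows keep the same position across pages
        SELECT tl.id AS transaction_line_id, tal.amount, tal.account, t.tranid
        FROM transactionLine tl
        LEFT JOIN transaction t ON tl.transaction = t.id
        LEFT JOIN transactionAccountingLine tal
            ON tl.id = tal.transactionline AND tl.transaction = tal.transaction
        WHERE t.posting = 'T' AND t.postingperiod = '152'
        ORDER BY tl.id
    ) q
)
-- non-overlapping windows: 0-999, then 1000-1999, then 2000-2999, ...
-- (per the note above, past ~1M rows additional partitioning is still needed)
WHERE rn BETWEEN 0 AND 999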