# general
r
We have no idea of what is causing all the space to be used up, and no way to do some sort of maintenance/reduction on it, but they're charging for it, so the higher it goes, the better for them.
a
I looked at the count of all records in the account, and there's one custom record type with over 1.5 million records. It's a simple record type (only a few fields), but my bet is that's the main culprit behind the overage I'm seeing.
r
ooh, how did you see that?
s
Two things to be cautious of: it appears the Total Storage is only calculated once per week, on Sunday (or that was the case a few months ago, unless NetSuite has changed it recently).

Also, we have one custom record type with a half dozen fields and over 25 million records in it. We did a test in two of our sandboxes and deleted every single one of those custom records. Our total storage did not drop at all; in fact, it went up slightly, even after waiting a week for the Total Storage to be recalculated. We then went a bit crazy and deleted all custom record types that had over 100,000 records in them, and again, no change in Total Storage after waiting a week.

We reported our results to our account manager, basically asking: if deleting practically everything in the account doesn't reclaim storage space, then what does? We still don't have an answer from NetSuite, and it has been many months. I was also hoping 2019.2 would give us better visibility, but they did not deliver on that promise.
SuiteAnswers article 24924 tells a confusing story about storage. It states the following:
> Customization can become a very large source of data usage when the cross references to custom elements expand the size of any particular record. Think of an item record that is linked to a custom field or custom record and then how many times that item record might be referenced by a PO, SO, RA, CM or invoice.
I have no idea why a transaction that references an Item, which in turn references a custom record type, should account for more storage than just the existence of the custom record. If their database is normalized and they are storing references as foreign keys, it should make practically no difference to the referencing record, and zero difference to any record two or more links removed.
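Here's a rough back-of-the-envelope comparison of what a normalized reference should cost versus copying the record's fields inline (every byte count here is an assumption for illustration, not NetSuite's actual schema):

```typescript
// Back-of-the-envelope sketch; all sizes are assumptions, not NetSuite's schema.
const FK_BYTES = 8;               // assume a fixed-width surrogate key per reference
const CUSTOM_RECORD_BYTES = 200;  // assume ~200 bytes for a small custom record row
const REFERENCES = 1_000_000;     // e.g. one item referenced on a million transaction lines

// Normalized: store the custom record once, plus one small key per reference.
const normalized = CUSTOM_RECORD_BYTES + REFERENCES * FK_BYTES;

// Denormalized: copy the custom record's fields into every referencing row.
const denormalized = REFERENCES * CUSTOM_RECORD_BYTES;

console.log({ normalized, denormalized, ratio: denormalized / normalized });
// => roughly a 25x difference; with normalization, the references are nearly free.
```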
k
It's not normalized well
But you don't really want normalization
I mean, saved searches are just a view of a stored view
s
I know that materialized views in Oracle actually do take up space, because their results are stored each time they are updated
Do the saved searches we create end up generating a materialized view in our account? If so, that could have major storage implications.
The biggest problem is the lack of transparency with how total storage is calculated. At least with the filing cabinet, we can see the file sizes, so it's pretty obvious how much storage each file consumes. With records, I have no way to tell.
a
wow - good looking out @scottvonduhn - I won't go too crazy deleting custom records then
s
My point was only to say that you should make sure it will even have an impact. For us, it didn't seem to matter, so we decided it wasn't worth doing in production.
a
@Ricardo I was just looking at all custom records in the account and seeing how many records there were
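If you'd rather script it than click through the UI, a SuiteScript 2.x sketch along these lines should work (I'm assuming the standard N/search module, and the record type ID is whatever yours actually is; I haven't run this exact snippet):

```typescript
// Sketch: count the instances of a given custom record type with SuiteScript 2.x.
// Assumes the standard N/search module; run inside a script context (e.g. a Suitelet).
import * as search from 'N/search';

function countRecords(recordType: string): number {
  // An empty search on the type; runPaged().count returns the total result count
  // without fetching every row.
  return search.create({ type: recordType }).runPaged().count;
}

// Usage (hypothetical record type ID):
// const total = countRecords('customrecord_my_big_type');
```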
Right - that said, I also don't see a reason they need all this custom record data anyway, so cleanup might not be a bad idea
some of it dates back to 2015 and I'm guessing they don't need it all
s
Cleanup is a good thing. If nothing else, it should improve performance
a
Are you sure the total storage not updating wasn't because you were in a sandbox and not production?
k
I bet those weekly processes that shrink the database aren't applied to sandboxes
a
I'm hopeful, but I'd also totally believe it if NS's storage calculation makes no logical sense.
s
There is no way to know, but I'd like to have some kind of guarantee of success before we start deleting data in production.
a
Same - I'm inquiring internally here to see if anyone has any anecdotal evidence to support it.
k
you could always intentionally blow up your storage with a new custom record, wait a week, then delete it all and see if the number comes back down... but that carries considerable risk: "well, it went up because I did X, and it never came back down when I deleted X", and then NetSuite sticks you with the bill for it anyway.
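For what it's worth, the probe script itself would be trivial; something like this SuiteScript 2.x sketch (the record type ID is made up, and it ignores governance limits, which a real run at this scale would need a map/reduce script to respect):

```typescript
// Hypothetical storage probe, SuiteScript 2.x style. The record type ID is made up,
// and this ignores script governance limits; a real run at scale needs map/reduce.
import * as record from 'N/record';

const PROBE_TYPE = 'customrecord_storage_probe'; // assumed throwaway custom record type

// Phase 1: create a measurable pile of records, then wait for the weekly recalculation.
function createProbes(count: number): number[] {
  const ids: number[] = [];
  for (let i = 0; i < count; i++) {
    const rec = record.create({ type: PROBE_TYPE });
    rec.setValue({ fieldId: 'name', value: `probe-${i}` });
    ids.push(rec.save());
  }
  return ids;
}

// Phase 2: delete them all, wait another week, and compare the Total Storage readings.
function deleteProbes(ids: number[]): void {
  for (const id of ids) {
    record.delete({ type: PROBE_TYPE, id });
  }
}
```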
Also - another thing I think kills it from a normalization/data storage standpoint: if you go to do a saved search on custom records, when adding a formula field you see all fields, not just those that pertain to your record type. I wouldn't be entirely shocked if all custom records were stored in ONE table rather than separate tables, in which case (variety of custom record types) x (number of unique fields) x (total number of custom records) could be the actual storage calculation, instead of just the fields that apply to each record type (sketch below).
That would be an insane way to build the database, but if I think about it from a database-control perspective, you wouldn't necessarily want to give people the ability to add a whole bunch of tables ad hoc.
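Purely to illustrate that theory (this is speculation about their schema, not anything NetSuite has confirmed):

```typescript
// Speculative illustration of the "one wide table" theory; NOT NetSuite's actual schema.
// If every custom record row carried a slot for every custom field in the account,
// storage would scale with (total rows) x (total distinct custom fields),
// not with the handful of fields each record type actually defines.
interface WideCustomRecordRow {
  recordTypeId: string;   // which custom record type this row belongs to
  internalId: number;
  // one column per custom field defined anywhere in the account, mostly null per row
  [customFieldId: string]: string | number | null;
}
```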
s
I went so far as to create an evil twin record type, and was going to copy over the 25 million records into it, wait a week, then delete it all and wait another week, but I got scared that it might have a permanent billing impact on us, and it would be entirely my fault.
r
We don't have a sandbox (cue the "We'll do it live!" meme), so there's that. @KevinJ of Kansas I agree with both of your thoughts on that. It would be insane, but also pseudo-practical at the same time, depending on how you look at it.
c
This may come as a surprise, but it's simply a cash grab... the reason storage wasn't enforced in the old days was that they didn't know how. Now Oracle tells them to up revenue, and this becomes a method... I doubt anyone comes up with a corporate answer that has substance.