# ai-netsuite
Ryan:
Hey all, I wanted to start a thread about data privacy, risks, etc., so we have a resource of links. I’ve shown this to three clients and they have all asked about it. Here are the links I’ve sent them. Please add anything else you have found.
NetSuite’s “Associated Risks, Controls, and Mitigation Strategies”: https://docs.oracle.com/en/cloud/saas/netsuite/ns-online-help/article_9002708453.html
NetSuite Ninja:
Thanks Ryan! Digging a bit deeper I found this: Data usage for Claude.ai Consumer Offerings (e.g. Free Claude.ai, Claude Pro plan): https://privacy.anthropic.com/en/articles/10023555-how-do-you-use-personal-data-in-model-training

“We will not use your Inputs or Outputs to train our generative models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training.”
Do you think the Trust & Safety review process opens a significant loophole that would concern customers? Could anything from mentioning sensitive industries (“weapons,” “finance,” “crypto”) to security keywords trigger a flag? Doesn’t this mean that not all chats are private by default?
(Note: this is for the Consumer Claude Pro plan)
For Commercial customers: https://privacy.anthropic.com/en/articles/7996868-is-my-data-used-for-model-training
The commercial policy doesn’t appear to include the same “Trust & Safety” review exception for commercial customers...
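For anyone wiring Claude into NetSuite-related tooling, the practical distinction is that traffic sent through the Anthropic API falls under the commercial terms linked above rather than the consumer Claude.ai terms. Purely as an illustration (this assumes the anthropic Python SDK with an ANTHROPIC_API_KEY set, and the model id below is just a placeholder), an API call looks like this:

```python
# Minimal sketch: a request sent via the Anthropic API (commercial terms),
# as opposed to the same question typed into the consumer Claude.ai app.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id; use whatever your org has access to
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize the data-usage terms a client would care about."}
    ],
)
print(response.content[0].text)
```

Point being: the same prompt lands under two different data-usage policies depending on whether it goes through the consumer app or the API.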
Ryan:
@NetSuite Ninja those are good questions. I don’t know; I’m guessing their support would have more information on what that review actually involves.