
Confidentiality is the biggest barrier to AI use in the private sector. Most of my clients are banks, and I can't include any identifiable information from their meetings, emails, documents, or ideas in AI prompts without risking immediate termination. AI note-taking or summarizing is outright banned.

Under GDPR, I can't use personally identifiable information in AI prompts without violating the law, which is why AI vendors are scarce in Europe. In the USA, even without GDPR, putting inside information into AI prompts raises serious concerns. In health and medicine, using patient data in prompts would be a clear HIPAA violation.

These issues persist because there isn't a reliable way to prevent data from being sent back to the vendor. Even running models in-house doesn't fully solve the problem. Consequently, the use of LLM genAI is very limited for me. Stripping out all identifying information and verifying LLM outputs is so time-consuming that it's often faster not to use AI at all. The lax privacy protections in the USA only add to the complexity.
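To give a concrete idea of what that stripping involves, here is a minimal sketch (hypothetical, regex-based, in Python; not something I actually run) that scrubs only the most obvious identifiers before text goes anywhere near a prompt. Names, clients, and deal details still have to be caught by hand, which is where the real time goes.

```python
import re

# Hypothetical, minimal scrub: replaces the most obvious identifiers
# (emails, phone numbers, long account-like digit runs) with placeholders.
# It deliberately does NOT handle names, employers, or deal details --
# that part stays manual, which is why the whole process is so slow.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "LONG_NUMBER": re.compile(r"\b\d{6,}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(scrub("Contact j.doe@bank.example or +44 20 7946 0958 about account 12345678."))
# -> Contact [EMAIL] or [PHONE] about account [LONG_NUMBER].
```

Even in the best case, something like this only removes the mechanical patterns; the output still has to be read end to end before it can be shared, and the LLM's answer has to be verified afterwards.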


This is one of the few use cases where I *can* use LLM genAI: summarising generic text to offset my tendency to use too many words. :)
