AI: How to Avoid Privacy Pitfalls

  • Most privacy concerns with AI are exaggerated and don't apply to typical users.

  • Use AI wisely, avoid sharing sensitive data, and reap its vast benefits without worry.

Generative AI has quickly become a powerful tool for millions worldwide. Uptake since the release of ChatGPT has been remarkable: it reached 100 million users in just two months, and its maker, OpenAI, is now valued at around US$27 billion. With its human-like conversational abilities, AI offers a wide range of capabilities: writing, summarising, coding, generating images, and analysing data, to name a few. Yet despite this rapid adoption, many people still hesitate to use it, mainly because of privacy and security concerns.

While these concerns are valid, they don't apply to 99% of what the average person would use AI for. In fact, it's possible to gain most of the benefits of AI without taking any real risk at all. My experience suggests the barrier to using AI is not as significant as you might think.

Privacy and Security Concerns

It's true that, from a corporate perspective, there are significant risks in allowing employees to use tools like ChatGPT, which is why many organisations block access from their networks. The risks of sharing sensitive information with third parties, such as proprietary code or personal data, are very real. However, these situations are not the norm for most users.

Let's be clear: asking AI to handle sensitive data—like "here's a list of people's names, addresses, and credit card numbers, could you please sort them?"—is a bad idea. Similarly, submitting proprietary company code for a full review is risky. But for most personal and everyday uses, these concerns don't apply.

Common Uses Are Low Risk

Reflecting on my use of AI over the past week, involving around 100 different "chats"—some extending over 100 exchanges—I've found that I rarely need to consider privacy or security at all. This is because most of my interactions with AI don't involve sharing personal information. Instead, they are usually based on public data or involve the AI providing information directly to me.

For example, most of my chats replace Google searches, helping me find answers more directly: "What's the opposite of steel manning?" "Why does 'phishing' start with 'ph'?" "What's the UK equivalent of 'personal services income'?" and "Can you read this PDF from iCare and tell me the rules on dividends vs. salaries?" All these questions involve public information, so there's no sensitive data at risk.

You might argue that a compromised AI account could reveal insights about my personality. However, consider the broader picture: if you're worried about what a cybercriminal could learn from your AI queries, you'd need to shut down your Gmail, shopping, and Netflix accounts first; for most people, those give an attacker far more leverage than AI chats ever could.

Safe AI Use

The next most common use for me is seeking advice on learning to code. I'm currently developing a few personal apps to boost my efficiency, and since the code is generated by AI or is publicly available, there's no risk in having AI review or refactor it. However, if I were to ask AI to review the proprietary code from my startup, that would be a different story—one involving real intellectual property risks.

When I think about the few instances where I did have to actively consider privacy, one example was reviewing a letter of resignation for my son's partner. In that case, I simply removed personal details like names and addresses before submitting it to the AI. Another example involved generating an organisational chart from real employee names. There, I avoided uploading the names at all: I replaced each one with a random ID, keeping a mapping table on my own machine to scramble and unscramble them (a rough sketch of this approach follows below).
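For the technically curious, here is roughly what that anonymisation step can look like. This is a minimal Python sketch, not the exact script I used; the names, file name, and sample output are invented for illustration.

    import json
    import secrets
    from pathlib import Path

    MAPPING_FILE = Path("name_mapping.json")  # stays on your machine; never uploaded

    def anonymise(names):
        # Replace each real name with a random ID and save the mapping locally.
        mapping = {name: f"PERSON_{secrets.token_hex(4)}" for name in names}
        MAPPING_FILE.write_text(json.dumps(mapping, indent=2))
        return [mapping[name] for name in names]

    def deanonymise(text):
        # Swap the random IDs in the AI's output back to the real names.
        mapping = json.loads(MAPPING_FILE.read_text())
        for name, pseudonym in mapping.items():
            text = text.replace(pseudonym, name)
        return text

    # Send only the IDs to the AI, then restore the names in its reply.
    ids = anonymise(["Alice Smith", "Bob Jones"])
    ai_reply = f"{ids[1]} reports to {ids[0]}"  # pretend this came back from the AI
    print(deanonymise(ai_reply))                # -> "Bob Jones reports to Alice Smith"

The key point is that the mapping file never leaves your computer, so the AI only ever sees meaningless IDs.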

Reaping the Benefits Without Risk

In summary, less than 1% of my 1,000+ interactions with AI have required any privacy or security consideration, and even in those rare cases, the solutions were simple.

If you're hesitant about using AI due to security or privacy concerns, I encourage you to try out the free versions of these tools while being mindful of the information you share. Ask yourself, "Would I be worried if someone stole this information?" If the answer is "no," go ahead and hit "send." You might find that the benefits far outweigh the perceived risks.

Andrew Walker
Technology consulting for charities
https://www.linkedin.com/in/andrew-walker-the-impatient-futurist/

Did someone forward this email to you? Want your own subscription? Head over here and sign yourself right up!

Back issues available here.
