Today, we hear a lot about AI being integrated into our hardware devices: PCs, smartphones, tablets. The big OS vendors are partnering with AI companies to introduce many new features (perhaps too many).
They always claim that our data will be protected, but have you seen any real explanations or transparency about that?
Top comments (2)
Great question. And the short answer is: no, we haven't seen any real transparency.
We hear general promises: 'Your data is safe', 'We protect your privacy'. But where are the details?
Where are the clear answers to:
· Where is my data stored?
· Who has access to it?
· How long is it kept?
· How can I delete it?
· Is it used to train other AI models?
Without honest answers, promises are just marketing. And blind trust is not security.
Thank you for sharing your creativity with us. 🥰💞
I wish you more moments of happiness and success.
With the rise of GenAI, 'Data Protection by Design' is no longer optional—it's a security mandate for any scalable AI architecture. I especially appreciate your focus on the intersection of privacy and model training; ensuring data remains anonymized without losing its utility is the real engineering challenge of 2026!
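To make "anonymized without losing its utility" concrete, here is a minimal sketch of one common approach: pseudonymizing identifiers with a keyed hash (HMAC) so the same user maps to the same stable pseudonym and aggregate analysis still works, while the raw identifier never enters the dataset. The key name, record shape, and field names below are illustrative assumptions, not any vendor's actual pipeline:

```python
import hashlib
import hmac

# Illustrative only: the secret key must live outside the dataset (e.g. a KMS)
# and be rotated; anyone holding the key can re-link pseudonyms to users.
SECRET_KEY = b"example-secret-stored-outside-the-dataset"

def pseudonymize(user_id: str) -> str:
    """Return a stable pseudonym for user_id (irreversible without the key)."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical usage records for a feature-adoption report.
records = [
    {"user": "alice@example.com", "feature": "dictation"},
    {"user": "alice@example.com", "feature": "summarize"},
    {"user": "bob@example.com", "feature": "dictation"},
]

# Same user -> same pseudonym, so per-user counts survive anonymization.
anonymized = [
    {"user": pseudonymize(r["user"]), "feature": r["feature"]} for r in records
]
assert anonymized[0]["user"] == anonymized[1]["user"]
assert anonymized[0]["user"] != anonymized[2]["user"]
```

Note that keyed pseudonymization is weaker than true anonymization (it is reversible by the key holder), which is exactly why the transparency questions above, who holds the key, and for how long, still matter.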