The point about model cards being important but often incomplete stuck with me. It's one of those things that sounds like a documentation problem on the surface, but I think it points to something deeper about how we're building the AI supply chain.
What you learned about most models relying on Common Crawl, and about training decisions introducing security risks, connects back to the same issue: there's a long chain of dependencies where each link assumes the previous one did its due diligence. The base model inherits risks from the training data, the fine-tuned model inherits risks from the base model, and the application inherits risks from all of it. Model cards were supposed to make that chain traceable, but they're only as good as the weakest audit in the stack.
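To make that concrete, here's a rough sketch of what traceability looks like at the metadata level, using the Hugging Face model-card convention of YAML front matter. The model and dataset names below are hypothetical, apart from allenai/c4, which really is a cleaned derivative of Common Crawl:

```yaml
# Hypothetical model card front matter (Hugging Face convention).
# Every field here is a provenance claim someone downstream has to trust.
license: apache-2.0
base_model: example-org/base-7b    # the fine-tune inherits this model's risks
datasets:
  - allenai/c4                     # itself filtered from Common Crawl
  - example-org/internal-sft-mix   # hypothetical; an undocumented mix like
                                   # this is the unauditable link in the chain
```

When a field like `datasets` is missing or vague, the weakest-audit problem shows up exactly there: nobody downstream can check a claim that was never written down.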
That parallel between your debugging practice (using `po` and stepping through breakpoints) and what you're learning about AI security is interesting, even if unintentional. You're learning to trace execution state in one context while discovering how hard it is to trace provenance in another. One has mature tooling, the other barely has conventions.
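For readers who haven't used that workflow, here's roughly what it looks like in Xcode's LLDB console, annotated for clarity; the file, line number, and variable names are placeholders:

```
(lldb) breakpoint set --file ContentView.swift --line 42
(lldb) run                   # stops when the breakpoint is hit
(lldb) po viewModel.items    # print the stopped frame's object state
(lldb) next                  # step over one line...
(lldb) po viewModel.items    # ...and inspect the state again
```

Every step there is observable on demand; there's no equivalent console for asking a deployed model where its training data came from.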
Are you finding that the AI security material is changing how you think about the apps you're building in SwiftUI, or do those still feel like separate learning tracks for now?
Thank you so much for your thoughtful comment — I really appreciate it!
That point about tracing execution state vs. tracing data provenance really stood out to me as well. I hadn’t thought about it that way at all, but it makes a lot of sense.
I actually started learning AI security because I’m exploring how to use AI in my work, and TryHackMe released content on it at the perfect time. As I’ve been learning, I feel like my understanding of AI — especially its risks — has become much clearer.
For SwiftUI, I’m trying not to overreach and instead focus on building things step by step within my current understanding. Because of what I’ve learned about AI security, I think I’ve become a bit more cautious about integrating AI features into applications.
For now, they still feel like separate learning tracks, but I feel like they might connect more over time.
Thanks again for sharing your perspective — it gave me a lot to think about!