Yeah, technically. It's just a philosophy though, so there are times when it's not the appropriate philosophy.
In this case it really comes down to practicality. Is it useful to store the full binary file in the event log? Does that give any value? If the answer is no, then there's no point in saving the file in the log; just store a reference.
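As a minimal sketch of what "just store a reference" could look like, here's a hypothetical event shape (the names `FileAttached`, `entity_id`, and `file_key` are illustrative, not from any particular framework): the log records *that* a file was attached, plus a key pointing at it, never the binary payload itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical event: carries a reference (e.g. an S3 object key or a
# file-system path), not the file's bytes.
@dataclass(frozen=True)
class FileAttached:
    entity_id: str
    file_key: str
    recorded_at: str

def make_file_attached(entity_id: str, file_key: str) -> FileAttached:
    return FileAttached(
        entity_id=entity_id,
        file_key=file_key,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

event = make_file_attached("invoice-42", "uploads/invoice-42.pdf")
```

The event stays small and replayable, and the file itself lives wherever files live.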
You can have different sources of truth for each "domain" or specialty. The source of truth for files is the file system... if it is gone from there, it doesn't matter what the database says. :)
It may still be important to (event-sourced) areas of the business to record that something changed, and perhaps to trigger a further action. You can set this up in a number of ways. The client could issue an API call after successfully saving the file (this is request-driven, probably not the way I would go). Or you could set up event notification on file operations (this is event-driven) -- S3 supports this, or just use a file watcher for local apps.
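For the local-app case, a file watcher can be as simple as a polling loop over a directory. This is a stand-in sketch for real event notification (S3 events, inotify, etc.), which would push changes to you instead of you polling:

```python
import os

def poll_directory(path: str, seen: dict[str, float]) -> list[str]:
    """One polling pass: return files whose mtime changed since the
    last pass. `seen` maps file path -> last observed mtime and is
    mutated in place. A production system would subscribe to file
    events rather than poll like this."""
    changed = []
    for name in os.listdir(path):
        full = os.path.join(path, name)
        mtime = os.path.getmtime(full)
        if seen.get(full) != mtime:
            seen[full] = mtime
            changed.append(full)
    return changed
```

Each file detected here would then be turned into an event and handed to the interested parts of the system.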
At this point, this is really integration between two systems and no longer event-sourcing. Instead it is Event-Driven Architecture. Event sourcing really only applies inside individual domains, not across different systems. This is probably why you already had an inkling that event-sourcing would not solve the file management problem. By itself, it won't.
We have an audit requirement that it should not be possible to change data without a trace, so storing the file separately was a problem for us -- until we realized we only have to store the file's sha256 hash in the event. That way we can check whether the file on disk is still the right one, and we get the best of both worlds.