I'd probably argue against handling this case, since it shouldn't happen. Generally, in a properly supervised hierarchy, the approach is to handle the errors you can reasonably expect and let the supervision tree restart the process on unexpected conditions.
For instance, if you're running on a single bare-metal node you maintain (not in the cloud), and the DB lives on the same host as the application reading and writing to it, then it's not reasonable to expect the DB to be down; in that case, "let it crash". If you're on a cloud service, though, where the DB lives on another node, then it's reasonable to assume your DB will be unreachable at some point, and it may make sense to handle that error gracefully (for example, by retrying the request against another DB server).
Since we're talking about files on the same system the node software is running on, IMO this is one of those "let it crash" moments: under what reasonable conditions could the read ever fail? And if it did fail, would that be fatal or recoverable?
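To make the distinction concrete, here's a minimal Elixir sketch of the two styles. The module names, the config path, and the `request/1` stub are all hypothetical, just stand-ins for a real local file and a real remote DB client:

```elixir
# "Let it crash": the bang variant raises on any error, and the process's
# supervisor restarts it. Appropriate when failure is unexpected, e.g. a
# local file that should always be present.
defmodule LocalConfig do
  def load!(path \\ "/etc/myapp/config.json") do
    File.read!(path)
  end
end

# Defensive handling: match on the result tuple and fall back. Appropriate
# when failure is an expected condition, e.g. a remote DB node being
# temporarily unreachable.
defmodule RemoteFetch do
  def fetch(primary, fallback) do
    case request(primary) do
      {:ok, body} -> {:ok, body}
      {:error, _reason} -> request(fallback)
    end
  end

  # Hypothetical stub standing in for a real DB/HTTP client call.
  defp request(_server), do: {:error, :unreachable}
end
```

The point isn't that one style is better; it's that the choice follows from whether the failure is an expected part of the environment or a genuine anomaly the supervisor should clean up.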