Detecting and recovering from such inconsistencies normally requires a complete walk of its data structures, for example by a tool such as fsck (the file system checker).
This must typically be done before the file system is next mounted for read-write access.
After a crash, recovery simply involves reading the journal from the file system and replaying changes from this journal until the file system is consistent again.
The changes are thus said to be atomic (indivisible): either they succeed (having succeeded originally or been replayed completely during recovery), or they are not replayed at all (skipped because they had not been completely written to the journal before the crash occurred).
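The replay-until-consistent logic can be sketched as follows. The record tuples and the `"commit"` marker here are illustrative assumptions, not the on-disk format of any particular file system; the point is only that a transaction is applied during recovery if and only if its commit marker reached the journal before the crash.

```python
# Hypothetical journal replay sketch. Each transaction is a run of
# ("write", block_no, data) records terminated by a ("commit",) record.
# The record layout is an assumption for illustration.

def replay(journal, disk):
    """Apply fully committed transactions from `journal` to `disk`.

    journal: list of records as described above
    disk: dict mapping block_no -> data, standing in for the main file system
    """
    pending = []  # writes of the transaction currently being read
    for record in journal:
        if record[0] == "write":
            _, block_no, data = record
            pending.append((block_no, data))
        elif record[0] == "commit":
            # The transaction is complete in the journal: apply it whole.
            for block_no, data in pending:
                disk[block_no] = data
            pending = []
    # Trailing writes without a commit marker are skipped entirely, so a
    # half-written transaction is never partially applied.
    return disk

# A crash interrupted the second transaction before its commit record:
journal = [("write", 1, b"AA"), ("commit",),
           ("write", 2, b"BB")]  # no ("commit",) -> skipped on replay
disk = replay(journal, {})
# disk == {1: b"AA"}; block 2 was never touched
```

Running `replay` any number of times yields the same result, which is why recovery can simply be re-run if it is itself interrupted.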
If the file system is large and if there is relatively little I/O bandwidth, such a walk can take a long time and result in longer downtimes if it blocks the rest of the system from coming back online.

To prevent this, a journaled file system allocates a special area (the journal) in which it records the changes it will make ahead of time. A physical journal logs an advance copy of every block that will later be written to the main file system. Changes to the journal may themselves be journaled for additional redundancy, or the journal may be distributed across multiple physical volumes to protect against device failure. The internal format of the journal must guard against crashes while the journal itself is being written to.
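One common way a journal format guards against crashes during its own writes is to checksum each record: a record whose stored checksum does not match, or that is truncated, marks the end of the valid journal. A minimal sketch follows; the length-payload-CRC layout is an assumed format for illustration, not that of any real file system.

```python
import struct
import zlib

# Hypothetical journal record layout (an assumption, not a real format):
#   4-byte little-endian length | payload | 4-byte CRC32 of payload

def append_record(buf: bytearray, payload: bytes) -> None:
    """Append one checksummed record to the journal buffer."""
    buf += struct.pack("<I", len(payload))
    buf += payload
    buf += struct.pack("<I", zlib.crc32(payload))

def read_valid_records(buf: bytes):
    """Yield payloads up to the first truncated or corrupt record."""
    off = 0
    while off + 4 <= len(buf):
        (length,) = struct.unpack_from("<I", buf, off)
        end = off + 4 + length + 4
        if end > len(buf):  # record cut short by the crash
            return
        payload = buf[off + 4 : off + 4 + length]
        (crc,) = struct.unpack_from("<I", buf, off + 4 + length)
        if crc != zlib.crc32(payload):  # partially written record
            return
        yield payload
        off = end

journal = bytearray()
append_record(journal, b"block 7 -> new contents")
append_record(journal, b"block 9 -> new contents")
torn = bytes(journal[:-3])  # simulate a crash mid-write of the last record
# Only the first, fully written record survives validation:
assert list(read_valid_records(torn)) == [b"block 7 -> new contents"]
```

Because validation stops at the first bad record, a torn write at the journal's tail simply shortens the replayable prefix rather than corrupting recovery.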