Opened 18 years ago

Closed 18 years ago

#1835 closed patch (fixed)

Gradually delete big files to avoid I/O starvation on some filesystems

Reported by: bolek-mythtv@… Owned by: danielk
Priority: minor Milestone: 0.20
Component: mythtv Version: head
Severity: medium Keywords:
Cc: Ticket locked: no

Description

The attached patch implements gradual deletion of big recordings by repeatedly truncating them. On some filesystems (e.g., ext3), deleting multi-gigabyte files can take many seconds and cause I/O starvation of real-time processes (recording or playback). This patch avoids that by spreading the I/O load over time. Deleting a recording is already done asynchronously in a separate thread, so it should not matter if it takes longer.

This new functionality is controlled by a new setting, "GradualDeleteIncrement" (in the settings table). If the value is positive, it is used as the chunk size by which to truncate the file in each step. If the value is missing or 0, the delete is done in one step (i.e., the previous behavior).

There is no UI to change this setting as I have no skills (or interest) in UI programming.
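
For illustration only, here is a minimal sketch of the truncate-in-chunks idea using plain POSIX calls; the function name, chunk size, and sleep interval are assumptions for this example and are not taken from the attached patch:

{{{
// Hypothetical sketch of gradual delete by repeated truncation.
// Not the attached patch; names and intervals are illustrative.
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

bool GradualDelete(const char *path, off_t increment, unsigned sleepUsec)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return false;

    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return false;

    // Shrink the file one chunk at a time so the filesystem frees
    // blocks incrementally instead of all at once.
    for (off_t size = st.st_size; size > 0; )
    {
        size = (size > increment) ? size - increment : 0;
        if (ftruncate(fd, size) != 0)
        {
            close(fd);
            return false;
        }
        usleep(sleepUsec);  // give recording/playback I/O room to breathe
    }

    close(fd);
    return unlink(path) == 0;  // finally remove the now-empty file
}
}}}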

Attachments (5)

gradual-delete.patch (2.6 KB) - added by bolek-mythtv@… 18 years ago.
gradual-delete2.patch (8.9 KB) - added by bolek-mythtv@… 18 years ago.
gradual-delete3.patch (8.9 KB) - added by bolek-mythtv@… 18 years ago.
1835-v4.patch (10.0 KB) - added by danielk 18 years ago.
Reviewed, but untested version of patch.
1835-v5.patch (10.6 KB) - added by danielk 18 years ago.
Updated patch (sets a minimum delete rate to 8 MB/s)


Change History (19)

Changed 18 years ago by bolek-mythtv@…

Attachment: gradual-delete.patch added

comment:1 Changed 18 years ago by cpinkham

Resolution: wontfix
Status: new → closed

Marking this as won't fix. If your filesystem can't handle large deletes, tune your system or pick a better filesystem. This has been discussed over and over again on the mailing lists.

comment:2 Changed 18 years ago by bolek-mythtv@…

That's too bad :-(.

However, I would like to point out for the record that there are always trade-offs in switching filesystems. Neither of the often-recommended XFS or JFS can be shrunk, and reiserfs has its own reliability issues. So there are legitimate reasons to use ext3.

comment:3 Changed 18 years ago by cpinkham

Resolution: wontfix
Status: closed → reopened

comment:4 Changed 18 years ago by cpinkham

Owner: changed from Isaac Richards to danielk
Status: reopened → new

Reopening after discussing on IRC. If someone wants this committed sooner, they could resubmit an updated patch with the settings code added. 100MB increments would probably be good, but some might want lower. However, if recordings take too long to delete they are put back on the Watch Recordings screen, so something like 10MB every 2 seconds would be too slow for an HD file. Deletes are currently sequential, so if you delete 5 programs in a playlist they are removed one after another; even 100MB every 2 seconds could cause issues on the Watch Recordings screen when deleting multiple HD programs at the same time.
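
For rough scale (the 7 GB figure for a one-hour HD recording is an assumption, not a number from this ticket), the rates above work out to roughly:

{{{
// Back-of-the-envelope timing for the delete rates discussed above.
// The 7 GB one-hour HD recording size is an assumption.
#include <cstdio>

int main()
{
    const double fileMB = 7 * 1024.0;   // assumed one-hour HD recording

    // 10 MB every 2 seconds is 5 MB/s -> about 24 minutes per file
    printf("10 MB / 2 s : %.0f minutes\n", fileMB / 5.0 / 60.0);

    // 100 MB every 2 seconds is 50 MB/s -> about 2.4 minutes per file
    printf("100 MB / 2 s: %.1f minutes\n", fileMB / 50.0 / 60.0);
    return 0;
}
}}}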

comment:5 Changed 18 years ago by anonymous

Perhaps it would be a good idea to use a percentage of the total file size to determine the increment size. This would work with both HD and regular content. It could also factor in the total length of the recording too, e.g., delete 5% of a half-hour recording per increment.

comment:6 Changed 18 years ago by behanw@…

A common technique is to move large files to a separate "Delete" or "Trash" directory (on the same drive) before deleting them. That way the file can be deleted as slowly as necessary without showing up on the "Watch Recordings" screen. A rename(2) is both atomic and very fast as long as it stays on the same filesystem.
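
A minimal sketch of that move-then-delete approach; the trash-directory name and helper function are hypothetical, not anything MythTV provides:

{{{
// Hypothetical sketch: move a recording into a trash directory on the
// same filesystem so it vanishes from the UI immediately, then delete
// it slowly in the background.  rename(2) is atomic and cheap as long
// as source and destination are on the same filesystem.
#include <cstdio>
#include <string>

bool MoveToTrash(const std::string &recording, const std::string &trashDir)
{
    std::string name = recording.substr(recording.rfind('/') + 1);
    std::string dest = trashDir + "/" + name;
    return rename(recording.c_str(), dest.c_str()) == 0;
}
}}}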

comment:7 Changed 18 years ago by danielk

Milestone: 0.20
Status: new → assigned

Boleslaw,

Can you modify this patch so that the delete increment is automatically calculated? You need to ensure that all files marked for deletion are gone before the next free-space check on the deleting recorder. These checks are done every 5 to 10 minutes or so.
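
One possible way to derive the increment automatically (a sketch under the assumptions of a 5-minute check interval and a 2-second sleep between truncations, both taken loosely from the numbers mentioned in this ticket):

{{{
// Hypothetical increment calculation; the interval and sleep values
// are assumptions, not the values the final patch uses.
#include <sys/types.h>
#include <algorithm>

off_t CalcDeleteIncrement(off_t bytesToDelete,
                          unsigned checkIntervalSecs = 300,
                          unsigned sleepSecs = 2)
{
    // Truncate passes we can fit before the next free-space check
    // on the deleting recorder.
    off_t passes = std::max(1u, checkIntervalSecs / sleepSecs);

    // Increment just large enough to finish all pending deletes in time.
    return (bytesToDelete + passes - 1) / passes;
}
}}}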

comment:8 Changed 18 years ago by bolek-mythtv@…

I think I need some more info. Chris said that multiple deletes are sequential, but I don't see how that works, given that a separate thread is spawned for each one and nobody waits for these threads (see MainServer::DoHandleDeleteRecording).

Is there some code somewhere that maintains the list of files marked for deletion?

comment:9 Changed 18 years ago by ajlill@…

A couple of thoughts...

If there's an issue with I/O starvation on manual deletes, doesn't the same issue exist in autoexpire as well?

Instead of trying to make sure the deletes are done before the next autoexpire run happens, why not just let the autoexpire thread do the deletes? It could be handled analogously to how Live TV is handled: move the recordings to a "To Be Deleted" group and have autoexpire delete them the next time it wakes up. A side benefit is that if you delete by mistake, you have a couple of minutes to fix it!

comment:10 Changed 18 years ago by danielk

Bolek, there is no list of manually deleted files; they are just marked in the DB when the delete begins and removed from the DB when the delete is completed. Chris and I both assumed that you were doing this for auto-expire too, since that is where most of the deletes happen. It was bad of us not to look carefully at your code.

Anyway, ajlill is right: you should just place these files in the autoexpire list. Then implementing the automatic delete increment should be obvious, since there is a list of files that need to be deleted and a deadline for completing the deletes.

Please send any additional questions directly to me by e-mail or to the mythtv-dev mailing list.

Changed 18 years ago by bolek-mythtv@…

Attachment: gradual-delete2.patch added

comment:11 Changed 18 years ago by bolek-mythtv@…

I uploaded a reworked patch based on the discussion on mythtv-dev.

comment:12 Changed 18 years ago by bolek-mythtv@…

Oh crap, I found a stupid bug (I forgot that QWaitCondition::wait re-locks the mutex when it resumes).

Please see the new patch v3.
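
For context, the behavior in question (a sketch of the Qt wait pattern, not the patch code itself):

{{{
// QWaitCondition::wait() releases the mutex while sleeping and
// re-acquires it before returning, so the caller must unlock it
// again afterwards or the next lock attempt deadlocks.
#include <qmutex.h>
#include <qwaitcondition.h>

static QMutex         deleteLock;
static QWaitCondition deleteWait;

void SleepBetweenTruncates(unsigned long msecs)
{
    deleteLock.lock();
    deleteWait.wait(&deleteLock, msecs);  // mutex is held again on return
    deleteLock.unlock();                  // easy to forget after wait()
}
}}}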

Changed 18 years ago by bolek-mythtv@…

Attachment: gradual-delete3.patch added

Changed 18 years ago by danielk

Attachment: 1835-v4.patch added

Reviewed, but untested version of patch.

comment:13 Changed 18 years ago by Yeechang Lee <ylee@…>

For what it's worth, I've been trying out both the v3 and v4 versions of the patch on my 0.19-fixes system (FC4 ATrpms-129; I patch the SRPM then rebuild).

Both versions patch cleanly onto the 0.19-fixes source. Both rename the file being truncated to 'cifsxxxx' (it's a network mount). However, I could never get v4, which I tried first, to work right; it would either truncate really, really slowly (like 35K/second slowly) or not truncate at all. v3 works great (I did change the sleep interval between truncations to ten seconds instead of Boleslaw's two), except that the cifsxxxx files never get cleared away once the remainder is smaller than the truncation size (83MB for me). Other than that, the patch works exactly as advertised. My restricted-to-ext3 NAS is redeemed!

Changed 18 years ago by danielk

Attachment: 1835-v5.patch added

Updated patch (sets a minimum delete rate to 8 MB/s)

comment:14 Changed 18 years ago by danielk

Resolution: fixed
Status: assigned → closed

(In [10235]) Closes #1835. Gradually delete large files to avoid I/O starvation using a modified version of bolek's patch.
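
The v5 attachment is described as enforcing a minimum delete rate of 8 MB/s; a hedged sketch of what such a floor might look like (the actual code committed in [10235] may differ):

{{{
// Illustrative only: clamp the computed per-pass increment so the
// effective delete rate never drops below 8 MB/s.
#include <sys/types.h>
#include <algorithm>

off_t ClampIncrement(off_t computedIncrement, unsigned sleepSecs = 2)
{
    const off_t kMinBytesPerSec = 8LL * 1024 * 1024;           // 8 MB/s floor
    const off_t minIncrement    = kMinBytesPerSec * sleepSecs; // bytes per pass
    return std::max(computedIncrement, minIncrement);
}
}}}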
