Opened 15 years ago
Closed 14 years ago
#6734 closed defect (fixed)
Memory leak in EIT scanner (UK DVB-T)
Reported by: | Nick Morrott <knowledgejunkie (at) gmail (dot) com> | Owned by: | Stuart Auchterlonie |
---|---|---|---|
Priority: | minor | Milestone: | 0.24 |
Component: | MythTV - EIT | Version: | 0.22-fixes |
Severity: | medium | Keywords: | memory leak eit scanner |
Cc: | Ticket locked: | no |
Description
There have been reports from some users of constant mythbackend memory growth on the stable release when EIT scanning is enabled. My production backend is no exception and I see VSZ memory growth of ~8MiB/day when active EIT scanning is enabled.
I've finally gotten around to running valgrind on the latest 0.21-fixes and have attached the valgrind log to this ticket. valgrind was run with the options:
    valgrind --leak-check=full --show-reachable=yes --verbose \
        --log-file=/tmp/valgrind-021-fixes-0453.log \
        /usr/local/bin/mythbackend --noupnp -v important,general,eit,channel \
        > /tmp/mythbackend-valgrind-021-fixes.log
UPNP was disabled on the backend, and only one of the 3 cards (all KWorld DVB Xperts) has active EIT scanning enabled. Build details (source was exported from a checkout at r20947) are:
    MythTV Version   : exported
    MythTV Branch    : branches/release-0-21-fixes
    Library API      : 0.21.20080304-1
    Network Protocol : 40
    Options compiled in:
        linux debug use_hidesyms using_oss using_alsa using_backend
        using_dvb using_frontend using_iptv using_ivtv using_lirc
        using_opengl_vsync using_opengl_video using_v4l using_valgrind
        using_x11 using_xrandr using_xv using_xvmc using_xvmcw
        using_xvmc_vld using_bindings_perl using_bindings_python
        using_opengl using_ffmpeg_threads using_live
The summary of the leaktest (valgrind was run for approx 1 hour) was:
    ==20475== LEAK SUMMARY:
    ==20475==    definitely lost: 2,276 bytes in 31 blocks.
    ==20475==    indirectly lost: 302,980 bytes in 8,410 blocks.
    ==20475==      possibly lost: 2,829,856 bytes in 40 blocks.
    ==20475==    still reachable: 3,387,744 bytes in 56,158 blocks.
Multiplying the ~300KiB/hr of lost memory up over a 24-hour period agrees well with my observed memory growth of ~8MiB/day, though I don't know whether that extrapolation is valid.
Attachments (3)
Change History (27)
Changed 15 years ago by
Attachment: | 6734-valgrind-021-fixes.tar.bz2 added |
---|
comment:1 Changed 14 years ago by
Status: | new → infoneeded_new |
---|
Does this still happen with the eit scanner in 0.22?
Stuart
comment:2 Changed 14 years ago by
I'm going to have to monitor my DVB-S/T equipped dev machine (running 0.22-fixes) for several days to check - my production systems are still on 0.21-fixes until I find the time for OS-upgrades.
comment:3 follow-ups: 4 5 Changed 14 years ago by
Nick,
Any update on this? Ready to close if no info.
comment:4 Changed 14 years ago by
Replying to robertm:
Nick,
Any update on this? Ready to close if no info.
Apologies for the lack of update on this ticket. I've recently had the test machine on 24/7 (purely EIT collection from DVB-T and DVB-S) but hadn't actually cron'ed the monitoring...
I will post any useful findings once I've left it running for some days (a week seems sensible?) with monitoring enabled - but early indications suggest it is not growing like it was on 0.21-fixes (if at all). If I do see some growth, I'll try and get a valgrind trace as before.
Please don't close for the time being :)
comment:5 Changed 14 years ago by
Replying to robertm:
Nick,
Any update on this? Ready to close if no info.
I've now been monitoring my 0.22-fixes-based DVB-T/S "test" machine for the past 5 days, tracking VSZ growth. I am observing a steady increase in VSZ, even though the machine has only been used lightly for a handful of scheduled recordings from DVB-S. VSZ has grown almost linearly, from approximately 350MiB to almost 390MiB over those 5 days.
I have some more DVB-S recordings scheduled for the remainder of the week, but the weekend is free and I will endeavour to run valgrind to see if I see the same issue as with 0.21-fixes. I will also get a 7-day graph of the increase and post a link later today/tomorrow.
comment:6 follow-ups: 7 8 Changed 14 years ago by
I've been graphing my backend memory use for over a year. What I have noticed is that memory usage will increase for a few days (anything up to a week) before reaching a steady state.
Can you monitor for longer than a week without restarting the backend? I expect it won't grow any more after a week.
Stuart
comment:7 Changed 14 years ago by
Replying to stuarta:
I've been graphing my backend memory use for over a year. What I have noticed is that memory usage will increase for a few days (anything up to a week) before reaching a steady state.
Can you monitor for longer than a week without restarting the backend? I expect it won't grow any more after a week.
Sure. The current monitoring session started on 2010-01-15, so I'll keep it running over the weekend, before the machine resumes its scheduled recordings next week, to collect more than a week's worth of stats.
I have noticed occasional plateaus during the memory growth on my 0.21-fixes backend (original reason for ticket), but memory growth always resumes and appears to continue to rise until I restart the mythbackend service. It never plateaus permanently.
comment:8 Changed 14 years ago by
Replying to stuarta:
I've been graphing my backend memory use for over 1yr. What i have noticed is that the memory usage will increase for a few days (anything up to a week) before reaching a steady state.
Can you monitor for longer than a week without restarting the backend? I expect it won't grow any more after a week.
12 days and still growing (0.22-fixes, not trunk). VSZ graph here:
http://www.insidethex.co.uk/mythtv/mbe-mem-usage-2010-01-26-2w.png
Will update to trunk and/or run valgrind at the weekend if required.
comment:9 Changed 14 years ago by
Status: | infoneeded_new → new |
---|
comment:10 Changed 14 years ago by
@stuarta
Just wondering whether:
i) the recent commits to fix memory leaks touch on the issues valgrind highlighted in this ticket; and
ii) you need any more information?
(I've not been well recently and have not yet updated to 0.23RCx)
comment:11 Changed 14 years ago by
[24004] needs backporting to 0.22-fixes, but I would hope it's at least fixed in trunk, in the sense that a 30-hour run with EIT scanning and a couple of recordings reported 0 bytes "definitely lost".
comment:12 Changed 14 years ago by
Milestone: | unknown → 0.23 |
---|---|
Status: | new → infoneeded_new |
Version: | 0.21-fixes → 0.22-fixes |
Last report was against 0.22-fixes.
Waiting to find out if this occurs with 0.23/trunk
comment:14 Changed 14 years ago by
This evening I left valgrind running on a fresh checkout of trunk at r24200 for about 3 hours. This is on a Fedora 11 box with 1 DVB-T card and 1 DVB-S card. EIT is enabled for both cards.
mythbackend details:
    MythTV Version   : exported
    MythTV Branch    : trunk
    Network Protocol : 56
    Library API      : 0.23.20100417-1
    QT Version       : 4.6.2
    Options compiled in:
        linux debug use_hidesyms using_oss using_alsa using_pulse
        using_pulseoutput using_backend using_dvb using_frontend
        using_hdpvr using_iptv using_ivtv using_lirc using_mheg
        using_opengl_video using_opengl_vsync using_qtdbus using_qtwebkit
        using_v4l using_valgrind using_x11 using_xrandr using_xv
        using_xvmc using_xvmc_vld using_xvmcw using_bindings_perl
        using_bindings_python using_opengl using_ffmpeg_threads
        using_live using_mheg
The summary of the leaktest (valgrind was run for approx 3 hours) was:
    ==30155== LEAK SUMMARY:
    ==30155==    definitely lost: 116 bytes in 1 blocks.
    ==30155==      possibly lost: 962,816 bytes in 46,889 blocks.
    ==30155==    still reachable: 4,914,033 bytes in 23,232 blocks.
    ==30155==         suppressed: 0 bytes in 0 blocks.
The definitely lost record was:
    ==30155== 116 bytes in 1 blocks are definitely lost in loss record 362 of 475
    ==30155==    at 0x4004E5C: calloc (vg_replace_malloc.c:397)
    ==30155==    by 0x650D783: my_thread_init (in /usr/lib/mysql/libmysqlclient_r.so.16.0.0)
    ==30155==    by 0x6506BA4: mysql_server_init (in /usr/lib/mysql/libmysqlclient_r.so.16.0.0)
    ==30155==    by 0x6532438: mysql_init (in /usr/lib/mysql/libmysqlclient_r.so.16.0.0)
    ==30155==    by 0x401FDCD: (within /usr/lib/qt4/plugins/sqldrivers/libqsqlmysql.so)
    ==30155==    by 0x5C8D190: QSqlDatabase::open() (in /usr/lib/libQtSql.so.4.6.2)
    ==30155==    by 0x5482386: MSqlDatabase::OpenDatabase() (mythdbcon.cpp:87)
    ==30155==    by 0xCBFFCD: clone (in /lib/libc-2.10.2.so)
Most of the "possibly lost" memory was:
    ==30155== 958,124 bytes in 46,863 blocks are possibly lost in loss record 474 of 475
    ==30155==    at 0x4006F3D: malloc (vg_replace_malloc.c:207)
    ==30155==    by 0x5E393EC: qMalloc(unsigned int) (in /usr/lib/libQtCore.so.4.6.2)
    ==30155==    by 0x5E6C2C6: QMapData::node_create(QMapData::Node**, int, int) (in /usr/lib/libQtCore.so.4.6.2)
    ==30155==    by 0x46FA6A7: QMap<unsigned int, unsigned long long>::node_create(QMapData*, QMapData::Node**, unsigned int const&, unsigned long long const&) (qmap.h:428)
    ==30155==    by 0xCBFFCD: clone (in /lib/libc-2.10.2.so)
The full valgrind log is attached.
Changed 14 years ago by
Attachment: | 6734-valgrind-trunk-24200.tar.bz2 added |
---|
Valgrind log for trunk @ r24200
comment:15 Changed 14 years ago by
Status: | infoneeded_new → new |
---|
comment:16 Changed 14 years ago by
Milestone: | 0.23 → 0.23-fixes |
---|
comment:17 Changed 14 years ago by
Priority: | minor → critical |
---|
comment:18 Changed 14 years ago by
Status: | new → assigned |
---|
comment:19 Changed 14 years ago by
Milestone: | 0.23-fixes → 0.24 |
---|
comment:20 Changed 14 years ago by
I left trunk @ r26195 running for a few hours collecting EIT data from a single DVB-T card. Valgrind is still reporting leaks:
    ==2566== LEAK SUMMARY:
    ==2566==    definitely lost: 4,372 bytes in 82 blocks.
    ==2566==    indirectly lost: 2,063,760 bytes in 6,349 blocks.
    ==2566==      possibly lost: 1,284,804 bytes in 13,092 blocks.
    ==2566==    still reachable: 4,591,551 bytes in 9,648 blocks.
    ==2566==         suppressed: 0 bytes in 0 blocks.
Full valgrind log is attached.
Changed 14 years ago by
Attachment: | 6734-valgrind-trunk-26195.log.bz2 added |
---|
valgrind log for trunk @ r26195
comment:21 Changed 14 years ago by
(In [26228]) Refs #6734. Create RecordingProfile destructor. Cleans up the objects new'd in the constructor.
comment:22 Changed 14 years ago by
Priority: | critical → minor |
---|
The leak is now more like a small dribble. Downgrading priority.
comment:23 Changed 14 years ago by
(In [26231]) Refs #6734. Backports [26228]. Add RecordingProfile destructor.
comment:24 Changed 14 years ago by
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
The changes already made against this ticket fix the majority of the leak as triggered by the EIT scanner. The problems lie primarily within the RecordingProfile class, which is used by both the backend and the frontend.
Fixing the remaining leaks would cause segfaults in the frontend due to double frees. Fixing it properly requires a complete refactor of the RecordingProfile classes to separate the UI components from the data components.
As this lies outside of the EIT scanner, I'm closing this ticket. I'll open a separate one for what remains.
Stuart
Valgrind log for EIT scanning memory growth