Opened 14 years ago
Closed 10 years ago
#1490 closed bug (Fixed)
Daemon uses 4% cpu when idle
| Reported by: | tes | Owned by: | |
|---|---|---|---|
| Priority: | minor | Milestone: | 1.3.7 |
| Component: | Core | Version: | 1.3.5 |
| Keywords: | | Cc: | kenny@the-b.org |
Description
When running deluged with all torrents inactive and no interface connected, the daemon uses 4% CPU on my system (Atom N270).
I'm attaching deluged.profile. I've verified using netstat that no connections are open.
Change History (19)
comment:1 by , 14 years ago
comment:2 by , 14 years ago
| Version: | 1.3.1 → other (please specify) |
|---|---|

Same here, although it's slightly lower (1%) on a pretty old machine.
```
~# strace -p 6137
Process 6137 attached - interrupt to quit
select(12, [6 8 11], [], [], {0, 47996}) = 0 (Timeout)
gettimeofday({1295799730, 323780}, NULL) = 0
gettimeofday({1295799730, 324190}, NULL) = 0
gettimeofday({1295799730, 324339}, NULL) = 0
gettimeofday({1295799730, 324610}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48612}) = 0 (Timeout)
gettimeofday({1295799730, 373767}, NULL) = 0
gettimeofday({1295799730, 374121}, NULL) = 0
gettimeofday({1295799730, 374286}, NULL) = 0
gettimeofday({1295799730, 375011}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48227}) = 0 (Timeout)
gettimeofday({1295799730, 423693}, NULL) = 0
gettimeofday({1295799730, 424057}, NULL) = 0
gettimeofday({1295799730, 424175}, NULL) = 0
gettimeofday({1295799730, 424420}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48771}) = 0 (Timeout)
gettimeofday({1295799730, 473641}, NULL) = 0
gettimeofday({1295799730, 473956}, NULL) = 0
gettimeofday({1295799730, 474105}, NULL) = 0
gettimeofday({1295799730, 474433}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48789}) = 0 (Timeout)
gettimeofday({1295799730, 523852}, NULL) = 0
gettimeofday({1295799730, 524167}, NULL) = 0
gettimeofday({1295799730, 524318}, NULL) = 0
gettimeofday({1295799730, 524565}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48659}) = 0 (Timeout)
gettimeofday({1295799730, 573685}, NULL) = 0
gettimeofday({1295799730, 574033}, NULL) = 0
gettimeofday({1295799730, 574157}, NULL) = 0
gettimeofday({1295799730, 574557}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48640}) = 0 (Timeout)
gettimeofday({1295799730, 623938}, NULL) = 0
gettimeofday({1295799730, 624354}, NULL) = 0
gettimeofday({1295799730, 624474}, NULL) = 0
gettimeofday({1295799730, 624732}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48461}) = 0 (Timeout)
gettimeofday({1295799730, 673701}, NULL) = 0
gettimeofday({1295799730, 674073}, NULL) = 0
gettimeofday({1295799730, 674192}, NULL) = 0
gettimeofday({1295799730, 674636}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48556}) = 0 (Timeout)
gettimeofday({1295799730, 723747}, NULL) = 0
gettimeofday({1295799730, 724103}, NULL) = 0
gettimeofday({1295799730, 724228}, NULL) = 0
gettimeofday({1295799730, 724483}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48715}) = 0 (Timeout)
gettimeofday({1295799730, 773661}, NULL) = 0
gettimeofday({1295799730, 774017}, NULL) = 0
gettimeofday({1295799730, 774142}, NULL) = 0
gettimeofday({1295799730, 774541}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48657}) = 0 (Timeout)
gettimeofday({1295799730, 823803}, NULL) = 0
gettimeofday({1295799730, 824119}, NULL) = 0
gettimeofday({1295799730, 824279}, NULL) = 0
gettimeofday({1295799730, 824535}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48698}) = 0 (Timeout)
gettimeofday({1295799730, 873696}, NULL) = 0
gettimeofday({1295799730, 874009}, NULL) = 0
gettimeofday({1295799730, 874167}, NULL) = 0
gettimeofday({1295799730, 874567}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48664}) = 0 (Timeout)
gettimeofday({1295799730, 923769}, NULL) = 0
gettimeofday({1295799730, 924120}, NULL) = 0
gettimeofday({1295799730, 924243}, NULL) = 0
gettimeofday({1295799730, 924495}, NULL) = 0
select(12, [6 8 11], [], [], {0, 48701}^C <unfinished ...>
Process 6137 detached
```
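The trace shows a select() with a roughly 50 ms timeout expiring over and over on an otherwise idle process: about 20 wakeups per second with no I/O at all. A minimal stdlib sketch of that wakeup pattern (illustrative names only, not Deluge's code):

```python
import select
import socket
import time

# Illustrative: reproduce the idle wakeup pattern from the strace
# above. With one quiet fd to watch and no data arriving, each
# select() call simply sleeps out its ~50 ms timeout, so the process
# wakes ~20 times per second while doing nothing useful.
a, b = socket.socketpair()

INTERVAL = 0.05  # seconds, matching the {0, ~48000} microsecond timeouts

start = time.monotonic()
wakeups = 0
for _ in range(4):
    readable, _, _ = select.select([a], [], [], INTERVAL)
    assert readable == []  # nothing arrived: a pure timeout, as in the trace
    wakeups += 1
elapsed = time.monotonic() - start
# 4 wakeups take roughly 4 * 50 ms = 0.2 s of wall time

a.close()
b.close()
```

Each wakeup costs a context switch and a little CPU, which is how a process that transfers nothing can still burn a few percent of a slow core.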
comment:3 by , 14 years ago
| Milestone: | Future → 1.3.x |
|---|---|
| Type: | defect → bug |
| Version: | other (please specify) → 1.3.1 |
comment:4 by , 14 years ago
After noticing 100Mb per day of network usage while not doing anything, I've investigated some more.
While idle, Deluge is still exchanging peers with other computers. I'm not sure whether it is initiating these requests, but it is at least responding to them (them = "get_peers"?). Disabling DHT and Peer Exchange does not solve the problem.
follow-up: 8 comment:5 by , 13 years ago
What OS and libtorrent are you using?
How many torrents have you got loaded?
Libtorrent will still communicate with trackers even for torrents in paused state.
What plugins have you got enabled?
Have you encountered the same problem with 1.3.2?
comment:6 by , 13 years ago
| Resolution: | → invalid |
|---|---|
| Status: | new → closed |
Closing due to no further information provided. Reopen if this still occurs with the latest releases of libtorrent and Deluge.
comment:7 by , 13 years ago
| Resolution: | invalid |
|---|---|
| Status: | closed → reopened |
| Version: | 1.3.1 → 1.3.3 |
Reopening because this still happens, even after removing all torrents (in that case, restarting the daemon helps). Active plugins are label and webui.
comment:10 by , 13 years ago
| Milestone: | 1.3.x → Future |
|---|---|
| Status: | reopened → pending |
| Version: | 1.3.3 → 1.3.5 |
libtorrent 0.16, deluge 1.3.5
On x86_64 (where gettimeofday is not a real syscall) deluged does about 30 context switches per second while completely idle.
comment:11 by , 13 years ago
This is caused by the use of Twisted for non-blocking IO: since Python doesn't have a working thread model, it cannot afford to block for extended periods of time. It got worse with Twisted > 11.0.0 (which I'm currently using on Gentoo/x86): updating to 11.1.0 or later significantly raises the CPU use and context-switch rate on an otherwise completely idle system, in line with the 4% observed here. Looking at http://labs.twistedmatrix.com/2011/11/twisted-1110-has-been-released.html I see: "The poll() reactor as default where applicable, instead of select() everywhere."
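At the syscall level, a poll()-based reactor behaves the same way as the select() loop in the strace above when idle: with nothing to read, every call simply expires its timeout. A stdlib sketch of the poll() variant (illustrative only, not Twisted's code; note that select.poll is unavailable on Windows):

```python
import select
import socket

# Illustrative: the poll() analogue of the select() timeouts seen in
# the strace above. poll() takes its timeout in milliseconds where
# select() takes seconds, but an idle loop wakes on every tick either
# way, so switching reactor alone does not remove the wakeups.
a, b = socket.socketpair()          # one quiet fd for the poller to watch
poller = select.poll()
poller.register(a, select.POLLIN)

events = poller.poll(50)            # 50 ms timeout, no data pending
# an empty event list: a pure timeout, like "= 0 (Timeout)" above

a.close()
b.close()
```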
comment:13 by , 12 years ago
| Cc: | added |
|---|---|
comment:14 by , 12 years ago
| Milestone: | Future → performance |
|---|---|
comment:15 by , 12 years ago
I love Deluge & was excited to get it running on my raspberry pi, but 5-8% CPU usage at idle is a no-go on a device which is pretty constrained at the best of times.
comment:16 by , 12 years ago
I can shrink CPU usage to under 1% if I change the AlertManager's update interval from 0.05 (20 times a second) to 1 (once per second) in deluge/core/alertmanager.py. Are there any downsides to this?
It seems... odd... practice to have to poll for alerts (especially that frequently), but I don't know how the libtorrent API works, so perhaps it's necessary.
In any case, Twisted appears to be performing as advertised. It's not to blame just because it's being used to implement something approaching a tight loop.
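The tradeoff here, reduced to a stdlib sketch (illustrative; the real AlertManager drives its poll with Twisted's LoopingCall rather than a sleep loop): the idle wakeup rate, and hence the idle CPU cost, is set directly by the interval.

```python
import time

def run_alert_poller(handle_alerts, interval, ticks):
    """Illustrative stand-in for a periodic alert poll: wake up once
    per `interval` seconds and process pending alerts. At interval=0.05
    that is 20 wakeups/s; at interval=1.0, one wakeup/s, with the same
    amount of useful work done per wakeup."""
    calls = 0
    for _ in range(ticks):
        handle_alerts()        # e.g. drain libtorrent's alert queue here
        calls += 1
        time.sleep(interval)   # LoopingCall schedules this via the reactor
    return calls

# Run a few fast ticks just to show the mechanics.
n = run_alert_poller(lambda: None, interval=0.01, ticks=3)
```

The downside of a long interval is latency: alerts (torrent finished, tracker errors, etc.) are noticed up to one interval late, which is why a compromise value rather than 1 s may be preferable.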
follow-up: 18 comment:17 by , 11 years ago
I can confirm that increasing the AlertManager's interval fixes this issue. I think 0.05 is too low a value. Is it a problem to merge this upstream?
comment:18 by , 11 years ago
Replying to ezequielg:
> I can confirm that increasing the AlertManager's interval fixes this issue. I think 0.05 is too low a value. Is it a problem to merge this upstream?
I have been patching my Deluge on Gentoo to use 1s since forever without any problems (https://github.com/hhoffstaette/portage/blob/master/net-p2p/deluge/files/deluge-1.3.6-alertmanager_interval.patch). My guess is that this is by far the most common reason for excessive CPU usage.
comment:19 by , 10 years ago
| Milestone: | performance → 1.3.7 |
|---|---|
| Resolution: | → Fixed |
| Status: | pending → closed |
I have set it to 0.3s, so that should reduce the idle CPU usage to a more reasonable level.
1.3-stable: [d6b44afb9981] develop: [19bc0fb46817]
Well, that didn't fit, so here it is: http://www.mediafire.com/?m3avpl58phylmmg