Custom Query (2447 matches)
Results (484 - 486 of 2447)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#2250 | Fixed | Speed up removing multiple torrents from core | ||
Description |
Each call to core.remove_torrent took about 1 second on my machine, and most of that time (90%+) was spent writing the state file. With this patch, when removing multiple torrents the state file is written only once, after all the torrents have been removed (a sketch of this batching idea follows this entry). https://github.com/bendikro/deluge/commit/master-core-remove-torrents |
|||
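A minimal sketch of the batching idea above, assuming a hypothetical torrent-manager object with remove() and save_state() methods; the actual Deluge API and the linked commit may differ:

```python
# Hedged sketch: `manager`, remove() and save_state() are assumed
# stand-ins for Deluge's torrent manager, not its actual API.
def remove_torrents(manager, torrent_ids, remove_data=False):
    """Remove several torrents but write the state file only once."""
    failed = []
    for torrent_id in torrent_ids:
        try:
            # Suppress the per-torrent state write while batching.
            manager.remove(torrent_id, remove_data, save_state=False)
        except Exception as ex:
            failed.append((torrent_id, ex))
    # A single state-file write replaces one write per removed torrent.
    manager.save_state()
    return failed
```

With the roughly 1 second per state write reported in the ticket, removing N torrents then costs roughly the removals plus one write instead of N writes.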
#2255 | Fixed | Speed optimizations to the daemon | ||
Description |
I've been profiling the daemon and I've created a patch that decreases the CPU usage of the daemon while clients are connected.

torrent.get_status: This method is called a lot! To speed it up I've restructured and split the code so that most of the calls that previously went to get_status now call create_status_dict, which simply creates a dict from the cached info instead of asking libtorrent for a new status. The internal dictionary used to build the status values is now created only once per torrent instead of every time get_status is called, and the inner methods of get_status have been moved out so they are not recreated on each call. There are also changes to speed up frequent tests such as has_metadata and to make as few calls to libtorrent as possible.

core.get_torrents_status / libtorrent.post_torrent_updates: libtorrent.post_torrent_updates tells libtorrent to post an alert with the torrents that have changed since the last call to post_torrent_updates. It is called from core.get_torrents_status when a client requests an updated status dict. If the statuses were updated less than 2 seconds ago (the interval at which the GTKUI asks for updates), a status dict created from the cached data is returned instead of calling post_torrent_updates. This should make the daemon perform better with multiple clients connected.

Startup (loading torrents): To speed this up I've added a variable wait_on_handler in alertmanager which is set while the torrents are being loaded, so alertmanager calls the handlers without waiting. This speeds up adding the torrents considerably (~40%).

Log statements (log.isEnabledFor): Many of the log statements require "heavy" computations, so summed together the total time spent on log statements is many seconds (roughly 5-10s on my desktop) when loading many torrents (I've tested with 2000). I've therefore added log.isEnabledFor guards to the most important log statements. When profiling startup with 2000 torrents, each of the statements I guarded had been using from 150ms to 600ms in total.

Redundant and unnecessary RPC messages: I've removed some redundant RPC requests and combined others so that fewer packets are transmitted on startup. (Sketches of the status-caching and log-guard ideas follow this entry.)
|
|||
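A minimal sketch of the 2-second status cache described in #2255, assuming a hypothetical StatusCache wrapper and a fetch_updates callable standing in for the post_torrent_updates round-trip; it illustrates the throttling idea only, not the actual patch:

```python
import time

class StatusCache(object):
    """Return cached torrent status dicts while they are fresh enough.

    `fetch_updates` is a stand-in for asking libtorrent (via
    post_torrent_updates and the resulting alert) for changed torrents;
    the names here are assumptions, not Deluge's real API.
    """

    def __init__(self, fetch_updates, max_age=2.0):
        self.fetch_updates = fetch_updates
        self.max_age = max_age          # GTKUI polls roughly every 2 seconds
        self.last_update = 0.0
        self.cached = {}                # torrent_id -> status dict

    def get_torrents_status(self, torrent_ids):
        now = time.time()
        if now - self.last_update >= self.max_age:
            # Only ask libtorrent when the cache is older than max_age.
            self.cached.update(self.fetch_updates())
            self.last_update = now
        return {tid: self.cached.get(tid, {}) for tid in torrent_ids}
```

The log-statement change can be illustrated with the standard library's logging module: log.isEnabledFor ensures the expensive argument construction only runs when the level is actually enabled (torrent.debug_summary() is a made-up stand-in for a costly call):

```python
import logging

log = logging.getLogger("deluge")

def log_torrent(torrent):
    # Guard the expensive string construction behind the level check.
    if log.isEnabledFor(logging.DEBUG):
        log.debug("Torrent state: %s", torrent.debug_summary())
```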
#2258 | Fixed | RuntimeError when emitting event | ||
Description |
I've seen this traceback a couple of times, last time months ago, so this does not happen often. I presume this issue applies to master as well. The call in YaRSS2 that causes this is component.get("EventManager").emit(...)

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/threading.py", line 524, in __bootstrap
    self.__bootstrap_inner()
  File "/usr/local/lib/python2.7/threading.py", line 551, in __bootstrap_inner
    self.run()
  File "/usr/local/lib/python2.7/threading.py", line 504, in run
    self.__target(*self.__args, **self.__kwargs)
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/site-packages/twisted/python/threadpool.py", line 167, in _worker
    result = context.call(ctx, function, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/usr/local/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
  File "build/bdist.linux-x86_64/egg/yarss2/rssfeed_scheduler.py", line 135, in rssfeed_update_handler
  File "/home/bro/programmer/deluge/deluge/deluge/core/eventmanager.py", line 51, in emit
    component.get("RPCServer").emit_event(event)
  File "/home/bro/programmer/deluge/deluge/deluge/core/rpcserver.py", line 450, in emit_event
    for session_id, interest in self.factory.interested_events.iteritems():
exceptions.RuntimeError: dictionary changed size during iteration |
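A minimal sketch of one common way to avoid this RuntimeError: iterate over a snapshot of the dict so another thread can register or remove interests while events are being delivered. The emit_event signature, the shape of interested_events, and event.name are assumptions for illustration, not Deluge's actual fix:

```python
def emit_event(interested_events, event, send_event):
    """Deliver `event` to every session that registered interest in it.

    Assumed shapes: `interested_events` maps session_id -> collection of
    event names, and `send_event(session_id, event)` stands in for the
    RPC send.  Taking list(...) snapshots the dict, so concurrent
    registration from another thread no longer raises
    "dictionary changed size during iteration".
    """
    for session_id, interest in list(interested_events.items()):
        if event.name in interest:
            send_event(session_id, event)
```

Since the traceback shows the event being emitted from a Twisted threadpool worker, another option would be to marshal the emit back onto the reactor thread with reactor.callFromThread instead of touching shared state from the worker thread.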