Custom Query (2449 matches)



Results (22 - 24 of 2449)

Ticket | Resolution | Summary | Owner | Reporter
#277 | WorksForMe | Wrong connected seeds | markybob | sillyass

As you can see from the screenshot, there are 2 available seeders, yet I'm connected to many more.

#248 | Invalid | Wrong arguments to GtkFileChooser.set_current_folder() | andar |

Using r3188, I restarted the GUI and got this:

[DEBUG   ] config:117 Setting 'enabled_plugins' to -1 of <type 'int'>
Traceback (most recent call last):
  File "/usr/lib64/python2.4/site-packages/deluge/ui/gtkui/", line 182, in _on_new_core
  File "/usr/lib64/python2.4/site-packages/deluge/", line 194, in start
  File "/usr/lib64/python2.4/site-packages/deluge/", line 114, in start
  File "/usr/lib64/python2.4/site-packages/deluge/", line 126, in start_component
  File "/usr/lib64/python2.4/site-packages/deluge/ui/gtkui/", line 132, in start
  File "/usr/lib64/python2.4/site-packages/deluge/ui/gtkui/", line 170, in update_core_config
  File "/usr/lib64/python2.4/site-packages/deluge/ui/gtkui/", line 360, in set_default_options
    ("button_location").set_current_folder(
TypeError: GtkFileChooser.set_current_folder() argument 1 must be string, not bool
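The traceback shows a config value that should hold a download path arriving as a bool (likely False) and being passed straight to GtkFileChooser.set_current_folder(). A minimal sketch of a guard that would avoid the crash; the function name and fallback behaviour here are assumptions for illustration, not Deluge's actual code:

```python
import os

def safe_current_folder(value, fallback=None):
    """Return a path usable by GtkFileChooser.set_current_folder().

    If the stored config value is not a non-empty string (e.g. a bool
    left over from a bad default), fall back to a sane directory
    instead of passing the bad value through to GTK.
    """
    if isinstance(value, str) and value:
        return value
    # Hypothetical fallback: the user's home directory.
    return fallback if fallback is not None else os.path.expanduser("~")
```

The GTK call site would then become `chooser.set_current_folder(safe_current_folder(config_value))`, so GTK only ever sees a string.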
#42 | WontFix | Writes and Reads reduction | andar | anonymous

I've noticed that Deluge keeps flushing data back to the drive almost immediately after it receives it. It also seems to read data for uploading only as it's needed. With just one torrent going 100 KB/s down and 50 KB/s up, my HDD LED keeps flashing like crazy. This is very taxing for hard drives. Please make an option available to have the data flushed less frequently (like a slider with some reasonable values and a reasonable default, say 2 MB).

This would reduce writes.
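The write-side buffering the reporter asks for could be sketched like this; `WriteBuffer` and its names are hypothetical, not anything in Deluge:

```python
class WriteBuffer:
    """Accumulate received piece data in RAM and flush it to disk only
    when the buffer exceeds a threshold (the ticket suggests ~2 MB as
    a reasonable default), instead of writing on every arrival."""

    def __init__(self, write_fn, threshold=2 * 1024 * 1024):
        self.write_fn = write_fn      # callable that persists bytes to disk
        self.threshold = threshold
        self.pending = []
        self.size = 0
        self.flushes = 0              # how many disk writes actually happened

    def add(self, data):
        """Queue incoming bytes; flush only once the threshold is reached."""
        self.pending.append(data)
        self.size += len(data)
        if self.size >= self.threshold:
            self.flush()

    def flush(self):
        """Write all queued bytes in one go and reset the buffer."""
        if self.pending:
            self.write_fn(b"".join(self.pending))
            self.pending = []
            self.size = 0
            self.flushes += 1
```

At 100 KB/s with a 2 MB threshold, this turns a near-continuous stream of small writes into one larger write roughly every 20 seconds (an explicit `flush()` on shutdown or pause would still be needed to avoid losing queued data).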

Also, some sort of intelligent, more RAM-heavy uploading mechanism would be in order (hey, RAM is cheap, and it could be optional), like buffering more data at once. I'm not sure how the protocol works, but the pieces in most demand could be pre-emptively cached, and/or, if there's some sort of queue, Deluge could work out who needs what in advance and cache that data in bigger chunks in preparation before sending it.

This would reduce reads.
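The read-side caching the reporter describes amounts to keeping popular pieces in RAM. A minimal sketch using LRU eviction; `PieceCache` and its interface are assumptions for illustration, not Deluge's design:

```python
from collections import OrderedDict

class PieceCache:
    """Keep recently requested pieces in RAM so that repeatedly uploading
    a popular piece hits the disk once, not once per peer (LRU eviction)."""

    def __init__(self, read_fn, capacity=64):
        self.read_fn = read_fn        # callable that reads one piece from disk
        self.capacity = capacity      # max pieces held in RAM
        self.cache = OrderedDict()
        self.disk_reads = 0           # how many reads actually hit the disk

    def get(self, index):
        """Return piece data, serving from RAM when possible."""
        if index in self.cache:
            self.cache.move_to_end(index)   # mark as most recently used
            return self.cache[index]
        data = self.read_fn(index)
        self.disk_reads += 1
        self.cache[index] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used piece
        return data
```

With, say, 64 pieces of 256 KB each, this costs about 16 MB of RAM, which matches the ticket's "RAM is cheap and it could be optional" trade-off.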

Note: See TracQuery for help on using queries.