Discussion:
[libtorrent] Optimising network/NFS I/O
Sivachandran Paramasivam
2016-12-01 09:42:49 UTC
Hi,

I am seeding my files from NFS, and I am observing a slower upload rate
compared to disk-based seeding with the same set of files and peers.

I dumped the session stats, added logs to default_storage and found that
the files are read in 16KB blocks. Reading 16KB from disk is a lot faster
than reading 16KB over the network. But through a sample application I
found that if I read larger chunks (e.g. 4MB, 8MB) from the network
storage, the latency is amortized and it is on par with disk storage in
read speed.

Now my question is: is there a way in libtorrent to coalesce multiple 16KB
reads into one large read for a non-HTTP transfer? Or is there a way to
increase the block size for reads?

Thanks,
Sivachandran Paramasivam
------------------------------------------------------------------------------
Arvid Norberg
2016-12-03 14:07:32 UTC
On Thu, Dec 1, 2016 at 4:42 AM, Sivachandran Paramasivam wrote:
Post by Sivachandran Paramasivam
[...]
I dumped the session stats, added logs to default_storage and found that
the files are read in 16KB blocks.
I don't think you can tell that from the session stats. You'd have to use
dtrace or strace or something that looks at system calls.

Post by Sivachandran Paramasivam
Reading 16KB from disk is a lot faster than reading 16KB over the network.
But through a sample application I found that if I read larger chunks
(e.g. 4MB, 8MB) from the network storage, the latency is amortized and it
is on par with disk storage in read speed.
Fundamentally, the BitTorrent protocol requests blocks at 16 kiB
granularity. When reading from disk, you can make some assumptions that
are true a lot of the time. For instance, if you get a request from a peer
for a block in one piece, chances are the peer will request other blocks
from the same piece soon. This logic is implemented as read-ahead in the
libtorrent disk cache, which reads multiple 16 kiB blocks at a time.

You may want to make the libtorrent disk cache larger.
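Both the cache size and the read-ahead amount are controlled through
settings_pack. A rough sketch of what I mean (untested, and assuming the
1.1 setting names cache_size and read_cache_line_size, both counted in
16 kiB blocks):

  #include <libtorrent/session.hpp>
  #include <libtorrent/settings_pack.hpp>

  namespace lt = libtorrent;

  lt::settings_pack bigger_cache()
  {
      lt::settings_pack p;
      // cache_size is counted in 16 kiB blocks: 32768 blocks ~= 512 MiB
      p.set_int(lt::settings_pack::cache_size, 32768);
      // read_cache_line_size is how many 16 kiB blocks are read from
      // storage in one go on a cache miss; 256 blocks ~= 4 MiB, which is
      // the range where you said the NFS latency gets amortized
      p.set_int(lt::settings_pack::read_cache_line_size, 256);
      return p;
  }

and then ses.apply_settings(bigger_cache()); on your running session.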

However, I believe this problem fundamentally comes down to the *delay*
increasing (i.e. the round-trip time from the peer to your disk) while the
requesting peers don't send enough outstanding requests to fill the
bandwidth-delay product. This is understandable, because the peers don't
necessarily have a very good sense of what the delay and bandwidth
capacity are. The simplest way for *you* to do something about this is to
connect to more peers and seed more things. The more peers you have, the
more aggregate outstanding requests you'll get to fill your upload pipe.
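
For example (just a sketch, the numbers are made up, and I'm assuming the
connections_limit / unchoke_slots_limit / active_seeds settings in
settings_pack):

  #include <libtorrent/session.hpp>
  #include <libtorrent/settings_pack.hpp>

  namespace lt = libtorrent;

  // back-of-the-envelope: a 100 Mbit/s (12.5 MB/s) upload pipe with a
  // 50 ms request round-trip needs 12.5 MB/s * 0.05 s = 625 kB in
  // flight, i.e. roughly 38 outstanding 16 kiB requests across all
  // peers, just to keep the pipe full
  void allow_more_peers(lt::session& ses)
  {
      lt::settings_pack p;
      p.set_int(lt::settings_pack::connections_limit, 500);  // more peers
      p.set_int(lt::settings_pack::unchoke_slots_limit, 50); // upload to more of them at once
      p.set_int(lt::settings_pack::active_seeds, 20);        // seed more torrents concurrently
      ses.apply_settings(p);
  }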

Post by Sivachandran Paramasivam
Now my question is: is there a way in libtorrent to coalesce multiple 16KB
reads into one large read for a non-HTTP transfer? Or is there a way to
increase the block size for reads?
Yes, and it should be doing that by default. Although if you're
write-heavy, coalescing writes takes priority over reads.
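
If you want to double-check or force it, these are the knobs I mean
(sketch only, assuming the coalesce_reads / coalesce_writes bools in
settings_pack; the defaults may differ between versions):

  #include <libtorrent/session.hpp>
  #include <libtorrent/settings_pack.hpp>

  namespace lt = libtorrent;

  void force_coalescing(lt::session& ses)
  {
      lt::settings_pack p;
      // ask the disk I/O thread to combine adjacent 16 kiB blocks into
      // a single readv()/writev() call where possible
      p.set_bool(lt::settings_pack::coalesce_reads, true);
      p.set_bool(lt::settings_pack::coalesce_writes, true);
      ses.apply_settings(p);
  }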
--
Arvid Norberg