Post by linxs
I run 3 client_test processes to download the 3 torrent files.
Whether client_test is configured for high performance mode or min_memory
mode, almost 1 GB of memory is consumed. And it seems that the memory is
not released until client_test exits.
Do you know what kind of memory is being consumed?
If it is anonymous memory (i.e. heap allocations within the process), it's
most likely a bug, and it can be tracked down with a heap profiler. If you
suspect this is the case, please run one and post the results back. I think
this is unlikely though.
What I think is more likely is that the memory is part of the page cache.
This could be, for instance, because you're downloading from a network
that's faster than the drive you're saving the files to. In my experience
with some versions of Linux, this can result in the kernel allocating new
dirty pages, backed by a slow device, until it runs out of memory, causing
the system to more or less grind to a halt. The dirty pages are flushed in
the background, but the download creates new ones at a higher rate.
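One quick way to check is to look at the Cached, Dirty and Writeback
counters in /proc/meminfo while the download is running; if they account
for most of the memory, it's cache, not heap. A small sketch of that check
(my own illustration, Linux only, not part of client_test):

  // print the kernel's page-cache and dirty-page counters from /proc/meminfo
  #include <fstream>
  #include <iostream>
  #include <string>

  int main()
  {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line))
    {
      // "Cached" is clean page cache; "Dirty" and "Writeback" are pages
      // still waiting to be written out to the device
      if (line.rfind("Cached:", 0) == 0
        || line.rfind("Dirty:", 0) == 0
        || line.rfind("Writeback:", 0) == 0)
        std::cout << line << "\n";
    }
  }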
Anyway, you may want to experiment with setting file_pool_size to 1, to
force files to be closed more often.
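For reference, a sketch of how that setting could be applied when driving
libtorrent directly, assuming a version with the settings_pack API (older
versions expose the same field on session_settings):

  #include <libtorrent/session.hpp>
  #include <libtorrent/settings_pack.hpp>

  namespace lt = libtorrent;

  int main()
  {
    lt::settings_pack pack;
    // keep at most one file handle open at a time, so files are closed
    // again as soon as possible after being written to
    pack.set_int(lt::settings_pack::file_pool_size, 1);
    lt::session ses(pack);
    // ... add torrents and run as usual ...
  }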
You may also experiment with including fdatasync() calls after writes, to
see what happens.
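That would mean adding the call in the disk write path; the pattern itself
is just this (POSIX sketch, illustrative only, not libtorrent code):

  #include <fcntl.h>
  #include <unistd.h>

  // write a buffer and immediately ask the kernel to flush the file's data
  // (though not necessarily its metadata) to the device, so dirty pages
  // don't keep piling up
  bool write_and_sync(int fd, void const* buf, size_t len)
  {
    ssize_t written = ::write(fd, buf, len);
    if (written != static_cast<ssize_t>(len)) return false;
    return ::fdatasync(fd) == 0;
  }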
But fundamentally, you'll have to do some more digging. Also, you would
likely use less memory by running a single process instead of 3.
--
Arvid Norberg