Is there an appreciable difference between a single thread waiting for work and multiple threads that are created and destroyed on demand?



I have a piece of software that will, every few minutes or so, need to write data to disk. This is a slow process, and I would like to offload it to a secondary thread, rather than pausing my entire program while data is written.

Would it be better to structure this as a single worker thread that does its work, then pauses and waits for the next piece of work, or as multiple threads, where each time work needs to be done a new thread is spawned?


If I understand correctly, you are looking at using threading to reclaim processor time that would otherwise be lost waiting on I/O (writing to disk), and you are asking how maintaining a persistent pool of threads compares to an ad hoc approach (creating and destroying a thread per use)?

This probably comes down to balancing 'good code' against the overhead of creating threads. Common wisdom on threads suggests keeping a thread pool, specifically to avoid the overhead of spawning new ones. However, in situations like this the spawning overhead (typically a few microseconds) is likely negligible compared to the disk write you are trying to offload, which can take milliseconds or longer.
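To make the ad hoc approach concrete, here is a minimal sketch in Python of spawning a thread per write. The function names (`write_snapshot`, `save_async`) and the file path are hypothetical, chosen for illustration; the point is only that thread creation is cheap next to the write itself.

```python
import threading

def write_snapshot(path, data):
    # Placeholder for the slow disk write the question describes.
    with open(path, "w") as f:
        f.write(data)

def save_async(path, data):
    # Spawn-per-write: creating the thread costs microseconds,
    # negligible next to a multi-millisecond disk write.
    t = threading.Thread(target=write_snapshot, args=(path, data))
    t.start()
    return t

t = save_async("snapshot.txt", "state at t=0\n")
# The main thread is free to continue; join only when you must
# guarantee the write has finished (e.g. at shutdown).
t.join()
```

The returned thread handle matters mainly at shutdown: if the program exits while a write is in flight, the file may be truncated, so joining (or using non-daemon threads) before exit is the safe default.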

Another consideration is the system and hardware you are writing to/with. Having multiple concurrent writes may negatively affect your write speed (or cause other problematic behaviour).
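This is where the single persistent worker has a structural advantage: one thread draining a queue serializes the writes for free, so the disk never sees two of your writes at once. A minimal sketch (names like `jobs` and the log path are assumptions for illustration):

```python
import queue
import threading

# One queue, one worker: writes are handled strictly one at a time,
# in submission order, so they can never overlap on disk.
jobs = queue.Queue()

def worker():
    while True:
        item = jobs.get()
        if item is None:            # sentinel value: shut down cleanly
            break
        path, data = item
        with open(path, "a") as f:  # the slow disk write
            f.write(data)
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Producers just enqueue; they never block on the disk.
jobs.put(("log.txt", "first\n"))
jobs.put(("log.txt", "second\n"))

jobs.join()     # wait for all queued writes to complete
jobs.put(None)  # tell the worker to exit
t.join()
```

The same queue also gives you a natural shutdown point: drain it, send the sentinel, and you know no write is half-finished when the program exits.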