38 Commits

Author SHA1 Message Date
Rasmus Munk Larsen
6560692c67 Improve EventCount used by the non-blocking threadpool.
The current algorithm requires threads to commit/cancel waiting in the
order they called Prewait. The spinning caused by that serialization can
consume a lot of CPU time on some workloads. Restructure the algorithm so
it does not require that serialization, and remove the spin waits from
Commit/CancelWait.
Note: this reduces the maximum number of threads from 2^16 to 2^14 to
leave more space for the ABA counter (which is now 22 bits).
Implementation details are explained in comments.
2019-02-22 13:56:26 -08:00
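A hedged sketch of the packing trade-off this commit describes: with a 14-bit waiter count the pool tops out at 2^14 threads, which frees bits for a 22-bit ABA counter. The layout below is illustrative only, not Eigen's actual state word.

```cpp
#include <cstdint>

// Illustrative packing of an EventCount-style state word; NOT Eigen's
// exact layout. A 14-bit waiter count caps the pool at 2^14 threads and
// frees room for a 22-bit ABA/generation counter, which guards
// compare-and-swap updates against stale (recycled) values.
constexpr uint64_t kWaiterBits = 14;
constexpr uint64_t kAbaBits    = 22;
constexpr uint64_t kWaiterMask = (1ull << kWaiterBits) - 1;
constexpr uint64_t kAbaMask    = ((1ull << kAbaBits) - 1) << kWaiterBits;

inline uint64_t WaiterCount(uint64_t state) { return state & kWaiterMask; }
inline uint64_t AbaCount(uint64_t state) { return (state & kAbaMask) >> kWaiterBits; }
```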
Rasmus Munk Larsen
93f9988a7e A few small fixes to a) prevent throwing in ctors and dtors of the threading code, and b) support the matrix exponential on platforms with 113 bits of mantissa for long doubles. 2018-11-09 14:15:32 -08:00
Christoph Hertzberg
2c083ace3e Provide EIGEN_OVERRIDE and EIGEN_FINAL macros to mark virtual function overrides 2018-09-24 18:01:17 +02:00
Rasmus Munk Larsen
44d8274383 Cast to longer type. 2018-09-19 13:31:42 -07:00
Rasmus Munk Larsen
d638b62dda Silence compiler warning. 2018-09-19 13:27:55 -07:00
Ravi Kiran
1f0c941c3d Collapsed revision
* Merged eigen/eigen into default
2018-09-17 18:29:12 -07:00
Rasmus Munk Larsen
8d9bc5cc02 Fix g++ compilation. 2018-08-23 13:06:39 -07:00
Rasmus Munk Larsen
668690978f Pad PerThread when we emulate thread_local to prevent false sharing. 2018-08-23 12:54:33 -07:00
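The field names below are assumptions; the sketch only shows the padding technique the commit refers to: align each per-thread slot to a cache line so neighboring slots cannot be falsely shared.

```cpp
#include <cstdint>

// Hypothetical sketch (fields are assumptions): aligning each per-thread
// slot to a 64-byte cache line means two threads writing adjacent slots
// never touch the same line, eliminating false sharing.
struct alignas(64) PerThread {
  uint64_t rand_state = 0;  // e.g. per-thread PRNG state
  int thread_id = -1;       // e.g. pool-local thread index
};
// sizeof is always a multiple of alignof, so an array of PerThread
// places every slot on its own cache line.
static_assert(sizeof(PerThread) % 64 == 0, "slot spans whole cache lines");
```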
Rasmus Munk Larsen
6cedc5a9b3 rename mu. 2018-08-23 12:11:58 -07:00
Rasmus Munk Larsen
6e0464004a Store std::unique_ptr instead of raw pointers in per_thread_map_. 2018-08-23 12:10:08 -07:00
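A minimal sketch of the ownership change, assuming a key type and a stand-in PerThread definition; the point is that the map now frees its entries automatically.

```cpp
#include <cstdint>
#include <memory>
#include <mutex>
#include <unordered_map>

struct PerThread { int thread_id = -1; };  // stand-in for the real struct

// Sketch: with std::unique_ptr values, erasing an entry or destroying
// the map frees each PerThread automatically, so the pool's destructor
// needs no manual delete loop. The key type here is an assumption.
std::mutex per_thread_map_mutex_;
std::unordered_map<uint64_t, std::unique_ptr<PerThread>> per_thread_map_;
```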
Rasmus Munk Larsen
d35880ed91 merge 2018-08-23 11:36:49 -07:00
Christoph Hertzberg
a709c8efb4 Replace pointers by values or unique_ptr for better leak-safety 2018-08-23 19:41:59 +02:00
Rasmus Munk Larsen
15d4f515e2 Use plain_assert in destructors to avoid throwing in CXX11 tests, where main.h overwrites eigen_assert with a throwing version. 2018-08-14 12:17:46 -07:00
Rasmus Munk Larsen
8278ae6313 Add thread-local storage support, emulated with a hash map, on platforms that do not support it natively. 2018-08-13 15:31:23 -07:00
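A minimal sketch of the emulation strategy, assuming a mutex-protected map keyed by std::this_thread::get_id(); Eigen's real implementation differs in detail.

```cpp
#include <mutex>
#include <thread>
#include <unordered_map>

// Sketch of hash-map-based thread_local emulation. The first access from
// a thread inserts a default-constructed slot keyed by its id; later
// accesses return the same slot. References into std::unordered_map stay
// valid across rehashes, so handing out T& is safe here.
template <typename T>
class EmulatedThreadLocal {
 public:
  T& local() {
    std::lock_guard<std::mutex> lock(mu_);
    return map_[std::this_thread::get_id()];
  }

 private:
  std::mutex mu_;
  std::unordered_map<std::thread::id, T> map_;
};
```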
Eugene Zhulenev
e204ecdaaf Remove SimpleThreadPool and always use {NonBlocking}ThreadPool 2018-07-16 15:06:57 -07:00
Benoit Steiner
dc524ac716 Fixed compilation warning 2017-07-06 21:11:15 -07:00
Rasmus Munk Larsen
bfd7bf9c5b Get rid of Init(). 2017-03-10 08:48:20 -08:00
Rasmus Munk Larsen
d56ab01094 Use C++11 ctor forwarding to simplify code a bit. 2017-03-10 08:30:22 -08:00
Rasmus Munk Larsen
344c2694a6 Make the non-blocking threadpool more flexible and less wasteful of CPU cycles for high-latency use-cases.
* Adds a hint to ThreadPool that allows spin waiting to be turned off. Currently each reader and record-yielder op in a graph creates a threadpool with a thread that spins for 1000 iterations through the work-stealing loop before yielding; this is wasteful for ops that mostly process I/O.

* This also changes the number of iterations through the steal loop to be inversely proportional to the number of threads. Since the time of each iteration is proportional to the number of threads, this yields roughly a constant spin time.

* Implement a separate worker loop for the num_threads == 1 case, since there is no point in going through the expensive steal loop. Moreover, since Steal() calls PopBack() on the victim queues, it might reverse the order in which ops are executed relative to the order in which they were scheduled, which is usually counter-productive for the I/O workloads that single-threaded pools tend to be used for.

* Store num_threads in a member variable for simplicity and to avoid a data race between the thread creation loop and worker threads calling threads_.size().
2017-03-09 15:41:03 -08:00
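A hedged sketch of the spin-budget arithmetic from the second bullet above (kSpinCount and allow_spinning are assumed names, not Eigen's):

```cpp
// Sketch only. One pass through the steal loop touches every queue, so
// its cost grows with num_threads; dividing the iteration budget by
// num_threads keeps total wall-clock spin time roughly constant as the
// pool grows.
int SpinBudget(bool allow_spinning, int num_threads) {
  const int kSpinCount = 5000;  // hypothetical base budget
  if (!allow_spinning || num_threads <= 0) return 0;
  return kSpinCount / num_threads;
}
```

For num_threads == 1 there is no victim queue at all, so the dedicated worker loop can simply pop its own queue in order and park when it is empty, which also preserves the scheduling order the commit mentions.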
Benoit Steiner
3beb180ee5 Don't call EnvThread::OnCancel by default since it doesn't do anything. 2016-12-14 18:33:39 -08:00
Benoit Steiner
2f5b7a199b Reworked the threadpool cancellation mechanism to not depend on pthread_cancel, since pthread_cancel turns out not to work properly on numerous platforms. 2016-12-09 13:05:14 -08:00
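The commit does not spell out the replacement mechanism; a common portable alternative to pthread_cancel is cooperative cancellation via a polled flag, sketched here under that assumption.

```cpp
#include <atomic>

// Hypothetical sketch of cooperative cancellation: workers poll a flag
// between tasks and exit voluntarily, instead of being killed
// asynchronously as pthread_cancel would do.
class CancellableLoop {
 public:
  void Cancel() { cancelled_.store(true, std::memory_order_relaxed); }

  void WorkerLoop() {
    while (!cancelled_.load(std::memory_order_relaxed)) {
      // dequeue and run one task, or block waiting for work
    }
  }

 private:
  std::atomic<bool> cancelled_{false};
};
```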
Benoit Steiner
7bfff85355 Added support for thread cancellation on Linux 2016-12-08 08:12:49 -08:00
Benoit Steiner
eb6ba00cc8 Properly size the list of waiters 2016-09-12 10:31:55 -07:00
Benoit Steiner
13df3441ae Use MaxSizeVector instead of std::vector: Xcode sometimes assumes that std::vector allocates aligned memory and therefore issues aligned instructions to initialize it. This can result in random crashes when compiling with AVX instructions enabled. 2016-09-02 19:25:47 -07:00
Rasmus Munk Larsen
a9c1e4d7b7 Return -1 from CurrentThreadId when called by thread outside the pool. 2016-06-23 16:40:07 -07:00
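A small usage sketch of the resulting contract, using the CurrentThreadId/NumThreads API added in commit 76308e7fd2 below (header path and class name follow current Eigen; they may have differed in 2016):

```cpp
#include <cassert>
#include "unsupported/Eigen/CXX11/ThreadPool"

// Threads inside the pool see an id in [0, NumThreads()); any other
// thread, such as the caller here, gets -1.
int main() {
  Eigen::ThreadPool pool(4);
  assert(pool.CurrentThreadId() == -1);  // caller is not a pool thread
  pool.Schedule([&pool] {
    const int id = pool.CurrentThreadId();
    assert(id >= 0 && id < pool.NumThreads());
  });
  return 0;  // ~ThreadPool joins the workers
}
```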
Rasmus Munk Larsen
d39df320d2 Resolve merge. 2016-06-23 15:08:03 -07:00
Benoit Steiner
a29a2cb4ff Silenced a couple of compilation warnings generated by Xcode 2016-06-22 16:43:02 -07:00
Benoit Steiner
f8fcd6b32d Turned the constructor of the PerThread struct into what is effectively a constant expression to make the code compatible with a wider range of compilers 2016-06-22 16:03:11 -07:00
Benoit Steiner
aedc5be1d6 Avoid generating pseudo-random numbers that are multiples of 5: this helps
spread the load over multiple CPUs without having to rely on work stealing.
2016-06-14 17:51:47 -07:00
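The message leaves the mechanism implicit; below is a purely illustrative way to reject such outputs, using a stand-in LCG rather than whatever generator the code actually uses.

```cpp
#include <cstdint>

// Purely illustrative. Rejecting outputs divisible by 5 keeps draws off
// that residue class, which per the commit helped spread load across
// CPUs without relying on work stealing.
uint64_t NextNonMultipleOfFive(uint64_t& state) {
  do {
    // Knuth's MMIX LCG step, used here only as a stand-in PRNG.
    state = state * 6364136223846793005ULL + 1442695040888963407ULL;
  } while (state % 5 == 0);
  return state;
}
```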
Rasmus Munk Larsen
f1f2ff8208 size_t -> int 2016-06-03 18:06:37 -07:00
Rasmus Munk Larsen
76308e7fd2 Add CurrentThreadId and NumThreads methods to Eigen threadpools and TensorDeviceThreadPool. 2016-06-03 16:28:58 -07:00
Benoit Steiner
1ae2567861 Fixed some compilation warnings 2016-05-26 15:57:19 -07:00
Benoit Steiner
2a54b70d45 Fixed a potential race condition in the non-blocking thread pool 2016-05-12 11:45:48 -07:00
Christoph Hertzberg
2150f13d65 fixed some double-promotion and sign-compare warnings 2016-05-11 23:02:26 +02:00
Benoit Steiner
6a5717dc74 Explicitly initialize all the atomic variables. 2016-05-11 10:04:41 -07:00
Benoit Steiner
dc7dbc2df7 Optimized the non-blocking thread pool:
* Use a pseudo-random permutation of queue indices during random stealing. This ensures that all the queues are considered.
* Directly pop from a non-empty queue when we are waiting for work, instead of first noticing that there is a non-empty queue and then doing another round of random stealing to re-discover it.
* Steal only 1 task from a remote queue instead of half of the tasks.
2016-05-09 10:17:17 -07:00
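A hedged sketch of the permutation idea in the first bullet: stepping through queue indices with a stride coprime to the queue count visits every queue exactly once. Names and signatures are assumptions, not Eigen's.

```cpp
#include <cassert>
#include <cstddef>
#include <numeric>  // std::gcd, C++17

// When gcd(stride, n) == 1, the sequence (start + i*stride) mod n
// enumerates all residues 0..n-1, so every queue is considered once.
template <typename Visit>
void ForEachQueueInPseudoRandomOrder(std::size_t n, std::size_t start,
                                     std::size_t stride, Visit visit) {
  assert(n > 0 && std::gcd(stride, n) == 1);
  std::size_t idx = start % n;
  for (std::size_t i = 0; i < n; ++i) {
    visit(idx);                // e.g. attempt to steal from queue idx
    idx = (idx + stride) % n;
  }
}
```

The caller would pick start and stride from a per-thread PRNG, restricting stride to values coprime with the queue count so the walk covers every queue.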
Benoit Steiner
2b72163028 Implemented a more portable version of thread local variables 2016-04-19 15:56:02 -07:00
Benoit Steiner
78a51abc12 Added a more scalable non-blocking thread pool 2016-04-14 15:23:10 -07:00