Henry Schreiner
9bb26eb8f1
Restore __device__
2017-10-21 00:50:38 +00:00
Henry Schreiner
4245475d22
Fixing missing inlines on device functions for newer CUDA cards
2017-10-20 03:20:13 +00:00
Gael Guennebaud
9c353dd145
Add C++11 max_digits10 for half.
2017-09-06 10:22:47 +02:00
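For illustration only (not taken from the commit itself): with a C++11 max_digits10 defined for half, a value can be printed with enough digits for a safe text round trip. The include path and the cast to float are assumptions.

    #include <iostream>
    #include <iomanip>
    #include <limits>
    #include <Eigen/Core>   // assumed to provide Eigen::half

    int main() {
      Eigen::half h(0.1f);
      // Print with enough digits that the stored fp16 value survives a text round trip.
      std::cout << std::setprecision(std::numeric_limits<Eigen::half>::max_digits10)
                << static_cast<float>(h) << "\n";
    }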
Benoit Steiner
a4089991eb
Added support for CUDA 9.0.
2017-08-31 02:49:39 +00:00
Gael Guennebaud
21633e585b
bug #1462 : remove all occurrences of the deprecated __CUDACC_VER__ macro by introducing EIGEN_CUDACC_VER
2017-08-24 11:06:47 +02:00
Gael Guennebaud
bbd97b4095
Add an EIGEN_NO_CUDA option, and introduce EIGEN_CUDACC and EIGEN_CUDA_ARCH aliases
2017-07-17 01:02:51 +02:00
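A minimal sketch of the kind of gating these aliases enable; the macro names come from the commit message, while the helper macro and function below are hypothetical.

    // Sketch only: gate device qualifiers and architecture-specific paths on the
    // EIGEN_CUDACC / EIGEN_CUDA_ARCH aliases instead of the raw compiler macros.
    #if defined(EIGEN_CUDACC)
    #define MY_DEVICE_FUNC __host__ __device__   // hypothetical project-local macro
    #else
    #define MY_DEVICE_FUNC
    #endif

    MY_DEVICE_FUNC inline float add_sketch(float a, float b) {
    #if defined(EIGEN_CUDA_ARCH) && EIGEN_CUDA_ARCH >= 530
      // A device-only fast path (e.g. native fp16 math on sm_53+) could go here.
      return a + b;
    #else
      return a + b;   // host / older-architecture fallback
    #endif
    }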
Benoit Steiner
3baef62b9a
Added missing __device__ qualifier
2017-06-13 12:56:55 -07:00
Benoit Steiner
449936828c
Added missing __device__ qualifier
2017-06-13 12:54:57 -07:00
Gael Guennebaud
26f552c18d
fix compilation of Half in C++98 (issue introduced in previous commit)
2017-06-09 13:36:58 +02:00
Gael Guennebaud
d588822779
Add missing std::numeric_limits specialization for half, and complete NumTraits<half>
2017-06-09 11:51:53 +02:00
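Purely as a usage sketch of what the completed traits make possible; the exact member set and the float casts are assumptions, not taken from the commit.

    #include <iostream>
    #include <limits>
    #include <Eigen/Core>

    int main() {
      // std::numeric_limits<Eigen::half> specialization (the subject of this commit).
      std::cout << "epsilon: "
                << static_cast<float>(std::numeric_limits<Eigen::half>::epsilon()) << "\n";
      // NumTraits<half> is Eigen's own trait class, used throughout the library.
      std::cout << "highest: "
                << static_cast<float>(Eigen::NumTraits<Eigen::half>::highest()) << "\n";
      std::cout << "lowest:  "
                << static_cast<float>(Eigen::NumTraits<Eigen::half>::lowest()) << "\n";
    }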
Abhijit Kundu
9bc0a35731
Fixed nested angle brackets >> issue when compiling with CUDA 8
2017-04-27 03:09:03 -04:00
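The usual shape of that issue, for context (a generic C++ sketch, not the actual Eigen code that was touched):

    #include <vector>

    // Some front ends (including certain nvcc releases) tokenize the closing ">>"
    // of a nested template as the right-shift operator, so a space is the portable fix:
    std::vector<std::vector<float> > ok_everywhere;   // note the "> >"
    std::vector<std::vector<float>>  cxx11_style;     // valid C++11, but can confuse older CUDA front ends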
Gael Guennebaud
e958c2baac
remove UTF8 symbols
2017-03-07 10:47:40 +01:00
Benoit Steiner
7b61944669
Made most of the packet math primitives usable within CUDA kernels when compiling with clang
2017-02-28 17:05:28 -08:00
Benoit Steiner
34d9fce93b
Avoid unnecessary float to double conversions.
2017-02-27 16:33:33 -08:00
Srinivas Vasudevan
f7d7c33a28
Fix expm1 CUDA implementation (do not shadow exp CUDA implementation).
2016-12-05 12:19:01 -08:00
Srinivas Vasudevan
09ee7f0c80
Fix small nit where I changed the name of plog1p to pexpm1.
2016-12-02 15:30:12 -08:00
Srinivas Vasudevan
218764ee1f
Added support for expm1 in Eigen.
2016-12-02 14:13:01 -08:00
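For context only: expm1(x) computes exp(x) - 1 without the cancellation the naive form suffers for small x. A hedged usage sketch via the scalar entry point; Eigen::numext::expm1 is assumed to be the portable wrapper this commit introduces.

    #include <cmath>
    #include <iostream>
    #include <Eigen/Core>

    int main() {
      double tiny = 1e-12;
      // The dedicated expm1 avoids the cancellation in exp(tiny) - 1.0.
      std::cout << Eigen::numext::expm1(tiny) << "  vs  " << (std::exp(tiny) - 1.0) << "\n";
    }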
Rasmus Munk Larsen
a0329f64fb
Add a default constructor for the "fake" __half class when not using the __half class provided by CUDA.
2016-11-29 13:18:09 -08:00
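A rough idea of what such a fallback type looks like; the name and layout below are illustrative, not the actual Eigen definition.

    // Sketch of a stand-in for CUDA's __half when cuda_fp16.h is not used:
    // a trivially copyable wrapper around the raw 16-bit pattern, with a
    // default constructor so it can be default-initialized.
    struct fake_half {
      fake_half() : x(0) {}                        // the default constructor this commit adds
      explicit fake_half(unsigned short raw) : x(raw) {}
      unsigned short x;                            // raw fp16 bit pattern
    };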
Benoit Steiner
dff9a049c4
Optimized the computation of exp, sqrt, ceil and floor for fp16 on Pascal GPUs
2016-11-16 09:01:51 -08:00
Benoit Steiner
c80587c92b
Merged eigen/eigen into default
2016-11-03 03:55:11 -07:00
Benoit Steiner
7a0e96b80d
Gate the code that refers to CUDA fp16 primitives more thoroughly
2016-11-01 12:08:09 -07:00
Benoit Steiner
38b6048e14
Deleted redundant implementation of predux
2016-10-12 14:37:56 -07:00
Benoit Steiner
78d2926508
Merged eigen/eigen into default
2016-10-12 13:46:29 -07:00
Benoit Steiner
2e2f48e30e
Take advantage of AVX512 instructions whenever possible to speed up the processing of 16-bit floats.
2016-10-12 13:45:39 -07:00
Benoit Steiner
d485d12c51
Added missing AVX intrinsics for fp16: in particular, implemented predux, which is required by the matrix-vector code.
2016-10-06 10:41:03 -07:00
Benoit Steiner
698ff69450
Properly characterize the CUDA packet primitives for fp16 as device only
2016-10-04 16:53:30 -07:00
Benoit Steiner
409e887d78
Added support for constant std::complex numbers on GPU
2016-10-03 11:06:24 -07:00
Benoit Steiner
26f9907542
Added missing typedefs
2016-09-20 12:58:03 -07:00
RJ Ryan
b2c6dc48d9
Add CUDA-specific std::complex<T> specializations for scalar_sum_op, scalar_difference_op, scalar_product_op, and scalar_quotient_op.
2016-09-20 07:18:20 -07:00
Gael Guennebaud
8f4b4ad5fb
use ::hlog if available.
2016-08-29 11:05:32 +02:00
Gael Guennebaud
35a8e94577
bug #1167 : simplify installation of header files using CMake's install(DIRECTORY ...) command.
2016-08-29 10:59:37 +02:00
Gael Guennebaud
d937a420a2
Fix compilation with MSVC by using our portable numext::log1p implementation.
2016-08-22 15:44:21 +02:00
Igor Babuschkin
59bacfe520
Fix compilation on CUDA 8 by removing call to h2log1p
2016-08-15 23:38:05 +01:00
Igor Babuschkin
aee693ac52
Add log1p support for CUDA and half floats
2016-08-08 20:24:59 +01:00
Benoit Steiner
fe778427f2
Fixed the constructors of the new half_base class.
2016-08-04 18:32:26 -07:00
Benoit Steiner
9506343349
Fixed the isnan, isfinite and isinf operations on GPU
2016-08-04 17:25:53 -07:00
Gael Guennebaud
17b9a55d98
Move Eigen::half_impl::half to Eigen::half while keeping the free functions in the Eigen::half_impl namespace so they are found through ADL
2016-08-04 00:00:43 +02:00
Benoit Steiner
02fe89f5ef
half implementation has been moved to half_impl namespace
2016-07-29 15:09:34 -07:00
Christoph Hertzberg
c5b893f434
bug #1266 : half implementation has been moved to half_impl namespace
2016-07-29 18:36:08 +02:00
Gael Guennebaud
395c835f4b
Fix CUDA compilation
2016-07-22 15:30:24 +02:00
Gael Guennebaud
47afc9a365
More cleaning in half:
- put its definition and functions in its own half_impl namespace so that the free functions do not pollute the Eigen namespace while remaining visible for half through ADL.
- expose Eigen::half through a using statement
- move operator<< from std to half_float namespace
2016-07-22 14:33:28 +02:00
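The pattern described above, reduced to a generic sketch; the namespace and type names mirror the commit message, but the operator shown is only an example.

    namespace Eigen {
    namespace half_impl {

    struct half { unsigned short x; };             // simplified stand-in

    // Free functions live next to the type, so argument-dependent lookup finds
    // them whenever a half is involved, without polluting the Eigen namespace.
    inline half operator+(const half& a, const half& b) {
      half r; r.x = 0;                             // real code would do fp16 math here
      (void)a; (void)b;
      return r;
    }

    } // namespace half_impl

    // Expose the type itself one level up, as the commit describes.
    using half_impl::half;

    } // namespace Eigen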
Gael Guennebaud
0f350a8b7e
Fix CUDA compilation
2016-07-21 18:47:07 +02:00
Gael Guennebaud
87fbda812f
Add missing log10 and random generator for half.
2016-07-21 15:46:45 +02:00
Gael Guennebaud
01d12d3e82
Some cleanup in Half: standard functions should be defined in the namespace of the class half to make ADL work, and thus the global is* functions can be removed.
2016-07-21 15:10:48 +02:00
Gael Guennebaud
a96a7ce3f7
Move CUDA's special functions to SpecialFunctions module.
2016-07-11 18:39:11 +02:00
Gael Guennebaud
2f7e2614e7
bug #1232 : refactor special functions as a new SpecialFunctions module, currently in unsupported/.
2016-07-08 11:13:55 +02:00
Benoit Steiner
8fd57a97f2
Enable the vectorization of adds and mults of fp16
2016-06-07 18:22:18 -07:00
Eugene Brevdo
39baff850c
Add TernaryFunctors and the betainc SpecialFunction.
TernaryFunctors and their executors allow operations on 3-tuples of inputs.
API fully implemented for Arrays and Tensors based on binary functors.
Ported the cephes betainc function (regularized incomplete beta
integral) to Eigen, with support for CPU and GPU, floats, doubles, and
half types.
Added unit tests in array.cpp and cxx11_tensor_cuda.cu
Collapsed revision
* Merged helper methods for betainc across floats and doubles.
* Added TensorGlobalFunctions with betainc(). Removed betainc() from TensorBase.
* Clean up CwiseTernaryOp checks, change igamma_helper to cephes_helper.
* betainc: merge incbcf and incbd into incbeta_cfe, and more cleanup.
* Update TernaryOp and SpecialFunctions (betainc) based on review comments.
2016-06-02 17:04:19 -07:00
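A hedged usage sketch of the array-level entry point described above; the header path and call signature are assumed from the unsupported SpecialFunctions module, not quoted from the commit.

    #include <iostream>
    #include <Eigen/Core>
    #include <unsupported/Eigen/SpecialFunctions>   // assumed location of betainc

    int main() {
      Eigen::ArrayXf a(3), b(3), x(3);
      a << 0.5f, 2.0f, 5.0f;
      b << 0.5f, 2.0f, 5.0f;
      x << 0.1f, 0.5f, 0.9f;
      // Regularized incomplete beta integral I_x(a, b), evaluated element-wise.
      std::cout << Eigen::betainc(a, b, x) << "\n";
    }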
Benoit Steiner
b6e306f189
Improved support for CUDA 8.0
2016-05-31 09:47:59 -07:00
Benoit Steiner
3a5d6a3c38
Disable the use of MMX instructions since the code is broken on many platforms
2016-05-27 09:13:26 -07:00