43 Commits

Author SHA1 Message Date
Benoit Steiner
09a19c33a8 Added missing EIGEN_DEVICE_FUNC qualifiers 2016-05-11 14:07:43 -07:00
Benoit Steiner
0451940fa4 Relaxed the dummy precision for fp16 2016-05-05 15:40:01 -07:00
Benoit Steiner
6c3e5b85bc Fixed compilation error with cuda >= 7.5 2016-05-03 09:38:42 -07:00
Benoit Steiner
da50419df8 Made a cast explicit 2016-05-02 19:50:22 -07:00
Benoit Steiner
2b890ae618 Fixed compilation errors generated by clang 2016-04-29 18:30:40 -07:00
Benoit Steiner
c61170e87d fpclassify isn't portable enough. In particular, the return values of the function are not available on all the platforms Eigen supportes: remove it from Eigen. 2016-04-27 14:22:20 -07:00
Benoit Steiner
25141b69d4 Improved support for min and max on 16 bit floats when running on recent CUDA GPUs 2016-04-27 12:57:21 -07:00
Benoit Steiner
6744d776ba Added support for fpclassify in Eigen::numext 2016-04-27 12:10:25 -07:00
Benoit Steiner
7718749fee Force the inlining of the << operator on half floats 2016-04-14 11:51:54 -07:00
Benoit Steiner
5379d2b594 Inline the << operator on half floats 2016-04-14 11:40:48 -07:00
Benoit Steiner
5c13765ee3 Added ability to printf fp16 2016-04-14 10:24:52 -07:00
Benoit Steiner
36f5a10198 Properly gate the definition of the error and gamma functions for fp16 2016-04-13 18:44:48 -07:00
Benoit Steiner
d6105b53b8 Added basic implementation of the lgamma, digamma, igamma, igammac, polygamma, and zeta function for fp16 2016-04-13 15:26:02 -07:00
Benoit Steiner
87ca15c4e8 Added support for sin, cos, tan, and tanh on fp16 2016-04-13 14:12:38 -07:00
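The commit above adds sin, cos, tan, and tanh for fp16. On hardware without native half-precision math, such functions are typically implemented by widening to float, computing there, and narrowing back. A minimal sketch of that pattern (HalfStub and these free functions are illustrative, not Eigen's actual code):

```cpp
#include <cmath>

// HalfStub is an illustrative stand-in for a half type; a real one would
// store 16 bits and convert, but the widen/compute/narrow shape is the same.
struct HalfStub { float value; };

// Widen to float, use the float routine, narrow the result back.
inline HalfStub sin(const HalfStub& h)  { return HalfStub{std::sin(h.value)}; }
inline HalfStub tanh(const HalfStub& h) { return HalfStub{std::tanh(h.value)}; }
```

The same upcast pattern extends naturally to the lgamma/digamma/zeta family added in the commit below it.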
Benoit Steiner
473c8380ea Added constructors to convert unsigned integers into fp16 2016-04-13 11:03:37 -07:00
Benoit Steiner
833efb39bf Added epsilon, dummy_precision, infinity and quiet_NaN NumTraits for fp16 2016-04-11 11:03:56 -07:00
Benoit Steiner
8d22967bd9 Initial support for taking the power of fp16 2016-04-08 14:22:39 -07:00
Benoit Steiner
737644366f Move the functions operating on fp16 out of the std namespace and into the Eigen::numext namespace 2016-04-07 11:40:15 -07:00
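The move out of std and into Eigen::numext follows a standard pattern: keeping math entry points in a project namespace means fp16 overloads can be added without injecting names into namespace std (which is undefined behavior for non-user types). A sketch of the dispatch shape, with illustrative names standing in for Eigen's:

```cpp
#include <cmath>

// "mynumext" stands in for Eigen::numext; Half is a pretend fp16 type.
namespace mynumext {

template <typename T>
T exp(const T& x) { return std::exp(x); }  // generic path for float/double

struct Half { float value; };              // illustrative stand-in

inline Half exp(const Half& h) {           // fp16-specific overload; the
  return Half{std::exp(h.value)};          // non-template wins resolution
}

}  // namespace mynumext
```

Callers write mynumext::exp(x) for every scalar type, and overload resolution picks the fp16 path automatically.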
Benoit Steiner
df838736e2 Fixed compilation warning triggered by msvc 2016-04-06 20:48:55 -07:00
Benoit Steiner
532fdf24cb Added support for hardware conversion between fp16 and full floats whenever possible. 2016-04-06 17:11:31 -07:00
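Hardware conversion (e.g. CUDA's __half2float or x86 F16C) is used "whenever possible"; elsewhere a software decode of the IEEE binary16 bit pattern is needed. A self-contained sketch of the half-to-float direction (this is illustrative, not Eigen's implementation):

```cpp
#include <cstdint>
#include <cstring>

// Decode an IEEE binary16 bit pattern into a float (software fallback).
float half_to_float(uint16_t h) {
  uint32_t sign = static_cast<uint32_t>(h & 0x8000u) << 16;
  uint32_t exp  = (h >> 10) & 0x1Fu;
  uint32_t mant = h & 0x3FFu;
  uint32_t bits;
  if (exp == 0) {
    if (mant == 0) {
      bits = sign;                               // signed zero
    } else {
      // Subnormal half: shift the mantissa up until it is normalized,
      // adjusting the float exponent to compensate.
      int e = -1;
      do { ++e; mant <<= 1; } while ((mant & 0x400u) == 0);
      bits = sign | (static_cast<uint32_t>(127 - 15 - e) << 23)
                  | ((mant & 0x3FFu) << 13);
    }
  } else if (exp == 31) {
    bits = sign | 0x7F800000u | (mant << 13);    // Inf / NaN
  } else {
    bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);  // rebias
  }
  float f;
  std::memcpy(&f, &bits, sizeof(f));             // bit-pattern reinterpret
  return f;
}
```

Every binary16 value is exactly representable as a float, so this direction is lossless.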
Benoit Steiner
58c1dbff19 Made the fp16 code more portable. 2016-04-06 13:44:08 -07:00
Benoit Steiner
cf7e73addd Added some missing conversions to the Half class, and fixed the implementation of the < operator on cuda devices. 2016-04-06 09:59:51 -07:00
Benoit Steiner
72abfa11dd Added support for isfinite on fp16 2016-04-06 09:07:30 -07:00
Benoit Steiner
0ea7ab4f62 std::hash was only introduced in C++11. Therefore only define an implementation of the hash function for fp16 if C++11 is enabled. 2016-03-31 14:44:55 -07:00
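The gating this commit describes can be sketched as a std::hash specialization guarded on the language version (Half16 is an illustrative stand-in for a half type, not Eigen::half):

```cpp
#include <cstdint>
#include <functional>

// Illustrative fp16 stand-in: just the raw 16-bit pattern.
struct Half16 { uint16_t bits; };

// std::hash only exists since C++11, so the specialization is gated --
// the same reason Eigen guards its fp16 hash support.
#if __cplusplus >= 201103L
namespace std {
template <>
struct hash<Half16> {
  size_t operator()(const Half16& h) const {
    // Hash the bit pattern: values with identical bits must hash alike.
    return hash<uint16_t>()(h.bits);
  }
};
}  // namespace std
#endif
```

Note that hashing the raw bits means +0.0 and -0.0 (distinct patterns) hash differently, which a production implementation may need to handle.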
Benoit Steiner
92b7f7b650 Improved code formatting 2016-03-31 13:09:58 -07:00
Benoit Steiner
f197813f37 Added the ability to hash a fp16 2016-03-31 13:09:23 -07:00
Benoit Steiner
c36ab19902 Added __ldg primitive for fp16. 2016-03-31 10:55:03 -07:00
Benoit Steiner
b575fb1d02 Added NumTraits for half floats 2016-03-31 10:43:59 -07:00
Benoit Steiner
8c8a79cec1 Fixed a typo 2016-03-31 10:33:32 -07:00
Benoit Steiner
e02b784ec3 Added support for standard mathematical functions and transcendentals (such as exp, log, abs, ...) on fp16 2016-03-29 09:20:36 -07:00
Benoit Steiner
7a570e50ef Fixed contractions of fp16 2016-03-23 16:00:06 -07:00
Benoit Steiner
0e68882604 Added the ability to divide a half float by an index 2016-03-23 09:46:42 -07:00
Benoit Steiner
6971146ca9 Added more conversion operators for half floats 2016-03-23 09:44:52 -07:00
Benoit Steiner
f9ad25e4d8 Fixed contractions of 16 bit floats 2016-03-22 09:30:23 -07:00
Benoit Steiner
7bd551b3a9 Make all the conversions explicit 2016-03-18 12:20:08 -07:00
Benoit Steiner
5a51366ea5 Fixed a typo. 2016-03-14 09:25:16 -07:00
Benoit Steiner
fcf59e1c37 Properly gate the use of cuda intrinsics in the code 2016-03-14 09:13:44 -07:00
Benoit Steiner
97a1f1c273 Make sure we only use the half float intrinsics when compiling with a version of CUDA that is recent enough to provide them 2016-03-14 08:37:58 -07:00
Benoit Steiner
e29c9676b1 Don't mark the cast operator as explicit, since this is a C++11 feature that's not supported by older compilers. 2016-03-12 00:15:58 -08:00
Benoit Steiner
eecd914864 Also replaced uint32_t with unsigned int to make the code more portable 2016-03-11 19:34:21 -08:00
Benoit Steiner
1ca8c1ec97 Replaced a couple more uint16_t with unsigned short 2016-03-11 19:28:28 -08:00
Benoit Steiner
0423b66187 Use unsigned short instead of uint16_t since they're more portable 2016-03-11 17:53:41 -08:00
Benoit Steiner
048c4d6efd Made half floats usable on hardware that doesn't support them natively. 2016-03-11 17:21:42 -08:00
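Making half floats usable without native hardware support means the float-to-half direction also needs a software path. A sketch of the encode side (illustrative, not Eigen's code; subnormal halves are flushed to zero here for brevity, which a full implementation would not do):

```cpp
#include <cstdint>
#include <cstring>

// Encode a float into an IEEE binary16 bit pattern (software fallback).
uint16_t float_to_half(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  uint16_t sign = static_cast<uint16_t>((bits >> 16) & 0x8000u);
  uint32_t fexp = (bits >> 23) & 0xFFu;
  uint32_t mant = bits & 0x7FFFFFu;
  int32_t  hexp = static_cast<int32_t>(fexp) - 127 + 15;  // rebias exponent
  if (fexp == 0xFFu)                        // Inf or NaN
    return sign | 0x7C00u | (mant ? 0x200u : 0u);
  if (hexp >= 31) return sign | 0x7C00u;    // overflow -> Inf
  if (hexp <= 0)  return sign;              // underflow -> signed zero (sketch)
  uint32_t hmant = mant >> 13;              // keep the top 10 mantissa bits
  uint32_t rem   = mant & 0x1FFFu;          // 13 dropped bits, for rounding
  uint16_t h = static_cast<uint16_t>(sign | (hexp << 10) | hmant);
  if (rem > 0x1000u || (rem == 0x1000u && (hmant & 1u)))
    ++h;                                    // round to nearest, ties to even
  return h;                                 // carry into exponent is correct
}
```

The increment on a ties-to-even round deliberately carries from the mantissa into the exponent: rounding up past the largest mantissa at one exponent yields the next power of two, and past half's maximum finite value it yields infinity, both of which are the correct rounded results.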