If I use thrust::host, the lambda usage is fine:

    thrust::transform(thrust::host, a, a + arraySize, b, d,
                      [](int a, int b) -> int { return a + b; });
However, if I change thrust::host to thrust::device, the code no longer compiles. Here is the error in VS 2013:
    The closure type for a lambda ("lambda (int, int)->int") cannot be used in
    the template argument type of a __global__ function template instantiation,
    unless the lambda is defined within a
So the problem is: how do I apply __global__ to the lambda?
Answer:
Currently it is not possible. Quoting Mark Harris:
That isn't supported today in CUDA, because the lambda is host code. Passing lambdas from host to device is a challenging problem, but it is something we will investigate for a future CUDA release.
What you can do in CUDA 7 is call thrust algorithms from your device code, and in that case you can pass lambdas to them...
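The usual workaround at the time was to replace the host lambda with a hand-written functor whose call operator is marked __host__ __device__, which Thrust can invoke from device code. A minimal sketch (the functor name and the commented call are illustrative, not from the original question):

    #include <thrust/transform.h>
    #include <thrust/execution_policy.h>

    // Functor standing in for the host lambda: the __host__ __device__
    // annotation lets Thrust call it from either host or device code.
    struct Add {
        __host__ __device__
        int operator()(int a, int b) const { return a + b; }
    };

    // Usage sketch: unlike an ordinary host lambda, this compiles with
    // thrust::device (a, b, d assumed to be device pointers here):
    // thrust::transform(thrust::device, a, a + arraySize, b, d, Add());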
With CUDA 7, Thrust algorithms can be called from device code (e.g. from CUDA kernels or __device__ functors), and in those situations you can pass (device) lambdas to them. An example is given in the Parallel Forall blog post (again by Mark Harris) here.
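What the answer describes, calling a Thrust algorithm from inside device code where the lambda itself is device code, might look like the following sketch (CUDA 7+, compiled with nvcc; the kernel name and signature are illustrative and not taken from the linked post):

    #include <thrust/transform.h>
    #include <thrust/execution_policy.h>

    __global__ void add_kernel(const int* a, const int* b, int* d, int n)
    {
        // Inside a __global__ function the lambda is device code, so Thrust
        // accepts it. thrust::seq runs the algorithm sequentially within
        // this thread.
        thrust::transform(thrust::seq, a, a + n, b, d,
                          [](int x, int y) -> int { return x + y; });
    }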