enum cudaError

CUDA error types

Enumerator:
cudaSuccess  The API call returned with no errors. In the case of query calls, this can also mean that the operation being queried is complete (see cudaEventQuery() and cudaStreamQuery()).
cudaErrorMissingConfiguration  The device function being invoked (usually via cudaLaunch()) was not previously configured via the cudaConfigureCall() function.
cudaErrorMemoryAllocation  The API call failed because it was unable to allocate enough memory to perform the requested operation.
cudaErrorInitializationError  The API call failed because the CUDA driver and runtime could not be initialized.
cudaErrorLaunchFailure  An exception occurred on the device while executing a kernel. Common causes include dereferencing an invalid device pointer and accessing out of bounds shared memory. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA. A sketch showing how launch and execution errors are typically detected appears after this list.
cudaErrorPriorLaunchFailure  This indicated that a previous kernel launch failed. This was previously used for device emulation of kernel launches.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorLaunchTimeout  This indicates that the device kernel took too long to execute. This can only occur if timeouts are enabled - see the device property kernelExecTimeoutEnabled for more information. The device cannot be used until cudaThreadExit() is called. All existing device memory allocations are invalid and must be reconstructed if the program is to continue using CUDA.
cudaErrorLaunchOutOfResources  This indicates that a launch did not occur because it did not have appropriate resources. Although this error is similar to cudaErrorInvalidConfiguration, this error usually indicates that the user has attempted to pass too many arguments to the device kernel, or the kernel launch specifies too many threads for the kernel's register count.
cudaErrorInvalidDeviceFunction  The requested device function does not exist or is not compiled for the proper device architecture.
cudaErrorInvalidConfiguration  This indicates that a kernel launch is requesting resources that can never be satisfied by the current device. Requesting more shared memory per block than the device supports will trigger this error, as will requesting too many threads or blocks. See cudaDeviceProp for more device limitations.
cudaErrorInvalidDevice  This indicates that the device ordinal supplied by the user does not correspond to a valid CUDA device.
cudaErrorInvalidValue  This indicates that one or more of the parameters passed to the API call is not within an acceptable range of values.
cudaErrorInvalidPitchValue  This indicates that one or more of the pitch-related parameters passed to the API call is not within the acceptable range for pitch.
cudaErrorInvalidSymbol  This indicates that the symbol name/identifier passed to the API call is not a valid name or identifier.
cudaErrorMapBufferObjectFailed  This indicates that the buffer object could not be mapped.
cudaErrorUnmapBufferObjectFailed  This indicates that the buffer object could not be unmapped.
cudaErrorInvalidHostPointer  This indicates that at least one host pointer passed to the API call is not a valid host pointer.
cudaErrorInvalidDevicePointer  This indicates that at least one device pointer passed to the API call is not a valid device pointer.
cudaErrorInvalidTexture  This indicates that the texture passed to the API call is not a valid texture.
cudaErrorInvalidTextureBinding  This indicates that the texture binding is not valid. This occurs if you call cudaGetTextureAlignmentOffset() with an unbound texture.
cudaErrorInvalidChannelDescriptor  This indicates that the channel descriptor passed to the API call is not valid. This occurs if the format is not one of the formats specified by cudaChannelFormatKind, or if one of the dimensions is invalid.
cudaErrorInvalidMemcpyDirection  This indicates that the direction of the memcpy passed to the API call is not one of the types specified by cudaMemcpyKind.
cudaErrorAddressOfConstant  This indicated that the user has taken the address of a constant variable, which was forbidden up until the CUDA 3.1 release.
Deprecated:
This error return is deprecated as of CUDA 3.1. Variables in constant memory may now have their address taken by the runtime via cudaGetSymbolAddress().
cudaErrorTextureFetchFailed  This indicated that a texture fetch was not able to be performed. This was previously used for device emulation of texture operations.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorTextureNotBound  This indicated that a texture was not bound for access. This was previously used for device emulation of texture operations.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorSynchronizationError  This indicated that a synchronization operation had failed. This was previously used for some device emulation functions.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorInvalidFilterSetting  This indicates that a non-float texture was being accessed with linear filtering. This is not supported by CUDA.
cudaErrorInvalidNormSetting  This indicates that an attempt was made to read a non-float texture as a normalized float. This is not supported by CUDA.
cudaErrorMixedDeviceExecution  Mixing of device and device emulation code was not allowed.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorCudartUnloading  This indicated an issue with calling API functions during the unload process of the CUDA runtime in prior releases.
Deprecated:
This error return is deprecated as of CUDA 3.2.
cudaErrorUnknown  This indicates that an unknown internal error has occurred.
cudaErrorNotYetImplemented  This indicates that the API call is not yet implemented. Production releases of CUDA will never return this error.
cudaErrorMemoryValueTooLarge  This indicated that an emulated device pointer exceeded the 32-bit address range.
Deprecated:
This error return is deprecated as of CUDA 3.1. Device emulation mode was removed with the CUDA 3.1 release.
cudaErrorInvalidResourceHandle  This indicates that a resource handle passed to the API call was not valid. Resource handles are opaque types like cudaStream_t and cudaEvent_t.
cudaErrorNotReady  This indicates that asynchronous operations issued previously have not completed yet. This result is not actually an error, but must be indicated differently than cudaSuccess (which indicates completion). Calls that may return this value include cudaEventQuery() and cudaStreamQuery(). A polling sketch appears after this list.
cudaErrorInsufficientDriver  This indicates that the installed NVIDIA CUDA driver is older than the CUDA runtime library. This is not a supported configuration. Users should install an updated NVIDIA display driver to allow the application to run.
cudaErrorSetOnActiveProcess  This indicates that the user has called cudaSetDevice(), cudaSetValidDevices(), cudaSetDeviceFlags(), cudaD3D9SetDirect3DDevice(), cudaD3D10SetDirect3DDevice(), cudaD3D11SetDirect3DDevice(), or cudaVDPAUSetVDPAUDevice() after initializing the CUDA runtime by calling non-device management operations (allocating memory and launching kernels are examples of non-device management operations). This error can also be returned if using runtime/driver interoperability and there is an existing CUcontext active on the host thread.
cudaErrorInvalidSurface  This indicates that the surface passed to the API call is not a valid surface.
cudaErrorNoDevice  This indicates that no CUDA-capable devices were detected by the installed CUDA driver.
cudaErrorECCUncorrectable  This indicates that an uncorrectable ECC error was detected during execution.
cudaErrorSharedObjectSymbolNotFound  This indicates that a link to a shared object failed to resolve.
cudaErrorSharedObjectInitFailed  This indicates that initialization of a shared object failed.
cudaErrorUnsupportedLimit  This indicates that the cudaLimit passed to the API call is not supported by the active device.
cudaErrorDuplicateVariableName  This indicates that multiple global or constant variables (across separate CUDA source files in the application) share the same string name.
cudaErrorDuplicateTextureName  This indicates that multiple textures (across separate CUDA source files in the application) share the same string name.
cudaErrorDuplicateSurfaceName  This indicates that multiple surfaces (across separate CUDA source files in the application) share the same string name.
cudaErrorDevicesUnavailable  This indicates that all CUDA devices are busy or unavailable at the current time. Devices are often busy/unavailable due to use of cudaComputeModeExclusive or cudaComputeModeProhibited. They can also be unavailable due to memory constraints on a device that already has active CUDA work being performed.
cudaErrorInvalidKernelImage  This indicates that the device kernel image is invalid.
cudaErrorNoKernelImageForDevice  This indicates that there is no kernel image available that is suitable for the device. This can occur when a user specifies code generation options for a particular CUDA source file that do not include the corresponding device configuration.
cudaErrorIncompatibleDriverContext  This indicates that the current context is not compatible with this version of the CUDA Runtime. This can only occur if you are using CUDA Runtime/Driver interoperability and have created an existing Driver context using an older API. Please see Interactions with the CUDA Driver API for more information.
cudaErrorStartupFailure  This indicates an internal startup failure in the CUDA runtime.
cudaErrorApiFailureBase  Any unhandled CUDA driver error is added to this value and returned via the runtime. Production releases of CUDA should not return such errors.
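
Most runtime API calls return one of the cudaError codes above directly, and cudaGetErrorString() converts a code into a human-readable message. Below is a minimal error-checking sketch; the CHECK_CUDA macro is a hypothetical helper, not part of the CUDA API.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Hypothetical helper: abort with a readable message on any failure. */
    #define CHECK_CUDA(call)                                               \
        do {                                                               \
            cudaError_t err_ = (call);                                     \
            if (err_ != cudaSuccess) {                                     \
                fprintf(stderr, "CUDA error at %s:%d: %s\n",               \
                        __FILE__, __LINE__, cudaGetErrorString(err_));     \
                exit(EXIT_FAILURE);                                        \
            }                                                              \
        } while (0)

    int main(void)
    {
        float *d_buf = NULL;

        /* cudaMalloc() returns cudaErrorMemoryAllocation if the request
         * cannot be satisfied; bad arguments yield cudaErrorInvalidValue. */
        CHECK_CUDA(cudaMalloc((void **)&d_buf, 1024 * sizeof(float)));
        CHECK_CUDA(cudaMemset(d_buf, 0, 1024 * sizeof(float)));
        CHECK_CUDA(cudaFree(d_buf));
        return 0;
    }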
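
Kernel launches typically report errors in two stages: problems detectable at launch time (for example cudaErrorInvalidConfiguration or cudaErrorLaunchOutOfResources) are returned by cudaGetLastError() immediately after the launch, while errors raised during execution (for example cudaErrorLaunchFailure or cudaErrorLaunchTimeout) only surface on a later synchronizing call. A sketch, using a placeholder kernel scaleKernel and cudaThreadSynchronize() as the synchronizing call:

    #include <stdio.h>
    #include <cuda_runtime.h>

    /* Placeholder kernel: doubles each element of a 256-element buffer. */
    __global__ void scaleKernel(float *data)
    {
        data[threadIdx.x] *= 2.0f;
    }

    void launchAndCheck(float *d_data)
    {
        scaleKernel<<<1, 256>>>(d_data);

        /* Errors detectable at launch time (invalid configuration,
         * too many resources requested, ...) are reported here. */
        cudaError_t err = cudaGetLastError();
        if (err != cudaSuccess)
            fprintf(stderr, "launch error: %s\n", cudaGetErrorString(err));

        /* Errors raised while the kernel runs (cudaErrorLaunchFailure,
         * cudaErrorLaunchTimeout, ...) surface on a synchronizing call. */
        err = cudaThreadSynchronize();
        if (err != cudaSuccess)
            fprintf(stderr, "execution error: %s\n", cudaGetErrorString(err));
    }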
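
As noted for cudaErrorNotReady above, cudaEventQuery() and cudaStreamQuery() use that code as a "still running" status rather than a failure. A minimal polling sketch; the pollStream helper is hypothetical:

    #include <cuda_runtime.h>

    /* Hypothetical helper: poll until all work previously issued to
     * `stream` has finished, then return its final status. */
    cudaError_t pollStream(cudaStream_t stream)
    {
        cudaEvent_t done;
        cudaEventCreate(&done);
        cudaEventRecord(done, stream);

        cudaError_t status;
        while ((status = cudaEventQuery(done)) == cudaErrorNotReady) {
            /* Not a failure: the GPU is simply still busy, so the
             * CPU can do other useful work in this loop. */
        }

        cudaEventDestroy(done);
        /* cudaSuccess on completion, or a real error such as
         * cudaErrorLaunchFailure from earlier work in the stream. */
        return status;
    }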

