Introduction

Overview


GPUs are massively parallel processors with thousands of small but highly efficient cores designed to execute many tasks simultaneously.

With Alea GPU, you can take advantage of this processing power to accelerate .NET and Mono applications in a simple and efficient way on Windows, Linux and Mac OS X. You develop your GPU code with the .NET language and tools you already know. The Alea GPU runtime system efficiently handles execution on the GPU and all the memory management.

Alea GPU is

  • Fast
  • Easy to use
  • Highly productive
  • Cross platform

Highlights


Alea GPU delivers an outstanding developer experience and is unique in several respects.

Automatic Memory Management

Alea GPU automatically copies data between CPU and GPU memory in an economical way, which reduces boilerplate code and substantially simplifies development.
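
For illustration, here is a minimal sketch, assuming the Gpu.Default and Gpu.For methods and the [GpuManaged] attribute of the Alea GPU v3 API: the arrays are plain .NET arrays, and the runtime transfers them to and from the GPU automatically.

    using Alea;
    using Alea.Parallel;

    static class MemoryManagementSketch
    {
        // A minimal sketch, assuming Gpu.Default, Gpu.For and [GpuManaged].
        // The attribute lets the runtime copy the plain .NET arrays to the
        // GPU and back automatically; no explicit allocation or transfer
        // code is needed.
        [GpuManaged]
        public static void Add(float[] result, float[] a, float[] b)
        {
            Gpu.Default.For(0, result.Length, i => result[i] = a[i] + b[i]);
        }
    }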

Programming Models

Alea GPU provides simple-to-use parallel-for and parallel aggregate methods, similar to the corresponding methods of the .NET Task Parallel Library (TPL), and implements the CUDA programming model for advanced GPU programming.

Unified CPU and GPU Types

.NET arrays and many .NET types can be used directly in GPU code, including properties such as the array length.
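
As a sketch, a user-defined .NET struct and a .NET array can be used directly with GPU code; the Particle struct and Step method below are hypothetical names chosen for illustration.

    using Alea;
    using Alea.Parallel;

    // Hypothetical value type; plain .NET structs like this can be used in GPU code.
    public struct Particle
    {
        public float X, Y, VelocityX, VelocityY;
    }

    static class UnifiedTypesSketch
    {
        [GpuManaged]
        public static void Step(Particle[] particles, float dt)
        {
            // The .NET array and its Length property are used directly.
            Gpu.Default.For(0, particles.Length, i =>
            {
                particles[i].X += particles[i].VelocityX * dt;
                particles[i].Y += particles[i].VelocityY * dt;
            });
        }
    }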

Scripting

Alea GPU provides a just-in-time (JIT) compiler and compiler API for GPU scripting.

NVIDIA CUDA Libraries

Many useful libraries of the CUDA ecosystem, such as cuBLAS, cuRAND and cuDNN, are tightly integrated with Alea GPU.

Native

Alea GPU natively supports all .NET languages, including C#, F# and VB. It compiles .NET code directly to GPU code without first generating intermediate C or C++.

Cross Platform

Alea GPU is cross platform and runs on Windows, Linux and Mac OS X. You can conveniently develop on Windows and deploy on any platform without recompiling the GPU code for the target platform.

Debugging and Profiling

Alea GPU comes with advanced debugging and profiling support, compatible with the NVIDIA Nsight Debugger and NVIDIA Visual Profiler.

High Performance

There is no performance compromise: compiled code runs as fast as CUDA C/C++.

Simple Installation

Alea GPU can be installed entirely from NuGet packages. No third-party compilers or tools have to be installed. In particular, the NVIDIA CUDA C/C++ compiler and tools are not required.

Documentation and Samples

Alea GPU comes with extensive documentation and a growing collection of self-contained samples, which can be downloaded from our sample gallery and used as a starting point for your own development.

Future proof

Alea GPU is built on the LLVM compiler infrastructure, a modern and future-oriented technology.

Programming Models


Alea GPU relies on the standard syntax of C#, F# and VB and extends these languages with a library and a set of utility constructs for parallel computing. Alea GPU provides multiple programming models at different levels of abstraction.

Parallel-For and Parallel Aggregation

The Alea GPU parallel-for executes a lambda expression, delegate or function on the GPU in parallel for each element of a collection or each index of an ordered range. The Alea GPU parallel aggregation reduces multiple inputs to a final value using a binary function, delegate or lambda expression. Combined with the Alea GPU automatic memory management, developers can write parallel GPU code as if they were writing serial loops.
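
The following minimal sketch, assuming the Gpu.For and Gpu.Aggregate methods of the Alea.Parallel namespace, squares the elements of an array in parallel and then reduces the results to a single sum.

    using Alea;
    using Alea.Parallel;

    static class ParallelForAggregateSketch
    {
        [GpuManaged]
        public static float SquaredSum(float[] x)
        {
            var squares = new float[x.Length];

            // Parallel-for: the lambda runs on the GPU once per index.
            Gpu.Default.For(0, x.Length, i => squares[i] = x[i] * x[i]);

            // Parallel aggregation: reduce the squares to one value
            // with a binary lambda.
            return Gpu.Default.Aggregate(squares, (a, b) => a + b);
        }
    }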

CUDA

For maximal flexibility, Alea GPU implements the CUDA programming model, which is designed to execute data-parallel workloads with a very large number of threads. CUDA exposes parallel concepts such as threads, thread blocks and grids, so that programmers can map parallel computations to GPU threads in a flexible yet abstract way. CUDA also exposes the GPU memory hierarchy, allowing programmers to take advantage of the different memory types to optimize memory access and I/O bandwidth.
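
A minimal sketch of the CUDA programming model in C#, assuming the Alea GPU v3 style API (the Alea.CSharp thread intrinsics, LaunchParam and Gpu.Launch): a kernel maps GPU threads to array elements with a grid-stride loop and is launched with an explicit grid and block configuration.

    using Alea;
    using Alea.CSharp;

    static class CudaModelSketch
    {
        private static void AddKernel(float[] result, float[] a, float[] b)
        {
            // Map each GPU thread to one or more elements (grid-stride loop).
            var start = blockIdx.x * blockDim.x + threadIdx.x;
            var stride = gridDim.x * blockDim.x;
            for (var i = start; i < result.Length; i += stride)
                result[i] = a[i] + b[i];
        }

        [GpuManaged]
        public static void Add(float[] result, float[] a, float[] b)
        {
            // Explicit launch configuration: 16 blocks of 256 threads each.
            var lp = new LaunchParam(16, 256);
            Gpu.Default.Launch(AddKernel, lp, result, a, b);
        }
    }

The grid-stride loop in the kernel decouples the problem size from the launch configuration, so the same kernel works for arrays larger than the total number of launched threads.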