I bumped into this excellent article by Susan Morgan, where she explains various terms associated with parallel programming. The article covers common parallel and multi-threading concepts and the differences between the hardware and software aspects of parallel processing. It briefly describes the hardware architectures that make parallel processing possible and presents several popular parallel programming models.
Stumbled upon this YouTube video about the first computer ever designed:
Charles Babbage designed one of the first computers but never lived to see it built. Here’s a demo of one of the only two working models in the world. This one is at the Computer History Museum in Mountain View, California, USA, and here they fire it up (well, crank it up) and explain a bit about it.
Concurrency is concerned with two or more activities happening at the same time; e.g., thinking while walking is one form of concurrency. When talking about concurrency in computer programming, we mean a single system performing multiple tasks independently. Concurrent tasks may happen to execute at the same instant, which is where parallelism comes in, but that is not required.
Parallelism is concerned with running two or more activities in parallel with the explicit goal of increasing overall performance.
Thus all parallel programs are concurrent in nature, but not vice versa.
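To make the distinction concrete, here is a minimal sketch in C using POSIX threads (the task names and loop count are just illustrative). The two tasks are concurrent by construction; whether they also run in parallel depends on the hardware and the scheduler:

```c
#include <pthread.h>
#include <stdio.h>

/* An independent task: prints a few numbered steps. */
static void *count_task(void *arg)
{
    const char *name = (const char *)arg;
    for (int i = 0; i < 3; i++)
        printf("%s: step %d\n", name, i);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Start two tasks; the program is now concurrent. */
    pthread_create(&t1, NULL, count_task, "task A");
    pthread_create(&t2, NULL, count_task, "task B");

    /* Wait for both tasks to finish. */
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Build with `gcc -pthread`. On a single core the scheduler interleaves the two tasks (concurrency without parallelism); on multiple cores they may execute at the same instant (parallelism).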
GPU maker NVIDIA is going to make public the source code of its CUDA compiler and runtime, along with its internal representation format, opening up the technology to different programming languages and processor architectures. The announcement was made on Wednesday at the kick-off of the GPU Technology Conference Asia in Beijing, China.
The company says it will use the LLVM compiler infrastructure as the vehicle for the public CUDA source code. LLVM is an open source project that maintains collections of source code for compilers, runtimes, and other development tools. The new LLVM-based CUDA source will be available in the latest release of the CUDA Toolkit, version 4.1, which was also launched this week.
The CUDA open source set-up does not, however, mean NVIDIA will arbitrarily accept changes and enhancements to its compiler technology from other developers. The company still intends to retain complete control of its source code. Tool developers will be able to modify the standard compiler and runtime for their own customized needs, but little of this is likely to be folded back into NVIDIA’s code base.
The main idea is to allow software tool makers to port the CUDA compiler to other environments that NVIDIA or its commercial partners are not interested in pursuing on their own. In the case of programming languages, there are already compilers for C, C++, and Fortran, which are the big three for high-performance computing. But as the market for GPU computing expands, NVIDIA foresees the need for other languages such as Python or Java, as well as domain-specific languages.
In an effort to make it easier for programmers to take advantage of parallel computing, NVIDIA, Cray Inc., the Portland Group (PGI), and CAPS enterprise announced today a new open parallel-programming standard, known as OpenACC™, designed to enable millions of scientific and technical programmers to easily harness the transformative power of heterogeneous CPU/GPU computing systems.
OpenACC allows parallel programmers to provide simple hints, known as “directives,” to the compiler, identifying which areas of code to accelerate, without requiring programmers to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the accelerator.
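As a rough illustration (not taken from the announcement itself), here is what such a directive looks like in C; the `#pragma acc parallel loop` line asks an OpenACC-capable compiler to offload the loop to the accelerator, while a compiler that doesn’t understand OpenACC simply ignores it and the loop runs serially:

```c
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    /* Initialize the inputs on the host. */
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* The directive is the only change needed: the compiler maps
     * this loop onto the accelerator; the loop body is untouched. */
    #pragma acc parallel loop
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```

Compiled with an OpenACC compiler (e.g., PGI’s `pgcc -acc`) the loop is offloaded; compiled with a plain C compiler it still produces the same result on the CPU, which is exactly the portability the directive approach is after.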