HPC for Energy shared a nice infographic on the history of supercomputing:
Brief History of Super Computing
By Andrew Binstock of Dr. Dobb's Journal
In his current editorial, our esteemed Dr. Dobb's Editor-in-Chief Andrew Binstock discusses whether or not parallel programming will ever be taken up by the masses. He comes to the conclusion that (*spoiler alert*) the multiple cores in his devices are more likely to be used to run multiple, separate processes rather than a single, multithreaded process.
I can’t really disagree with his conclusion. When I first encountered parallel programming, it was in the context of scientific and technical computing. These areas of endeavor will always need more and more computational power, and parallelism is an excellent means of supplying an ever-increasing number of relatively cheap processing cycles.
As multicore processors began to be announced, I had to wonder if there would really be much need for them. My test for the usefulness of just about any new-fangled technology is whether or not my 75-year-old mother would be able to get any benefit from it. I had a hard time coming up with a reason for her to invest in a multicore processor. She sends and reads e-mail, surfs the Interwebs, and keeps up with her grandkids via Facebook. Not much need for multiple cores within that scenario. Even so, I could easily see where multiple processes (coarse-grained parallelism), each running on separate cores, could give her a throughput benefit, as Andrew points out.
Continue reading at source
A very informative article about heterogeneous programming by Michael Wolfe, Compiler Engineer at The Portland Group, Inc.:
The heterogeneous systems of interest to HPC use an attached coprocessor or accelerator that is optimized for certain types of computation. These devices typically exhibit internal parallelism, and execute asynchronously and concurrently with the host processor. Programming a heterogeneous system is then even more complex than “traditional” parallel programming (if any parallel programming can be called traditional), because in addition to the complexity of parallel programming on the attached device, the program must manage the concurrent activities between the host and device, and manage data locality between the host and device.