Some General Parallel Terminology (1)

  • Supercomputing / High Performance Computing (HPC) > The world’s fastest and largest computers.
  • Node > A standalone computer, comprising multiple CPUs/processors/cores.
  • Task > A program or program-like set of instructions that is executed by a processor.
  • Pipelining > Breaking a task into steps performed by different processor units, with inputs streaming through, much like an assembly line.
  • Shared Memory/Symmetric Multi-Processor (SMP) > An architecture where all processors have direct access to the same physical memory (see the OpenMP sketch after this list).
  • Distributed Memory > Each processor has its own local memory; there is no global address space across all processors (see the MPI sketch after this list).
  • Communications > Data exchange between parallel tasks, through a shared memory bus or over a network.
  • Synchronization > The coordination of parallel tasks, often associated with communications; for example, tasks waiting at a barrier until all have reached it.
  • Granularity > A qualitative measure of the ratio of computation to communication.
    • Coarse > High ratio; relatively large amounts of computation between communication events.
    • Fine > Low ratio; relatively little computation between frequent communication events.
  • Parallel Overhead > The amount of time spent coordinating parallel tasks rather than doing useful work, e.g., task start-up, synchronization, and communication time.
  • Scalability > The ability of a parallel system to demonstrate a proportionate increase in speedup as more processors are added (see the Amdahl’s law sketch after this list).
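
To make the shared memory entry concrete, here is a minimal sketch in C using OpenMP (an assumption: a compiler with OpenMP support, built with something like cc -fopenmp sum.c). All threads operate on the same array in a single physical address space, and the reduction clause plus the loop’s implicit barrier supply the synchronization.

    /* Shared memory: every thread reads and writes the same physical
     * memory. Minimal OpenMP sketch (compile: cc -fopenmp sum.c). */
    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* Each thread sums a chunk of the shared array; the reduction
         * clause combines the partial sums, and the implicit barrier at
         * the end of the loop coordinates the tasks. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }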
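For the distributed memory and communications entries, a corresponding sketch using MPI (assuming an implementation such as Open MPI or MPICH; compile with mpicc, launch with mpirun -np 4 ./a.out). Each rank owns its local memory, so values must be exchanged explicitly; here communication and synchronization happen together in a single collective call.

    /* Distributed memory: each rank has its own address space, so data
     * must be exchanged explicitly over the interconnect. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank computes a value in its own local memory... */
        double local = (double)rank;
        double total = 0.0;

        /* ...and MPI_Reduce both communicates and synchronizes:
         * every rank's value is summed onto rank 0. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }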
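Scalability and parallel overhead can be quantified with Amdahl’s law, a standard model (not part of the list above): if a fraction f of the work parallelizes perfectly over p processors, speedup is bounded by S(p) = 1 / ((1 - f) + f/p). The sketch below tabulates this bound for a few illustrative values of f; note how the serial fraction, like parallel overhead, caps the achievable speedup no matter how many processors are added.

    /* Amdahl's law: S(p) = 1 / ((1 - f) + f / p), where f is the
     * parallelizable fraction of the work and p the processor count. */
    #include <stdio.h>

    static double amdahl(double f, int p) {
        return 1.0 / ((1.0 - f) + f / (double)p);
    }

    int main(void) {
        const double fractions[] = { 0.50, 0.90, 0.99 }; /* illustrative */
        const int procs[] = { 1, 2, 8, 64, 1024 };

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 5; j++)
                printf("f=%.2f  p=%4d  speedup=%7.2f\n",
                       fractions[i], procs[j],
                       amdahl(fractions[i], procs[j]));
        return 0;
    }

Even with f = 0.99, speedup saturates near 100 regardless of processor count, which is why coarse granularity and low overhead matter for scalability.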
