From SETI to CERN: What you need to know about cluster computing and how it can help your enterprise
Briefly

SETI@Home used volunteer machines to process radio telescope data by sending small chunks to a screensaver-style application that returned analyzed results, peaking at about two million users. Cluster computing combines many nodes—often low-powered or inexpensive machines—into a coordinated system to perform parallel data analysis and scale efficiently. Middleware distributes and manages tasks according to node capability and responsiveness to maximize utilization. Fault tolerance enables continued operation despite individual node failures. Cluster computing can operate across worldwide personal devices or within a single data center and supports applications such as scientific simulations, data analysis, and financial modeling.
Rather than relying on one huge central computer to crunch all the data in the search for signals from alien civilizations, the SETI@Home team created an application users could download to their own systems. It was essentially a screensaver: it kicked in after a few minutes of inactivity, downloaded a small chunk of telescope data to analyze, and showed a cool visualization of the analysis as it ran. The results were then sent back to the project's UC Berkeley home base.
The secret sauce is in how the nodes are put together. Cluster computing nodes are usually ordinary computers or chips with their own memory, processing, and storage. What makes it a cluster computing environment is the middleware, which analyzes and parcels out work so it gets done as efficiently as possible given how many nodes there are, their processing capability, their response speed, and so on.
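To make that idea concrete, here is a toy sketch in Python of how middleware might hand out work units and retry failed ones. It is not SETI@Home's actual code or any real cluster framework; the `Node` and `Middleware` classes and the `speed` and `reliable` fields are invented purely for illustration.

```python
import random
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Node:
    name: str
    speed: float           # relative processing capability (hypothetical metric)
    reliable: float = 0.9  # chance a work unit completes (hypothetical)

@dataclass
class Middleware:
    nodes: list
    queue: deque = field(default_factory=deque)
    results: dict = field(default_factory=dict)

    def submit(self, work_units):
        # Queue up small chunks of data, SETI@Home style
        self.queue.extend(work_units)

    def run(self):
        while self.queue:
            unit = self.queue.popleft()
            # Naive capability-aware choice: prefer the fastest node
            node = max(self.nodes, key=lambda n: n.speed)
            if random.random() < node.reliable:
                self.results[unit] = f"analyzed by {node.name}"
            else:
                # Fault tolerance: a failed unit goes back in the queue
                self.queue.append(unit)

cluster = Middleware(nodes=[Node("laptop", 1.0), Node("server", 4.0, 0.99)])
cluster.submit([f"chunk-{i}" for i in range(5)])
cluster.run()
print(cluster.results)
```

Real middleware does far more (balancing load across many nodes at once, tracking response times, verifying results), but the core loop of parceling out chunks and tolerating individual node failures looks much like this.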
Read at IT Pro