The case for clusters: looking to maximize server and storage capabilities? Clusters are the key.
Problem: I'm givin' her all she's got, Captain. Intensive computing can slow down even the fastest systems and, as schools adopt ever more specialized applications, there's nowhere to go but down.
Solution: One of the main benefits of a cluster is its ability to share resources among a number of servers, speeding up the whole system, as the University of Colorado at Boulder has recently discovered.
The school uses Sun Fire servers from Sun Microsystems (www.sun.com) to help run three processor-intensive applications: WebCT courseware and course management tools; a Web mail application used by about 24,000 students; and PLUS, a secure application that gives students access to their personal information, such as financial aid and health insurance. Loads like those can drain a system and leave students and administrators frustrated, but since the installation of the servers, the university has seen a steady increase in usage of the server applications. "Our IT infrastructure supports 26,850 FTE students and 2,000 faculty on our campus. We noticed the value we have added to our students' education efforts and our faculty's work experience with the servers," says Linda Drake, manager of Computing Systems for the school.
Problem: We don't have the money for more computer equipment. With department budgets being trimmed to the bone, it doesn't make sense to spend more money on technology.
Solution: Industry standards usually mean a university can save big on equipment costs. "Server clusters are being put together very affordably to allow schools to build what becomes, essentially, a supercomputer," says Dean Kline, senior manager of Public Relations for Dell (www.dell.com). "The cost becomes more manageable and allows the institution to do some very powerful computing." As an example, Kline points to the University at Buffalo (NY), which employs a high-performance computing cluster (comprising more than 2,000 servers) and storage area network for human genome research, bio-informatics, protein structure prediction, and large-scale computer simulations.
Problem: The need for speed. Remember when you were impressed with a computer that boasted a 386 processor and 8 MB of RAM? Those days are a fond memory when you consider the intense number-crunching power needed for modern research.
Solution: High-performance computing clusters perform trillions of complex calculations per second, accomplishing work that was previously reserved for multimillion dollar mainframe computers. Server clusters can do the job at a fraction of the cost.
"The cluster itself is becoming a computer," says Richard Coffey, IT director at the Scientific Computing and Imagery Institute at the University of Utah. "It used to be that you used it as a file server, but now we are considering that the cluster itself has a series of elements much Like a computer has a CPU, a network card, and a disk. Now for each cluster we buy, we evaluate it as we would a computer: How is it going to connect to the network? What CPU and memory are we going to need? We look at the cluster the same way we used to look at desktops."
Problem: Looks like we'll have to take it back to the shop. In the past, if a mainframe crashed, or a backup system failed, important information was lost, and productivity came to a standstill.
Solution: If part of a cluster fails, the system stays operational as other machines carry the burden. "Industry-wide standardization is enabling the complex systems to be managed more simply and more affordably," says Dell's Kline. Coffey at the SCI Institute adds that the cluster allows an IT department to "quickly retool our resources for an emergency response, or for a specific type of research that we are doing." Multiple clusters can be administered from a single "master" administration server, and the standardized technology enables IT staff to quickly react to any situation. "If we ever lose a server, we can quickly swap things out, put an image of the operating system that we need on that new server, and be back up and running in very little time."
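None of the vendors quoted here publishes its failover logic, but the principle Kline and Coffey describe can be sketched in a few lines: a dispatcher tries each node in turn and, when one fails, routes the request to the next healthy machine. The class, node names, and handler functions below are illustrative assumptions, not any product's interface.

```python
class ClusterDispatcher:
    """Round-robin dispatch that survives individual node failures."""

    def __init__(self, nodes):
        # nodes: list of (name, handler) pairs; a handler raises RuntimeError when its node is down
        self.nodes = list(nodes)
        self.cursor = 0

    def submit(self, job):
        """Try every node once, starting from the round-robin cursor."""
        for attempt in range(len(self.nodes)):
            name, handler = self.nodes[(self.cursor + attempt) % len(self.nodes)]
            try:
                result = handler(job)
                self.cursor = (self.cursor + attempt + 1) % len(self.nodes)
                return name, result
            except RuntimeError:
                continue  # this node is down; let the others carry the burden
        raise RuntimeError("all nodes failed")

def offline_node(job):
    raise RuntimeError("node offline")

dispatcher = ClusterDispatcher([("node-a", offline_node),
                                ("node-b", lambda job: job * 2)])
```

With node-a down, `dispatcher.submit(21)` quietly lands on node-b and returns `("node-b", 42)`, which is the whole point: the outage never reaches the user.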
Problem: Too much disparate information. Joe Student's admission records are stored on one server, while his financial aid data is stored on another, and his grades are elsewhere. And guess what? Because of some techno-hiccup, those servers currently aren't on speaking terms.
Solution: Consolidating servers and taking advantage of the SAN solutions available for server clusters enables schools to better manage the increasing flow of data they collect. "In the past, we had 25 to 30 servers around the campus, each with its own pocket of data (depending on whether it was for student information, or administrative or staff purposes). Data was spread all over the place," notes Tim Link, CIO of the Ohio State University Newark and Central Ohio Technical College. "Now, with the clusters and the SAN, we are consolidating all that data and reducing our server requirements by about half. It has really made things so much easier to manage."
Backup is easier, as well. "In the past, it was so difficult running the backup process on all the different servers, with a variety of software packages. It was really a mess," Link says. "We're doing backup much more efficiently, and the SAN solution has been a huge improvement."
Problem: A little help from my friends. Collaboration is the key to research, but if one researcher can't easily exchange information with another, it can be like trying to drive a car blindfolded. You might eventually get where you're going, but there will be plenty of bumps along the way.
Solution: Servers to the rescue. Last December, Harvard University (MA) teamed with IBM (www.ibm.com) to launch "The Crimson Grid," a high-performance server network that will be used by students and faculty for research, data sharing, and collaboration in such areas as life sciences, engineering, and applied sciences. The project will also help other academic institutions take advantage of grid computing, which taps data and computing resources from different computing systems and makes them available when and where they are needed.
"A grid could potentially provide the tools to solve any type of problem, from a complex literature search to mining the genome," says Jayanta Sircar, CIO of Harvard's Division of Engineering and Applied Sciences. "Harvard's goal is to help create an out-of-the-box implementation of grid computing that will enable students throughout the region to Leverage commonly shared resources for collaborative research and knowledge sharing."
Problem: If I only had a brain. The Engineering Research Center at Mississippi State University is involved in a wide range of projects for government and industry, testing (through complex simulations) automobile "crash-worthiness," aerodynamics, and material strength, among other things. "The vast amount of computation involved in our studies required a new type of technology that would allow us to maximize our computing resources," says Roger Smith, senior systems administrator of the center.
Solution: The use of IBM's eServer systems gave the research center a 400 percent increase in processing power over its previous cluster, Smith says.
Problem: Spreading ourselves too thin. "One of our constraints had been that when someone came up with an application, we didn't have a server to run the application on," recalls Rich Ferguson, CIO of Quinnipiac University (CT). "We were mapping applications to servers."
Solution: The shared resources and consolidation of the clusters make it easy to expand a system as needed. Says Ferguson, "We've reduced the number of servers from 75 (delivering services centrally) to about 40. We're moving now to the model where servers give us an integrated infrastructure that allows us to easily expand as we need to."
Problem: How many IT people does it take to fix a server? No, it's not a joke; we're just pointing out the fact that as technology expands, there is usually the need to add personnel to maintain it.
Solution: Again, the standardized components of a cluster mean easier maintenance and, with it, additional savings in staffing. Says OSU's Link: "We think of our cluster as an environment that can expand in response to demand, so it's not a big deal to expand. Sometimes you can afford the hardware to expand, but you can't afford the staff time to reconfigure everything. That's why having that manageability is critically important here."
Date: Feb 1, 2004