Next generation storage for next generation studios; Case study: BlueArc and the Fantastic Four.

Often, storage in studios for applications such as animation, special effects, or editing is simply a JBOD (Just a Bunch Of Disks) directly attached to workstations and striped together to get acceptable performance, measured by raw throughput. Projects have grown in size and scope, and a growing appetite for more sophisticated visual effects has significantly increased the complexity of the creation process; together these changes demand a rethinking of the studio's storage infrastructure.

When the storage infrastructure is incapable of meeting the requirements of a large project, some studios have been forced to compromise on the creative side. This is done by reducing the complexity of the shots, often by limiting the number of elements in a frame, which reduces the strain on the server and storage infrastructure. Other studios break the artists into shifts to try to "spread the load" across a greater part of the day. This, along with generally longer workdays caused by infrastructure slowdowns, puts undue strain on the creative and IT staff.

To handle these growing requirements, Network Attached Storage (NAS) has had to change dramatically in many dimensions, including higher throughput, better transaction performance, increased file system capacity and scalability, and improved availability. Although many technical trends are driving changes to storage in the studio, such as higher resolution formats, faster networks, and 64-bit computing, the primary driver of growing storage requirements in entertainment is creative trends. In entertainment, everything is driven by creative requirements conceived by writers, directors, producers, visual effects supervisors, and artists. It is something of a chicken-and-egg scenario, in which creativity pushes the technology and technology enables the creativity.

With an increasing appetite for more sophisticated visual effects, one obvious change is that blockbuster "effects" movies have more shots than ever. The number of shots per movie has gone from a few hundred for a major production to potentially thousands spread across multiple studios. These shots are also longer and more complex, with more elements in every frame. The creative side is also driven by directors who are more involved in digital creation, which can lead to revisions as they see what is possible and push technology to its creative limits.

With the increase in the number of shots, even when projects are distributed across multiple studios, production houses have had to grow their creative staffs and prepare for a more scalable and collaborative post-production environment. As studios grew quickly, their storage infrastructures became a bottleneck, forcing them to adopt high-end centralized storage solutions that can scale with requirements.

For example, Fox Studios' recent Fantastic Four production had over 1,000 shots and was spread across ten studios, with four doing the majority of the work. Some studios focused on character creation, such as Giant Killer Robots' (GKR) revolutionary work on the Human Torch, while others focused on broader scene effects, like Meteor Studios' work on a key bridge scene where the characters discover their powers. In both cases, even with the work partitioned, the projects were among the largest the studios had ever undertaken. Both required a high performance centralized NAS infrastructure to support the massive number of shots and the complexity of the projects, and both relied on BlueArc's Titan Storage Server to meet their requirements.

For GKR, this meant having a solution that could dynamically scale as the complexity and number of shots grew and as the studio continued to expand. "BlueArc from the beginning was simple to implement and scale, and allows us to take on more projects and larger projects than we have ever done before," said Rich Simon, senior administrator at GKR. GKR grew its storage infrastructure from 4TB to 8TB and then to 12TB to handle the full scope of the project. With Titan's ability to scale file systems to 256TB, GKR can continue scaling without having to break up projects across multiple file systems. The capacity increases were driven by the number and complexity of shots, as well as the challenge of creating realistic flame effects with CGI, a task that was impossible only a few years ago. GKR created many new tools and worked with 3D software developers to achieve its goal of realistic flame effects forming the "core" of the Human Torch.
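To put that growth in perspective, a simple back-of-envelope model shows how shot count, elements per frame, and retained revisions compound into multi-terabyte file systems. The Python sketch below is purely illustrative; its per-shot and per-element figures are assumptions, not GKR production numbers.

# Hypothetical back-of-envelope capacity model for a VFX project.
# All per-shot and per-element figures are illustrative assumptions,
# not actual GKR or Fantastic Four production numbers.

def project_capacity_tb(num_shots, frames_per_shot, elements_per_frame,
                        mb_per_element, revision_factor=3.0):
    """Estimate total storage (in TB) for a project.

    revision_factor accounts for intermediate versions and re-renders
    kept on disk alongside final frames.
    """
    mb_total = (num_shots * frames_per_shot * elements_per_frame
                * mb_per_element * revision_factor)
    return mb_total / 1_000_000  # MB -> TB (decimal)

if __name__ == "__main__":
    # Example: 250 shots, ~150 frames each, 20 layered elements per frame,
    # ~5 MB per rendered element, 3x kept revisions -> roughly 11 TB.
    print(f"{project_capacity_tb(250, 150, 20, 5.0):.1f} TB")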

The sophistication and complexity of the flame effects also pushed storage performance, with more than 60 artists working collaboratively on shots. Along with the artists accessing the layers and elements associated with each frame, GKR also has a 75-node render farm that accesses files on Titan to perform the complex calculations needed to create the smooth, finished (rendered) images for each frame. At night, GKR also uses the CPU power of the artists' workstations as render nodes, pushing the render farm to more than 130 nodes, all accessing Titan. Prior to BlueArc, GKR experienced slowdowns when many workstations accessed central storage. Per Rich Simon, "Once we passed about ten workstations and fifteen render machines, the disks started to bog down, so we needed not only considerably larger storage capacity, but also faster throughput." With Titan, GKR was able to handle the entire render farm and artist workstations, sustaining over 300 MB/sec during peak periods, a level impossible with software-based NAS solutions.
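That peak figure is easier to appreciate when the client population is added up. The following sketch, using assumed per-client rates rather than measured GKR values, shows how demand from render nodes and artist workstations aggregates into the hundreds of megabytes per second a central filer must sustain.

# Rough aggregate-throughput estimate for a render farm plus artist
# workstations sharing one NAS head. Per-client rates and concurrency
# are assumptions for illustration only.

def aggregate_demand_mb_s(render_nodes, ws_nodes,
                          render_mb_s=2.5, ws_mb_s=4.0,
                          concurrency=0.6):
    """Peak MB/s the filer must sustain.

    concurrency is the fraction of clients actively reading or writing
    at the same moment during a render burst.
    """
    return (render_nodes * render_mb_s + ws_nodes * ws_mb_s) * concurrency

if __name__ == "__main__":
    # Daytime: 75 render nodes plus ~60 artist workstations.
    print(f"day:   {aggregate_demand_mb_s(75, 60):.0f} MB/s")
    # Overnight: artist workstations join the farm (~130 render nodes).
    print(f"night: {aggregate_demand_mb_s(130, 0):.0f} MB/s")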

High throughput has always been a requirement for studio storage, but with the ever-increasing complexity of CGI and animation, transactional performance is now critical as well. To create these complex frames, many reference files are required, which must be accessed by the artists who create them and by the render farm to complete each frame. These files can include information on every element in a frame, from particles such as smoke, flame, clouds, or even fur, to layers containing textures, colors, shadows, lighting, and other relevant elements. This complexity has increased well over fourfold from even a few years ago; in a recent major production, a studio used over four million different leaves throughout the production. As a result, studio storage must support a much higher number of simultaneous requests and transactions per second. BlueArc's hardware architecture, using technology similar to that of high performance switches and routers, allows it to sustain over 60,000 simultaneous connections and well over 100,000 I/O operations per second (IOPS) in a studio environment. This ensures that all render node requests are responded to promptly and prevents failed renders stemming from storage access or latency issues.
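To see why transaction rate matters as much as raw throughput, consider the burst that occurs when a batch of render nodes starts new frames and each opens its full set of reference files at once. The sketch below is a rough model under assumed file counts and timings, not a measurement from either studio.

# Illustrative estimate of the transaction burst a render farm can
# generate against shared storage. Files-per-frame, ops-per-file, and
# timing figures are assumptions, not measurements from either studio.

def peak_iops(nodes_starting, ref_files_per_frame,
              ops_per_file=4, open_window_seconds=10):
    """IOPS during the burst when a batch of render nodes begins new
    frames and each opens its full set of reference files (textures,
    particle caches, geometry) within a short window.

    ops_per_file roughly covers lookup, open, read, and close.
    """
    total_ops = nodes_starting * ref_files_per_frame * ops_per_file
    return total_ops / open_window_seconds

if __name__ == "__main__":
    # 80 render nodes kicking off frames that each reference ~3,000 files.
    print(f"{peak_iops(80, 3000):.0f} IOPS during the burst")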

Meteor Studios' efforts in Fantastic Four were more subtle than the Human Torch, but no less sophisticated. Their challenge was to create a Brooklyn Bridge sequence without the bridge ever being used in the production, saving production costs and increasing creative flexibility. This required the creation of a computer generated (CG) bridge along with water, cars, helicopters, people, and even a fire truck. To avoid distracting from the movie, the effects had to be photo-realistic, so that viewers would perceive them as real. In the past, such effects were often distracting because viewers could discern real from CG images, but newer technology has enabled more detailed creation with more elements, making effects more difficult to spot.

Responsible for over 240 shots, Meteor Studios scaled to more than 80 artists and over 80 render nodes in the render farm as they pushed both overall throughput and IOPS performance with elements like water, reflections, CG crowds, and other complex effects. At times during the production, Meteor Studios sustained over 100,000 IOPS while producing rendered frames for review. These loads cause traditional NAS solutions to slow down, but not the BlueArc Titan server. Jami Levesque, Director of Technology at Meteor Studios, said, "While under our previous environment it could take up to fifteen minutes to open a Maya file, it's now a matter of seconds, saving valuable artist time and focus." This can result in more iterations and better quality output.

Meteor Studios also took advantage of Titan's ability to support tiered storage, using the appropriate type of disk for each step of their workflow. For the high performance render farm, they stored files on fast 15K RPM Fibre Channel drives, enabling high throughput, I/O, and availability. For completed or inactive files, such as those waiting on a rewrite, they leveraged lower cost, higher capacity Serial ATA (SATA) drives. This allowed them to build a solution with a balanced price/performance model matched to their workflow, rather than sacrificing performance with an all-SATA solution.
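One common way to operate such a tiered layout is an age-based sweep: files untouched for some window are demoted from the fast tier to the capacity tier. The script below is a generic, hypothetical example (the mount points and threshold are assumptions, and this is not BlueArc's own tiering mechanism), shown only to make the workflow concrete.

# Hypothetical age-based tiering sweep: move files not accessed within
# AGE_DAYS from a fast (FC) tier to a capacity (SATA) tier. Paths and
# threshold are illustrative assumptions; this is not BlueArc tooling.
import os
import shutil
import time

FC_TIER = "/mnt/titan/fc"      # assumed mount point for the 15K FC tier
SATA_TIER = "/mnt/titan/sata"  # assumed mount point for the SATA tier
AGE_DAYS = 30                  # demote files idle longer than this

def sweep(fc_root=FC_TIER, sata_root=SATA_TIER, age_days=AGE_DAYS):
    cutoff = time.time() - age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(fc_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getatime(src) < cutoff:
                rel = os.path.relpath(src, fc_root)
                dst = os.path.join(sata_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # demote to the capacity tier
                print(f"demoted {rel}")

if __name__ == "__main__":
    sweep()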

Studio storage for entertainment has had to grow, adapt, and scale with both the creative and technical trends in this industry. Studios can no longer afford to rely on local, non-highly-available storage as projects grow to include hundreds of artists working collaboratively. A centralized, high performance pool of NAS storage is required. The storage must scale dynamically well beyond the typical 16TB file system limit, must sustain very high throughput and I/O performance, and must allow for a collaborative file-sharing environment. Although traditional NAS servers have not been able to keep up with the needs of studios, BlueArc's Titan Servers are designed to scale and sustain the performance required in these challenging environments. Titan has been proven by fire in real world studios, enabling them to focus on their creative results, which must be the primary focus of all studios.

Ron Totah is Director of Technical Marketing at BlueArc (San Jose, CA).

www.BlueArc.com
