Tuesday, June 03, 2008

Rendering, a scalability case study

Recently I assisted an overseas partner in resolving a performance/scalability problem with Maya software rendering on Windows 2003. Their customer was telling them that the Sun Fire X4600 M2 server's performance was worse than the Sun Ultra 40 M2 workstation's. It is hard to believe that the server (8 or 16 cores) performs worse than the workstation (only 4 cores).

I was given the Maya scene file and managed to render it on my older Sun Fire X4600 server running Windows 2003 64-bit and Maya 8.5.

By default, the Maya renderer will try to make use of all the CPUs in the computer (the default command line option is "-n 0"). However, not all the CPUs are used throughout the entire rendering process. The Task Manager capture below showed that only a few of the cores actively participated in a 16-core render.
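
For reference, a minimal sketch of how the thread count is chosen on the command line, assuming the Maya 8.5 software renderer on Windows; the scene file name is a placeholder and the exact flags may differ between Maya versions:

    :: Default behaviour: "-n 0" lets the software renderer use every core it finds.
    Render -r sw -n 0 myScene.mb

    :: Limit the same render to, say, 2 cores instead.
    Render -r sw -n 2 myScene.mb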

A typical frame was rendered using various numbers of cores, and the timings were plotted against the ideal case (1/n). As you can see, the 'sweet spot' is to render with 1-3 cores; performance gets worse once more than 3-4 cores are used, likely because Maya has to spend more time on internal housekeeping. If the server has enough memory for the job, the best choice is to render one frame per core, for example by launching several single-threaded render processes as sketched below. The trade-off is that more processes each rendering one frame per core means more jobs trying to fetch the Maya scene file and its dependencies (they are big; in this case over 1 GB) from the network.
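
A rough sketch of the one-frame-per-core approach, again assuming the Maya 8.5 software renderer; the frame ranges, scene file and output directory are placeholders, and in practice a render queue manager would hand out the frames:

    :: Split a 64-frame sequence across four single-threaded render processes.
    start "r1" Render -r sw -n 1 -s 1  -e 16 -rd D:\out myScene.mb
    start "r2" Render -r sw -n 1 -s 17 -e 32 -rd D:\out myScene.mb
    start "r3" Render -r sw -n 1 -s 33 -e 48 -rd D:\out myScene.mb
    start "r4" Render -r sw -n 1 -s 49 -e 64 -rd D:\out myScene.mb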

The CPU utilisation for the "Render -n 16" command was captured using perfmon and visualised with Gnuplot, as shown below. The average CPU utilisation is clearly less than 20% even though Maya claims to use all 16 cores to render.
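
For anyone who wants to reproduce the chart, a sketch of the capture-and-plot steps; the counter log name and CSV column are assumptions based on a plain "\Processor(_Total)\% Processor Time" counter log:

    :: Convert the perfmon binary counter log to CSV.
    relog render_n16.blg -f CSV -o render_n16.csv

then, in Gnuplot:

    # Plot overall CPU utilisation over time, skipping the CSV header row.
    set datafile separator ","
    set ylabel "% Processor Time"
    set yrange [0:100]
    plot "render_n16.csv" every ::1 using 0:2 with lines title "CPU _Total"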

Performance is not a static problem; often you have to strike a balance between CPU, I/O and network. So the moral of the story is:

  • Understand your application
  • Don't simply take the default setting
  • Understand your system
  • Monitor all aspects of the infrastructure: CPU, memory, network, storage, I/O ... (a sample counter capture follows this list)
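
As an example of the last point, a sketch of capturing a broader set of counters with the built-in typeperf tool on Windows 2003; the sampling interval and output file name are arbitrary:

    :: Sample CPU, memory, disk and network counters every 10 seconds into a CSV file.
    typeperf "\Processor(_Total)\% Processor Time" ^
             "\Memory\Available MBytes" ^
             "\PhysicalDisk(_Total)\% Disk Time" ^
             "\Network Interface(*)\Bytes Total/sec" ^
             -si 10 -f CSV -o render_counters.csv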
