A memo for the aspiring studio

    I once worked for a young but rapidly growing company. While the projects were modest in size, everything went quite well. But as the projects grew in complexity and volume, along with the number of render nodes and the number of employees, a very unpleasant surprise awaited us: how to provide fast, simple, and convenient shared operation for the whole studio. Unfortunately, neither the responsible specialists nor anyone else found a solution, and in the end the projects began to sag under their own weight. After a while I could not stand the agony and resigned, and the project was closed a year later.

    Now that I have plenty of free time, I can put all my thoughts on organizing a large studio in order and present them publicly. Since every studio faces these problems as it grows, this article should help you find a ready-made solution for a large or very large studio.

    We will discuss the following scale: more than 100 employees, more than 200 render nodes, more than 100 TB of unified file storage, more than 30 Gb/s of storage throughput. Some of the ideas presented here are proven and actually in use; some are still theoretical, and their effect may be unpredictable. All the information in this article is offered for reflection rather than as a finished design. I will try to explain all specialized and complex terms in simple and accessible language, even if not always with full technical accuracy; detailed explanations can be found on the wiki.

    Section 1. Network Storage

    The first thing a studio must decide is, of course, where to store its working files. For this purpose a file server is bought, say a 24 TB one. But over time it starts to fall short in both capacity and speed. Then a second server is bought, and the question arises of how to spread files between the two servers. Somehow that problem gets solved. Then two servers are no longer enough, and a third is bought. But now where do the files go? Clearly you cannot live like this: chaos and confusion set in. Also, when 100 render nodes start up at once, they all load scene files and textures from the server at the same time, so the server grinds away for a long time and is unavailable to employees.

    The first thing to decide is how our storage will be organized and what it will consist of. It must meet the following requirements:

    • the storage must be unified, i.e. when a new server is added the file structure stays the same while capacity and speed grow;
    • the storage must scale easily, i.e. we can increase its capacity or speed at any time without stopping its operation;
    • the storage must be highly available, i.e. the time it is available to all users should be above 99.9%;
    • the storage must be fault-tolerant, i.e. any of its elements can be hot-swapped without loss of availability to users.

    To make this easier to understand, I have broken this section into several parts, which will take you from the basics to a complete understanding of how such storage can be built.

    Part 1. A software SAN. The iSCSI protocol.

    First, let's get the terminology straight. The types of storage are:

    • NAS (Network Attached Storage) – network storage: a computer whose hard drives are shared over a network.
    • SAN (Storage Area Network) – a storage area network: several NAS devices combined on a single network.

    Now let's see how to combine multiple NAS devices into a SAN. There are many different protocols for this:

    • SMB (Server Message Block) – the protocol familiar to all of us for sharing files, printers, and other resources over a network;
    • SCSI (Small Computer System Interface) – sometimes pronounced “scuzzy”; this protocol suits us better, since it lets you use another computer's drives as your own: creating partitions on them, formatting them, and performing absolutely any operation as if the disks were entirely local.

    What is special about the SCSI protocol, and why does SMB not suit us? The fact is that network shares cannot be combined into a single storage volume, but thanks to SCSI we can attach drives from several NAS machines to one computer and build a single RAID from them. This gives us a simple version of unified storage and will help us understand the basics of SAN networking.

    SCSI consists of two elements. The SCSI Target is the set of disks made available for export to another computer. The SCSI Initiator is the service that attaches the disks of a specified target to your computer.

    To build the simplest SAN I will use Windows Server 2012. We need two machines: on one we will gather the disks, and the second will attach them.

    Setting up the iSCSI Target:

    First we need to install the target server role. Install it only on the file servers whose disks we want to hand over to clients; the client does not need the target. The installation takes a few simple steps:

    1. Open Server Manager (it starts automatically when you log on);
    2. Click the Add Roles and Features button;
    3. In the Add Roles and Features Wizard, click Next until you reach the Server Roles section;
    4. In the role list, find File and Storage Services, within it File and iSCSI Services, and within that select the iSCSI Target Server role. In the window that opens, click Add Features;
    5. Among the features, you can optionally select the iSNS Server service; it provides centralized iSCSI management;
    6. Proceed to the end of the wizard and click Install.

    Now let's move on to configuration. Again, a few simple steps:

    1. Go to Server Manager; on the left, find the new File and Storage Services tab, and within it the iSCSI tab.
    2. At the top right, find the Tasks drop-down list and in it New iSCSI Virtual Disk. A simple and intuitive wizard appears.
    3. In the New iSCSI Virtual Disk Wizard, enter the following parameters in the respective sections:
      1. Virtual disk location – choose where the virtual disk image we will be working with is to be stored. The image is in the familiar expandable VHD format.
      2. Virtual disk name – the name our file will have, and a description if needed.
      3. iSCSI virtual disk size – the size of our virtual disk.
      4. iSCSI target – something like an access point that initiators connect to in order to attach the disk. At this stage you would select an existing target, but since there are no targets yet, choose New iSCSI target.
      5. Target name – a name for the target, and a description if needed.
      6. Access servers – specify which initiators are allowed to connect. Click Add; in the window that appears, select Enter a value for the selected type, choose IP Address in the drop-down list, and enter the IP address of the computer the disks will be forwarded to.
      7. Enable authentication – this is for advanced system administrators; everyone else can skip the step.
      8. Confirmation – check everything entered and click Create.

    4. That, in general, is the whole setup. The iSCSI section of Server Manager now displays information about our disks and targets, as well as their status.
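    For those who prefer the command line, the same role installation and wizard steps can be sketched in PowerShell on Server 2012. The target name, VHD path, size, and initiator IP below are examples, not values from the walkthrough above:

```powershell
# Install the iSCSI Target Server role (same as the Add Roles wizard)
Add-WindowsFeature FS-iSCSITarget-Server

# Create a target and allow one initiator, identified by IP address
New-IscsiServerTarget -TargetName "studio-target" `
    -InitiatorIds "IPAddress:192.168.0.10"

# Create a VHD-backed virtual disk and map it to the target
New-IscsiVirtualDisk -Path "C:\iSCSI\disk01.vhd" -SizeBytes 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "studio-target" `
    -Path "C:\iSCSI\disk01.vhd"
```

    These commands require the role to be present on the server, so they are a sketch of the flow rather than something to paste blindly.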

    Setting up the iSCSI Initiator:

    No software is required for the initiator: it comes preinstalled in all versions of Windows. Configuring the initiator is even easier, and it is done the same way on every Windows edition, server or not:

    1. Click the Start button (or whatever replaces it in newer Windows) and type iscsi on the keyboard.
    2. In the list that appears, select iSCSI Initiator. The first time, a window appears saying the service needs to be started; click Yes.
    3. In the iSCSI Initiator window that opens, on the Targets tab, in the Quick Connect group, enter the IP address of our target and click the Quick Connect… button.
    4. The Quick Connect window that appears should display the name of our iSCSI target.
    5. Click Done. On the Targets tab of the iSCSI Initiator window, our target and its status should appear under Discovered targets.
    6. That is basically all. Now open the Disk Management console and find our attached virtual disk there.

    It remains only to bring the disk online, create a partition, and format it. This is done only once, on one computer. If another initiator later connects to the same target, the partitions and file systems are already in place.
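    The initiator side, too, can be scripted instead of clicked through. A sketch for the PowerShell cmdlets of Windows 8 / Server 2012, where the target portal IP is an example:

```powershell
# Start the built-in iSCSI initiator service and make it start with Windows
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service MSiSCSI

# Point the initiator at the target server and connect persistently
New-IscsiTargetPortal -TargetPortalAddress "192.168.0.1"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```

    After this, the disk appears in Disk Management just as in the wizard-based walkthrough.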

    This is the simplest version of a SAN with unified storage. It is useless so far, but now we know how SCSI works, and it is the basis of almost all possible SAN networks.

    Now the question arises of how to increase the storage volume. The general answer is simple: connect several target servers to one initiator.

    Now that the system sees many disks, we can combine them into a RAID array, or better yet into a storage pool. In fact, a plain RAID array cannot later be used in a cluster, and it is not as flexible and convenient as a pool. Why we cannot do without a cluster we will see a little later.
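    A pool over the attached iSCSI disks can be sketched with the Storage cmdlets introduced in Server 2012; the pool and disk names here are examples:

```powershell
# Gather all disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool on the default Storage Spaces subsystem
New-StoragePool -FriendlyName "studio-pool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a resilient virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName "studio-pool" `
    -FriendlyName "vdisk01" -ResiliencySettingName Mirror `
    -UseMaximumSize
```

    The same can of course be done through the Server Manager GUI under File and Storage Services.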

    Now that we have unified, scalable disk storage, the question arises of how to increase its speed. Here we have two options. The first and easiest is to put many powerful network cards into the initiator machine, which will then serve files to the users.

    Unfortunately, this scheme has many drawbacks. First, scalability is strictly limited by the speed and number of network cards that fit into the machine; at some point even 7–10 gigabit NICs will not be enough. Second, the scheme has a single point of failure: the single distributing server.

    The second option solves both of these problems: increase the number of initiators, and let them distribute the files. We then get unified, scalable storage whose throughput we can easily grow, with no single point of failure. If a file server fails, the RAID saves us; if one of the initiators fails, its load is distributed among the others. The general logic for the whole studio can then be summarized as follows.

    This scheme is the same for all the storage variants we will go on to consider when the end users run Windows machines, regardless of platform and technology. Where the end user runs Linux, the user can connect to the storage directly, or even be part of the storage itself.

    But here, in the Windows variant, a big problem awaits us. A target participating in the array can be connected to only one initiator. A target not participating in an array can be connected to several initiators, and each will see it as an ordinary disk, but there is one problem that puts an end to the whole scheme. Each initiator will assume that it alone owns the disk and that only it has the right to use it. As a result, the initiators will not see each other's files, and worse, they will overwrite each other's blocks with their own. The problem lies in the file system: neither NTFS, nor FAT, nor ReFS is a clustered file system, and therefore a single disk can be connected to only one initiator. Disks with a clustered file system can be connected to several initiators; that is their key difference. Unfortunately, Windows has no clustered file system, but that is not yet a reason to give up on Windows as a platform for building storage; later we will look at how to work around these limitations.

    Fortunately, things are much better on Linux. There are plenty of such file systems there, and in addition they are free, and much more powerful and scalable than anything on Windows.

    But before we start building clustered storage, let's consider the protocols and transport networks on which we can build it.

    Part 2. A hardware SAN. Protocol options.

    Now let's see what ways of building a SAN exist. To do this, let's look at the available data transmission protocols.

    • Ethernet – packet-based data transmission technology used mainly for local area networks (LAN); this is what we deal with every day, present in every computer. Usually represented by an Ethernet card with an RJ-45 port. TCP/IP runs over it, and on top of TCP/IP run the SMB and iSCSI protocols already familiar to us.
    • SCSI – technology for physically connecting and transferring data between computers and peripheral devices. In its bare form it is already obsolete. Represented by special SCSI expansion cards.
    • Fibre Channel – a protocol for high-speed data transmission. It differs from Ethernet in being focused and optimized exclusively for transferring data to block devices (disk storage). Because everything unnecessary is excluded, performance rises sharply and data access time falls. Inside it runs SCSI; in fact FC is its logical evolution. Represented by a special expansion card, the HBA adapter; combined into a network with FC switches.
    • Fibre Channel over Ethernet – as the name suggests, ordinary FC running over Ethernet. This was done to consolidate networks and reduce the total cost of building them, but data access time grows considerably. Represented by special HBA adapters that support the technology; networked through ordinary Ethernet switches that support it.
    • InfiniBand – this is a whole bus, similar to PCI Express, on which many important protocols can be based. It is a rather young technology designed to unify and simplify switching by consolidating all networks into one. It carries a whole suite of protocols, of which we will consider only those interesting to us. IP over InfiniBand – as the name implies, the same TCP/IP. RDMA – a protocol for direct access to another computer's memory, of little interest in its bare form. SCSI RDMA Protocol – now this is interesting: via RDMA one can also fetch data from a block device. InfiniBand expansion cards are called HCAs; they are combined into a network with InfiniBand switches.

    Now that we understand what protocols exist and how they work, we can begin to build robust cluster storage on top of them.

    Part 3. A software SAN: combining into unified storage.

    Option 1. Windows Failover Cluster

    Windows has the problem that only one initiator can be connected to a given target. It would seem the situation is hopeless and clustered storage on Windows can be forgotten. But in fact there is a solution: it is enough to bring up a failover cluster serving several computers, and then create a pool from the disks attached within the cluster.

    Unfortunately, this method has many drawbacks, so we will consider it only superficially. Those who want more detail can search for articles on “Clustered Storage Spaces Windows”. The problem with this solution is the excessively high hardware requirements, despite the fact that its scalability suits at best a small or medium-sized business.

    The stages of construction are as follows:

    1. Outside a cluster, a pool can be created from any devices, even USB sticks; in a cluster, only from SAS drives;
    2. All SAS drives must be connected to all cluster nodes, which requires the FC protocol and the corresponding hardware;
    3. The hardware must meet very high standards, in particular for access and response times;
    4. The cluster is only part of the whole network infrastructure, so a domain and a DNS server must also be set up, and all future cluster nodes must be joined to the domain;
    5. The Failover Clustering feature must be installed on all cluster nodes;
    6. Next, as a domain administrator on any of the future cluster nodes, run Failover Cluster Manager;
    7. In the manager, find the Create Cluster item and follow the wizard's instructions;
    8. Enter the names of the hosts that will serve the cluster;
    9. At a certain stage, validation tests of the hardware's fitness for the cluster role are run. All tests must come back with green marks: with red marks the cluster simply will not assemble, and with yellow ones it will fail during operation. There are a great many tests, and every condition must be met;
    10. When the cluster assembles, the cluster management console opens, much like the server management console;
    11. In this console, go to the section with the pools, build our pool there, and in the pool a virtual disk. Then share out this virtual disk.

    And now about the shortcomings, and why this cluster can be recommended only to an enemy:

    1. Huge infrastructure costs: not only are SAS drives much more expensive than SATA, you also need FC on top of them;
    2. The cluster is not really fault-tolerant so much as failure-reactive: a task is performed by only one cluster node, while the others merely watch in case the first one fails;
    3. For all cluster nodes to work simultaneously, a load balancer must be configured. Then services based on TCP/IP can be parallelized across all nodes; all other services will still work only until failure;
    4. Scalability of such a cluster is questionable: total capacity can be increased, but an existing volume cannot be extended, since thin-provisioned disks cannot be created in a cluster; a new volume and a separate share must be created each time;
    5. RAID level 5 is unavailable in the cluster; only levels 0 and 1 are. You therefore have to sacrifice either capacity or reliability when choosing an array;
    6. Due to architectural features, a cluster node cannot itself be a file server, so again a heavy toll in the number of serving machines;
    7. The maximum number of nodes a cluster can hold on Server 2012 is only 63;
    8. And many more dreadful items…

    All these shortcomings might be irrelevant, or even mere nagging on my part, if there were other solutions on Windows, but there are none. Fortunately there is Linux, where everything is not only simpler and free, but runs on any hardware.

    Option 2. A distributed parallel fault-tolerant file system, using GlusterFS as an example.

    There is a large choice of file systems of this kind: GlusterFS, Lustre, Ceph, GPFS, Google FS, MooseFS, MogileFS, POHMELFS, and so on. Fortunately, most of them are free and even open source. But there is one drawback: as we found out earlier, they run only on Linux.

    To understand how they work, let's bring one of them up. The simplest and most popular, and at the same time one of the most powerful, is GlusterFS. The hardest part of setting it up was choosing the right distribution. More than 10 distributions were tried for this purpose: OpenSuse 12 and 11, SLES 11, Ubuntu 10 and 12, Fedora 16, CentOS 5 and 6, and others. On all of them it refused to work properly: either nodes could not connect, or it froze outright when more than 3 servers were used, or the speed was nowhere to be found. Only good old Red Hat Enterprise Linux (RHEL) version 5 started up without a fight and functioned correctly under high load.

    GlusterFS requires no special hardware. Its architecture is likewise divided into a server side and a client side. It runs over either Ethernet or InfiniBand. The disks can be absolutely anything, and the computer configuration is also arbitrary. Instead of SCSI, the Linux FUSE service is used to fetch files from the disks. On top of that, GlusterFS is free and even open source. The whole setup is just 7 simple command lines. I will use GlusterFS version 3.3.1 as the example.

    Installing GlusterFS on the file server:

    To begin, install RHEL 5 on the file server. During installation you need not enter a serial number; only the free part interests us. You must also select all the development packages under Development (experienced users need only gcc and the kernel packages).

    Now you need to install FUSE. By default it does not ship on the RHEL disk, so I will use an rpm search for RHEL 5. The package is called fuse-2.7.4-1.el5.rf.x86_64.rpm. Download and install it:


    rpm -i fuse-2.7.4-1.el5.rf.x86_64.rpm

    Now it's time to install GlusterFS itself. Just go to the official website and find the repository address for RHEL. Add that address to the list of repositories, and the GlusterFS packages become available. Install the packages glusterfs, glusterfs-fuse, and glusterfs-server. For InfiniBand you also need the glusterfs-rdma package:


    cp glusterfs-epel.repo /etc/yum.repos.d/

    yum install -y glusterfs glusterfs-fuse glusterfs-server

    After installation, tweak the configuration file slightly. It is located at /etc/glusterfs/glusterd.vol. First, in the line option transport-type socket,rdma, remove the word rdma, so that the log does not fill up every other line with messages that there is no InfiniBand. Also, in the lines option transport.socket.keepalive-time 10 and option transport.socket.keepalive-interval 2, add two zeros. These are response timeouts: if a node does not respond within the specified time, the connection to it is dropped. With the default values the hardware must be of very high quality, and since not everyone has that, we increase these parameters. As a result, the file should look like this:

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.keepalive-time 1000
        option transport.socket.keepalive-interval 200
        option transport.socket.read-fail-log off
    end-volume

    Or, for those who prefer to drive everything from the command line:

    cat > /etc/glusterfs/glusterd.vol << EOF
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.keepalive-time 1000
        option transport.socket.keepalive-interval 200
        option transport.socket.read-fail-log off
    end-volume
    EOF

    It remains only to start the server-side service with the command

    service glusterd start

    Installation is complete. To avoid setting up each server by hand, all the commands can be gathered into a single script that you paste into the console, and everything configures itself:

    cd /tmp

    rpm -i fuse-2.7.4-1.el5.rf.x86_64.rpm

    cp glusterfs-epel.repo /etc/yum.repos.d/

    yum install -y glusterfs glusterfs-fuse glusterfs-server

    cat > /etc/glusterfs/glusterd.vol << EOF
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket
        option transport.socket.keepalive-time 1000
        option transport.socket.keepalive-interval 200
        option transport.socket.read-fail-log off
    end-volume
    EOF

    service glusterd start
    cd
    mkdir /gluster
    mkdir /gluster/brick01

    Just paste these into the console right after installing RHEL, and GlusterFS will be installed and preconfigured automatically.

    Server configuration:

    To begin, create on each storage node the folder that will be used for storage. For me this is /gluster/brick01 on the first server, /gluster/brick02 on the second, and so on. The important thing is that the folder names differ across servers.

    Next, on one of the cluster nodes (call it the master), execute the GlusterFS configuration commands.

    gluster peer probe IP

    This command checks for the presence of another GlusterFS node and joins it to the common cluster. To check the status of the cluster nodes, use the command gluster peer status. If adding a node fails, or you simply mess up the configuration, you can just delete all the settings on the faulty node and try adding it again. The settings are stored in the folder /var/lib/glusterd.

    Now you need to create a disk (volume) in the cluster. This is done with the command

    gluster volume create VOL_NAME IP1:/path1 IP2:/path2 IP3:/path3

    Here we give our volume a name and list which folders on which servers will serve it. After the volume name you can specify the type of array to be built across the folders; about 9 variants are available in total, all described in detail in the documentation. There can also be several volumes with different array types.
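    For example, a mirrored (replicated) volume across two servers might be sketched like this; the IP addresses and brick paths are placeholders:

```sh
# Every file is stored on both servers (RAID-1-like behavior)
gluster volume create vol01 replica 2 \
    192.168.0.1:/gluster/brick01 \
    192.168.0.2:/gluster/brick02
gluster volume start vol01
```

    Without the replica keyword, files are simply distributed across the bricks, maximizing capacity instead of redundancy.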

    It remains only to make our volume active:

    gluster volume start VOL_NAME

    That is the whole setup: just three lines, and you have powerful cluster storage. To check volume status, use the command gluster volume info. Servers and capacity can be added on the fly with gluster volume add-brick VOL_NAME IP:/path. If a new machine joins the cluster, or the cluster becomes unbalanced for some other reason, run gluster volume rebalance VOL_NAME start.

    Setting up the client side:

    Here it is actually even easier. Install GlusterFS on the client machine in the same way as on the server side, except that the glusterfs-server package need not be installed and the glusterd service need not be started.

    To get started you need a folder through which all the contents of the cluster will be presented. I created /gluster/storage for this. Then execute one simple command:

    mount -t glusterfs IP:/VOL_NAME /gluster/storage

    That is basically all. Now when the client accesses the directory /gluster/storage, it is presented with the entire contents of the cluster. Files written to this folder are distributed evenly across the cluster nodes. Cluster scalability is limited to several petabytes of data and a speed of several terabits per second.
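    To make the mount survive reboots, it can also be written into /etc/fstab; the IP and volume name here are placeholders, and _netdev delays the mount until the network is up:

```sh
# /etc/fstab entry for the GlusterFS client mount
# 192.168.0.1:/vol01  /gluster/storage  glusterfs  defaults,_netdev  0 0
echo "192.168.0.1:/vol01  /gluster/storage  glusterfs  defaults,_netdev  0 0" >> /etc/fstab
```

    After that, mount -a (or a reboot) brings the cluster volume up automatically.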

    Linux users can connect to the cluster directly or use the NFS protocol. For Windows users, the files must be shared out over SMB. Thanks to the architecture, each cluster node can also be a client, letting you save on server hardware. But remember that for best performance the SAN should be a separate network.
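    The SMB share for Windows users can be set up, for example, with Samba on any node that has the volume mounted; the share name and path below are examples:

```sh
# Append a share definition to /etc/samba/smb.conf
cat >> /etc/samba/smb.conf << EOF
[storage]
    path = /gluster/storage
    read only = no
    browseable = yes
EOF

# Apply the change (RHEL 5 style service management)
service smb restart
```

    Windows clients then see the cluster as an ordinary network share, \\server\storage.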

    Besides GlusterFS, Lustre is also worth noting. Setting it up is not much harder than GlusterFS, but it differs in keeping metadata on a separate server: the client does not poll all the servers for files, but asks the metadata server, which tells it where the file is stored. Because of this, Lustre is the most performant and most scalable of them all. The statistics alone are telling: of the top 30 supercomputers, more than half run Lustre, including the very top ones. Companies such as HP, Dell, Cray, and SGI also ship Lustre in their scalable file storage products.

    Option 3. Cloud infrastructure

    In fact, this section deserves deeper study, even a large article of its own, but since this article is already very long, we will consider only the general questions.

    Everyone trumpets that cloud infrastructure can cut costs significantly, so let's understand where the savings come from. Say we have an ordinary, non-cloud infrastructure. It includes: 1. file servers at about 130 thousand rubles each; 2. render nodes with 2 × XEON E5 at 120 thousand rubles; 3. user workstations at 40 thousand rubles, excluding monitors. These three computers together cost nearly 300 thousand rubles. Now look at how each of them is used: in a file server the processor and memory sit idle; in a render node, the hard drives; in a user workstation, the cores, memory, and drives.

    Cloud infrastructure lets you use all of a computer's power as a whole, not in parts. That is, instead of three machines for 300 thousand rubles you can buy one for 150 that serves simultaneously as file storage, compute node, and an employee's workstation. We thus already save a factor of 2 on servers, and about as much again on infrastructure and support, for a total cost reduction of more than 3 times.

    There are a great many clouds, including some very good free ones, but not every cloud can meet our needs. For all the machines to run smoothly without interfering with each other, the cloud must satisfy the following requirements:

    1. Graphics card passthrough (PCI Passthrough) into a virtual machine must be supported; otherwise an employee will not be able to work properly, as everything will lag.
    2. Resource reservation and configurable priorities must be available for each machine. Our file server needs only a small slice of CPU to serve files, but it must be given a guaranteed amount and top priority over the other tenants of the machine. The user must likewise get a guaranteed amount of CPU, memory, and network and storage bandwidth, plus the ability to use all the server's resources with priority over the render node in particular. The render node gets whatever is left.
    3. The cloud must support unified cluster storage.

    What we get in the end: the file storage runs at the highest priority, and thanks to reservation it always has a certain amount of resources available. The user likewise always has a guaranteed minimum set of resources that no one can encroach on; but if the user needs more than two cores for a heavy operation, the cloud takes them from the render node within milliseconds. And the render node is always busy, so in the end there is no idle time left at all.
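    How such reservations look in practice depends on the chosen cloud. As one illustration, with KVM/libvirt the CPU priority and the GPU passthrough from requirement 1 might be sketched in a domain definition fragment like this; the domain name, shares value, and PCI bus address are hypothetical examples, and a complete domain needs more elements (memory, OS, disks):

```xml
<domain type='kvm'>
  <name>workstation01</name>
  <!-- CPU weight: higher shares win under contention -->
  <cputune>
    <shares>2048</shares>
  </cputune>
  <devices>
    <!-- PCI passthrough of the host GPU (hypothetical bus address) -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
  </devices>
</domain>
```

    The file server's domain would get a higher shares value than the render node's, which is exactly the priority ordering described above.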

    Part 4. A hardware SAN: combining into unified storage.

    If you have real trouble with your system administrator and he cannot assemble this kind of storage, you can turn to companies that specialize in this business. Fortunately, such companies abound, and they are often present at events dedicated to computer graphics.

    Typically, such companies develop their own hardware, though based on commodity parts. As a software basis they likewise have their own software, but again built on the Linux file systems listed above.

    From a financial point of view, the services of these companies are much more expensive than storage assembled yourself. The point is that besides the hardware, the price includes a huge number of services from specialists such as engineers and managers, with their margin. But in return you get a stable, fine-tuned solution with warranty and service.

    Section 2. Switching

    Now we know how to build a storage cluster. Another question arises: what do we connect it all with? At first the studio buys a 48-port 1 Gb/s switch, then a second, then a third. And here an unpleasant surprise awaits: the switches start to slow down, or worse, fail, for two reasons. The first is trivial: the interconnect speed between these switches is very low, and when a group of people on one switch accesses a port on another switch, they get very low throughput. The second problem is far worse. Switches can process only a certain number of packets per second (pps). When several switches are connected together, this figure does not add up but stays at the same level, while the load on it grows. As a result, once the maximum is reached, packets simply queue up in the switch buffer waiting to be processed, and once the buffer limit is reached, packets are simply dropped and connections break. This is especially acute when 100+ render nodes are rendering.

    So how is switching organized in such large, loaded networks? It turns out everything was invented long ago and is used everywhere.

    There are special, so-called modular switches. As the name suggests, they consist of modules. The minimum set for such a switch consists of three elements. The first is a chassis that houses the modules and carries the wiring between them. The second is a switching fabric that interconnects the modules. The third is the modules themselves: blades with a set of ports, available in a wide variety of configurations with quite diverse port sets. The modules are inserted into the chassis and communicate with each other through the switching fabric. Modules are added as needed.

    Option 1. Modular LAN (Ethernet) switches.

    There are many firms supplying modular Ethernet switches, in several classes. But we will consider only the most interesting.

    The budget options come from companies such as D-Link and Netgear. The cost of each element, including a module, is only 50-70 thousand rubles.

    Accordingly, the minimum set of chassis + switching fabric + 1 module will cost from 150 thousand rubles, and each subsequent module 50-70 thousand. As you can see, the price per port is almost the same as for an ordinary switch, except that you have to invest up front in the chassis and the switching fabric. But then you can easily grow to 4-10 modules; with 10 modules of 48 1 Gb/s ports each, you end up with 480 gigabit ports, and all the switching between them will hold up well even under heavy load.
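
To make the economics concrete, here is a minimal sketch of the arithmetic behind these figures. The prices are the illustrative ones quoted above (in thousands of rubles), not vendor quotes, and the function name is my own:

```python
def per_port_cost(chassis, fabric, module_price, modules, ports_per_module):
    """Total cost of a populated modular switch divided by its port count.

    All prices are in thousands of rubles; figures are the illustrative
    ones from the text, not real vendor quotes.
    """
    total = chassis + fabric + modules * module_price
    return total / (modules * ports_per_module)

# Budget example from the text: chassis, fabric and modules at ~50 TR each,
# 48 ports of 1 Gb/s per module.
print(per_port_cost(50, 50, 50, 1, 48))   # 3.125 - minimum configuration
print(per_port_cost(50, 50, 50, 10, 48))  # 1.25  - fully populated chassis
```

The per-port price falls as modules are added, which is exactly the effect the comparison table below illustrates.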

    But a caveat is in order. While the speed the fabric can deliver between any pair of ports is stable, at least 10G whatever the ports may be, a parameter such as throughput measured in packets per second may not be sufficient in the budget switches. In principle we work with very large files, so although we send many packets, they are big ones. Still, I would not look toward the low-end models.

    The mid-price range belongs to HP. Here the price of each element, including a module, is already around 100 thousand rubles. But there is also a large range of products for every need. As an example, consider the 12500 line.

    The line includes several chassis options, from the smallest (4 modules) to very large (18 modules). And the beauty is that as the company grows there is no need to replace the modules: they are shared across the whole line, so it is enough to buy a larger chassis and your modular switch easily grows in size.

    At the top end is Cisco. They are distinguished by great performance and reliability, and, as a consequence, price. For studios such monsters are unnecessary; they are better suited to banks, where there are many small packets and reliability is valued above all.

    The price of a 10 Gb/s Ethernet adapter (with transceiver, depending on the technology) is about 20 thousand rubles.

    Option 2. Modular Fibre Channel switches. Directors.

    Modular FC switches are called directors. Here the choice is more modest; the firms can be counted on one's fingers: ATTO, Brocade, HP, IBM, Mellanox, QLogic, Cisco. The switches still consist of the same three elements.

    As for a budget option, there either isn't one, or there are only non-modular stackable switches. These cost from 100 thousand rubles, but they suffer from the same problem described at the beginning of this section: weak stacking bandwidth. Therefore we will not consider them further.

    If we look at modular FC switches, the price of each element starts at 30-50 thousand dollars: QLogic is cheaper, HP more expensive, and the rest you would have to price out yourself. Accordingly, a minimum configuration with one 16-port module costs from $100,000 for QLogic and $150,000 for HP.

    If we calculate the minimum price of one FC port, it comes to almost 60 thousand rubles, while for comparison the minimum price of the equivalent Ethernet port is only 10 thousand rubles. Now it is perhaps clear why FCoE was invented, even at the expense of responsiveness.

    The cost of an HBA with transceiver at 8 Gb/s starts from around 30 thousand rubles.

    Option 3. Modular InfiniBand switches.

    Modular InfiniBand switches are also called directors. The companies can be counted on the fingers of one hand: QLogic, Mellanox, Cisco.

    The budget option here is not exactly budget: a non-modular switch costs from $3,000 and contains up to 36 ports.

    Modular ones are even more expensive. If you take the Mellanox Grid Director line, the chassis will cost $10,000 and each 18-port 40G module another $10,000. The IS5000 line has a different pricing policy: the base chassis from $40,000, and each 36-port 10G module around $7,000. In some lines the chassis can be swapped for a larger one without changing the modules.

    The cost of a 10 Gb/s HCA adapter starts from around 30 thousand rubles.

    Option 4. SAS switches.

    An interesting enough development from LSI. The idea is that the external SAS port of a RAID controller is attached to a SAS switch, which makes all the disks in such a network available to the controller. But its practical use is severely limited by low scalability, so we will not consider it.

    Outcome: Summary of network selection

    We have considered the switches in a configuration where each serves only one protocol. Of course, modern switches exist that can serve several protocols, or even all of them at once. But for now let's not complicate things, and instead look at the final cost of each network alternative.

    | Parameter | Ethernet 1 Gb/s, non-modular | Ethernet + FCoE, Netgear 1G/10G | Ethernet + FCoE, HP 1G/10G | FC, QLogic 8 Gb/s | InfiniBand, Mellanox IS5000 10G/40G |
    | --- | --- | --- | --- | --- | --- |
    | Cost of a module, thousand rubles | 50 | 50 | 100 | 1000 | 210/300 |
    | Minimum cost of the switch, thousand rubles | 50 | 150 | 400 | 3000 | 1410 |
    | Minimum number of ports | 48 | 48/10 | 48/10 | 16 | 36/18 |
    | Maximum cost of 1 port, thousand rubles | 1 | 3/15 | 8/40 | 190 | 40/80 |
    | Maximum cost of 1 Gb/s per port, thousand rubles | 1 | 3/1.5 | 8/4 | 24 | 4/2 |
    | Maximum number of ports | 48 | 480/100 | 864/180 | 512 | 648/508 |
    | Minimum cost of 1 port, thousand rubles | 1 | 1.5/5 | 2/10 | 60 | 8/11 |
    | Minimum cost of 1 Gb/s per port, thousand rubles | 1 | 1/0.5 | 2/1 | 8 | 0.8/0.3 |
    | Cost of the client adapter, thousand rubles | built-in | built-in/20 | built-in/20 | 30 | 30 |
    | Cost of 1 Gb/s for the client, thousand rubles | 1 | 1/2.5 | 2/3 | 9 | 3.8/3.3 |

    As we can see, the price of each port falls as the switch grows, eventually reaching a completely democratic figure, no different from a non-modular switch. The cost of 1 Gb/s of speed on a 10G port is not much higher than the same speed on a 1G port, but it can serve much larger volumes of data, which means you can save on servers. FC, however, is quite expensive: the cost of one switch port alone comes to 60+ thousand rubles, and on top of that each client needs an HBA adapter at another 30 thousand rubles. InfiniBand, for all its hefty price, turned out the most cost-effective per 1 Gb/s delivered to the client; in fact some servers have a built-in adapter, in which case the price falls to 300-800 rubles per 1 Gb/s.

    What to use, each studio decides for itself based on its needs. FC is too expensive, and InfiniBand is too exotic; not every user needs 10G speeds anyway. In my view, modular Ethernet switches are the best solution for a studio. But in the case of cloud infrastructure, InfiniBand is the undisputed leader.

    Section 3. File System

    Now that we have somewhere to store the files, and can give our storage any volume and network speed, the question arises of how and in what form to store the files. Typically, to reduce the load on the server, special programs are introduced that check files out from the server to the local computer; the local computer processes the file, and it is then pushed back to the server. These are version control systems (SVN and the like). But I have never liked systems of this kind: they are very slow and complicate the job, and the versioning is rarely actually used. In my view such systems were invented for programmers working at a great distance from each other, and in a multimedia studio they are not very comfortable.

    Personally, I find it much easier to work when the file storage itself, rather than a local disk, is used for working files: all working files and versions are kept there. It turns out that exchange with such disks is usually even faster than with a local one (with a 10G network, up to 10 times faster), and you always have quick access to everything stored on the server, both the newest versions and old, long-forgotten ones. It even covers situations where one person cannot cope with a problem: he throws a link to the file into the chat to a colleague or the lead, who opens it in a second and sees the problem.

    But so that the server does not turn into a dump of files, they need to be well organized. For this, a structure of the following kind is used.

    On the server there is a folder containing all of our projects, and further down the tree looks like this, by nesting level:

    1. pr001_demoProject – project number and title;
    2. dp019_compositing – number and name of the task or department;
    3. depending on the task:
      1. ps001_Jon or ob001_tumba – number and name of the character or object;
      2. sc001_firstFlight or sr001_firstFlight – a scene if it is a film, or an episode if it is a series, plus its name. The name acts as a hint in case a person forgets what happens in the scene. An episode may in turn consist of scenes;
    4. sh001_flyInMyBlood – shot number and name;
    5. pr001_dp019_sc001_sh001_v077_likeProducer.comp – in the file name we encode the full path to the file, the file version, and a comment on what is interesting in this version and what was done. This is so that files saved in the wrong place can be easily identified and moved to where they should be.

    That is how easily and simply the most complex project can be broken down into its simplest components, making navigation and work a pleasure. The levels can be reordered as desired. The system is the same for everyone, so a compositor, for example, easily finds the files made by another department for the shot he needs.

    The numeric designation plays a key role in navigation: it keeps things in the correct order and simplifies the notation. But the main thing is that fixed numbering makes it possible to script and automate work with files, to the point that the user only has to go into the vault by hand in an emergency. Handing out work files becomes the project manager's job: a file is created automatically in the right place, with the right sources and outputs, under the right name, and a new version is generated by a script straight from the program's interface.
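
Because the numbering is fixed, names like these can be generated and verified by a script. Below is a hypothetical helper (the prefixes and three-digit field widths follow the example tree above; the function names are my own, not from any real pipeline):

```python
import re

def build_version_name(project, dept, scene, shot, version, comment, ext):
    """Compose a file name that encodes its full place in the project tree."""
    return (f"pr{project:03d}_dp{dept:03d}_sc{scene:03d}"
            f"_sh{shot:03d}_v{version:03d}_{comment}.{ext}")

# Pattern mirroring the naming convention from the text.
_NAME_RE = re.compile(
    r"pr(\d{3})_dp(\d{3})_sc(\d{3})_sh(\d{3})_v(\d{3})_(.+)\.(\w+)$")

def parse_version_name(name):
    """Recover project/department/scene/shot/version from a file name,
    e.g. to detect files saved in the wrong folder and move them home."""
    m = _NAME_RE.match(name)
    if not m:
        return None  # not a conforming name - flag it for cleanup
    p, d, sc, sh, v = (int(x) for x in m.groups()[:5])
    return {"project": p, "dept": d, "scene": sc, "shot": sh,
            "version": v, "comment": m.group(6), "ext": m.group(7)}

name = build_version_name(1, 19, 1, 1, 77, "likeProducer", "comp")
print(name)  # pr001_dp019_sc001_sh001_v077_likeProducer.comp
```

A watcher script can run `parse_version_name` over everything in storage and move each conforming file to the folder its own name says it belongs in.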

    Protection is also very simple and stark, at the level of access rights. Each department is allowed to work only in its own department's folder; access to other departments' files is either read-only or closed entirely. In addition, a script should periodically run over the storage and set some or all files to read-only, so that the department's staff have no choice but to create a new version. For departments with small files and active work this can run as often as every hour; for large files that are not edited as quickly, once a day.
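
The periodic lock-down pass described above might look something like this sketch. The one-hour threshold and the flat permission change are assumptions; a real deployment would also account for group rights and ACLs on the storage:

```python
import stat
import time
from pathlib import Path

def lock_old_versions(department_dir, max_age_seconds=3600):
    """Strip write permission from files not modified recently, so staff
    can only save new versions instead of overwriting old ones."""
    locked = []
    now = time.time()
    for path in Path(department_dir).rglob("*"):
        if not path.is_file():
            continue
        if now - path.stat().st_mtime > max_age_seconds:
            mode = path.stat().st_mode
            # Clear the write bits for owner, group and others.
            path.chmod(mode & ~stat.S_IWUSR & ~stat.S_IWGRP & ~stat.S_IWOTH)
            locked.append(path)
    return locked
```

Run hourly (via cron or a scheduler) for departments with small, fast-moving files, and daily for departments working with heavy files.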

    As a result we have an extremely simple, clear, fast and secure file system that can function as a project manager in its own right. Working with it is a pleasure.

    Section 4. Project Manager

    As for project managers, I have not seen any solutions optimized for CG studios. Everything I have seen (Jira, Cerebro, Bolero, MS Project, etc.) is cut from the same interface and the same ideology, optimized for programmers and developers, not for media production. All these programs simply ignore concepts such as pipeline and KPI (key performance indicators); on top of that, they lack a simple, intuitive way to record and display information per shot or scene. That is why most leads prefer to keep all their records in plain Excel tables: they turn out to be clearer and simpler than all these project managers.

    Let's see what is displayed in such a table, and why a table is so much simpler than these project managers:

    1. Visibility. One glance at the table and it is immediately clear what stage everything is at. This is why the majority choose Excel over these project managers. For visibility, each task in a shot is allocated a separate cell and painted in the color of its status. The sequence of these cells forms a kind of progress bar.
    2. Pipeline. The production of any shot is built in strict order by department (or task). Therefore it will be very difficult to skip a step or deviate from the common production scheme, whether through forgetfulness or for any other reason.
    3. KPI. An indicator of employee effectiveness, present in any self-respecting industry except Russian CG. But how do you evaluate the effectiveness of an artist? It turns out there are methods. Efficiency = complexity factor * number of frames in the task / time spent. The factor here is not an abstract value but one determined by experiment, and it means how many times longer a more difficult task will take than a less complicated one. Any task can be broken down by complexity in this way. For example, stereo conversion: the complexity factor is 1 if the shot can be converted automatically, 2 if the scene is static and the camera barely moves, 3 if there is 1 character in the scene, 4 if there are many characters, 5 if there are very many characters or the scene has complex geometry. The same with tracking: 1 if the scene is purely static, 5 if objects had to be tracked by hand. Animation: 1 for simple solid objects, 2 for a mechanism, 3 for characters, 5 for a many-armed monster. Modeling: 1 for vases and plates, 5 for a complex robot rigged for animation. As you can see, again everything is simple. KPI can vary greatly from shot to shot, all else being equal: it depends not only on complexity but also on the mood and well-being of the performer. But on a monthly scale it is very predictable and lets you plan production accurately.
    4. Similar shots. Fortunately, scenes contain a lot of similar shots with the same graphics. Unfortunately, I have met many supervisors who distribute similar shots to different artists, which is not an effective use of resources. If one employee needs, say, 6 hours for one shot, and only 1 more hour to carry the graphics over to each similar shot, then with 5 similar shots in the scene that employee will spend only 10 hours. If you scatter them across different employees, 6 hours * 5 = 30 man-hours will be spent, and each will produce a different picture. Therefore it is better to catch similar shots and keep a record of them.

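
The KPI formula and the similar-shot arithmetic above fit in a few lines. The coefficient table is illustrative, taken from the stereo-conversion examples in the text; real factors would be calibrated by experiment, as the text says:

```python
# Illustrative complexity factors for stereo conversion, per the text.
STEREO_FACTORS = {
    "auto": 1, "static": 2, "one_character": 3,
    "many_characters": 4, "complex": 5,
}

def kpi(complexity_factor, frames, hours):
    """Efficiency = complexity factor * frames in the task / time spent.
    Measures weighted frames of work delivered per hour."""
    return complexity_factor * frames / hours

def similar_shot_hours(base_hours, extra_hours, n_shots):
    """Time for one artist to do a shot plus all its similar shots."""
    return base_hours + extra_hours * (n_shots - 1)

# Two 120-frame shots: a static plate done in 4 h, a crowd scene in 8 h.
print(kpi(STEREO_FACTORS["static"], 120, 4))           # 60.0
print(kpi(STEREO_FACTORS["many_characters"], 120, 8))  # 60.0 - same KPI despite double the time
# The 6 h + 1 h-per-similar-shot example from the text, 5 shots total.
print(similar_shot_hours(6, 1, 5))  # 10 - versus 6 * 5 = 30 h if split across artists
```

Equal KPI for the two shots shows the point of the factor: the crowd scene took twice as long, but the artist was no less effective.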
    But at some point the capabilities of office suites are no longer enough, and this is where a web-based project manager can come to the rescue.

    It will help solve the following issues:

    1. Automation. Excel can certainly automate a lot, but not as much as a purpose-written program. Having collected statistics, the machine can nominate artists, track consistency of performance, and estimate the time to pick up and execute a task.
    2. Planning. Once statistics on employee performance exist, performers can be assigned automatically, and the entire project can be fully planned and costed before production starts, based on statistics rather than on "I think so".
    3. Sharing. It is certainly better when a task does not have to be shouted over someone's shoulder to be assigned. Artists should have access to the project manager to read the task description, see the sketches and supporting materials, mark the task as taken, hand it in, or report an error. Thus the job of tracking the project is distributed among the employees.
    4. Web application. Since the application runs in an ordinary browser, there is no need to install and update it on the performers' computers. Such applications scale by adding extra web servers, and thanks to that scalability the project manager can also take on maintenance tasks such as assembling scenes from their component parts.
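
As a sketch of the automatic planning idea in point 2, the following toy scheduler estimates each task's duration from an artist's measured KPI and greedily hands every task to whoever is predicted to free up first. The names and figures are invented for illustration:

```python
import heapq

def plan(tasks, artists_kpi):
    """tasks: list of (name, complexity_factor, frames);
    artists_kpi: {artist: measured KPI, i.e. weighted frames per hour}.
    Returns ({artist: [(task, hours), ...]}, overall finish time)."""
    # Priority queue of (time when the artist becomes free, artist name).
    free_at = [(0.0, artist) for artist in sorted(artists_kpi)]
    heapq.heapify(free_at)
    schedule = {artist: [] for artist in artists_kpi}
    for name, complexity, frames in tasks:
        t, artist = heapq.heappop(free_at)
        # Predicted duration follows from the KPI formula rearranged.
        hours = complexity * frames / artists_kpi[artist]
        schedule[artist].append((name, hours))
        heapq.heappush(free_at, (t + hours, artist))
    makespan = max(t for t, _ in free_at)
    return schedule, makespan

tasks = [("sh001", 2, 60), ("sh002", 4, 60), ("sh003", 1, 60)]
schedule, makespan = plan(tasks, {"anna": 60, "boris": 30})
print(makespan)  # 8.0 - the project finishes when the slowest artist does
```

A real planner would also respect the pipeline order between departments and the similar-shot grouping described above, but the core idea is the same: plan from measured numbers, not from "I think so".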

    The version described here was made for a studio engaged in compositing graphics over footage. With slight modification it also suits studios doing full CG production.

    Actually, any student web programmer who knows how to work with databases can write such a project manager. The most difficult part is getting the database onto the screen and the data into the database.


    As a result, we have file storage that easily scales to huge sizes and speeds; a network big enough to encompass thousands of employees at high speed; a simple file system, clear to everyone, that breaks the project down into simple components; and a no less simple project manager that lets you quickly assess the status of the project and automate most routine operations.

    Hopefully, now no one will have any technical difficulties with the growth of the company and the volume of computer graphics, and nothing will distract the artists from their main task. When the technical side is sorted out, it remains to hope that the artistic side will not let them down either.

    If you enjoyed the article, you can help spread it by clicking Like on the social buttons. And if it brought you real benefit and you want to thank the author, or inspire me to new and even more interesting articles, you can make a donation. All money raised will go to improving the site and meeting its needs.

    26th May 2013
