NetApp ONTAP Clustered Hardware Architecture Tutorial Video

In the last lesson we covered the scalability limitations of 7-Mode, which are caused by the fact that we can only have two nodes, or controllers, in the same system. NetApp realized the scalability issue was there and wanted an operating system that supported having more than two nodes as part of the same system, or cluster. They had a choice of how to do this: they could either upgrade 7-Mode to support more nodes in a cluster, or they could acquire another company that already had this capability. They went for the second option, because upgrading 7-Mode to support multiple nodes in a cluster would have required a complete rewrite of the operating system. Instead they acquired Spinnaker Networks in 2003, and that gave them the Clustered Data ONTAP capability.

So let's have a look at the Clustered Data ONTAP hardware architecture. It's very similar to 7-Mode, with one key difference that we'll get to in a minute. Cluster Mode is also comprised of controllers that are paired in an HA pair. Here you can see we've got controller 1 and controller 2, configured in an HA pair exactly the same way as they would be in a 7-Mode system. Controller 1 is connected to its shelves, controller 2 is connected to its shelves, they're also connected to each other's shelves, and we've got the HA cabling between them so each can detect if the other controller goes down. That architecture looks exactly the same as in 7-Mode, and high availability works the same way as in 7-Mode as well.

Then, to get more nodes in our cluster, we add an additional HA pair. Now we've got controller 3 and controller 4, which are also cabled to each other's disk shelves. Note that we don't connect every single controller to every other controller's disk shelves (we don't have controller 1 connected to controller 2, 3 and 4's shelves), because to do that we would need loads of extra ports on the disk shelves and the controllers, and it wouldn't be manageable. So we don't connect every controller to every disk shelf; we just have our controllers connected up in HA pairs, the same way as in 7-Mode.

The way we enable the cluster, and the difference between Cluster Mode and 7-Mode, is that we put in a pair of cluster interconnect switches. These are Ethernet switches, we have a pair of them, and every controller in the cluster is connected to both cluster interconnect switches. This is what enables connectivity between all the nodes and makes it a single-system cluster.

So let's look at how client connectivity works. The same as in 7-Mode, our disks are owned by one and only one controller. In our example here we've got data set A1, which is owned by controller 1, and clients can access that data over a port on controller 1. The difference between Cluster Mode and 7-Mode is that clients can also access that same data set through controller 2; they can hit a port on either controller 1 or controller 2, or on any controller in the entire cluster, and if that port is on a different controller than the one that actually owns the disks, the traffic will go over the cluster interconnect. Another thing we can do is mirror the data to different controllers in our cluster. That means the data no longer has to go over the cluster interconnect; it can be hit directly on any controller that holds a copy. So Cluster Mode gives us much better scalability than we got with 7-Mode.
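Before we move on, a quick way to see this architecture from the command line. On a Cluster Mode system the clustershell can list every node in the cluster and the status of each HA pair. This is a minimal sketch, assuming a hypothetical cluster named "cluster1"; both commands are standard in Clustered Data ONTAP, though the exact output columns vary by version.

    cluster1::> cluster show
    cluster1::> storage failover show

cluster show lists every node along with its health and cluster eligibility, and storage failover show confirms, per HA pair, whether each partner is currently able to take over for the other.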
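Client access in Cluster Mode is provided through logical interfaces (LIFs) that live on physical ports of any node, which is what lets a client reach a data set through a controller that doesn't own the disks. Another hedged sketch on the same hypothetical "cluster1" system:

    cluster1::> network interface show

This lists the data LIFs and cluster LIFs; traffic arriving on a data LIF on one node, for data owned by another node, is forwarded over the cluster interconnect switches we just described.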
The first improvement is in capacity scaling. Clustered Data ONTAP can scale to 24 nodes if you're only using NAS protocols, that's CIFS and NFS. If you're using SAN protocols (Fibre Channel, Fibre Channel over Ethernet, or iSCSI), it can scale up to 8 nodes. Again, don't worry about those NAS and SAN protocols; I'll be covering what they are in depth in a later module. Because we can have up to 24 nodes in a single-system cluster, a single cluster can currently be scaled up to 138 petabytes, which is much higher than is possible in 7-Mode. Also, disk shelves and nodes can be added non-disruptively, so you can start small and then grow bigger over time. We could start with a single-node or two-node cluster and then go up to a four-node or six-node cluster, etc. The nodes, or controllers, in an HA pair have to be the same model, but you can have different model controllers in the same cluster. For example, controllers 1 and 2 could be one model and controllers 3 and 4 could be a different model within the same cluster.

For operational scaling, a cluster is managed as a single system. If we wanted to have 24 nodes with 7-Mode, that would be 12 HA pairs, so we would have 12 different systems to manage. In Cluster Mode, 24 nodes (if you're only using NAS protocols) can be managed as a single-system cluster.

The cluster can also be virtualized into different virtual storage systems, which are called SVMs (Storage Virtual Machines); the older name for them is vservers. In the current documentation they're usually called SVMs, but they used to be called vservers, and at the command line they're usually referred to as vservers. Your SVMs, or vservers, appear as a single system to your clients. SVM-level administrators can be created with access to only their own SVM. For example, we could have an SVM for department A and an SVM for department B. Department A administrators would only have access to the department A SVM, and vice versa for department B. In fact, department B wouldn't even know the department A SVM existed, so it's very secure.

Data can be moved easily and non-disruptively between all nodes in the cluster, and whenever we move data, that move is carried out over the cluster interconnect. Again, this is much better than it was in 7-Mode. In 7-Mode it's easy to move data between sets of disks on the same node, but not between different nodes or different systems. In Cluster Mode you can easily and non-disruptively move data anywhere throughout the entire cluster, so if you need to move your data to a larger-capacity set of disks, or you want to move it to higher or lower performance disks, that is very easy.

While we're talking about multi-tenancy with the SVMs: multi-tenancy is supported in 7-Mode as well, with vFilers, but the way multi-tenancy is implemented in Cluster Mode is much better. Cluster Mode was designed for multi-tenancy right from the beginning, and it's very easy to manage your different SVMs; you manage them just the same as if it was a single system. 7-Mode does support multi-tenancy, but the way it's been implemented is not as smooth.
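To make the delegated administration idea concrete, here is a hedged sketch of creating an SVM-scoped administrator from the clustershell. The SVM name "svm_deptA" and user name "deptA_admin" are hypothetical, and the exact parameter names vary slightly between Data ONTAP versions, but "vsadmin" is the standard built-in SVM administrator role.

    cluster1::> security login create -vserver svm_deptA -user-or-name deptA_admin -application ssh -authmethod password -role vsadmin

An administrator created this way can log in only to svm_deptA and manages only that SVM's volumes, shares, and exports; they have no visibility of any other SVM in the cluster.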
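The non-disruptive data movement described above comes down to a single command. In this sketch, with hypothetical names again, volume "vol_data" in SVM "svm_deptA" is moved to "aggr_node3", a set of disks owned by node 3, while clients stay connected throughout.

    cluster1::> volume move start -vserver svm_deptA -volume vol_data -destination-aggregate aggr_node3
    cluster1::> volume move show

The move runs in the background over the cluster interconnect, and volume move show lets you watch its progress. This is how you would typically migrate a volume to larger-capacity or higher-performance disks without an outage.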
The last type of scaling benefit I want to cover is performance scaling. In Cluster Mode your data processing is spread throughout all the different nodes in the cluster, and all the nodes, or controllers, have their own CPU, memory, and network resources. The system provides near-linear performance scaling and load balancing across the cluster, meaning that if you had a two-node cluster and then upgraded it to a four-node cluster, you've roughly doubled your performance. Data can also be mirrored or cached across multiple nodes in the cluster.
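The mirroring mentioned here is done with load-sharing (LS) SnapMirror in Clustered Data ONTAP. A hedged sketch with hypothetical names: "vol_data", owned by node 1, gets a read-only copy "vol_data_ls2" on an aggregate owned by node 2, so reads can be served locally from either node. Note this applies to the Data ONTAP 8 era covered here; later ONTAP 9 releases restrict LS mirrors to SVM root volumes.

    cluster1::> volume create -vserver svm_deptA -volume vol_data_ls2 -aggregate aggr_node2 -type DP
    cluster1::> snapmirror create -source-path svm_deptA:vol_data -destination-path svm_deptA:vol_data_ls2 -type LS
    cluster1::> snapmirror initialize-ls-set -source-path svm_deptA:vol_data

initialize-ls-set performs the first transfer to every LS destination of the source volume; after that, periodic updates keep the read-only copies in sync.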

