Posts Tagged ‘SmartPools’

Top 3 operational differences in EMC Isilon OneFS 7.1.1

Kirsten Gantenbein, Principal Content Strategist at EMC Isilon Storage Division

As EMC Isilon OneFS 6.5 and OneFS 7.0 reach their end-of-service life (EOSL) this year, many EMC Isilon customers will be upgrading to OneFS 7.1.1. If you plan to upgrade, several new features, enhancements, and operational changes may affect your day-to-day administration tasks. We want you to be aware of some of the differences that impact upgrade planning, because they may require pre-upgrade tasks. You can find detailed information in the OneFS 7.1.1 Behavioral and Operational Differences and New Features document on the Isilon Community and in the OneFS 7.1.1 release notes on the EMC Online Support site.

Meanwhile, here are the top three changes for you to prepare for:

  • Access zones: directory configuration and NFS access
  • SmartPools®: node pool configuration
  • Role-based access controls

Access zones

In OneFS 6.5, access to cluster resources was controlled separately through protocols such as SMB, NFS, and SSH. Beginning in OneFS 7.0, user access to the cluster is controlled through access zones. With access zones, you can partition the cluster configuration into self-contained units and configure a subset of parameters as a virtual cluster with its own set of authentication providers, user mapping rules, and SMB shares. The built-in access zone is the System zone, which by default provides the same behavior as OneFS 6.5, using all available authentication providers, NFS exports, and SMB shares.

In OneFS 7.1.1, however, you cannot configure NFS exports in multiple access zones; NFS access is restricted to the System zone. (In OneFS 7.2, NFS is zone-aware and can be used across multiple access zones.)

Access zones also require a unique top-level root directory in OneFS 7.1.1: the root directories, or base paths, of different access zones cannot overlap with each other.
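To make the overlap rule concrete, here is a minimal Python sketch, with hypothetical zone names and base paths, that flags any pair of zones where one base path is nested inside (or equal to) another. It is an illustration of the rule, not an Isilon tool:

    from itertools import combinations
    from posixpath import normpath

    def overlaps(a, b):
        # Two base paths overlap if one equals, or is nested inside, the other.
        a_parts = normpath(a).split("/")
        b_parts = normpath(b).split("/")
        shorter = min(len(a_parts), len(b_parts))
        return a_parts[:shorter] == b_parts[:shorter]

    # Hypothetical access zone configuration -- substitute your own base paths.
    zones = {
        "zone-hr": "/ifs/data/hr",
        "zone-eng": "/ifs/data/eng",
        "zone-builds": "/ifs/data/eng/builds",  # nested inside zone-eng
    }

    for (za, pa), (zb, pb) in combinations(zones.items(), 2):
        if overlaps(pa, pb):
            print(f"Overlap: {za} ({pa}) and {zb} ({pb})")

Running this against the hypothetical configuration prints the zone-eng/zone-builds pair, which is exactly the kind of layout that must be restructured before upgrading.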

An important note!

If you currently use multiple access zones on your OneFS 7.0 or OneFS 7.1 cluster, you must check your access zone configuration for overlapping directories. If base paths overlap when you upgrade to OneFS 7.1.1, all previously created access zones will be assigned a base path of /ifs. To prevent this scenario, refer to OneFS 7.1.1 and Later: Best Practices for Upgrading Clusters Configured with Access Zones before upgrading.

SmartPools

In OneFS 6.5, a group of nodes is called a disk pool, and different types of drives can be assigned to the same disk pool. SmartPools has changed in several ways since then. Beginning in OneFS 7.0, a group of nodes is called a node pool, and a group of disks within a node pool is called a disk pool. Also beginning in OneFS 7.0, nodes are automatically assigned to node pools based on node type; this is called autoprovisioning. Node pools can include only drives of the same equivalence class (review the equivalence class of nodes in the Isilon Supportability & Compatibility Guide). However, you can group multiple node pools into a higher-level grouping called a tier. Finally, in the OneFS 7.1.1 web administration interface, SmartPools appears as a tab within Storage Pools.

Disk pools can no longer be viewed or targeted directly through the OneFS 7.1.1 web administration interface or the command-line interface. Instead, the smallest unit of storage that can be administered in OneFS 7.0 and later is a node pool; disk pools are managed exclusively by the system through autoprovisioning.
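To illustrate what autoprovisioning does conceptually, the Python sketch below groups a made-up node inventory into node pools by equivalence class, with tiers as an optional higher-level grouping. The node names and class labels are hypothetical, and this is not OneFS code:

    from collections import defaultdict

    # Hypothetical inventory: (node name, equivalence class).
    nodes = [
        ("node-1", "X400-48TB"),
        ("node-2", "X400-48TB"),
        ("node-3", "NL400-72TB"),
        ("node-4", "X400-48TB"),
        ("node-5", "NL400-72TB"),
    ]

    # Autoprovisioning, conceptually: one node pool per equivalence class.
    node_pools = defaultdict(list)
    for name, eq_class in nodes:
        node_pools[eq_class].append(name)

    for eq_class, members in node_pools.items():
        print(f"node pool [{eq_class}]: {', '.join(members)}")

    # A tier is a higher-level grouping of node pools (hypothetical assignment).
    tiers = {"performance": ["X400-48TB"], "archive": ["NL400-72TB"]}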

An important note!

If you are running OneFS 6.5 or OneFS 6.5.5 and have disk pools of mixed node types, you must reconfigure them into supported OneFS 7.0 and later node pool configurations well in advance of upgrading to OneFS 7.1.1. Supported node pool configurations must contain nodes of the same type, according to their node equivalence class.

Role-based access control (RBAC)

In OneFS 6.5, you can grant web and SSH login and configuration access to non-root users by adding them to the admin group. In OneFS 7.0 and later, the admin group is replaced with the administrator role using role-based access control (RBAC). RBAC enables you to create and configure additional roles. A role is a collection of OneFS privileges that are granted to members of that role as they log in to the cluster. Only root and admin user accounts can perform administrative tasks and add members to roles. OneFS comes preloaded with built-in roles for security, auditing, and system administration, and you can create custom roles with their own sets of privileges.
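Conceptually, a role is nothing more than a named set of privileges with a member list; at login, a user is granted the union of the privileges from every role they belong to. The Python sketch below models that idea with illustrative names only (see the OneFS documentation for the real built-in roles and privileges):

    from dataclasses import dataclass, field

    @dataclass
    class Role:
        # A named collection of privileges granted to members at login.
        name: str
        privileges: set
        members: set = field(default_factory=set)

    # Illustrative role and privilege names, not the actual OneFS set.
    audit = Role("AuditAdmin", {"read_config", "view_logs"})
    sysadmin = Role("SystemAdmin", {"read_config", "write_config", "restart_services"})
    sysadmin.members.add("jsmith")

    def granted(user, roles):
        # Union of privileges across every role the user belongs to.
        return set().union(*(r.privileges for r in roles if user in r.members))

    print(granted("jsmith", [audit, sysadmin]))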

For information about role-based access, including a description of roles and privileges, see Isilon OneFS 7.0: Role-Based Access Control.

An important note!

For OneFS 6.5 and OneFS 6.5.5 users upgrading to OneFS 7.1.1, make sure you add existing administrators to an administrator role.

For more information about OneFS 7.1.1

Visit these links for more information about:

Start a conversation about Isilon content

Have a question or feedback about Isilon content? Visit the online EMC Isilon Community to start a discussion. If you have questions or feedback about this blog, contact us at isi.knowledge@emc.com. To provide documentation feedback or request new content, contact isicontent@emc.com.


The top 3 operational differences between EMC Isilon OneFS 6.5 and OneFS 7.0

Kirsten Gantenbein, Principal Content Strategist at EMC Isilon Storage Division

Attention all current EMC® Isilon® OneFS 6.5 users: OneFS 6.5 will reach its end of service life (EOSL) on June 30, 2015. OneFS 7.0 introduces several new features, enhancements, and operational changes. If you need to upgrade to OneFS 7.0, you might be wondering what’s different about this version and how these differences will affect your day-to-day administrative tasks. You can learn more by looking at the Administrative Differences in OneFS 7.0 white paper.

The top three changes that OneFS 6.5 users should prepare for are:

  • Administration using role-based access control (RBAC)
  • Authentication using access zones
  • Managing groups of nodes in SmartPools

Role-based access control

In OneFS 6.5, you can grant web and SSH login and configuration access to non-root users by adding them to the admin group. The admin group is replaced with the administrator role in OneFS 7.0 using RBAC. A role is a collection of OneFS privileges, usually associated with a configuration subsystem, that are granted to members of that role as they log in to the cluster.

For information about role-based access, including a description of roles and privileges, see Isilon OneFS 7.0: Role-Based Access Control.

An important note!

After you upgrade to OneFS 7.0, make sure you add existing administrators to an administrator role.

Access Zones

In OneFS 7.0, all user access to the cluster is controlled through access zones. With access zones, you can partition the cluster configuration into self-contained units and configure a subset of parameters as a virtual cluster with its own set of authentication providers, user mapping rules, and SMB shares. The built-in access zone is the “System” zone, which by default provides the same behavior as OneFS 6.5, using all available authentication providers, NFS exports, and SMB shares.

For information about access zones, see the OneFS 7.0.2 Administration Guide.

SmartPools

In OneFS 6.5, a group of nodes is called a disk pool. In OneFS 7.0, a group of nodes is called a node pool, and a group of disks in a node pool is called a disk pool. Also, Isilon nodes are automatically assigned to node pools in the cluster based on the node type. This is called autoprovisioning. Disk pools can no longer be viewed or targeted directly through the OneFS 7.0 web administration interface or the command-line interface. Instead, the smallest unit of storage that can be administered in OneFS 7.0 is a node pool. Disk pools are managed exclusively by the system through autoprovisioning.

An important note!

Before you upgrade to OneFS 7.0, you must configure disk pools into a supported node pool configuration. Disk pools must contain nodes of the same type, according to their node equivalence class. Disk pools that contain a mixture of node types must be reconfigured.

For information about how to prepare your Isilon cluster for upgrade to OneFS 7.0, see the Isilon OneFS 7.0.1 – 7.0.2 Upgrade Readiness Checklist.

For more information about OneFS 7.0

Visit these links for more information about:

Start a conversation about Isilon content

Have a question or feedback about Isilon content? Visit the online EMC Isilon Community to start a discussion. If you have questions or feedback about this blog, contact us at isi.knowledge@emc.com. To provide documentation feedback or request new content, contact isicontent@emc.com.


Understanding Global Namespace Acceleration (GNA)

Colin Torretta, Senior Technical Writer

With the proliferation of solid state drives (SSDs) in data centers across the world, companies are finding unique and exciting ways to take advantage of the high speed and low latency of SSDs. Within the EMC® Isilon® OneFS® operating system, one innovative use of SSDs is Global Namespace Acceleration (GNA). GNA is a feature of OneFS that increases performance across your entire cluster by using SSDs to store read-only copies of file metadata, even for node pools that don’t contain SSDs.

GNA is managed through the SmartPools™ software module of the OneFS web administration interface. SmartPools enables storage tiering and the ability to aggregate different types of drives (such as SSDs and HDDs) into node pools. When GNA is enabled, all SSDs in the cluster are used to accelerate metadata reads across the entire cluster. Isilon recommends at least one SSD per node as a best practice, and two SSDs per node are preferred. However, customers with a mix of drive types can benefit from metadata read acceleration with GNA regardless of how SSDs are placed across the cluster. When possible, GNA stores metadata in the same node pool that contains the associated data. If there are no SSDs in that node pool, GNA stores the metadata in a randomly selected node pool that contains SSDs. This means that as long as SSDs are available somewhere in the cluster, any node pool can benefit from GNA.
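The placement rule reduces to a few lines: prefer SSDs in the node pool that already holds the data, and otherwise fall back to a randomly chosen SSD-equipped pool. The Python sketch below illustrates that behavior with made-up pool names; it is a conceptual model, not OneFS internals:

    import random

    # Hypothetical node pools: name -> whether the pool contains SSDs.
    has_ssds = {"perf-pool": True, "archive-pool": False, "mixed-pool": True}

    def gna_metadata_pool(data_pool):
        # Prefer the pool holding the data, if it has SSDs.
        if has_ssds.get(data_pool):
            return data_pool
        # Otherwise pick any SSD-equipped pool at random.
        ssd_pools = [p for p, ssd in has_ssds.items() if ssd]
        return random.choice(ssd_pools)

    print(gna_metadata_pool("archive-pool"))  # 'perf-pool' or 'mixed-pool'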

For more information about GNA, see the “Storage Pools” section of the OneFS web administration and CLI administration guides.

Important considerations when using GNA

Here are some important considerations to keep in mind when determining whether GNA can benefit your workflow.

  • Use GNA for cold data workflows. Certain workflows benefit more from the performance gains that GNA provides. For example, workflows that require heavy indexing of “cold data” (archive data stored on disk and left unmodified for extended periods of time) benefit the most from metadata read acceleration. GNA does not provide any additional benefit to clusters that are all-SSD, because all metadata is already stored on SSDs.
  • SSDs must account for a minimum of 1.5% of the total space on your cluster. To use GNA, 20% of the nodes in your cluster must contain SSDs, and SSDs must account for a minimum of 1.5% of the total space on your cluster, with 2% being strongly recommended. These thresholds ensure that GNA does not overwhelm the SSDs on your cluster. Failure to maintain these requirements results in GNA being disabled and metadata read acceleration being lost. To enable GNA again, metadata copies must be rebuilt, which can take time.
  • Consider how new nodes affect the total cluster space. Adding new nodes to your cluster changes the percentage of nodes with SSDs and the share of total space on SSDs. Keep the thresholds above in mind whenever you add new nodes (see the sketch after the figure below) to avoid GNA being disabled and the metadata copy being immediately deleted.
  • Do not remove the extra metadata mirror. When GNA is enabled, an SSD is set aside as an additional metadata mirror, in addition to the existing mirrors set by your requested protection, which is determined in SmartPools settings. A common misunderstanding is that the SSD is an “extra” mirror and it can be safely removed without affecting your cluster. In reality, this extra metadata mirror is critical to the functionality of GNA, and removing it causes OneFS to rebuild the mirror on another drive. See the graphic below for information on the number of metadata mirrors per requested protection when using GNA. For more information about requested protection, see the “Storage Pools” section of the OneFS Web Administration Guide.

The number of metadata copies required by GNA to achieve read acceleration per requested protection level in OneFS.
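Because dropping below these thresholds disables GNA, it is worth rechecking them whenever the cluster changes shape. The Python sketch below, using hypothetical capacities, encodes the two requirements described in the list above:

    def gna_requirements_met(nodes_total, nodes_with_ssd, ssd_tb, total_tb):
        # GNA needs SSDs in >= 20% of nodes, and SSD space >= 1.5% of
        # total cluster space (2% is strongly recommended).
        return (nodes_with_ssd / nodes_total >= 0.20
                and ssd_tb / total_tb >= 0.015)

    # Hypothetical 12-node cluster: 3 SSD-equipped nodes, 18 TB of SSD
    # out of 1000 TB total -- 25% of nodes and 1.8% of space, so GNA holds.
    print(gna_requirements_met(12, 3, 18, 1000))  # True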

 


How to keep your EMC Isilon cluster from reaching capacity

Kirsten Gantenbein, Principal Content Strategist at EMC Isilon Storage Division

It’s important to maintain enough free space on your EMC® Isilon® cluster to ensure that data is protected and workflows are not disrupted. At a minimum, keep one node’s worth of free space available in case you need to protect the data on a failing drive.

When your Isilon cluster fills to more than 90% of capacity, cluster performance is affected. At 98% capacity, more serious issues can occur, such as substantially slower performance, failed file operations, the inability to write or delete data, and the potential for data loss, and these issues can take several days to resolve. If you have a full or nearly full cluster, or need assistance with maintaining enough free space, contact EMC Isilon Technical Support.
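As a rough self-check of these thresholds, the Python sketch below compares a cluster’s usage against the 90% and 98% marks and the one-node reserve. The capacities are hypothetical, and this is a back-of-the-envelope illustration, not a support tool:

    def capacity_status(used_tb, total_tb, node_tb):
        # Flag a cluster against the thresholds described above.
        pct = 100 * used_tb / total_tb
        free_tb = total_tb - used_tb
        if pct >= 98:
            return f"CRITICAL: {pct:.1f}% used -- contact Isilon Technical Support"
        if pct >= 90:
            return f"WARNING: {pct:.1f}% used -- performance is affected"
        if free_tb < node_tb:
            return "WARNING: less than one node's worth of free space in reserve"
        return f"OK: {pct:.1f}% used, {free_tb:.0f} TB free"

    # Hypothetical 8-node cluster of 36 TB nodes.
    print(capacity_status(used_tb=250, total_tb=288, node_tb=36))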

Fortunately, there are several best practices you can follow to help prevent your Isilon cluster from becoming too full. These are detailed in the “Best Practices Guide for Maintaining Enough Free Space on Isilon Clusters and Pools” (requires login to the EMC Online Support site). Some of these best practices are summarized in this blog post.

Monitoring cluster capacity

To prevent your cluster from becoming too full, monitor your cluster capacity. There are several ways to do this. For example, you can configure email event notification rules in the EMC Isilon OneFS® operating system to notify you when your cluster is reaching capacity. Watch the video “How to Set Up Email Notifications in OneFS When a Cluster Reaches Capacity” for a demonstration of this procedure.

Another way to monitor cluster capacity is to use EMC Isilon InsightIQ™ software. If you have InsightIQ licensed on your cluster, you can run FSAnalyze jobs in OneFS to create data for InsightIQ’s file system analytics tools. You can then use InsightIQ’s Dashboard and Performance Reporting to monitor cluster capacity. For example, Performance Reports enable you to view information about the activity of the nodes, networks, clients, disks, and more. The Storage Capacity section of a performance report displays the used and total storage capacity for the monitored cluster over time (Figure 1).


Figure 1: The Storage Capacity section of a Performance Report in InsightIQ 3.0.

For more information about InsightIQ Performance Reports, see the InsightIQ User Guides, which can be found on the EMC Online Support site.

To learn about additional ways to monitor cluster capacity, such as using SmartQuotas, read “Best Practices Guide for Maintaining Enough Free Space on Isilon Clusters and Pools.”

More best practices

Follow these additional tips to maintain enough free space on your cluster:

  • Manage your data
    Regularly delete data that is rarely accessed or used.
  • Manage Snapshots
    Snapshots, which are used for data protection in OneFS, can take up space if they are no longer needed. Read the best practices guide for several best practices about managing snapshots, or read the blog post “EMC Isilon SnapshotIQ: An overview and best practices.”
  • Make sure all nodes in a node pool or disk pool are compatible
    If you have a node pool that contains a mix of different node capacities, you can receive “cluster full” errors even if only the smallest node in your node pool reaches capacity. To avoid this scenario, ensure that nodes in each node pool or disk pool are of compatible types. Read the best practices guide for information about node compatibility and for a procedure to verify that all nodes in each node pool are compatible.
  • Enable Virtual Hot Spare
    Virtual Hot Spare (VHS) keeps space in reserve in case you need to move data off of a failing drive (smartfail). VHS is enabled by default. For more information about VHS, read the knowledgebase article, “OneFS: How to enable and configure Virtual Hot Spare (VHS) (88964)” (requires login to the EMC Online Support site).
  • Enable Spillover
    Spillover allows data that is being sent to a full pool to be diverted to an alternate pool (illustrated in the sketch after this list). If you have licensed EMC Isilon SmartPools™ software, you can designate a spillover location. For more information about SmartPools, read the OneFS Web Administration Guide.
  • Add nodes
    If you want to scale out your storage to add more free space, contact your sales representative.
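Conceptually, spillover is just a fallback write target: if the preferred pool cannot hold the incoming data, the write is diverted to the designated alternate. A minimal Python sketch with hypothetical pool names and free-space figures:

    # Hypothetical free space per pool, in TB.
    free_tb = {"tier-1": 0.2, "tier-2": 40.0}

    def choose_pool(preferred, needed_tb, spillover_target):
        # Write to the preferred pool if it has room; otherwise spill over.
        if free_tb[preferred] >= needed_tb:
            return preferred
        return spillover_target

    print(choose_pool("tier-1", 1.0, spillover_target="tier-2"))  # 'tier-2'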

If you have questions or feedback about this blog or the video described in it, send an email to isi.knowledge@emc.com. To provide documentation feedback or request new content, send an email to isicontent@emc.com.