10 NetApp iSCSI Best Practices

If you're using NetApp iSCSI, it's important to follow best practices in order to ensure optimal performance and avoid potential issues. Here are 10 of the most important best practices to follow.

iSCSI is a block-based storage protocol that uses an Ethernet network to connect storage devices to servers. NetApp iSCSI solutions offer high performance, scalability, and compatibility with a wide range of applications and operating systems.

This article provides an overview of 10 NetApp iSCSI best practices, including configuring iSCSI, optimizing performance, and troubleshooting common issues. By following these best practices, you can maximize the performance and efficiency of your NetApp iSCSI storage solution.

1. Use a dedicated network for iSCSI traffic

iSCSI is a block-level storage protocol that uses TCP/IP to transmit data. This means that iSCSI traffic competes for bandwidth with other types of traffic on the network, which can lead to performance issues.

By using a dedicated network for iSCSI traffic, you can ensure that your storage traffic has the bandwidth it needs to perform optimally. This NetApp best practice will help you avoid potential performance bottlenecks and ensure that your storage infrastructure is running as efficiently as possible.

2. Configure the MTU size to 9000 bytes on all switches and hosts

The default MTU size for Ethernet is 1,500 bytes, which is fine for most traffic. iSCSI, however, moves large blocks of data, so at the default MTU each read or write is split across many small frames. That splitting increases per-packet overhead: latency rises and CPU usage climbs as the host segments and reassembles the data.

Configuring the MTU size to 9000 bytes on all switches and hosts ensures that the iSCSI traffic can flow without being fragmented, resulting in improved performance.
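Switch-side syntax varies by vendor and model, so treat the following as a hedged illustration only. This Cisco Catalyst-style sketch assumes a hypothetical interface name (`GigabitEthernet1/0/1`); consult your switch documentation for the exact commands:

```shell
# Hypothetical Cisco Catalyst-style configuration -- syntax differs
# between vendors, platforms, and software releases.
configure terminal
system mtu jumbo 9000            # enable jumbo frames switch-wide (Catalyst-style)
interface GigabitEthernet1/0/1   # placeholder: the port facing the iSCSI host/array
  mtu 9000                       # per-interface MTU on platforms that support it
end
```

Whatever the vendor syntax, the key point is that every switch port in the iSCSI path must carry the 9000-byte MTU; a single port left at 1,500 bytes reintroduces fragmentation.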

3. Enable jumbo frames on your ESXi host

Jumbo frames are larger than the standard Ethernet frame size of 1,500 bytes, and they can be as large as 9,000 bytes. When you enable jumbo frames on your ESXi host, you’re essentially increasing the amount of data that can be transferred in each frame.

The benefits of using jumbo frames include increased performance and reduced CPU utilization. In terms of performance, jumbo frames can help reduce latency and increase throughput. And in terms of CPU utilization, jumbo frames can help reduce the number of interrupts per second, which can free up resources for other tasks.

To enable jumbo frames on your ESXi host, you need to raise the MTU in two places: on the vSwitch that carries iSCSI traffic and on the VMkernel adapter bound to it. In the vSphere Client, navigate to the host’s networking configuration, select the iSCSI vSwitch, and set its MTU to 9000. Then select the VMkernel adapter that’s being used for iSCSI traffic and click Edit.

In the adapter settings, set the MTU to 9000 (Jumbo Frames). Click OK to save your changes.
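The same change can be made from the ESXi command line. A minimal sketch, assuming `vSwitch1` and `vmk1` are your iSCSI vSwitch and VMkernel adapter (substitute your own names and target IP):

```shell
# Raise the MTU on the vSwitch that carries iSCSI traffic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the iSCSI VMkernel adapter
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify end to end: -d forbids fragmentation, 8972 = 9000 minus
# 20 bytes IP header and 8 bytes ICMP header
vmkping -d -s 8972 192.168.10.20
```

If the `vmkping` test fails while a standard ping succeeds, some device in the path (a switch port, the vSwitch, or the storage interface) is still at the default MTU.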

4. Disable TCP offload engine (TOE) on your ESXi host

The TOE feature is designed to offload certain TCP/IP processing tasks from the CPU to dedicated hardware. However, in practice, this feature can cause a number of problems, including:

– Reduced network performance
– Increased latency
– Connection timeouts
– Packet loss

For these reasons, it’s generally recommended that you disable TOE on your ESXi host when using NetApp iSCSI storage. You can do this by editing the advanced system settings for your host in the vSphere Client.
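As a hedged example, one offload that can be turned off through the advanced settings is hardware TCP segmentation offload. The option name below is one I'd expect on recent ESXi releases, but names differ between versions, so list the available options on your host first:

```shell
# Inspect the current value before changing anything
esxcli system settings advanced list -o /Net/UseHwTSO

# Disable hardware TCP segmentation offload (0 = off)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
```

Test the change against your own workload: on some NIC/driver combinations the offloads behave correctly, and disabling them trades CPU cycles for stability you may not need.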

5. Create separate vSwitches for iSCSI traffic

When you create a vSwitch, by default it will carry all traffic types for the virtual machines (VMs) connected to it. This includes iSCSI traffic, which can negatively impact performance if not properly isolated.

Creating separate vSwitches for iSCSI traffic ensures that this type of traffic is given priority and doesn’t have to compete with other traffic types for bandwidth. This results in improved performance for your iSCSI-based storage system.

It’s also a good idea to configure jumbo frames for your iSCSI vSwitch. Jumbo frames allow for larger packets of data to be sent, which can further improve performance.
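A dedicated iSCSI vSwitch can be built entirely from the ESXi CLI. In this sketch, `vSwitch-iSCSI`, `iSCSI-PG`, `vmnic2`, `vmk1`, and the IP details are all placeholders for your environment:

```shell
# Create the vSwitch and attach a dedicated physical uplink
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI --uplink-name=vmnic2

# Enable jumbo frames on the new vSwitch
esxcli network vswitch standard set --vswitch-name=vSwitch-iSCSI --mtu=9000

# Add a port group and a VMkernel adapter for iSCSI
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI --portgroup-name=iSCSI-PG
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-PG --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
```

Because the uplink `vmnic2` is attached only to this vSwitch, iSCSI traffic never shares a physical port with VM or management traffic.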

6. Ensure that you have at least two physical NICs configured in an active/passive configuration

If you only have a single physical NIC configured, and that NIC fails, your host loses all access to the iSCSI storage array. This can lead to downtime for your applications and, in the worst case, data loss.

By having two physical NICs configured, you can ensure that if one NIC fails, the other NIC will take over and keep your storage array online. This will minimize downtime and prevent data loss.
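An active/passive team can be set on the iSCSI vSwitch with one command. This is a sketch assuming hypothetical names (`vSwitch-iSCSI`, `vmnic2` active, `vmnic3` standby):

```shell
# Make vmnic2 the active uplink and vmnic3 the standby uplink,
# so vmnic3 takes over only if vmnic2 fails
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch-iSCSI \
    --active-uplinks=vmnic2 \
    --standby-uplinks=vmnic3
```

Note that this protects against a NIC or cable failure; for full path redundancy you should combine it with the multipathing setup described in the next practice.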

7. Set up multiple paths between the ESXi host and the storage system

If you only have a single path between the two, and that path goes down for any reason, your ESXi host will lose access to its storage. This can cause all sorts of problems, from data corruption to complete data loss.

By setting up multiple paths, you ensure that there is always a working connection between the ESXi host and the storage system. If one path goes down, the other(s) will still be up, and the ESXi host will still have access to its storage.

There are a few different ways to set up multiple paths, but the most common is to use multipathing software, like VMware’s Native Multipathing Plugin (NMP). NMP comes pre-installed on all ESXi hosts, so you don’t need to do anything special to get it.
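You can inspect and tune NMP from the ESXi CLI. A common adjustment for iSCSI LUNs is switching the path selection policy to round robin so that I/O is spread across all active paths; the device identifier below is a placeholder you would copy from the list output:

```shell
# Show each device with its current path selection policy (PSP)
esxcli storage nmp device list

# Switch a LUN to round-robin path selection
# ("naa.60a98000..." is a placeholder -- use the real identifier
# from the output of the command above)
esxcli storage nmp device set --device=naa.60a98000... --psp=VMW_PSP_RR
```

Check your array vendor's guidance before changing the PSP; NetApp documentation for your ONTAP release states the recommended policy for its LUNs.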

8. Do not use the same subnet for management and iSCSI traffic

When you use the same subnet for both management and iSCSI traffic, your controllers have to process all of the iSCSI traffic as well as the management traffic. This can lead to performance issues because the controllers are not able to dedicate their full attention to either type of traffic.

Additionally, if there is ever an issue with the iSCSI traffic, it can impact the management traffic and vice versa. By keeping these two types of traffic on separate subnets, you can avoid any potential problems.
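In practice this means the iSCSI VMkernel adapter gets an address on its own subnet, distinct from the management network. A hedged example with hypothetical addresses (`vmk0` on 10.0.0.0/24 for management, `vmk1` on 192.168.10.0/24 for iSCSI):

```shell
# Put the iSCSI VMkernel adapter on its own dedicated subnet
esxcli network ip interface ipv4 set --interface-name=vmk1 \
    --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static

# Confirm that each VMkernel adapter sits on the intended subnet
esxcli network ip interface ipv4 get
```

The storage system's iSCSI LIFs would live on the same dedicated subnet (192.168.10.0/24 in this sketch), while its management interfaces stay on the management network.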

9. Do not configure multipathing software on the storage controller

Multipathing is an initiator-side function: the server’s multipathing software decides which path each I/O takes. If you also configure path-management software on the storage controller, both ends try to steer traffic across the paths, which can result in data arriving out of order and in errors on the iSCSI connection between the server and the storage system.

To avoid this problem, you should only configure multipathing software on the server, not on the storage controller.

10. Use CHAP authentication if possible

When you use CHAP authentication, the initiator and target share a secret. During login, the target sends a random challenge to the initiator, and the initiator responds with a hash of the challenge combined with the secret. The target computes the same hash and compares it to the response; if they match, the initiator has proven its identity. With mutual (bidirectional) CHAP, the initiator challenges the target in the same way, so each side verifies the other.

This process prevents someone from intercepting the traffic and masquerading as the initiator or target. Because only the hash travels over the wire, the secret itself is never sent in cleartext, and because each challenge is unique, a captured response cannot simply be replayed.

CHAP can be used with both iSCSI and Fibre Channel SANs, but it’s particularly important for iSCSI because it’s typically run over IP networks, which are more vulnerable to attack than Fibre Channel networks.
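On the ESXi side, CHAP can be enabled on the software iSCSI adapter from the CLI. This is a sketch with placeholder names throughout (`vmhba33`, the CHAP user `iscsi-host1`, the secret, the SVM, and the initiator IQN); the ONTAP command is shown commented as the matching array-side entry you would create on the NetApp system:

```shell
# Require unidirectional CHAP on the ESXi software iSCSI adapter
# (adapter name, authname, and secret are placeholders)
esxcli iscsi adapter auth chap set --adapter=vmhba33 \
    --direction=uni --level=required \
    --authname=iscsi-host1 --secret='Placeholder-Secret'

# Matching entry on the ONTAP side might look like this
# (run on the NetApp system; SVM and IQN are placeholders):
# vserver iscsi security create -vserver svm1 \
#     -initiator-name iqn.1998-01.com.vmware:esx01 \
#     -auth-type CHAP -user-name iscsi-host1
```

Use mutual CHAP (`--direction=mutual` with a second, different secret) where supported, so the host also verifies the storage system rather than only the other way around.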
