Interview: Tegile Introduces New Storage Arrays To Address Evolving Market Needs
This week, Tegile announced two new flagship storage arrays. Dubbed the T3400 and T3800, they give Tegile a solution that addresses more upmarket needs than was previously possible. Whereas the company's earlier hybrid arrays carried only a small amount of flash compared with their HDD capacity, the two new arrays take into account the constantly changing economics of flash storage. The T3400 is outfitted with half flash and half HDD storage, while the T3800 takes an all-flash approach to deliver the raw performance that customers demanding low latency and high throughput require.
I had the opportunity to discuss the new arrays with Rob Commins, Vice President of Marketing for Tegile. My thoughts and comments are included in green.
Scott D. Lowe: Can you give me the 30,000-foot narrative around the new T-series arrays you just announced? Are there any major architectural changes?
Rob Commins: The underlying architecture is the same as our existing product line. The main difference is that we are leveraging very high density flash to position a new flagship hybrid array and new flagship all-flash array.
My take: Tegile has found a formula that is working for them. Their approach of building on ZFS, with lots of new features wrapped around commodity hardware, has served them well from the beginning, so it's no surprise that these products include no major architectural changes.
Scott: Are there any significant enhancements in the software that powers these arrays?
Rob: The software is exactly the same.
My take: In May 2014, one month prior to the release of the new arrays, Tegile announced a new version of their management software that adds a number of new features, including deep support for SMB3.
Scott: What was the market need that led to the release of the new arrays, particularly the T3400, which apparently has ½ flash and ½ HDD? What does this mean for all flash-only players in the market?
Rob: As Tegile has matured in the market, and channel partners bring us into larger and more latency-sensitive environments, the need for a more heavily flash-biased hybrid became apparent, as did the need for a high-density all-flash array. You can get up to 1PB of effective flash capacity in the T3400 and up to 1.6PB of effective flash capacity in the T3800 (both assuming a 5X data reduction factor).
My take: As more organizations look to flash as their storage of choice, Tegile's previous SSD:HDD ratio was too low for those with high-end needs who didn't quite require all-flash. The T3400, with its 1:1 SSD:HDD ratio, will fill a performance niche between the company's lower-end arrays and its all-flash systems. Customers will be able to scale their systems through the use of expansion shelves.
Scott: Are there scaling opportunities – up or out – with either of the new arrays?
Rob: Today, we use scale-up. The T3400 starts at 26TB of flash and can go up to 1PB of flash, or 850TB of flash and 720TB of HDD in a mixed hybrid configuration. The T3800 starts at 48TB and scales up to 1.6PB of flash. This all assumes a 5X data reduction factor, which is a conservative figure based on what we actually see in the field.
Scale-out is on our roadmap, but customers are not clamoring for it, so most of our development efforts are in data management and ISV integration.
My take: I’m not all that surprised to hear that Tegile customers aren’t clamoring for scale-out. Scale-out is great for environments that really need the extra CPU and network capacity that comes with each node, but it isn’t always necessary, particularly in small and medium-sized environments. I could see the lack of scale-out being a major downside had Tegile not released the new arrays, as it would have been impossible to scale the environment to the capacity levels demanded by many modern enterprises. The new arrays break this capacity barrier without compromising on performance. That said, I do believe that Tegile will need to address scale-out at some point in the near future for those organizations that really do need a cluster of nodes as their storage environment. Scale-out is becoming almost a de facto standard capability in many modern storage systems, needed or not.
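The effective-capacity figures Rob quotes are straightforward arithmetic: raw flash capacity multiplied by the assumed 5X data reduction factor. A minimal sketch of that math (the function name is mine, and the implied raw maximums of roughly 200TB and 320TB are back-calculated from the 1PB and 1.6PB effective figures, not stated in the interview):

```python
def effective_capacity_tb(raw_tb: float, reduction_factor: float = 5.0) -> float:
    """Effective (post-reduction) capacity: raw flash capacity times the
    assumed data-reduction factor (dedupe plus compression)."""
    return raw_tb * reduction_factor

# Implied T3400 maximum: ~200 TB raw flash -> 1,000 TB (1 PB) effective at 5X
print(effective_capacity_tb(200))
# Implied T3800 maximum: ~320 TB raw flash -> 1,600 TB (1.6 PB) effective at 5X
print(effective_capacity_tb(320))
```

Of course, the realized ratio depends entirely on the workload; Rob characterizes 5X as conservative based on field data, but compressibility varies.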
Scott: The press release indicates that Tegile has managed to hit that $1/GB sweet spot after 5x deduplication. Is that an achievable target in most cases?
Rob: The math for the T3800 comes out to around $1.00 to $1.10 per GB, which is a very achievable number even in mixed-use environments. The T3400 comes in at $1.30. Believe it or not, with all this in mind, the all-flash array is cheaper per GB than the hybrid!
My take: Bear in mind that these are post-reduction figures, not raw $/GB, but they are still compelling numbers.
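To make the post-reduction caveat concrete: the quoted price per GB divides the array's dollar cost by effective, not raw, capacity. A quick illustrative sketch (the array price and raw capacity below are hypothetical, chosen only to show the arithmetic, and are not Tegile's actual figures):

```python
def cost_per_effective_gb(price_usd: float, raw_gb: float,
                          reduction_factor: float = 5.0) -> float:
    """Post-reduction $/GB: total array price divided by effective capacity
    (raw capacity times the assumed data-reduction factor)."""
    return price_usd / (raw_gb * reduction_factor)

# Hypothetical: a $500,000 array with 100 TB (100,000 GB) of raw flash
# yields 500 TB effective at 5X reduction, i.e. $1.00 per effective GB.
print(cost_per_effective_gb(500_000, 100_000))
```

The flip side is that if a workload only achieves, say, 2X reduction, the same hardware works out to a much higher effective $/GB.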
Scott: Can you tell us a bit about IntelliFlash?
Rob: IntelliFlash is the metadata engine that optimizes our use of flash (both performance and endurance) as well as runs our caching algorithms that mask the performance challenges rotating disk has in hybrid configurations. It optimizes the many data management functions that are very IO intensive, such as dedupe, compression, RAID, snapshot pointers, etc.
My take: Tegile’s IntelliFlash architecture is a good solution for ensuring ongoing storage performance and efficiency. I’ve done research with some Tegile customers, and they have all reported satisfaction with the overall system.
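At its heart, the caching side of what Rob describes is the classic hybrid-array idea: serve hot blocks from flash, let cold reads fall through to slower HDD. The snippet below is emphatically not Tegile's IntelliFlash algorithm, just a textbook LRU read cache sketched to illustrate that general concept:

```python
from collections import OrderedDict

class ToyReadCache:
    """Toy LRU read cache illustrating hybrid-array caching in general:
    recently used blocks stay in fast storage (flash), everything else
    falls through to a slower backing store (HDD). This is an
    illustrative sketch, not Tegile's actual implementation."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()  # block_id -> data, in recency order

    def read(self, block_id, hdd_read):
        if block_id in self.cache:            # flash hit: fast path
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        data = hdd_read(block_id)             # miss: slow HDD path
        self.cache[block_id] = data           # promote block into cache
        if len(self.cache) > self.capacity:   # evict least-recently-used
            self.cache.popitem(last=False)
        return data
```

Real systems layer far more on top (write handling, metadata placement, flash-endurance awareness), which is exactly the territory Rob says IntelliFlash covers.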
Scott: Can you tell us a bit about your consumption-based pricing model? Are the new arrays eligible for this program?
Rob: Our consumption-based pricing model, called Agility Pricing, looks at the capacity physically written to an array in a given month and charges a very low per-GB-per-month rate. That rate is even below Amazon’s EBS IOPS storage price. Customers who leverage Agility Pricing get the economics of the public cloud in their own private cloud infrastructure. This gives them peace of mind from a security, control, and performance-integrity standpoint, but with a flexible consumption-based model. All of Tegile’s arrays are available with the Agility Pricing program, including the two new models announced today.
My take: This is a very cool idea! Combining a traditional ownership model with cloud-style economics could offer the best of both worlds to IT departments looking for ways to do more with less.
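The mechanics Rob describes reduce to a simple metered calculation: capacity physically written in a month times a per-GB-per-month rate. A minimal sketch (the $0.05/GB-month rate below is purely hypothetical; the interview doesn't state Tegile's actual rate):

```python
def monthly_charge_usd(gb_written: float, rate_per_gb_month: float) -> float:
    """Consumption charge: GB physically written this month times the
    per-GB-per-month rate. The rate is a hypothetical placeholder, not
    Tegile's published Agility Pricing rate."""
    return gb_written * rate_per_gb_month

# Hypothetical example: 10 TB (10,000 GB) written at $0.05/GB-month
print(monthly_charge_usd(10_000, 0.05))
```

Note that billing on capacity *physically written* means data reduction works in the customer's favor here too: the better the dedupe and compression, the less physical capacity is consumed and billed.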