SANSlide doesn’t change the data; it helps host applications use all of the pipe. It’s not WAN optimization software, an intelligent switch, or a SAN accelerator. It’s a unique and innovative approach to accelerating data transport between different storage devices at the block level, below the operating system and across data centers.
The first generation of products that tried to solve the latency problem went after the data, not the pipe. If the application was sending too much data too often, the common-sense approach was to eliminate as much of the redundant data as possible before sending it. This approach required additional storage at either end of the connection to hold the redundant data in a cache for retrieval later. This strategy worked on “warm data” that the application or host system had seen before. The 2011 Gartner WAN Optimization Controller Report details the players in the first-generation space.
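The caching strategy described above can be sketched in a few lines. This is a hypothetical toy model, not any vendor’s actual implementation: the core idea of first-generation deduplication is that once both endpoints have cached a block, only a short fingerprint needs to cross the WAN.

```python
import hashlib

class DedupCache:
    """Toy model of a first-generation WAN optimizer's block cache.

    Hypothetical sketch: real products differ, but the core idea is to
    send a short fingerprint instead of the full block whenever the
    remote side has already cached that block ("warm data").
    """

    def __init__(self):
        self.seen = set()  # fingerprints both endpoints are assumed to have cached

    def encode(self, block: bytes):
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.seen:
            return ("ref", digest)   # warm data: send the fingerprint only
        self.seen.add(digest)
        return ("raw", block)        # cold data: the full block must cross the WAN

cache = DedupCache()
kind1, _ = cache.encode(b"backup block A")  # first sight: sent raw
kind2, _ = cache.encode(b"backup block A")  # repeat: sent as a reference
```

Note how the model also exposes the weakness discussed next: a block seen for the first time gains nothing from the cache.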
This was a good approach, but a lot of data transport occurs on “cold data” that has never been seen before by the application or host. Every batch process, every new video or medical image, and rapidly changing data such as 3D motion imagery is “cold.” This cold and mainly unstructured data is growing rapidly and represents 80–90% of information being created today. IBM suggests that unstructured data is growing 15 times faster than data in all structured databases.
How do you get this “big data” where it needs to be with greater velocity?
SANSlide went after the pipe, not the data. It’s the next generation of solving the latency problem. Its latency-busting artificial intelligence software and devices send all the blocks of data across the pipe, enabling near real-time remote replication across vast distances. It utilizes standard TCP/IP protocols and bridges disparate storage protocols (iSCSI, Fibre Channel, SCSI, and SAS), allowing relay acceleration across multiple data centers.
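Why does “using all of the pipe” matter? A back-of-the-envelope calculation shows the problem any pipe-oriented accelerator must solve. The figures below are illustrative assumptions, not SANSlide internals: a single TCP connection can have at most one window of data in flight, so its throughput is capped at window size divided by round-trip time, no matter how fat the link is.

```python
# Assumed example figures: a 1 Gb/s link, a 50 ms round trip,
# and a classic 64 KiB TCP window (no window scaling).

def max_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Throughput ceiling of one TCP stream: one window per round trip."""
    return window_bytes * 8 / rtt_seconds / 1e6

link_mbps = 1000.0      # 1 Gb/s pipe
rtt = 0.05              # 50 ms round-trip latency
window = 64 * 1024      # 64 KiB in flight at a time

single_stream = max_throughput_mbps(window, rtt)  # roughly 10.5 Mb/s
utilization = single_stream / link_mbps           # about 1% of the pipe

# To fill the pipe, the data in flight must reach the
# bandwidth-delay product of the link:
bdp_bytes = link_mbps * 1e6 / 8 * rtt             # 6,250,000 bytes in flight
```

Under these assumptions a naive single stream uses about 1% of a 1 Gb/s link, which is why keeping roughly 6 MB of blocks continuously in flight (via larger windows, pipelining, or parallel streams) is the pipe-side answer to latency.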