Report from FAST 2007: Data ONTAP GX Paper
The night before the presentation, Peter Corbett, Dan Nydick, and I worked on the slides Peter was to present. Peter then fine-tuned them and arrived exactly on time to present them (much to the relief of everyone involved). The wait was worth it, as Peter definitely improved the final product. (I later presented the paper to a data storage class at a university in northern California. You can view that version of the slides [sans performance data, for now at least] on my personal web site.)
At the FAST presentation, there were several questions, which I feverishly attempted to paraphrase. Here they are, with the answers given, and in some cases, my color commentary (in italics):
Q: Was a single file system used in the performance charts (given during the presentation)?
A: A single namespace, with at least one volume per D-blade, was used.
Q: Why doesn't it scale beyond 24 nodes? What happens at 25?
A: We stopped at 24 because we achieved our initial one million operations/second goal. We believe it will scale beyond 24.
Q: What can limit scaling?
A: The replicated coherent database can potentially be a limiter.
Also, I think the other potential limiter is the cluster interconnect, but so far switch vendors can build devices more than capable of switching dozens to low hundreds of nodes.
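To see why a replicated coherent database can limit scaling, here is a back-of-the-envelope sketch (my own assumption about a naive broadcast scheme, not a description of the paper's implementation): every update must reach all replicas, so coherence traffic per update grows linearly with cluster size even when client load is spread evenly.

```python
# Toy model (assumption, not from the paper): a fully replicated database
# where each update is broadcast to every other node. Message cost per
# second grows linearly with the node count.

def replication_messages(nodes, updates_per_sec):
    """Messages/sec needed to keep N replicas coherent under naive broadcast."""
    return updates_per_sec * (nodes - 1)

for n in (4, 24, 100):
    print(n, replication_messages(n, updates_per_sec=50))
# At 24 nodes each update fans out to 23 peers; at 100 nodes, to 99.
```

The real system presumably does better than naive broadcast, but the shape of the curve is why this component is a candidate limiter.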
Q: What benchmark is used for CIFS numbers?
A: Currently there is no standard CIFS benchmark, and we didn't prepare CIFS numbers for the presentation.
Also, our internal CIFS benchmark numbers use aggregate read and write workloads, as the NFS tests do, and the results will be similar. Note that SFS 4.0 will provide standard CIFS performance measurements.
Q: Why is write throughput half the read throughput?
A: READs are faster because the benchmark uses sequential I/O, and READs can benefit from read-ahead.
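The read-ahead point can be illustrated with a toy sketch (mine, not NetApp code): on a sequential pattern, a prefetcher that pulls the next few blocks into cache turns most reads into cache hits, while writes get no comparable benefit, so read throughput can run well ahead of write throughput.

```python
# Toy cache with a simple next-N-block prefetcher. Counts how many backend
# I/Os a sequential access pattern needs with and without read-ahead.

def count_disk_fetches(accesses, readahead=4):
    """Count backend fetches; each miss also prefetches the next `readahead` blocks."""
    cache = set()
    fetches = 0
    for block in accesses:
        if block not in cache:
            fetches += 1  # one backend I/O for the missed block...
            # ...plus speculative read-ahead of the following blocks
            cache.update(range(block, block + readahead + 1))
    return fetches

sequential = list(range(100))
print(count_disk_fetches(sequential))        # 20 fetches with readahead=4
print(count_disk_fetches(sequential, 0))     # 100 fetches without read-ahead
```

A 5x reduction in backend I/Os for this toy workload; the real ratio depends on block size and prefetch depth, but the direction matches the 2x read/write gap in the charts.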
Q: For the load balancing mirror feature, aren't you worried about writing multiple mirrors?
A: The load balancing mirrors are read-only. Only the master of a mirror family is writeable.
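A hypothetical sketch of the routing idea behind that answer (names and structure are illustrative, not Data ONTAP GX APIs): reads may be served by any replica in the mirror family, but writes are routed only to the single writeable master, so there is no multi-writer coherence problem to worry about.

```python
# Illustrative-only model of a read-only load-balancing mirror family:
# writes always go to the master; reads round-robin across all copies.

import itertools

class MirrorFamily:
    def __init__(self, master, mirrors):
        self.master = master                          # the only writeable copy
        self._cycle = itertools.cycle([master] + mirrors)

    def route(self, op):
        """Pick a node for an operation: writes to master, reads rotate."""
        if op == "write":
            return self.master
        return next(self._cycle)

family = MirrorFamily("master", ["mirror1", "mirror2"])
print(family.route("write"))   # always "master"
print(family.route("read"))    # rotates among all copies
```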
In the presentation slides I've posted, I've attempted to make this clear.
You can read the paper at my personal website.