New York Stock Exchange at IBM IOD 2011: Managing the ever-increasing data footprint
Emile Werr, of NYSE Euronext, joins the IBM Break Free forum at IOD 2011 to discuss the challenges of managing your ever-increasing data footprint. (See more from Emile in Part 2: The benefits of using IBM Netezza).
- Transcript of Video
00:33:21 EMILE: Sure. Good afternoon everyone. First of all, I work for Global Data Services, which is a matrix organization that cuts across all our business units. At NYSE, we support multiple platforms, all our matching engines: NYSE Classic, Archipelago, NYSE Amex. We obviously integrated with Liffe, which is a derivatives market in Europe. We have Euronext, which is a cash business. So there’s a lot of complexity in our environment, and there’s a sheer volume problem in managing the data footprint that we have.
00:33:53 In addition to that, we’ve evolved very quickly by changing our business model to not only be a transaction-based business, but really get involved with co-location business, packaged solutions, market data. So we’re sort of diversified, and that really adds a lot of complexity around the data footprint, whether from an integration perspective or just a transactional volume perspective.
00:34:15 So when we started looking at solutions, traditional database technologies just didn’t cut it for us. Obviously, there were fires that had to be put out. We had stringent requirements from the SEC and from FINRA and from our own sort of regulation policies about how to manage the market, and how to basically make sure there was orderly trading going on.
00:34:36 That type of analysis cannot be done in regular environments with traditional database technologies, so we started looking at solutions. And the solution that we wanted to go with was an MPP type of appliance. We looked at several, and we decided at that point to bring in Netezza, and we started evaluating it.
00:34:56 And the thing that we liked about Netezza, when we brought it in, was the time to market: how quickly we were able to sort of resurrect a multi-terabyte database that was actually queryable, in less than two weeks. In our case, I actually headed that POC, that evaluation. We had a six-terabyte database up and running in two weeks, and we were actually running some surveillances in prototype mode out of it.