What’s taking so LONG?

Damn the fancy analytics. Now the catchphrase is “Time-to-Market.”

By Andrew Webb

Veterans of early risk management implementation projects sometimes liken their efforts to Napoleon’s invasion of Russia. There was usually a gargantuan allocation of resources, then a massive mobilization of troops made with the expectation that you were waving your soldiers goodbye for several years—if not eternity. Often, the results were no more successful than Napoleon’s.

Nowadays, the grand campaigns of the past are being replaced with Special Forces hit-and-run missions, with smaller project teams and shorter lead times. Growing impatience with endlessly extended implementation schedules and generalized shell shock from last year’s market crises have put a new focus on the phrase of the hour, “time to market”—that is, the time it takes for a system to get up and running.

“During the various crises last year there was a flood of calls from business managers simply wanting to know the exact position with a counterparty as soon as possible,” says Stuart Farr, director of London-based Beauchamp Financial Technology and former vice president of global risk technology at Credit Suisse First Boston in London. “They didn’t want to know what the latest hot model was predicting—they just needed to know their mark-to-market.”

These days, more banks are focusing on getting a risk management system that can do the basics up and running as quickly as possible. “The feedback that we’re getting from banks is that they are now choosing risk management systems based on speed of implementation rather than on whether they have the most sophisticated analytics,” says Farr.

Others concur. “Lead time is becoming much more of an issue, since many institutions are happy with their relatively unsophisticated analytics,” says Keith Bear, solutions executive at IBM Capital Markets. “Just being able to get the basic numbers globally across business lines can be far more valuable than a figure that may be 5 percent more accurate but comes from a system that may take much longer to be implemented.”

The system vendors have picked up on this new need for speed. Press releases for completed projects almost invariably now include the phrase “in less than X months.” “I think vendors are trying to differentiate themselves in terms of how fast they bring things to market,” says Catherine Morley, principal consultant at TCA Consulting in London.

Not so fast

Are projects actually being completed more quickly? Some certainly think so. “Over the past four years, implementation time for a typical risk management project has dropped from one to two years to between one and six months,” says Don van Deventer, president of Kamakura Corp. He credits the speedup to the fact that most clients have at least some of their data in a central relational database. “Much of the work in the past involved getting the data out of legacy systems and into a format that can be used by the risk management system,” adds Mark Rodrigues, general manager in the financial industry group at American Management Systems. He believes lead times will continue to fall, as data becomes less of an issue, although the problem of how to use the information to make money will remain.

Others, however, are less convinced. “I’m not sure that time-to-market has become shorter,” says Paul Hassey, head of the development infrastructure group at Macquarie Bank in Sydney. “I think it’s a different set of problems now, with the old snags and hindrances being replaced with new ones.”

A case in point: demands for more frequently updated risk management reports are placing new burdens on risk management development projects. In the past, it was acceptable to process the data overnight and deliver a report in the morning. Now the demand is for reporting every few hours or even in real time. That sort of speed requires building real-time data interfaces and other complex and time-consuming infrastructures that only extend development times.
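To make the distinction concrete, here is a minimal sketch of the two reporting styles in Python. It is an illustration only, not any bank’s or vendor’s actual design; the Trade fields and class names are hypothetical, and a real system would add persistence, messaging and failure handling.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Trade:
        counterparty: str
        notional: float
        mark: float  # current mark-to-market value per unit of notional

    def overnight_report(trades):
        # Batch style: run once over the end-of-day extract, deliver in the morning.
        exposure = defaultdict(float)
        for t in trades:
            exposure[t.counterparty] += t.notional * t.mark
        return dict(exposure)

    class IntradayAggregator:
        # Event style: keep running totals as trade messages arrive off a feed.
        def __init__(self):
            self.exposure = defaultdict(float)

        def on_trade(self, t):
            self.exposure[t.counterparty] += t.notional * t.mark

        def snapshot(self):
            # Can be queried every few hours, or on demand during a crisis.
            return dict(self.exposure)

The batch function is simple precisely because everything has already landed in one extract; the intraday version only works once real-time interfaces exist to deliver each trade as it happens, and building those interfaces is exactly the infrastructure work that stretches the schedule.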

TCA’s Morley also doubts that lead times are falling. “Risk management facility is being improved but implementation times are not being cut down,” she says. “There is a push and a pull here.” Vendors may be speeding the process up with things such as data mapping and middleware tools, but on the flip side, business issues, data quality, and new types of risk (such as liquidity) all conspire to even the score.

Data trouble

While implementation times may or may not be improving, data remains the largest hurdle. The bulk of an implementation’s budget and schedule is still taken up by having to source, validate and clean up data from disparate sources and then present them in a format that is acceptable to a particular risk engine. “If you went into a firm at random, you’d probably still find that data is the hardest issue,” says Charlie Cortese, managing director of technology and systems for Europe and Asia at Lehman Brothers in London. “An operation with a dozen separate business product lines that has been running for several years will need two or even three years to get the necessary data consistency for effective risk management.”
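The flavor of that work is less about mathematics than about catching and reconciling inconsistencies, one feed at a time. Below is a hedged sketch in Python; the field names, currency aliases and target layout are invented for illustration, and no actual risk engine’s input format is implied.

    REQUIRED_FIELDS = {"trade_id", "counterparty", "notional", "currency"}
    CURRENCY_ALIASES = {"STG": "GBP", "UKP": "GBP"}  # the same currency, labeled
                                                     # differently by two source systems

    def normalize(record, errors):
        # Validate one record from a source system and map it to the (hypothetical)
        # layout the risk engine expects; reject anything that cannot be trusted.
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            errors.append((record.get("trade_id", "?"), "missing fields: %s" % sorted(missing)))
            return None
        try:
            notional = float(record["notional"])
        except (TypeError, ValueError):
            errors.append((record["trade_id"], "non-numeric notional"))
            return None
        return {
            "TradeId": record["trade_id"],
            "Counterparty": record["counterparty"].strip().upper(),
            "Notional": notional,
            "Ccy": CURRENCY_ALIASES.get(record["currency"], record["currency"]),
        }

Multiply this by every product line, every source system and every field the risk engine needs, and Cortese’s two-to-three-year estimate starts to look plausible.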

Does the Choice of Vendor Make a Difference to the Lead Time?
Can the choice of the right vendor speed up development time? Although banks may base an increasing portion of their purchasing decisions on implementation time, there is limited consensus as to how much effect a particular vendor’s product can have. “I don’t think the system you buy makes a lot of difference to the implementation schedule in the bigger scheme of things,” confesses one major risk vendor. “Other factors such as data and the quality of the implementation team are far more important.”

“Any vendor is obviously keen to be seen as the fastest to implement, both as a marketing advantage and from the standpoint of getting to the next incremental sale,” adds IBM’s Bear. “However, the real factors lie in the scope of the project and how it is implemented. Trying to compare like-for-like implementation times is pretty meaningless, since so few vendors would actually be appropriate for any particular set of circumstances.”

Others disagree. “Choice of vendor still makes a big difference in terms of implementation speed,” says Cortese. “At one end of the spectrum you have vendors offering a tool-kit approach, which you will have to assemble or have someone else assemble. That takes time. At the other end of the spectrum you have those who will implement a more turnkey solution with you far more rapidly.”

Simplistically speaking, there’s often a trade-off between flexibility and lead time. If you want the elbowroom, be prepared to wait for it. On the other hand, don’t make the mistake of believing that the phrase “shrink-wrapped” actually means something.

Vendor choice may be meaningful if you are simultaneously shopping for a major trading system as well as a risk management system. If one vendor offers both (or even a few other trading systems that you may need in the future), you may be able to spare yourself enormous data integration hassles by one-stop shopping. Sadly, this utopian scenario will apply only in a minority of cases, but the potential time (and hence cost) savings may even justify sacrificing some peripheral functionality on your trading systems—assuming that your traders don’t notice.

—A.W.

Many legacy systems were never built with the expectation that they would need to share data, and getting them to cooperate often takes massive effort. Documentation, moreover, is usually nonexistent. The result is usually a high-headcount project-management challenge, aggravated by the fact that it is not a task that can be skimped on or shortcut with any degree of safety. “Any risk manager in a bank would tell you that a large part of the job and the head count in the department is dedicated to data and little else,” says Farr. “Attempting to perform the interesting analytics without being sure of the quality and granularity of the data is like building a house on sand.”

“Too many people think risk management is all about the glamour of the calculation and measurement,” says Jim Gertie, director of global capital markets and risk analysis at BankBoston, who has recently completed a three-month project designed to test and clean all the underlying feeds and add new ones. “It’s really about the blocking and tackling of data integrity.”

Ironically, attempts to improve the situation by enforcing common data standards across a bank can also extend implementation times by disrupting other systems. For example, there are still banks that lack common customer identifiers, so blithely changing the identifier on, say, a letter of credit system in Singapore so it can feed into a risk warehouse in New York may imperil its previously contented monogamy with a local settlement system. The risk warehouse may be happy, but the jilted settlement system will no longer be able to communicate properly with the letter of credit system and will fail all its trades with gay abandon.

“The knock-on effect of ensuring common standards cannot be underestimated,” says Dennis Oakley, managing director of global markets at Chase Manhattan, who completed an initial rollout of the bank’s global credit risk management system last March. “Normalizing all our data into a single Oracle database took a long time. In order to do that, we established common data standards and requirements, which meant that some existing systems had to be changed in order to conform. That further increased the project time frame.”
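One widely used way around the identifier problem described above, though not necessarily the route Chase took, is to leave each source system’s local key alone and translate it on the way into the warehouse through a cross-reference table. The sketch below is illustrative only; the system names and identifiers are invented.

    # (source system, local counterparty id) -> bank-wide counterparty id
    COUNTERPARTY_XREF = {
        ("SG_LETTERS_OF_CREDIT", "CUST-0451"): "GLOBAL-9931",
        ("NY_SWAPS", "ACME CORP"): "GLOBAL-9931",
    }

    def to_global_id(source_system, local_id):
        # The local system keeps its own key (and its link to the local settlement
        # system); only the risk warehouse sees the common identifier.
        key = (source_system, local_id)
        if key not in COUNTERPARTY_XREF:
            # Unmapped counterparties go to a data-quality queue rather than
            # silently becoming a new "customer" in the warehouse.
            raise LookupError("no global id for %r from %s" % (local_id, source_system))
        return COUNTERPARTY_XREF[key]

The price of this approach is that the cross-reference table itself becomes another piece of reference data the risk group has to own and keep current.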

The dynamic nature of the data problem is another impediment. “You’d like to think that once you’ve done all the data mapping it stays done, but you often find that people have changed their systems in the intervening period,” says Macquarie’s Hassey. “You have to build allowances for that constant evolution into the project—it’s not something that you can tick off and forget about. The only consolation is that each time the challenge arises you hopefully have a continually improving method to fall back on that has evolved by experience.”

“It’s still the same old problem, and, quite frankly, it’s never solved, because the environment in which you operate is constantly changing,” adds BankBoston’s Gertie. “If all the other systems and products were staying the same, fine, but every time one of those factors changes, the game starts all over again.”

Mergers and acquisitions can throw another monkey wrench into data integration efforts. For example, BankBoston recently completed its acquisition of Robertson Stephens, an investment banking firm. The intention is to merge the bank’s existing high-yield debt operations and Robertson Stephens’ equities operations onto the same back office, which will in turn feed BankBoston’s TRMS risk management system. The biggest problem is that while BankBoston currently processes 250–400 trades a day from its high-yield debt operations, Robertson Stephens typically processes 9,000–12,000. The integration will require a complete reevaluation of the existing feed structure to make sure that nothing gets lost.

Limitations in a firm’s existing technology infrastructure can also slow down implementation efforts. For example, the risk management system may run fine, but may crash the network when the ignition switch is turned. This becomes particularly important when the goal of the risk management system is to go beyond mere aggregation of risk exposures to distributing its functionality to those on the front line. “The more robust you make your risk management system and the more widely you try to distribute it, the more pressure you put on the infrastructure,” says Gertie. “It’s not an insurmountable problem, but I think it’s one that people overlook. In fact, it’s interesting to note that you always seem to get these wonderful demos for great tools on a stand-alone machine, but once you make it interactive with all the other network applications, Bang!”

Keep it simple

So what will get your risk management lights glowing green faster? According to many participants, resisting the temptation to go for the big-bang approach will cut your lead times—and save your job. An implementation manager trying to do the whole shebang in one shot runs up against the problem of market and organizational volatility during a project. Your solution may be a state-of-the-art piece of ecstasy when finally completed, but if the structure of the bank, the marketplace or umpteen other factors have changed during its implementation (as they will have done substantially if the system has a two-year gestation), it will end up as the proverbial white elephant.

One solution currently gaining credence is to break projects into a number of smaller components that can be delivered faster. That ensures a better interaction with a dynamic environment and ultimately makes the overall project faster to complete.

“I think it makes a lot of sense to get something up and running, get the first pass of your positions through it and see what does and doesn’t come out,” says Kelsey Biggers, executive vice president of MMA Ventures. “Once you do that, you can set about fixing anything that doesn’t work. I think the ‘get it all perfect before you throw the switch’ school of thought is asking for trouble, since you can spend years tinkering with it.” The other advantage to this approach is that the experience gained in implementing the earlier stages will probably expedite the later ones.

Political mess

Another way to speed up the implementation timetable is to dampen the internecine squabbles that inevitably slow down enterprise-wide implementations. In most cases, the individual product lines that manage their own risks will not see a lot of benefit in being aggregated with that of other product lines. To make everybody more territorially cooperative, it’s usually necessary to get real commitment and ongoing support from senior management—not just lip service.

The best time to solicit this support is in the immediate aftermath of a major crisis. After all, nothing helps to focus the mind of the average CEO better than the spectacle of a counterpart being ushered into the tumbrel. And once the needed management support is secured, expect the worst. “Allow a realistic time premium for bureaucracy, plan for it, but try to minimize it,” says Eric Reichenberg, managing director of Askari. “For example, think about who owns the system you’re trying to get data from. It may not be you—so who will actually be responsible for getting the data to you? On any risk management implementation you come across all these dependencies that can slow it up.”

It’s particularly important to establish a senior risk management committee that sets the parameters for margining—measuring client, market and country risk—and defines acceptable levels for those. “Without that kind of committee, the project will slip,” says Cortese. “You also need a key sponsor for the project clearly defined. If you don’t have one, you can pretty much forget the whole thing.”

There’s also the matter of whom to exclude—at least partially. “I think you need to sit quite firmly on your quants, since risk management implementation is not about the mathematical purity of your models,” says Morley. “Don’t let them choose the system or design how to put it in. Getting your system up and running is a business issue—one to which quants should, by all means, contribute, but not dictate.”

Another obvious factor that will affect the speed of an implementation is the quality and experience of those doing the implementing. Most system vendors have forged implementation alliances with certain consultants. These alliances often include training and certification programs in that vendor’s technology designed to push newly minted junior consultants up the learning curve. While these efforts are laudable, assembling a team of battle-scarred veterans could be vastly more efficient and may even help cut a two-year implementation to six months.

Hiring a major consulting firm as an integrator will certainly ensure that you have the available manpower for implementations during busy periods. But it may lead to overkill. “For effective risk management implementation, you need an experienced team of maybe less than 10 people,” says Biggers. “I think the largest consultants have trouble deploying that small a team.”

Richard Walker, vice president at Infinity, is another fan of the “small is beautiful” school when it comes to implementation teams. “There is almost a negative correlation between implementation time and the number of people on the project,” he says. “The more people, the tougher the project management, with the gain in concurrent work being outweighed by the extra complexity introduced. I think three-figure project teams are less than optimal: not only are they an example of the law of diminishing returns, but beyond a certain point the law of negative returns applies.”

The prognosis for speedier implementation times looks good. More vendors are writing standard interfaces to the major back-office and trading systems. As risk management becomes a more widespread discipline, it’s also likely that the available data within banking systems will continue to improve in terms of quality and granularity.

But don’t assume that systems implementation will be a snap in this brave new world. Many of the latest, most open trading and back-office systems will have been on the receiving end of some “customization” during their life cycle. There are enough headaches ahead to give any qualified systems integrator a permanent employment contract.

Do Mapping and Middleware Work?
The vendor response to the data problem has largely centered on mapping tools and middleware. These have undoubtedly helped with many of the mechanical aspects of database translation, such as converting data formats or mapping data fields in a legacy database to the correct field of a risk database. Unfortunately, the hyperbole associated with some of these tools has masked the fact that much of the data integration challenge requires human intervention and business understanding.

“My feeling is that the data-mapping tools are fairly worthless and add little value to the process,” says Charlie Cortese of Lehman Brothers. “There’s not much technology involved in converting data—the reality lies in understanding what the data mean.”

“I think middleware has been touted as the universal solution, but much of it is a repackaging of technology that has been around for more than 10 years,” adds Farr. “It is a useful tool but it’s not a universal panacea.”

The data problem requires people with business knowledge who understand the meaning of the data. Without this, there is the risk that the mapping will only be done at the jargon level, with one local data interpretation mapped to another local data interpretation and completely missing the underlying point in the process.

A fairly common example of this phenomenon can be found in the conversion ratio for a convertible bond, which can be defined in a number of different ways. While a strict definition might calculate it as a function of the conversion price and a fixed exchange rate, it is also often quoted as the number of shares per $1,000 of face value, or per whatever the bond’s actual face value happens to be. Within one bank it is perfectly possible for differing definitions to be applied in various centers. A data-mapping tool will be unable to spot that nuance, seeing only the data field labeled “conversion ratio.” While these may seem like subtle differences, they can have an enormous impact, and unless those implementing risk management systems are prepared to go down to that sort of detailed data level, they will be overlooked.
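Setting aside the cross-currency case for simplicity, a purely numerical illustration (with invented figures) shows how far apart two local definitions of the same field can sit:

    face_value = 5000.0        # face value of one bond, in dollars (invented figure)
    conversion_price = 40.0    # dollars of face value exchanged per share (invented)

    # Definition A: shares received per bond
    ratio_per_bond = face_value / conversion_price   # 125.0

    # Definition B: shares received per $1,000 of face value
    ratio_per_1000 = 1000.0 / conversion_price       # 25.0

    # Both numbers arrive in a field labeled "conversion ratio." A tool that maps
    # on the label alone treats 125.0 and 25.0 as the same quantity and mis-states
    # the potential equity exposure by a factor of five.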

“You can’t have the software do it for you—you have to have people on the software, systems or credit side who understand exactly what the data requirements are,” says Dennis Oakley of Chase Manhattan. “Data labels can often be ambiguous or misleading. Programmers must understand the nuances.”

Others take a more optimistic view. “The middleware vendors have done a great job in connecting processes and providing assistance in mapping data,” says Richard Walker of Infinity. “However, the data-integration process still requires a step that people seem reluctant to take—analysis. I think progress has been made in data integration, but I don’t think this is simply a result of advances in technology. It’s more a result of human expertise honed by experience.”

Another prevalent criticism of data mapping and middleware tools in general is their coverage. “I don’t think the data-mapping tools are where they could or should be,” says Kelsey Biggers of MMA Ventures. “Part of the problem is that outside the top tier of risk management vendors, an insufficient number of implementation tools are available. And even where tools are available, there is an increasing number of securities in core areas that remain uncovered.”

—A.W.
