Winning the Systems Integration Game
Systems integration orchestrates disparate technologies,
data and people across increasingly broad functional and geographic horizons.
The music can be sweet or horrid.
By Karen Spinner
Plumbing is not a glamorous field. But if your plumbing goes bad, it
can lead to costly leaks, flooding and nasty-smelling sewage. At many financial
institutions today, the consequences of incomplete infotech plumbing are
all too obvious. Sometimes the smell can be suppressed, but sometimes it can't.
The dysfunctions are encyclopedic. Front office systems that don't seamlessly
feed into middle office systems; incomplete risk management information;
back offices in the dark ages of manual processing. Little wonder that many
financial institutions are willing to pay a lot of money to high-income
plumbers-otherwise known as systems integrators-to connect hundreds of specialized,
local systems together into something approaching coherence. According to
one frustrated IT manager, this task is akin to cleaning the Augean stables
with a toothbrush.
According to most estimates, the software and hardware components of
a typical front-to-back office architecture make up only 20-40 percent
of the total project; the rest of the costs stem from integration tasks,
such as storing, mapping and distributing massive quantities of data; the
development of custom interfaces between various systems; and modifying
new systems to meet each institution's particular proprietary needs. Thus
it behooves anyone in the plumbing business to get it right the first time.
Systems developers and users alike dream of a fully integrated system,
encompassing an entire firm's trading, commercial and asset/liability management
units across multiple geographies and spitting out an endless variety of
financial reports in real time. But in almost all cases, it remains a dream.
According to David Gilbert, president of C*ATS Software and chairman of
California Federal Bank, "At the top 50 or so global banks, 'integrated'
systems in reality refers to a very delicate framework involving perhaps
hundreds of data feeds." What are the chances that such a Rube Goldberg-esque
construction will consistently produce completely accurate position, risk
management or global customer activity reports?
"Zero!" declares Keith Bear, the business manager of IBM's
recently formed, U.K.-based risk management consulting organization. Bear
likens the typical enterprise-wide system to a heavily patched collection
of "stovepipes" leading into and out of many different data-crunching
furnaces. The real issue for banks is not to create the Ultimate System,
he says, but rather to identify at which points it is critical to reduce
the complexity of this fragile, touchy framework.
Integration is not exclusively an "enterprise-wide" phenomenon.
Instead, systems are often integrated vertically within trading groups,
meaning that, for example, the New York equity desk has straight through
front-to-back-office processing-and so does the London swaps desk, the Sydney
futures desk and so on. Likewise, firms introduce "horizontal"
integration for risk management purposes-considering, for example, all their
U.S. interest rate exposures to commercial loan, swap, deposit and mortgage
transactions within a single reporting engine. Such a task, even if it does
not quite represent a Unified Field Theory of risk reporting, can be daunting.
Systems integration demands an incremental approach in which short-term
success builds the foundation for a long-term strategy. According to Rahul
Merchant, senior vice president for New York-based Sanwa Financial Products
and a veteran of several successful systems integration projects, "Many
integration projects run aground when companies try to fly before they have
learned to walk. It is important to have concrete, attainable systems objectives
and to avoid something that is too ambitious in terms of its functional
or technological scope."
Such a Big Bang approach, he stresses, can be incredibly wasteful. By
the time such a "blob-like" systems initiative gets off the ground-in
many cases as long as two years after the initial implementation plan has
been drafted-it may be shut down by cost-conscious CEOs frustrated by the
project's high bill and lack of demonstrable business benefits. A successful
integration project focuses on business needs first, remains attainable
in scope, and proceeds according to a step-by-step plan that should yield
some real results within weeks.
A number of business drivers spur financial institutions toward some
form of systems integration. These include regulations mandating enterprise-wide
risk management reporting on all derivative financial transactions; the
desire to assemble a firm-wide data warehouse to permit better strategic
analysis; the need to reduce the costs associated with shaky legacy systems,
manual back-office processes and funds transfer errors; credit rating agency
standards for systems and controls; and, recently, an interest in looking
at the interrelationships between credit and market risk.
Regulation is a particularly powerful motivation for developing enterprise-wide
risk measurement techniques, according to Robert Glauber, who heads up Derivatives
Associates and serves on the Boston Federal Reserve Board. Regulators require
the tying together of a large number of deal-capture and position-tracking
systems. "The Federal Reserve requires banks to either develop their
own model for assessing their market-risk-based capital requirements,"
he says. "Otherwise they must multiply a generic, Fed-endorsed value-at-risk
number by three. This is a powerful incentive for dealers to develop more
accurate risk measurement techniques and the systems to put those techniques
into production." The Office of the Comptroller of the Currency has
equally powerful motivators for its constituents.
In Europe it is much the same story. Regulation is fueling demand for
risk management systems, as most banks and dealers are acting to comply
with BIS standards. Dan Rissin, a principal at Toronto-based TrueRisk, notes
that there is an increasing demand for "intraday" risk management.
Rissin observes that this is just as much a procedural issue as it is a
systems matter. "Let's say your trader inputs his deal first, becomes
occupied for a few minutes, and then puts on a hedge transaction,"
he says. "Your 'intraday' risk report might only reflect that first
transaction, possibly encouraging managers elsewhere to inadvertently 'double
hedge.' There must be procedures in place to avoid this scenario and ensure
'intraday' risk management is productive."
Many firms are interested in developing integrated risk management systems
as insurance against "franchise" risk. "A widely publicized
loss can have great repercussions in terms of a bank's reputation,"
says C*ATS' David Gilbert. "Also, Moody's and S&P are now looking
at banks' systems and controls when determining their credit rating. Triple-A
dealing subsidiaries in particular must meet stringent systems requirements."
C*ATS, which offers the CARMA risk management system, has been working with
both credit rating agencies to ensure its systems comply with these standards.
Credit Lyonnais is one C*ATS client that has gone through-and survived-this
process.
Gilbert believes companies are taking a more systematic approach to evaluating
credit risk in particular because "much more money is lost to credit
risk problems than to trading losses." Credit risk needs to be weighed
across a variety of business areas, by applying quantitative modeling techniques
and defining ways in which credit and market risk relate. "For many
banks, trading operations make up only a tiny sliver of their total risk,"
says Alan Tobey, regional marketing manager at California-based RMT,
an asset/liability risk management consulting firm. As a result, institutions
are considering how quantitative techniques such as VAR can be applied to
business and credit risks and, what's more, how to develop enterprise-wide
asset/liability management (ALM) systems. Both of these tasks transcend
the limits of most over-the-counter financial ALM and risk-management packages.
While methods like VAR work well for traders and others with short-term
investment horizons, most institutions also seek longer-term analyses.
Asset classes like deposits, customer credit and mortgages that can be irrationally
exercised do not lend themselves well to VAR. Rather than basing its
analyses on historical data, RMT builds systems that zero in on forward-looking
simulations. These simulations also incorporate broader economic modeling
to consider how each business area might perform under a variety of future
market conditions generated through Monte Carlo analysis. This approach
does not assume any particular set of historical market conditions, nor that
market factors are necessarily normally distributed.
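The contrast between historical VAR and forward-looking simulation can be sketched in a few lines of code. The example below is a minimal illustration, not RMT's actual methodology: the position size, distribution and parameters are all hypothetical, and a normal distribution is used purely for brevity-the article notes that normality need not be assumed in practice.

```python
import random

def monte_carlo_var(value, mu, sigma, horizon_days, confidence=0.99,
                    n_paths=10_000, seed=42):
    """Forward-looking VAR sketch: simulate P&L paths over the horizon
    and read off the loss at the chosen confidence level."""
    rng = random.Random(seed)
    pnl = []
    for _ in range(n_paths):
        # One simulated market path: daily returns drawn from an
        # assumed distribution (normal here only for brevity).
        ret = sum(rng.gauss(mu, sigma) for _ in range(horizon_days))
        pnl.append(value * ret)
    pnl.sort()
    # VAR is the loss exceeded in only (1 - confidence) of the paths.
    cutoff = int((1 - confidence) * n_paths)
    return -pnl[cutoff]

# Hypothetical $10 million position, 1% daily volatility, one-day horizon.
var_1d = monte_carlo_var(value=10_000_000, mu=0.0, sigma=0.01,
                         horizon_days=1)
print(round(var_1d))  # roughly 2.3 percent of position value at 99%
```

The same loop could instead draw returns from historical data, a fatter-tailed distribution, or a broader economic scenario generator; the point of the forward-looking approach is precisely that the distribution is a modeling choice rather than a fixed historical record.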
Many banks are also using data warehouse techniques and integrated systems
to consider "profits-at-risk." Profits-at-risk over the long term
might include the risk of falling short of "maximum profit potential";
how various market conditions might affect whether or not various businesses
achieve peak performance; and how new business initiatives are likely to
generate risk as well as returns.
Finally, the march of integration is being pushed by a number of institutions
that simply want to automate the flow of deals from the front office to
the back office more effectively and reduce the associated costs of manual
processing and systems errors. According to IBM's Keith Bear, "Given
the 'stovepipe' configuration of most trading systems, it is no surprise
that in many instances you'll still see middle office and back office staff
inputting handwritten tickets and running manual confirms. Integrating systems
on the desk and department level-particularly for large firms-is still a major challenge."
Certainly there are many institutions in which the back-to-front linkages
are lousy. Risk management and back office systems can, over time, affect
firms' bottom lines by tens of millions-or even hundreds of millions-of
dollars. Today's markets are so cutthroat, says a former operations manager
for one of the world's top 50 banks, that there is no longer an unwritten
code that counterparties return a misdirected payment. "Previously,
you could count on getting your wrong payment returned plus interest as
soon as the error was discovered," he observes. "Today, because
spreads are so tight, you are lucky to see the principal again-and forget
about interest. At our bank, thanks to the receipt and investment of wrong
payments, our department managed to make a considerable profit-so much that
we escaped a downsizing initiative in the rest of the bank."
Setting priorities requires an exact identification of which systems
are in production across the trading area, department or enterprise; how
information flows between these systems; and what parts of this configuration
need to be replaced first. David Osborne, managing director of Micro Modeling's
New York application-development practice, explains that such an inventory
is crucial to a successful project: "You have to know where you are
before you can figure out where to go." Some important questions to
ask at this point include: Which software, databases and hardware platforms
are in use throughout the firm? Does corporate culture emphasize centralized
or decentralized operations-and how does the integration plan fit into the
organizational structure? How much does it cost to maintain each piece of
the existing infrastructure? Where do existing technologies most noticeably
under-perform on the local, regional and/or global levels? How fast and
accurate are the current reports? What is the bare minimum set of information
that is needed to achieve the most pressing reporting objectives?
Many projects come in over budget or long past their initial deadlines
because of a phenomenon known as "scope creep." Like the "mission
creep" that keeps military leaders up at night, scope creep happens
when, after the project has already been defined, managers begin adding
new development objectives. This might include a new report here, another
application screen there, but often a project can collapse under the combined
weight of these small modifications.
Jos Stoop, president of New York-based Intuitive Products International,
a systems integration and consulting firm, stresses that the emphasis of
any systems integration or development project should be to get a usable
system to market as soon as possible rather than delivering a perfect system
the first time out of the gate. He says, "Once the product is in the
hands of the users, and it is in production every day, you can begin fine-tuning
it. When you are in development, you can avoid scope creep by keeping a wish
list, where everyone's additional requests are kept, rated by priority and
then periodically reviewed."
Software: Build vs. Buy
Integration can occur by happenstance as users acquire components and
sub-systems. When companies opt to build their own, they typically aggregate
a system from a selection of object-oriented components, which often
represents a step up in integration. Alternatively, they will bring in
experienced developers who come up with a basic software "template"
and build on a solid base of preexisting code and expertise. On the object-oriented
side of the fence, one of the leaders in the financial arena is Infinity,
known for its comprehensive library of C++ objects which encompass most
front- and middle-office functions. These objects can be mixed and matched
with a company's proprietary objects. Panorama is another instance of a
risk management product that has been moving toward a more "component-based"
architecture, which allows users to embed objects within Panorama and thus
establish links to proprietary systems.
There are also many companies which, while they do not offer a software
package per se, specialize in developing integrated systems from their own
libraries of components. Risk Management Technologies, for example, develops
custom, enterprise-wide asset/liability systems for its clients. New York-based
Micro Modeling specializes in developing financial analytic applications.
This firm has recently announced a joint venture, called MeasuRisk, with
Associates, a Boston-based risk-measuring consulting firm. MeasuRisk
will focus on helping clients to identify their risk management objectives
and build appropriate systems. Triple Point, a Rowayton, Conn. energy systems
company, builds systems based on its TEMPEST system, which it offers as
a starting point rather than as a turnkey product. Other players include
Oracle, IBM and Cambridge Associates.
As might be expected, the term "systems integration" is open
to different definitions. A great deal of custom development goes under
the alias of integration, according to Atul Jain, president of Tech Hackers,
which offers a best-selling spreadsheet-linked analytics package as well
as consulting and development services. "Building a custom system is
not a politically correct option these days even if it is, in many cases,
necessary. Systems development often conjures up images of large consulting
bills and systems effectively held hostage by developers. Therefore, many
managers might refer to a custom development project as 'integration' to
avoid getting this response from their colleagues," says Jain.
Some integration efforts adopt a "best of breed" approach,
i.e. selecting the very best packaged software in each critical niche-such
as interest rate derivative trading, VAR, general ledger and so on-and then
building the interfaces between these systems. Stephen Leegood of U.K.-based
Logica, stresses, however, that firms should not underestimate the cost
and effort involved in building these links. He says, "With best of
breed, you do save on development costs, and you can bring new systems on
line sooner, because you do not have to wait for large amounts of programming.
But these interfaces can become quite complex, particularly when you begin
to have 10, 20, 30 or more systems, and mapping data from System A so that
it can be accessed by System B on a timely basis is a critical item and
its difficulties should not be underestimated."
Some providers hope to lure customers that have been put off by the complexity
of multiple interfaces by offering different front-, middle- and back-office
systems that have already been engineered for compatibility. One prime example
of this trend is Reuters which, according to Gabriel Bousbib, offers a range
of compatible systems which, when taken together, can handle transaction
processing and reporting from "cradle to grave." Reuters offerings
start with Dealer 2000, a well-known electronic foreign exchange dealing
system, and Instinet, an equity trading system for institutional investors,
for the front office. Reuters offers the popular Kondor Plus, which covers
a wide range of equity, foreign exchange and money market instruments and
is appropriate for high-volume operations that do not emphasize exotic deals.
In the middle office, Reuters' recently acquired Sailfish, a data mapping
and risk management engine, will soon blend seamlessly into Kondor Plus.
Likewise, Reuters also maintains a risk management consulting group that
will help clients fill in any gaps that are not already covered by Reuters
rapidly growing range of products.
Philadelphia-based FNX is also attempting to capitalize on the appeal
of 'pre-integrated' systems. Farid Naib, FNX's CEO, notes that FNX offers
modular front-, middle- and back- office systems for energy markets, precious
metals, base metals, options, bonds, fixed-income instruments, money markets
and foreign exchange. Naib argues that a lot of the systems integration
costs many institutions incur would not be necessary if banks and dealers
purchased integrated modules that comprise the single system he believes
should be the industry's ultimate goal. He says, "It is absolutely
untrue that you cannot have straight processing across many trading desks
without resorting to a massive integration project. We have clients bringing
in a single system right now." FNX clients include Sakura Bank, among others.
For smaller institutions, the concept of buying just one system to handle
all trading, risk management and back office needs has a compelling logic.
FNX, for example, can be scaled down to meet these firms' needs. Financial
Software Systems, also in Philadelphia, offers integrated front-to-back-office
software. New Jersey-based INSSINC offers both front and middle foreign
exchange and interest rate software, which is easily blended with the firm's
successful FUTRAK back office. Principia, a new arrival in the financial
software market, now offers an easy-to-install integrated front-through-back-office
package aimed specifically at the regional bank, fund manager and small-dealer market.
The third critical leg to the integration stool is data management tools.
Assembling all relevant data in a single physical or "virtual"
location is often more critical than choosing the 'perfect' software. Logica's
Leegood emphasizes that interdisciplinary financial systems adhere strictly
to the garbage-in-garbage-out rule, and that many banks err by over-emphasizing
the creation of models at the expense of the data-gathering process. He
says, "There are lots of clever people out there coming up with ever-more
advanced models. However, if you become committed to a model before you
actually know what data is available, you set yourself up for a failed,
or at least a very expensive, undertaking."
Most banks by necessity rely upon literally hundreds of data feeds to
run their enterprise-wide reporting systems. The chance that one of these
feeds will be inaccurate or incompatible with the central system because
of, say, a local systems upgrade is high. However, some feeds are more important
than others. Let's say you miss the exposure data on your Indonesian office
for one day. If this tends to be a negligible number, then the harm done
to your overall portfolio assessment is small. If the North American or
U.K. feed fails to come in, however, then chances are the problem will be
caught and quickly remedied. Says Gilbert, "This is a very delicate
framework. It is not 'industrial strength' by any stretch of the imagination."
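The triage logic behind this example-escalate a missing feed only when the office behind it is material to the firm-wide picture-can be sketched as follows. All office names, exposure figures and thresholds are invented for illustration.

```python
# Hypothetical average daily exposure per office, in USD millions.
TYPICAL_EXPOSURE = {
    "New York": 4_200,
    "London": 3_100,
    "Jakarta": 12,
}
MATERIALITY_THRESHOLD = 0.01  # escalate feeds worth >1% of the total

def feeds_to_escalate(received_feeds):
    """Return the missing feeds whose typical exposure is material."""
    total = sum(TYPICAL_EXPOSURE.values())
    missing = set(TYPICAL_EXPOSURE) - set(received_feeds)
    return sorted(office for office in missing
                  if TYPICAL_EXPOSURE[office] / total > MATERIALITY_THRESHOLD)

# Jakarta's feed is missing but immaterial; London's failure escalates.
print(feeds_to_escalate(["New York"]))            # ['London']
print(feeds_to_escalate(["New York", "London"]))  # []
```

A real implementation would base materiality on current rather than typical exposure, but the principle is the same: not every broken pipe in the delicate framework deserves the same alarm.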
According to IBM's Bear, the disparate inflows of data need not be a
major headache. "Practically speaking," he says, "it is extremely
difficult to create uniform data across a company that might include offices
in more than 100 countries and extensive legacy systems attached to recent
acquisitions. Many times, the better decision is to focus on a piece of
this puzzle rather than the whole thing, particularly if your operations
are large and sprawling." Josie Palazzolo, product manager for Reuters'
Sailfish Risk Manager, explains that in some cases a systems quagmire can
be bypassed if a bank provides consistent instructions to all its local
offices, instructing them to send, say, the results of a particular market
risk scenario at the end of each day. She says, "In this case, you
will get broad rather than deep information, but for some institutions this
is an acceptable trade-off. For others, it's not."
Palazzolo emphasizes that when creating enterprise-wide systems, many
banks err on the side of asking for too much information. She says, "Sometimes
you will see people at the home office who literally want an exact duplicate
of all the information residing in all their local systems. At some point,
to get a workable system in place, there usually has to be some kind of
consolidation, or at least a 'weeding out' of data which is not relevant
to a global management viewpoint."
Micro Modeling's Osborne adds: "It is important for companies to
define the minimum amount of data that will be necessary to do the analysis
they want. While replicating and propagating every last journal entry might
address a certain psychological need for security, it does nothing for creating
fast, efficient processes and timely reports."
There are, to be sure, many technologies that can help companies "translate"
various types of data, create industrial strength data warehouses, speed
users' access to data and build localized "datamarts" to store
niche-specific data that may not be required in a company-wide warehouse.
One technology associated with integration is middleware, which includes
two distinct types of tools: data mapping tools and resource management tools.
The former include Reuters' TIBCO and Neon, which basically translate and
reformat data from many diverse sources into a common language. However,
sometimes these generic translation packages are not a perfect fit. Says
Infinity's Roger Lang: "Middleware is plumbing. It is not a seamless, elegant
solution." Often, for enterprise-wide risk management systems, companies
are forced to choose between an extremely complex mapping procedure and
consolidation involving a great many assumptions. Consider a bank that wants
to run risk analysis on its entire portfolio of assets and liabilities.
Positions may be consolidated at the country or regional level, which produces
a number, but the bank can't drill down through the consolidated data to
pinpoint sources of risk. It may also run into problems attempting to "translate"
unconventional instruments into terms that a mainstream risk model can process.
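The translation work that data mapping middleware performs can be illustrated with a toy mapper: deal records from two hypothetical source systems, each with its own field names and conventions, are reformatted into one common schema. This is a sketch of the general idea, not of TIBCO's or Neon's actual products; every field name below is invented.

```python
def map_equity_desk(rec):
    # Source system A stores quantities in thousands, dates as YYYYMMDD.
    return {"instrument": rec["sym"],
            "notional": rec["qty"] * 1_000,
            "trade_date": rec["dt"]}

def map_swaps_desk(rec):
    # Source system B uses different field names, full notionals,
    # and ISO-style dates that must be reformatted.
    return {"instrument": rec["underlying"],
            "notional": rec["notional"],
            "trade_date": rec["trade_date"].replace("-", "")}

MAPPERS = {"equity": map_equity_desk, "swaps": map_swaps_desk}

def to_common(source, rec):
    """Translate a source-system record into the common schema."""
    return MAPPERS[source](rec)

print(to_common("equity", {"sym": "IBM", "qty": 500, "dt": "19970315"}))
print(to_common("swaps", {"underlying": "USD-LIBOR",
                          "notional": 25_000_000,
                          "trade_date": "1997-03-15"}))
```

With two systems this looks trivial; the complexity Leegood warns about comes from maintaining dozens of such mappers as each local system upgrades on its own schedule.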
Resource management tools, sometimes called object request brokers (ORBs),
allow programs to "share" the resources of various machines and
for objects to access data residing in different locations and on different
platforms. The result is a "virtual data warehouse." Both Neon
and Isis (recently acquired by Stratus) produce ORBs. But their widespread
use in the finance world still seems a long way off. Says Lang, "Most
middleware today lacks the security and reliability protocols to handle mission-critical applications."
In the next several years both forms of middleware may actually be incorporated
into "industrial strength" database products. Informix, through
its recent acquisition of Illustra, is releasing a universal server that
allows users to embed objects within the database. Likewise, the next version
of Oracle-Oracle 8, due out later in 1997-will include object-embedding,
messaging and resource management.
So far most financial institutions are only experimenting with the virtual
warehouse model. Instead, they are developing industrial strength warehouses
using either Sybase, Oracle, Informix or Microsoft's SQL Server. And they
are using standard replication technology to bring new transactions into
the warehouse at near-real-time speeds.
These industrial strength databases, as discussed earlier, typically
do not include extremely detailed information because data on its way up
the food chain gets consolidated. To give departments and other interest
groups access to highly flexible, detailed data analysis, many firms are
also beginning to create "data marts"-or smaller databases-using
something known as multidimensional OLAP (on-line analytical processing).
A multidimensional OLAP database is routinely populated
according to a predefined data model. Its strength lies in users' ability
to modify data models rapidly and to "slice and dice" the data
contained within the OLAP database across a wide variety of variables-or
dimensions-such as time, location and so on.
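The "slice and dice" idea can be shown with a miniature fact table: exposures keyed by several dimensions are aggregated along whichever dimension a user selects. The data and dimension names below are hypothetical, and a real OLAP engine precomputes such aggregates rather than scanning rows.

```python
from collections import defaultdict

FACTS = [
    # (date, location, product, exposure in USD millions)
    ("1997-01", "New York", "swaps",   120),
    ("1997-01", "London",   "swaps",    80),
    ("1997-01", "New York", "futures",  45),
    ("1997-02", "London",   "futures",  60),
]
DIMENSIONS = {"date": 0, "location": 1, "product": 2}

def slice_by(dimension):
    """Aggregate total exposure along one chosen dimension."""
    idx = DIMENSIONS[dimension]
    totals = defaultdict(int)
    for row in FACTS:
        totals[row[idx]] += row[3]
    return dict(totals)

print(slice_by("location"))  # {'New York': 165, 'London': 140}
print(slice_by("product"))   # {'swaps': 200, 'futures': 105}
```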
Yet another critical tool in the integration kit is consulting assistance
of various depth and intensity. According to Robert Cullen, Oracle Consulting's
senior industry director for financial services, there are so many interlocking
relationships among integrators and vendors who provide integration consulting,
that these firms are not so much engaged in competition as "co-opetition."
Says Cullen, "Systems integration requires an incredible breadth and
depth of expertise, and therefore it is necessary for 'competitors' to work
together." During the past two years, he explains, Oracle's financial
industry unit has succeeded in forming relationships with 180 of the 250
top financial software vendors they initially targeted. One of these partners
is TIBCO, owned by Reuters, which works with Oracle's technology and competes
with Oracle's consulting group for risk management integration assignments-a
perfect example of "co-opetition."
Barry Gane, program manager for Hewlett Packard's risk management solutions
group, concurs. "We work with multiple partners, including C*ATS and
Sybase, to offer bundled solutions including everything from hardware through
integration consulting. By bundling we can reduce the time and cost associated
with discovering on your own which solutions may mesh well together."
Integration will only succeed where there is an effective project
team. According to the head of operations for one large U.K.-based international
bank, it is absolutely necessary to reassign people from their regular duties
to work on the system full time. He says, "You cannot expect people
to do two jobs. If you ask them to add, say, testing a new system to their
already busy agenda, the system is going to be the very last thing on their
list of things to do."
He also emphasizes that a successful project team should include both
information technology experts and business people who are most concerned
with what the project is actually going to accomplish for them. He says,
"If you leave a project to the IT department alone, you can run into
various issues where their goals are not necessarily congruent with those
of the firm at large. For example, if you are an IT person concerned about
being downsized, you might advocate an investment in C++ based objects because
you want to learn C++ and become more marketable yourself. Conversely, business
people, when left to their own devices, might tend to ignore the technology
component and say, 'Well, we're used to the old system. Let's make a couple
of small changes and go home.'"
A systems integration project sweeping across different business units
also requires a project champion to keep it on track. Says Leegood, "Without
support from top management, these projects can become mired in internal
politics." Gilbert adds that a high level "project champion"
can help the firm make the cultural changes necessary to accommodate integration.
He says, "There's a basic resistance to giving your own information
to someone else. In some cases, such a request for data could be interpreted
as lack of trust, as in, 'You must be doing something wrong. We need your
data to identify what that is.'"
Who Needs Requests For Proposals?
Can software vendors and users get through this process
with their budgets and sanity intact?
By Karen Spinner
Now that the New Year has begun, software buyers and vendors alike are
bracing themselves for this year's batch of RFPs, the systems-selection
tool that everyone loves to hate. RFP is short for request for proposal,
and it is through these detailed documents that most buyers assemble their
"short list" of vendors to seriously consider.
"The RFP process has become much more extensive over the past few
years," says Daljit Saund, a senior vice president at SunGard Capital
Markets. "Today, you will see the same RFP sent to 50, or even a hundred,
vendors. And the level of detail asked for in these RFPs has grown tremendously."
The clear and present danger is that the RFP process uses up valuable
time and human resources in order to select the latest and greatest technology.
Yet by the time all these RFPs are collected and collated, vendors are contacted
and quizzed, and demos are scheduled and attended, the original RFPs could
be out of date. "Sometimes, the software selection process can drag
on for so long that the technological landscape changes considerably from
when the RFPs are first released to when a system is actually purchased,"
says Stephen Leegood, who heads up the financial services group at Logica,
a London-based systems integration specialist. "Rather than overemphasizing
the search for the most cutting edge technology, financial institutions
should try to find a solid, industry-accepted package that, optimistically,
will be in place for about five years." He cautions against going overboard
on the RFP process: "RFPs should be limited to the community of vendors
that have a reasonable chance of selection."
One alternative to sending out, say, a hundred RFPs is the RFI, or request
for information. The RFI is a miniaturized version of the RFP, and can help
potential buyers come up with a short list of 20 or so vendors to undergo
more intensive scrutiny.
In many cases, executives in the market for financial software hire consultants
to handle the RFP process. In some instances, these consultants, who have
a vested interest in obtaining lots of vendor information and in prolonging
the RFP process, may cast a wider net than necessary, including vendors
with virtually no chance of landing the deal.
Saund notes that a new trend in software selection is to make a short
list of vendors and ask them to conduct week-long seminars
at their own expense, bringing software to the consultant's office and training
them in how to use and install the product. These seminars can be a very
useful selection tool, but only if potential software buyers actually attend
and benefit from the free training.
Roger Lang, president of Infinity, also warns that consultants who routinely
organize RFPs sometimes have a hidden agenda. "If your consultant has
a relationship with a vendor such that he or she receives a kickback every
time a client buys that software, you are probably not getting an objective
evaluation," he warns. Lang suggests investigating your consultants'
relationship with vendors before asking them to handle your RFP.
Despite these drawbacks, no one has found an acceptable alternative to
the RFP, and it is likely to remain a fixture for years to come. "While
RFPs are certainly arduous, what potential customers are asking about on
their RFPs gives us valuable insight into where our market is heading and
what our customers want," says Lang. "We routinely collate and
analyze the questions on the RFPs we receive. In a way, it is free market
research." Saund adds: "If you are a vendor and you want to go
after the big projects, you have to participate in the RFP process. It's
just part of doing business."
The Hardware Integration Challenge
Most banks have an eclectic collection of mainframes, AS/400s, VAXes,
UNIX servers, NT machines and PCs of varying vintage. Rather than replacing
old hardware en masse, the preferred strategy today is to make limited,
or strategic, hardware purchases and then use sophisticated networking technology
to create linkages between different hardware environments. This practice
is known as a "multi-tiered" architecture.
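The routing idea behind a multi-tiered architecture can be sketched in a few lines. This is a hypothetical illustration only, not any bank's actual design: a thin client never queries the legacy platforms directly, but goes through a middle tier that knows which back end owns which data. All the store and function names here are invented.

```python
# Minimal sketch of a multi-tiered lookup: a thin presentation tier calls a
# middle tier, which routes each request to the legacy platform that owns
# that data. The dictionaries stand in for, say, a mainframe feed and an
# AS/400 table; every name here is a hypothetical illustration.

LEGACY_STORES = {
    "positions": {"IBM": 1200, "GE": -300},          # mainframe stand-in
    "static":    {"IBM": "equity", "GE": "equity"},  # AS/400 stand-in
}

def middle_tier_lookup(domain: str, key: str):
    """Route a request to whichever back-end platform owns that domain."""
    store = LEGACY_STORES.get(domain)
    if store is None:
        raise KeyError(f"no back end registered for domain {domain!r}")
    return store.get(key)

# The client sees one uniform interface regardless of the platform behind it:
print(middle_tier_lookup("positions", "IBM"))
```

The point of the tiering is that old platforms can be retired one at a time by repointing the middle tier, without touching any client.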
A multi-tiered architecture does more than save on up-front hardware
costs; gradually phasing out older platforms is also typically less expensive
over time. The alternative of imposing a single firm-wide hardware configuration
has clear intellectual and economic attractions, notably volume purchase discounts.
This approach can backfire, however. Consider the case of a top-20 global
investment bank. In 1990, the firm decided to join the vanguard of the
client/server revolution by moving to a 100-percent distributed UNIX
computing environment. Five years later, it had accumulated more than
900 separate UNIX servers and an annual IT budget topping $1 billion. Several
months ago, the firm bought a next-generation mainframe product
to handle its back office and general ledger processing.
New lines of specialty hardware which are designed to fit into a multi-tiered
environment also promise to lead to better systems integration. For example,
Sun Microsystems' "Java Station"-a pared down terminal capable
of connecting to multiple servers via a Web interface and running Java applications-has
been deployed by at least one bank to replace PCs in its back office, where
users do not really need the whole suite of PC-based applications. On a
different scale, IBM has come out with a "modular mainframe,"
or "enterprise server," known as System/390, which combines flexible
networking with scalable processing power.
Networking data residing on different hardware platforms is a crucial integration
issue. Oracle, which is aggressively building up a systems development and
consulting business in the finance industry, now offers a tool known as
the Network Computing Architecture specifically designed to facilitate communications.
Tales from the Trenches: One Bank Gets It Right
Just a year and a half ago, Sanwa Financial Products, which has trading
offices in New York, Hong Kong and London, had a hodgepodge of different
systems operating independently. According to Rahul Merchant, the firm's
senior vice president of technology, "We had about two or three systems
for every product we traded. That meant separate systems for caps, floors
and swaps; still more systems for bonds and repos; and another group of
systems for futures and options." Each office maintained its own data
warehouse. Users in the front, middle and back office were all frustrated
by aging technologies. Multiple back offices fought with each other over
valuations, and middle-office staff members spent a great deal of time reconciling
reports produced by multiple systems. Sanwa's Japanese parent company wanted
more comprehensive risk reports on its growing financial products division.
It was at this point that Merchant began to develop a systems integration
plan designed to take a phased approach to addressing all these issues.
"The first phase," he says, "was to consolidate all our transaction
data within a single data model, and to start working from a single source
of data as opposed to three different databases, each with its own conventions."
Merchant explains that by addressing the data issue first, he intended
to eliminate the endless reconciliation and collating among the various
offices that frustrated staff and made it difficult to produce timely, operation-wide
risk and position reports.
Merchant's team selected Sybase as the standard database for Sanwa Financial
Products, and brought in Infinity's Montage data-mapping model in order
to translate information residing in the three local offices into a single
language. According to Merchant, this data modeling was perhaps the most
important part of the project. "It's garbage in, garbage out. Without
the painstaking process of mapping and cleaning the data, we would not have
been able to produce anything worthwhile." Now, all new transactions
are entered into the warehouse residing in New York, and transactions are
available to users worldwide within seven seconds of their inception; at
any given time, this database may contain more than 35,000 active trades.
The three offices are connected through a wide-area network, and the central
warehouse is populated through Sybase's replication technology.
"So far, we have had no problems with performance, and we felt this was
a better decision than to go with one of the latest 'virtual warehouse'
technologies which, although they are the latest and greatest, are less
mature," says Merchant.
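The kind of data-mapping exercise Merchant describes can be sketched as a simple field translation into one canonical schema. This is a hedged illustration, not Infinity's actual Montage model; the office names mirror the article, but every field name is invented for the example.

```python
# Sketch of mapping trade records from three offices, each with its own
# field conventions, into a single firm-wide data model before loading
# them into the central warehouse. Field names are hypothetical.

CANONICAL_FIELDS = ("trade_id", "product", "notional", "currency")

# Per-office translations from local field names to the canonical ones.
FIELD_MAPS = {
    "new_york":  {"id": "trade_id", "prod": "product",
                  "amt": "notional", "ccy": "currency"},
    "london":    {"ref": "trade_id", "instrument": "product",
                  "nominal": "notional", "cur": "currency"},
    "hong_kong": {"tid": "trade_id", "type": "product",
                  "size": "notional", "ccy": "currency"},
}

def to_canonical(office: str, record: dict) -> dict:
    """Translate one office's record into the firm-wide data model."""
    fmap = FIELD_MAPS[office]
    out = {fmap[k]: v for k, v in record.items() if k in fmap}
    missing = [f for f in CANONICAL_FIELDS if f not in out]
    if missing:
        # "Garbage in, garbage out": reject incomplete records up front.
        raise ValueError(f"{office} record missing {missing}")
    return out

london_trade = {"ref": "L-001", "instrument": "swap",
                "nominal": 5_000_000, "cur": "USD"}
print(to_canonical("london", london_trade))
```

Once every office writes through a translation like this, downstream reports can assume one set of conventions, which is what makes the single warehouse useful.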
The data mapping exercise also included taking a look at how transactions
were valued across the front, middle and back office, and setting firm-wide
standards for P&L and risk-reporting purposes, ensuring consistency
across all three local trading operations.
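The valuation-standards point can be made concrete with a toy example: if every office calls the same pricing function, their P&L figures agree by construction. The discounting formula below is a deliberately simple stand-in, not Sanwa's actual model.

```python
# Sketch, under assumed conventions, of enforcing one firm-wide valuation:
# front, middle and back office all call the same pricing function, so
# P&L and risk reports agree by construction. The math is a toy example.

def present_value(notional: float, rate: float, years: float) -> float:
    """Single shared valuation used by every office (simple discounting)."""
    return notional / (1.0 + rate) ** years

# Two offices valuing the same trade now get the same number:
front_office_pv = present_value(1_000_000, 0.05, 2)
back_office_pv = present_value(1_000_000, 0.05, 2)
assert front_office_pv == back_office_pv
```

The disputes over valuations that the article describes arise precisely when each office maintains its own version of this function.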
After looking at a short list of five or six vendors and going through
the request-for-proposals process, Merchant's team decided to go with a
library of Infinity's C++ objects to use as a starting point to build a
consistent front office system using the same models and information codes.
"We liked Infinity because of its open architecture and its maturity,"
he says. The group then proceeded to build proprietary objects and blend them
with the Infinity objects "in order to create a custom system that would be
acceptable to our traders."
The first phase of the project-including the new front-office software
and bringing existing middle and back office systems on line using the single
data warehouse-went live about three to four weeks before press time, and,
according to Merchant, the users are pleased with results. He says, "Now,
all our trades are entered into a single system, and we have one standard
source of data. The traders do not miss the old systems, perhaps because
they were such a 'kluge.'" He also notes that the data warehouse piece
of the project made it possible for the firm to consolidate three back offices
into a single back office in New York, a considerable savings that
has already paid for the new technologies.
Now Merchant is gearing up for the second phase of the integration process,
which will replace the old systems in the middle and back offices. This
phase will include building middle-office analytics that will allow risk
managers to "slice and dice" data, incorporate Sanwa's proprietary
value-at-risk model and other risk measurement tools, and allow transaction
information to be mapped automatically into the firm's off-the-shelf general
ledger package. Firm-wide credit analysis is also part of this package.
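The "slice and dice" analytics planned for phase two amount to aggregating warehouse data along whatever dimension a risk manager asks for. The sketch below is a hypothetical illustration of that idea; the trade records and exposure numbers are invented.

```python
# Hedged sketch of "slice and dice": sum position risk from a single
# warehouse along any requested dimension. Records are invented examples.
from collections import defaultdict

TRADES = [
    {"desk": "swaps",   "office": "new_york", "exposure": 120.0},
    {"desk": "swaps",   "office": "london",   "exposure": -40.0},
    {"desk": "options", "office": "london",   "exposure": 75.0},
]

def slice_by(dimension: str, trades=TRADES) -> dict:
    """Sum exposure grouped by any attribute of the trade records."""
    totals = defaultdict(float)
    for trade in trades:
        totals[trade[dimension]] += trade["exposure"]
    return dict(totals)

print(slice_by("desk"))    # grouped by trading desk
print(slice_by("office"))  # same data, grouped by office
```

Because everything feeds from one warehouse, the same records can be cut by desk, office, counterparty or any other attribute without reconciliation.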
This phase of the project will also allow for intra-day risk management,
which will help traders fine-tune their strategies throughout the day. Says
Merchant, "Our portfolio includes a lot of highly structured deals.
Right now, real-time analysis is not necessary, or even humanly possible to
do accurately, on our portfolio of instruments."
According to Merchant, there are a number of reasons why his project
is succeeding where so many others have failed. First, by tackling the data
management issue before considering software, his team has been able to
create a solid base upon which to build integrated systems using consistent
methodologies. Second, his team worked closely with users and focused on
getting new modules into users' hands at approximately two-week intervals,
thus keeping the users involved in the process. He says, "We kept the
scope of the project in check, and we had very, very aggressive deadlines."
And, third, he notes that project team members have retained control of
the project. He says, "Although we have brought in some consultants,
these are consultants who have been recommended by the team members themselves.
The internal IT staff and the users own the project, and its success is
a matter of pride."