
15 years in this industry – a Murex retrospective

Sometimes I look back at how things have changed since I started in this industry. It’s always interesting, as it can hint at where things will head next. So join me on this Murex retrospective.

I started working for Murex in 2000: The Matrix, the internet bubble bursting, Y2K, Nirvana (hmm, no, they were already gone). I was quite impressed with my mentor at the time, as she knew everything about Murex and finance. Murex was about to release its first Java-based version (the first of the mxg2000 generation).

Banks were not expecting so much from vendors: they had the financial knowledge in-house, and all they expected was software that could do what they wanted to do.

Over the years, this has changed quite a lot. Banks still have their domains of expertise, but they will often use vendors as a source of information on what is being done in the market and what they should be doing. Let’s drill into some specifics:

Rate curves

Back in 2000, people were satisfied with one curve per currency, without much focus on basis curves and basis risk.
Then came the USD basis curves, and later the single currency basis curves with a major focus on OIS curves. Now add funding curves to the mix: from 1-2 curves per currency, you now have 5-6 curves for each currency.

Volatility

Volatility also changed over the years. It started quite simply with an ATM curve and a smile, but complexity soon kicked in. Normal volatility came first, as it offered a more stable measure of vega without noise from prices. The move accelerated when interest rates turned negative and one had to use either shifted lognormal or normal models.
Then volatility models became more popular and a must-have: SABR, for instance, where traders started to measure their risk against the model parameters.

The changes also occurred in Murex itself, trying to cater to a demand for ever more flexibility: workflows, viewers, eTradepad, system architecture.

What’s next?

Can we expect the same trend to keep going: ever more flexible software and an ever more complex industry?

I’m not sure. Regarding the industry, the move towards clearing for derivatives might push banks to investigate more complex products that fall outside clearing. But would there be a market for them, especially if more and more derivatives are cleared and benefit from a better price?
Or will market data and pricing models keep increasing in complexity in order to offer a price ever closer to where the market actually is?

Regarding the software, it could always offer more flexibility, but with flexibility comes complexity. Software complexity has a cost in terms of upgrades, configuration and maintenance, so past a certain point the benefits are simply not worth the cost. Have we reached that point yet? I’m not sure at all.

And you? What do you think about the future?
And how was your learning experience: daunting? Challenging? Or a walk in the park?

Sybase vs Oracle

This is the question one often hears once the decision has been made to go with Murex: Sybase vs Oracle. Which one is better? Which one do you recommend? Etc.

To first repeat what has been said numerous times: Murex works very well with either, and if you need to use one or the other due to bank policy or any other reason, you can’t go wrong. Murex will deliver results and everything will be A-OK.

But there are differences, and both have pros and cons. Historically, Murex only supported Sybase, and many customers feel they will get better support from Murex if they go with Sybase. In reality, Oracle is quite well known at Murex nowadays and there is no difference in the quality of support. The PAC team especially is knowledgeable on both fronts and can provide configuration recommendations for both systems.

Even performance is not really where the difference lies (many people would disagree here and cite it as a reason to go for one or the other). I feel the difference is mostly in the actual usage of each: they each work slightly differently. Not from the Murex front end, of course: for the end user, Sybase or Oracle makes no difference, the system looks the same and functions work the same way. It is really when you start writing SQL that you see the differences.

I graduated from SQL school with Sybase as a teacher, so I do know more about Sybase than Oracle.
Sybase-wise, identifiers are attributed automatically (the good old M_IDENTITY). When writing SQL, there is no need to take care of that field: it handles itself. With Oracle, it’s a different story: one needs to call the sequence (TABLENAME_DBFS) to retrieve the latest number in order to update it. A bit more painful.
SQL clients are for some reason always more of a pain with Oracle, especially if you mix direct commands and stored procedures. I used SQL Developer, and not seeing the results of my stored procedures is a pain. I also use SQuirreL a lot. The latter works great for everything EXCEPT the initial connection to Oracle servers: when the server is remote, the initial load of tables took a couple of minutes (it started at 15 minutes and went down to 2-3 minutes once the link to the other offices got upgraded). Oracle was also a pain with the username/password for each schema. I’m not too sure why it was like that, but while in Sybase one can easily switch from one DB to another with the same user, the way Oracle is configured to work with Murex forces you to log out and back in (or to log in multiple times, once per schema).
But I had my fair share of issues with Sybase too. DB corruption happened quite a few times (I suspect it happens with Oracle as well, but I did not experience it firsthand). The worst DB corruption came from a customer dump which contained a trigger (triggers are not your friends). That trigger was attached to a user id which we did not have when we loaded the dump, so we had to reset the user id of that trigger before deleting it. Updating that user id caused a DB corruption which could only be solved by stopping and restarting the server. There were other cases, but nothing reproducing as easily as that one.
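The identity vs sequence difference above can be sketched in a few lines. This is purely illustrative: sqlite stands in for both databases, the names (trades, m_identity, trades_dbfs) only mimic the Murex conventions, and a one-row counter table plays the role of an Oracle sequence.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Sybase-style: an identity column (think M_IDENTITY) fills itself in.
cur.execute("CREATE TABLE trades (m_identity INTEGER PRIMARY KEY AUTOINCREMENT, label TEXT)")
cur.execute("INSERT INTO trades (label) VALUES ('deal 1')")  # no id supplied

# Oracle-style: the caller fetches the next value from a sequence
# (TABLENAME_DBFS in Murex naming) and supplies it explicitly.
# sqlite has no sequences, so a one-row counter table stands in for one.
cur.execute("CREATE TABLE trades_dbfs (next_id INTEGER)")
cur.execute("INSERT INTO trades_dbfs VALUES (2)")
next_id = cur.execute("SELECT next_id FROM trades_dbfs").fetchone()[0]
cur.execute("INSERT INTO trades (m_identity, label) VALUES (?, ?)", (next_id, "deal 2"))
cur.execute("UPDATE trades_dbfs SET next_id = next_id + 1")  # keep the counter current

rows = cur.execute("SELECT m_identity, label FROM trades ORDER BY m_identity").fetchall()
```

Both paths end up with the same rows; the difference is purely who is responsible for producing the identifier.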

I’d be interested to hear from Oracle experts about all the good sides of Oracle, as from my point of view I usually found Sybase easier to work with, and I often wasted a few hours trying to adapt a stored procedure written for Sybase to work with Oracle. Usually the PAC team were the ones able to set me straight and get the procedure up and running.

Murex performance – the chicken and egg story

Murex performance is often in the spotlight: how quickly can Murex do XXX or create YYY (replace XXX and YYY with your choice of tasks)? The problem is that the list of requirements varies from one customer to the next and results in very different timings.

So, to take the main question out of the way first (if you’re the sort who prefers short answers): can you get good performance out of Murex? Absolutely!

How you’re going to achieve it depends on a few things (which is what makes the question “how long does it take to do something” impossible to answer):

  • Hardware is the first one to come to mind. With great hardware comes great performance. Well, not quite: you also need to have it tuned right. But yes, it is a major factor.
  • Requirements. This one tends to be overlooked: “I want to get real theta for my whole portfolio over the next 10 days, along with a spot shock, and at any time rewrite the spot levels. And it needs to be fast!” (You get similar questions with trade input, reports, etc.) Of course, if you ask for time-consuming tasks (or put in many consistency checks), you will slow down the processes.
  • Maintenance. If all works fine on day 1 but not 10 days later, clearly some maintenance was not done properly.
  • Software. I put this one last, as the software is very rarely the issue. Very rarely (it feels good to repeat it).

For most of these issues, the PAC team is the go-to team. They can size the hardware you need based on your system usage, advise you on maintenance procedures and debug anything that runs too slow.

In general, if you believe that a process is taking too long given the configuration (inserting a deal takes 5 minutes, a report is still running after 1h, etc.), you need to do the following.
If it is an isolated occurrence, it could well be a lock, either at DB level or at application level. For locks at DB level (rare, but it happens), check with your DBAs, and also check that no heavy process is currently running. For locks at application level, Murex has you covered with ipmonit. Log in to ipmonit from the monit tool and you can access a lock report showing all the locks put in by the system (for example, when someone is editing a trade, it is locked to avoid two modifications at the same time). Check the documentation for ipmonit, as the screenshots are very helpful when navigating the screens.

If it happens all the time, then it is unlikely to be a lock and you need to generate performance traces. The first ones are generated with the /TIMER slash command. This slash command generates mxtiming files in your log directory (if required, you can put the slash command in the launchers for services). The mxtiming file shows the time spent on CPU and the time spent waiting for the DB. If the time spent on the DB is too high, indexes could be missing on some tables, so you need to run DB traces (shameless link to my older post for the how-to). These DB traces can be sent to Murex, who will give you the number of logical reads on each table. A number that is too high likely indicates an unindexed table, and indexing it should improve performance.

If the system is slow, the reason lies either in the hardware or in the configuration. Rarely is the problem due to a bug.

There are also cases where Murex develops a new feature to speed up a process that is known to be slow due to the sheer amount of computing/data crunching it requires. Parallelization and pre-crunching are the two big methods for doing so. But this applies once you start to have volume: inserting a single deal should always be fast!

Comments, experiences are welcome!

Murex database – Hack your problems away!

Alright, today let’s crack open this black box that is the Murex database! While all of you know that Murex doesn’t publish its database organization, sometimes there is no choice but to go directly where the data is.

My rule of thumb is that going directly to the database should be avoided whenever possible. Any problem caused while browsing will have impacts and cause problems in the environment. For reporting, dynamic tables and viewer reports are your friends. For filtering, the list of available fields is actually quite exhaustive. In many cases you will find all the information you need without opening a single SQL client. But sometimes, for some filters (back to the RQWHERE post!), some reporting or some DB cleaning, you’ll need to go through the database.

Working with the Murex database is the same as working with any other trading system database: back up, test in test environments, test again, back up, and it should work. The problem is that the role of some fields is not always clear, and when trying to populate rows (insertion or update) this can turn into a real problem. Murex consultants are then best suited to help you out, especially if you’re not sure your request is safe. In case of migrations, again, Murex consultants should be the ones providing you with the right scripts; only write your own when you’re absolutely confident of what you’re doing.

Now, from a Murex consultant’s point of view, it is not always easy either to determine which fields play which roles. But the first step is to understand what the other party is trying to do: maybe SQL is not the best way forward and there is an easier solution?
Then you can check what other people have done. It is rare for a problem to have been encountered by only one customer and nobody else.

I learned SQL while working at Murex and many times it actually sped up processes tremendously:

– Inserting in bulk some data (or duplicating records)

– Cleaning up unwanted data, especially logs (or market data; much, much faster this way)

– Building my own extractions when doing reconciliation reporting

But it also happened that my scripts did not work as expected (luckily I had a backup and was working in a test environment): updates/deletes without a correct where condition. I once removed all records from the transaction header table!

If you’re working on a limited set of tables and you don’t want to call upon the DBAs to do the backup, then you can use the following tools: Help-Monitor-DPI info-Transfer from RDB to DBF. You will need an authorization code to proceed, but you can then transfer the table from the database to a file on the application server file system. The step Transfer from DBF to RDB does just the opposite. So it gives you the flexibility to back up any table you want from the database to the file system and bring it back whenever required.
Note that you can use wildcards in the name of the table you wish to transfer, and that you should type .dbf, not _DBF.

And you? What’s your relationship with SQL? Comments and experiences below if you wish!

Volatility … going the distance

Today I’ll cover a bit about volatility and the different topics relating to it. If there’s one topic you want me to dive into, let me know and I can then make a full post on it.

  1. What’s volatility?

Volatility is a measure of price variation over time. The higher the volatility, the more the price is likely to vary (either up or down). Volatility can be measured historically, by computing the standard deviation of the price returns. Or it can be implied, that is to say obtained by solving for the volatility that matches the price of a quoted option.
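As a quick illustration of the historical measure, here is a minimal sketch. The annualization with 252 trading days and the sample prices are assumptions of mine, not anything Murex-specific:

```python
import math

def historical_vol(prices, periods_per_year=252):
    """Annualized historical volatility from daily log returns."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    mean = sum(rets) / len(rets)
    # Sample standard deviation of the returns, then annualize.
    var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

prices = [100.0, 101.0, 99.5, 100.5, 102.0]
vol = historical_vol(prices)  # roughly 21% annualized for this series
```

The implied measure works the other way around: you would root-find the vol that makes a pricing formula reproduce a quoted option price.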

2.  Smile

When the volatility depends solely on time, we call it the at-the-money volatility (or ATM vol if you prefer shortcuts). But often you’ll consider that volatility is not constant across strikes: it changes as you step away from the at-the-money point. Usually volatility increases as you move away from the central point, effectively giving you a smile-shaped curve. The driver behind this is that options further from the ATM point effectively trade at a higher price than if they were priced with the ATM vol point.

3. Interpolating and shaping the smile

When working with smile curves, you need to decide on an interpolation method. You can choose between parametric and geometric. Geometric interpolations take the pillars you have provided and interpolate in between. Parametric ones require some parameters to be provided (correlation between spot and vol, shape of the risk reversal, etc.). SABR is being used more and more for IRD products, and traders are also starting to monitor the sensitivity of their positions to the SABR parameters.
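A minimal sketch of the geometric flavor, assuming simple linear interpolation between pillars (real systems use more sophisticated schemes, and the pillar strikes/vols below are invented):

```python
def interp_smile(strike, pillars):
    """Interpolate a vol linearly between (strike, vol) pillars; flat extrapolation."""
    pillars = sorted(pillars)
    if strike <= pillars[0][0]:
        return pillars[0][1]
    if strike >= pillars[-1][0]:
        return pillars[-1][1]
    for (k0, v0), (k1, v1) in zip(pillars, pillars[1:]):
        if k0 <= strike <= k1:
            w = (strike - k0) / (k1 - k0)
            return v0 + w * (v1 - v0)

# Put wing / ATM / call wing pillars, vols in %
smile = [(95.0, 11.5), (100.0, 10.0), (105.0, 11.0)]
vol = interp_smile(102.5, smile)  # halfway between ATM and the call pillar
```

A parametric method would instead fit a functional form (SABR, for instance) and read the vol off the fitted parameters rather than off the pillars directly.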

4. Dynamic smile

Dynamic smile means that the smile is not constant when the spot rate changes. So in terms of total vol, it is like defining a convexity on the volatility (the smile being your first level). Murex can produce such effects when calibrating Tremor.

5. Short date model

Very popular in the FX world. The idea is that you can attribute specific weights to specific days: Fridays are less volatile than the rest of the week, weekends have a very low weight, Mondays are a bit higher, etc., but also to specific events (Fed announcements, big number publications, etc.). The short date model really has an impact on the shorter dates (the shorter, the bigger), so while it goes up to 1 year, it is effectively most important up to 3 months.
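A toy version of the weighting idea: instead of counting each calendar day as 1, sum the day weights to get the effective time to expiry. The weights below are invented for illustration, not actual market values:

```python
def weighted_time(day_weights, days_per_year=365.0):
    """Effective year fraction: sum of per-day weights over a yearly basis."""
    return sum(day_weights) / days_per_year

# A hypothetical Mon-Sun week: Monday a bit heavy, Friday light, weekend near zero.
week = [1.1, 1.0, 1.0, 1.0, 0.8, 0.05, 0.05]
t_weighted = weighted_time(week)
t_calendar = 7 / 365.0
```

Here the weekend contributes almost nothing, so an option expiring in a week carries noticeably less effective time (and thus less total variance) than the raw calendar count suggests, which is exactly why the effect matters most on short dates.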

6. Cut-off spreads

This one is more a workaround than a real market need. Consider that an option with a Sydney 5pm cut has less time to maturity than an option with a NYC 5pm cut, so the NYC option should have a higher price. Ideally, one would increase the time value of that option (and t would then need to cater for fractions of days). As this is not currently possible in the system, the volatility is effectively increased to mimic the higher price of later cuts.
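The workaround can be read as keeping the total variance sigma² × t constant: since t cannot be extended by a fraction of a day, the vol is bumped instead. A hedged sketch, where the 15-hour Sydney-to-NYC gap and the inputs are purely illustrative:

```python
import math

def cutoff_adjusted_vol(vol, t_days, extra_hours):
    """Vol that reproduces sigma^2 * t for a cut `extra_hours` later,
    while keeping the integer-day t the system actually uses."""
    t = t_days / 365.0
    t_true = (t_days + extra_hours / 24.0) / 365.0
    return vol * math.sqrt(t_true / t)

# A 7-day option: the NYC 5pm cut is roughly 15 hours after Sydney 5pm.
vol_nyc = cutoff_adjusted_vol(10.0, 7, extra_hours=15)
```

On a 7-day option the extra 15 hours is a meaningful fraction of the life, so the bump is visible; on a 1-year option it would be negligible.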

That’s all that comes to mind right now, but I’m sure I’ve forgotten a lot. Volatility is a rich topic and I just wanted to give you a flavor of the different functions attached to it.

Comments, requests below!

Version releases, upgrades and all that jazz

Murex release policy is actually quite simple:

Major versions: 3.1.x, released once or twice a year. They contain new developments, data model changes, etc.

Minor versions: 3.1.99.x, released much more frequently. They do not contain new developments or data model changes, only bug fixes.

The idea behind this is that minor versions can be quickly deployed, or even put in a separate launcher running in production, in order to address issues. There is usually no problem having two different minor versions pointing to the same database and in use at the same time. But as always, check with your Murex consultant that it is all good.

Major versions are a bit more work and contain more novelties. Frequent upgrades to major versions are strongly recommended so that the gap between the production version and the latest Murex version remains small. This translates into a shorter turnaround between official release and production.

The other healthy habit is to keep a test environment running a build version. Of course, no one can use a build version in production, but it can be useful for testing new features and working with end users. End users only really care about the version they have in production, so the focus on time to market is quite important.
Let’s take for example an end user’s request for a new development. The first steps consist of clarifying the requirements and getting clean specs. Then one needs to liaise with Murex to get a projected timeframe; it is usually best to get a commitment as to which version the development will be available in. Testing on the build prior to the release is highly recommended in case something is not right or not as expected, as the timeframe between two major releases is actually quite long.
Once the development is all good and the official release received, the non-regression testing starts, and finally the deployment to production.

End to end, this can mean 18 months, so it is important to nail it the first time and ensure that there is no extra delay. Also, to come back to what was said above: if end users can see a test version with the development, it gives them confidence that they haven’t been forgotten and that things are moving their way.

 

Experiences, comments? Feel free to share!

Murex documentation

This very topic is the reason why I started this blog.

Asking for Murex documentation is maybe the number one request, coming from consultants, customers or even internally.

Don’t be fooled: while Murex is a great system, it is also complex. The system has loads of configuration options to cater for the numerous market conventions found across the globe. Luckily, as part of the project process, most configurations are already preset or adjusted to meet the requirements.

So let’s address this question that I’ve heard so many times: Can you send me Murex documentation? Or in shortened form: mx doc? k thx bi.

All customers and people under contract with Murex can access the Murex documentation. It is deployed similarly to the application and, if properly set up, can be accessed with F1 while logged in. Sometimes you’ll even have a shortcut on your desktop to open it.

The Murex documentation lives within a proprietary system and is strongly protected. Don’t expect to print it all out! Most of the time, the documentation is read on screen with no other support. You have categories and search functions; with a little bit of effort, you should be able to find the information you need.

While the documentation has made massive progress over the years and often encompasses a lot of information, new or little-used functionalities are sometimes undocumented… What to do then?

Ask your preferred Murex contact! They will try to source the information and deliver it to you.

If you do not work for a customer, integrator or Murex directly, then you simply cannot access any Murex documentation. Murex is very protective of its IP and it applies to documentation as well.

To summarize:

“- Hi, I need the Murex doc asap!

– Press F1 (smart ass reply, often adapted to the question above)”

 

“- Hi, I can’t find a doc about defining a 4 dimension GMP structure. Can you help?

– Let me come back to you” (and you’ll get the info, example or apologies if this is not possible)

 

Funding curves

Continuing from the previous post, I’ll cover funding curves a bit: what they mean and how they’re used.

  1. What are the funding curves?

A funding curve crystallizes the bank’s funding spreads over market rates. Usually you will find that these spreads are over the O/N rate, and the funding curve is often in USD.

That curve represents the cost for the bank to fund its positions. When it needs to be expressed in different currencies, the current approach is to take ratios of discount factors: Df(XXX funding curve)/Df(USD funding curve) = Df(XXX/USD basis curve)/Df(USD OIS curve). The assumption behind this is that the swap points are constant. So you end up with the discount factor you’re after: Df(XXX funding curve) = Df(XXX/USD basis curve) * Df(USD funding curve) / Df(USD OIS curve).
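That last formula translates directly into code; this is just the arithmetic from the post, with made-up discount factor values for a given maturity:

```python
def funding_df(df_basis_xxx_usd, df_usd_funding, df_usd_ois):
    """Df(XXX funding) = Df(XXX/USD basis) * Df(USD funding) / Df(USD OIS)."""
    return df_basis_xxx_usd * df_usd_funding / df_usd_ois

# Illustrative discount factors at one maturity (invented numbers):
df = funding_df(df_basis_xxx_usd=0.97, df_usd_funding=0.94, df_usd_ois=0.96)
```

Note how the funding spread shows up: because Df(USD funding) is below Df(USD OIS), the resulting XXX funding discount factor sits below the plain XXX/USD basis one, i.e. non-collateralized flows are discounted more heavily.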

2.  How to set them up?

In Murex, this can be set up using the automatic CSA curve settings. You’ll need to define a CSA category (such as “No collateral”) and attach the proper curve assignment to USD (assuming you’re using USD for funding).

3.  When are they used?

Trades can be collateralized or not. To be collateralized, they need to be covered by an agreement with your counterparty; in that case, any large market value on the transaction will trigger a margin call and thus limit the risk for both sides.
When trades are not collateralized, that is when you want to use the funding curves, as the bank is much more exposed and the expected returns are higher.

In terms of pricing, this also means that when trading with a non-collateralized counterparty, the prices will be worse given the higher rates.

4.  Anything else? Risk management maybe?

This is where the funding desk comes into the picture. The role of that desk is to determine the funding spreads, but also to manage the risk coming from these curves. Traders will look at their portfolios as if all trades were fully collateralized, BUT any trade with a counterparty with no agreement will effectively use the funding desk as an intermediary: the funding desk books the same trade back to back, facing the counterparty using the funding curves and facing the trading desk using normal curves (OIS or basis curves).

In a nutshell, that’s what funding curves are about. As always, questions or comments are more than welcome!

Rate curves

Today’s post is about rate curves: what they are, their purposes and where things stand at the moment (of writing this post).

  • What’s a rate curve?

The point of a rate curve is to produce discount factors (or discounting rates) in order to discount cash flows, estimate rates or capitalize a cash flow. In school, you tend to assume there is a unique flat rate for all your discounting needs, but in fact the rate is not constant over time and tends (it’s not a guarantee at all) to increase with time, as your risk increases.
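For instance, turning a rate into a discount factor and using it to discount a cash flow. Continuous compounding and the 3% rate are assumptions made purely for the example:

```python
import math

def discount_factor(zero_rate, t):
    """Continuously compounded discount factor for a zero rate and time in years."""
    return math.exp(-zero_rate * t)

# Present value of 1,000,000 received in 2 years at a 3% zero rate.
pv = 1_000_000 * discount_factor(0.03, 2.0)
```

Capitalizing a flow is just the inverse operation (divide by the discount factor instead of multiplying).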

  • What’s inside a rate curve?

The main problem is that you need to find rates which best represent where the risk-free rates are sitting, so you’ll want to use the most liquid interbank instruments. For a standard curve, you’re usually looking at deposits on the short end, rate futures in the mid range and swaps on the longer end.

  • Is there more than one curve?

Sadly, yes. When I started, people were using only one curve, which made risk management a hell of a lot easier. Then the cross-currency basis curves appeared. The expectation was that the risk is different for transactions involving more than one currency (FX deals and currency swaps, for instance). So one would build a curve with liquid cross-currency deals (today’s trend is swap points up to 1y or 18m, and currency swaps on the longer run).
But it did not end there! Afterwards, estimation curves were introduced, as forecasting a 3m rate, a 6m rate and a 1m rate from the same curve was not returning correct results. Depending on the curve, the instruments differ, but you usually find a deposit for the first pillar (as it also properly anchors the fixing rate for the day once published), futures or FRAs if available/liquid, and basis swaps (again when available/liquid).
Finally, the curve that is maybe the most popular now: the overnight curve, also called the OIS curve (for Overnight Index Swap). It is used for forecasting the overnight rate, but also for discounting many transactions (normally all collateralized ones, but I’ll detail that in another post). The OIS curve is built using a depo for the O/N pillar, and then either futures when available/liquid or OIS for the longer run.
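As a toy example of how the short end of such a curve turns quotes into discount factors, here is a minimal deposit-only bootstrap. Simple compounding and the rate levels are assumptions for illustration; real curve building also strips futures, FRAs and swaps:

```python
def bootstrap_deposits(deposits):
    """Minimal bootstrap: (time in years, simply compounded rate) -> discount factor."""
    return {t: 1.0 / (1.0 + r * t) for t, r in deposits}

# Hypothetical short-end deposit quotes: 3m, 6m, 1y
dfs = bootstrap_deposits([(0.25, 0.030), (0.5, 0.032), (1.0, 0.035)])
```

Each instrument type on the curve (futures, swaps, OIS) gets its own stripping formula, but the principle is the same: solve for the discount factor that reprices the quoted instrument exactly.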

While all these curves return a price much closer to the market, they also make risk management a lot more complex, especially as the curves are related. So non-rates traders are forced into this world and need to hedge their basis risk, where before they were just looking at a single IR risk figure per currency.

Feel free to discuss further in the comments below or on the forum; it is a vast subject and actually worth multiple posts rather than just one.