Category Archives: Tips

Formula debugger… More addictive than heroin

Hmm, probably not the best title in the world, but it is the truth: for any Murex support person, the formula debugger has become very hard to live without.

What is Formula debugger?

I’ll have to assume that you are from another planet. The formula debugger is THE tool to use whether you are actually working (so useful) or just pretending (it looks very serious, and on the Lazy Friday Hangover I am Coming to Work but I am Still Asleep (LFHICWISA) scale, it scores a 9, only beaten by the Excel screensaver).

As the name could have led you to believe, the formula debugger is used to debug formulas. When you turn it on, whenever you use functions that call the pretrade, you will see a pop-up with all the formulas executed as well as their resolution.

How does it work?

Turning it on is very simple: from your end-user session, do Help-Monitor-Formula debugger. You will get a pop-up stating: Formula debugging is now on. (Follow the same steps to turn it off.)

Now, simply price a trade from ePad, or do whatever you want to do, and a pop-up will come up showing you the different formulas executed.

Even better (it’s like a double rainbow), it shows you the resolution of the formulas. For example (the first line is my formula, the second one is the resolution detail, always starting with //):

IF fLFHICWISA>9. THEN sResults:=’Better than Excel Screensaver’

// IF 8.5>9. statement result is FALSE

You also get the content of the variables when they are assigned:


// legGetString(pointer,”Counterpart”) returned “HSBC”

And of course you also get the name of the formula being executed, so you can check that the formula you have been working on is actually being used by the pricing.

And the whole text is exportable to Notepad, and you can also search through it.

Any more tips?

You are getting quite demanding, but I actually have 2 more things:

One is the obvious Clear button, to make sure that your debugging output contains only what you need to focus on (between 2 tests, just clear the window).

And the other one is to couple it with the action button wf:XML refresh in the strategies/notepads (right-click on the folder, Properties, Actions; you can then add a button wf:XML refresh, usually the last one in the list).

This button lets you reload all the changes you made in the formulas in a CONFIG session without having to exit eTradepad and re-enter.

Time for you to get cracking on new strategies and formulas!

Market data – technically what can be done

Following Gogond’s question about market data import/export, today I’ll give an overview of all the interfaces (well, all might be overselling it as there are quite a few; I could have written most of them, but that doesn’t sound as good as all).

The first one, and probably the most used:

I. The Universal Interface

…. your hands. Sorry about that! But indeed manual action on market data is actually quite useful and probably your first stop. All market data can be copied/pasted from/to Excel, and everyone knows how to use it.

If one complains that not all market data is in the same place or using the same format as their spreadsheet, you can then use the market data viewer. You need to define a loader (market data loader to be exact) which tells Murex which query you want to run: FX vol, rate curves, etc. You can then look at your market data from a viewer point of view. Very useful for looking at all rate curves at once, for example. Even better, you can still copy/paste, making it a very nifty tool.

I would also add that in pricing (eTradepad) or simulation, you can bring up viewers or pop-ups of market data and modify them live to perform what-if scenarios.

So don’t discount the manual approach, as it is very simple to import/export market data manually, and people often want a visual check at the same time. Two birds with one stone!


II. MDCS

Or Market Data Contribution Service. MDCS is basically a memory service (often called the cache) which holds XML representations of market data or fixing data. MDCS is used mostly for realtime; you can load XMLs into it (usually called MDML for market data) using either RTBS (the realtime interface to Reuters, Bloomberg or any other provider), your own XMLs (Murex provides a series of scripts to query or upload into MDCS) or even the Murex DB (you can export Murex market data into MDCS).

You can define multiple folders in MDCS, so you can end up with different rates for the same instrument across these different folders. These folders are called users, but I find the name a bit misleading, as their main purpose is to let you segregate your cache data.

The cache is also actively queried by sessions (using activity feeders) to provide live rates. The advantage of this method is that sessions are notified when new rates are available, and you don’t need to save this data in the DB; you can simply let the activity feeders push or retrieve that data to/from your session.

While one can also export data from the DB and then download it from the cache, that would not be the most efficient way to go about it, as there is a much better solution:


III. MDRS

Or Market Data Repository Service. Interestingly, MDRS is a pure interface; it does not hold any data in memory like MDCS does, so the label repository is a bit misleading.

MDRS lets you query or update the Murex DB directly for market data in a very similar manner to what you would do when querying or updating the cache.

Note that for both MDCS and MDRS you can query stored data (like market rates, correlations, etc.) but also calculated data such as ZC rates or total vol (if you upload spreads). For calculated data it is limited, though: you will not get any interpolation, for instance.

So if you didn’t find what you were looking for, hopefully the last one will be of some help:

IV. Miscellaneous

And then you have a fair bit of old, not-so-used-anymore interfaces, some of which are being retired or are clearly in end-of-life mode, meaning that Murex will no longer provide support for them.

Among these you find the good old ASCII import via reports; this was the historical way to import (and for a long time the only automatic one).

You will also find the menu item Market data import/export. It has the advantage of being graphical compared to MDRS, but the data you can get from it is limited and, unless advised otherwise by Murex people, I would strongly recommend skipping it and sticking to MDCS/MDRS.

And one nifty command I really like is the export of daily discount factors. You will find it at curve level and, to my knowledge, it is only available from the GUI (you can script it with a macro though). It basically dumps into a file all the daily discount factors for a given curve. In a way it is more reliable than the previous method, which consisted of pricing a trade with daily cash flows, making it discount on the desired curve and then copying all the discount factors from the evaluation flows.

I will stop there for my miscellaneous bag, as I feel I have covered the main ones. You have of course processing scripts to copy data from one date (or desk) to another, but that’s no longer really an interface. Of course, if you are outraged that I forgot your favorite interface or anything else, feel free to comment and I’ll adjust!

Repos in Murex

Thinking about this one as a follow-up to some questions I answered, I thought it could be of interest to some of you.

Repos and Buy/Sell backs used to be quite difficult to work with, and buggy, until Murex effectively reworked them into what was called the “new module” (agreed, it sounds a bit like the promised land!).

The repos (I use that word loosely to cover sec lending, BSB, repos, etc.; too much typing otherwise) are first defined as a template (similar to a generator): you define some basic rules, knowing that whatever you don’t provide will come from either the bond you choose or the currency you choose. For instance, if you don’t input a start delay, the bond settlement delay applies instead (usually 3OD, so you will need to define the start delay, as most repos start 1OD or 0 day).

Once you’re done with the template, you can then start pricing and inputting them. It is simple to use and has everything you need: sec for fee, haircuts, etc. It pretty much looks like a swap: physical on one leg (bond, security) and cash on the other leg.

Open repos? Not a worry, just enter open as the maturity and that’s it.

I know it sounds simple, and it actually is: very simple to use, pretty much a walk in the park.

And finally, repos need bulk events, and bulk events Murex has: global reneg (rate change), either as absolute values or as variations (e.g. 25bp for all your repos), termination, nominal increases/decreases, etc. And it is all integrated in the Murex workflows.

Again, I know it sounds simple, and it is (I guess they could put that in a brochure!)

But the best thing about repos is a much more recent transaction type: the basket repo.

This transaction is incredibly flexible and opens the door to so many opportunities; well, I just hope your repo desk is big enough! The basket repo, as the term suggests, means that on the first leg you have a basket instead of a single line. Best of all, the basket can be made of bonds, equities, commodities or cash: completely flexible. Input is done through a viewer; price, yield, haircut and quantity can be input for each line, and you can check the valuation against the other leg of the repo. You can even copy/paste directly from Excel.

The specific event called repo basket substitution basically lets you remodel the basket the way you want: you can change quantities, remove lines or add new lines. Of course, you can also use the other repo events.

It is a great tool for collateral management or for triparty repos where you can import the actual position at the end of the day.

In summary, before, repos in Murex meant you liked making your life a challenge. Nowadays, they are very simple and very easy to use, with most if not all the functions your repo desk will need.

Hmm, this article definitely sounds like advertising, which in a way it is: I really like that module!

PL reports – solutions

Shazmd asked a question through the forum about how to make a report giving out PL as of 3 dates.

There are a couple of solutions to this question, and which one you use is totally up to you 🙂

First thing to keep in mind: when people mention reports, it should not always equal datamart. For instance, simulation is a live report showing the risk, which can also fetch data as of a couple of days in the past.

A trade query is also, in a way, a report with no calculated data. So it is important to understand what the end user is after, so you can give them the most appropriate solution.

1. User definable P&L notepad

This first solution is probably the simplest and is perfect if the request is just a one-off that would not repeat very frequently. The user definable notepad is a dynamic table based on PL, and there is a direct menu to access it. Search for it (it is under middle office, P&L related procedures, or just type notepad in the search) and open it; Murex will ask you to choose a filter. Space bar on the default one and you will see a filter screen. You have 3 dates you can choose from (Inception means not calculated) and you can pick a portfolio or any other criteria. The data for the 3 dates will be computed when you validate the filter.

You then get a matrix with all the deals selected and the data you need. Just change the view to what you need using Tools-View and choose the columns that make sense. You will see some fields available 3 times; they hold the data computed as of the 3 different dates. Just switch from description to field name: the field ending in 0 corresponds to the data for the first date selected, the one ending in 1 to the 2nd date, and the one ending in 2 to the 3rd date.

You can then copy/paste the results (ctrl-shift-C to copy all lines) or export them from the file menu, and work on the data in Excel.

2. Datamart

Datamart for 3 PL dates is the 2nd solution. It is easy to re-use, and the format will be up to you.

First of all, you need to choose which engine to use for computing the data (note that it uses the same code underneath, but there can be differences in usage). The first one is a dynamic table of type PL (TRNRP_PL). You need to define one and choose the fields you want. Remember that you can test a dynamic table to check that the content is ok. The dynamic table should produce only 1 set of PL results, not 3.

Once you’re happy with the dynamic table, you need to wrap it so that it produces data into the datamart. To do so (at high speed): you need to define a feeder and then inject that feeder into a batch of feeders. The feeder takes care of creating a table in the datamart and attaching data to it (the data coming from your dynamic table). When you define your batch of feeders, make sure that you can have at least 1 set per date so that the data in the datamart is historized.

Just run the batch of feeders as of the 3 dates you need to compute (you might need to amend the global filter within the batch of feeders to choose the date). Once your datamart is populated with the data (you can query it and check via SQL, for example), you need to build an extraction to retrieve that data. The extraction is a piece of SQL code, which you can make fancy by letting the user choose the 3 dates to query on (or you can hardcode them).

Finally, you need to format the output. A csv file format requires no work, but if the user wants to look at the results on screen, you’ll need a viewer with your fields nicely formatted. I usually don’t recommend formatting the Excel file via the viewer, as dates sometimes come back not as wanted, or field formatting might require extra work.

And that’s it: you’ve got a datamart with your 3 dates. The first time you do it, you will get it wrong and it will take some time, but once you start getting the hang of it, it becomes much, much smoother.

3. Simulation

And yes, there is another option: simulation. Simulation is a live report showing live position, cash flow, PL, etc. In simulation, build a viewer showing your PL with as much breakdown as you would like.

You can then define a dynamic table based on that simulation view, and create a feeder and batch of feeders on top. Then compute the PL data as of the 2 past dates you need; simulation will always compute PL as of today.

Then, back in your simulation view, you can add more of the same PL fields, right-click on them, go to output settings and select stored data. You can then choose to retrieve the data from your datamart directly in simulation and display either the datamart number or the variation against the current data. You can also choose to retrieve based on a shifter (like the day before, 1 month before, the first day of the month, etc.) and you can do it as many times as you like (as long as you have the data, of course!).

In a nutshell, that’s how you can solve the problem of a report showing PL as of 3 dates. I am sure you will have tons more questions; feel free to experiment, check your environment for already existing examples, and ask here if you’re stuck!

Psychology 101 or remembering that we help people

Working on Murex, we find ourselves in a support role. People come to us with problems and requests, and we need to help/guide/handhold/advise/explain/assist/… others. As such, a little bit of psychology never hurts.

I sometimes see support people with the finesse of a sledgehammer hitting a glass table at full speed: just blunt, not really interested in the whole problem. In a support role, at the end of the day, we are the ones who will be evaluated based on the feedback of the people we support. If you have the nasty habit of making others feel dumb and of trying to show that you know more than they do, you’re either on the wrong career path or your current career will come to a close quickly. Get it into your head once and for all that your role is to help others achieve what they need.

For those of you with the social skills of an oyster (and I’m not talking about the Caribbean oyster, well known for its social gatherings, its after-church discussions and its theater skills), a couple of pointers:

– Don’t hide behind email/messaging systems/portals; a face-to-face or a phone call builds a more personal relationship. It also helps you understand whether something else is at stake.

– Try to feel the mood of the people and adapt. Whether the trader just hit the jackpot or is over-exposed, the news that you won’t work on his/her issues for the next 6 months is probably not going to be received the same way.

– Take an interest in what they are doing. This is essential. Even if your work is different, understanding and caring about what they do will not only help you build stronger bonds but also make you a much better support person.

This common sense might seem obvious to most of you, but I have quite a few times seen people who must believe they are of a different breed from the rest of us and that we should accept what they say as gospel. Support is a 2-way street; building a strong relationship is essential.

Performance, point of failures

Today’s post is really about performance and, more importantly, how to ensure that you do not reach a point where your Murex implementation fails. I’m going to focus mostly on the application server, so you don’t need to read it all if you’re not interested!

You can have multiple causes for a slow (or badly behaving) Murex installation. I will share my experience, and hopefully you’ll know what to look for.


CPU

CPU is usually the one that will give you the fewest problems, especially if you followed the Murex recommendations. You might run into CPU-related problems if you multiply environments running on the same machine and run heavy CPU-bound jobs: reports or simulations, for instance. But in production, the only times I have seen performance degradation due to CPU were down to aging hardware and increased bank activity.


Memory

Memory on Unix is quite cheap and it is rare to run out of it. The problem is that as soon as Unix/Linux/DOS (I’m kidding about the last one) runs out of memory, it starts swapping (using hard disk space to temporarily store memory content), which slows the processes down quite a fair bit. But Murex processes do not consume much memory server-wise, so unless you start spawning lots and lots of processes with heavy memory usage (large viewers, simulations), this is another area that should be fine.

Disk space

This one is most often the culprit. Services starting to fail, not starting, etc.: very likely the server has run out of disk space. It could be log files that are too detailed, or database traces turned on for all sessions, causing a large number of files to be generated. Core files, if not sent to another segment, will also cause massive disk usage.

It is very easy to correct, and your system admin will usually monitor that one closely. But I can’t count how many times we ran out of disk space due to log files/traces taking too much space.


Network

Normally this one is not so much an issue in terms of hardware. Just accept that if you’re far from the server, your ping time will increase, leading to a not-so-smooth experience, especially in screens such as Mxmlexchange.

But firewalls between clients and servers (or between servers) can lead to slowness: issues which, unfortunately, are not so easy to track. I have had my fair share of these and I am starting to get a nervous reaction whenever someone mentions a firewall around each server!


Alright, I hope this post will have helped some of you. Generally speaking, performance issues are rare, especially when you keep the 4 points above in check.

Realtime market data – Is there such a thing as too much?

When one thinks about realtime market data, one thinks of spot rates, security prices, swap points, market quotes, etc. But there are a couple of things to be aware of when you work with them.

Let’s start with swap points. Swap points effectively define the forward FX rate, such that swap points = FX forward – FX spot. In Murex, you can use swap points directly in the curve (and then there is no issue), but you can also use swap points to deduce a zero coupon rate and then deduce a deposit rate.

For instance, consider: FX forward = FX spot + swap points, and FX forward = spot * Df(USD)/Df(XXX) (XXX being another currency; note that the ratio of discount factors might be inverted depending on the quotation of the pair). As such, you end up with:

FX spot + swap points = spot * Df(USD)/Df(XXX), and therefore Df(XXX) = FX spot * Df(USD) / (FX spot + swap points).
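To make the algebra concrete, here is a quick numerical sanity check in Python (made-up numbers, and assuming the quotation direction used above; as noted, the ratio may be inverted for other pairs):

```python
# Sanity check of the swap-points relationship, with made-up numbers
fx_spot = 1.1000         # assumed USD/XXX quotation
swap_points = 0.0050     # forward - spot
df_usd = 0.98            # USD discount factor to the forward date

# Df(XXX) deduced from spot, swap points and the USD discount factor
df_xxx = fx_spot * df_usd / (fx_spot + swap_points)

# Round trip: recompute the swap points from the discount factors
recomputed = fx_spot * df_usd / df_xxx - fx_spot
print(round(recomputed, 6))  # 0.005, matching the input swap points
```

If the spot used in the Df(XXX) computation is stale, the round trip no longer returns the imported swap points, which is exactly the refresh-cycle issue described below.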

So if you have the spot, the USD rates and the swap points, you can deduce the currency discount factor. The problem is that you are feeding all the parameters at the same time, and depending on your refresh cycle you might end up computing the swap points using an old FX spot (or an old USD rate). And because you are not storing the swap points, the swap points value recomputed by Murex is then different from the one you imported initially. Most of the time it is fine, as all the market data is refreshed every x seconds, so even if you don’t hit the exact initial swap point value, you will be very close.

The problem is actually more vicious when you start combining market quote feeds from multiple sources. I have seen it before, and people don’t always realize that it can cause errors:

To feed a market quote into a curve, you can use the market rates sheet, the rate curve directly, or the swap points. While I would advise against updating the rate curve directly through realtime (rtcu node), you need to decide for each instrument/pillar whether you will be using swap points OR the market rates sheet. You cannot have both; otherwise they would supersede each other and results could sometimes become inconsistent, as Murex reads the latest updated one while you expect a specific source.

I remember in the past we had a tough time trying to understand why 2 refreshes of market data would cause the swap points to be different even though the cache was not changing.

Forex volatility

Last week we looked into rates volatility; this time let’s dig into FX volatility.

Forex volatility – ATM volatility

This one is actually quite simple: you have a volatility for each pillar and each currency pair. As with other volatilities, you can link them together with spreads and factors. The pillar set tends to be common across multiple currency pairs and is defined under the volatility groups.

But to make this paragraph more interesting: you can define a pair as not liquid and define a split currency. For instance, if you’re heavy into TRY/ZAR (yep, pretty extreme), you’ll struggle to come up with a volatility curve for it. Instead, you can define volatility for USD/TRY and USD/ZAR. By also providing correlation (it can have different values depending on pillars), Murex can then compute the TRY/ZAR volatility (cross effect). While quite handy, providing correlation is also quite a difficult task.
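A toy illustration of the cross effect (my own function, not Murex’s internal formula; the sign of the covariance term depends on the quotation direction of each pair):

```python
import math

def cross_vol(vol_usd_a, vol_usd_b, corr):
    """Implied vol of the cross A/B deduced from USD/A and USD/B vols
    and their correlation. Assumes the cross is the ratio of the two
    USD pairs, so the covariance term is subtracted."""
    var = vol_usd_a ** 2 + vol_usd_b ** 2 - 2.0 * corr * vol_usd_a * vol_usd_b
    return math.sqrt(var)

# Hypothetical numbers: 20% USD/TRY vol, 18% USD/ZAR vol, correlation 0.6
print(round(cross_vol(0.20, 0.18, 0.6), 4))  # 0.1709
```

The higher the correlation, the lower the cross volatility, which is why getting the correlation input right matters so much.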

It is getting more frequent for rates too, but for FX you usually interpolate on variance rather than on volatility (vvt interpolation).
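In practice, interpolating on variance just means interpolating linearly on total variance (vol² × t) and converting back to a vol; a minimal sketch:

```python
import math

def vvt_interp(t1, vol1, t2, vol2, t):
    """Linear interpolation in total variance (vol^2 * t) between two pillars."""
    var1, var2 = vol1 ** 2 * t1, vol2 ** 2 * t2
    w = (t - t1) / (t2 - t1)
    total_var = var1 + w * (var2 - var1)
    return math.sqrt(total_var / t)

# 1M pillar at 10%, 3M pillar at 12%: interpolated 2M vol
print(round(vvt_interp(1 / 12, 0.10, 3 / 12, 0.12, 2 / 12), 4))  # 0.1153
```

Interpolating on the vol directly would give 11% here; the variance interpolation pulls the result higher because the later pillar contributes more total variance.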

Forex volatility – Smile volatility

Smile for FX volatility is usually defined on a delta ladder. Usually you have pillars at 10, 25, 75 and 90% delta: call 10, call 25, put 25 and put 10. A call with 90% delta has the same volatility as a 10% put.

But more interestingly, the smile is usually quoted in risk reversal and strangle (or fly):

RR25 = vol of the 25-delta call – vol of the 25-delta put

(so effectively that’s the volatility difference between the call and the put at a given delta level)

STR25 or FLY25 = (vol of the 25-delta call + vol of the 25-delta put)/2 – ATM vol

You can easily switch from one representation to the other within Murex, or even display the smile in 5% delta increments in case you need a better view of the volatility (Murex can also display the corresponding strikes).
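Going from the RR/fly quotes back to the call/put pillar vols is a small linear system. A sketch, assuming the usual market convention where the fly is the average of the call and put vols minus the ATM vol:

```python
def smile_from_rr_fly(atm, rr, fly):
    """Recover call/put vols at a delta pillar from ATM, risk reversal and fly,
    using rr = call - put and fly = (call + put)/2 - atm."""
    call = atm + fly + rr / 2.0
    put = atm + fly - rr / 2.0
    return call, put

# Hypothetical 25-delta quotes: ATM 10%, RR25 -1.5% (puts over calls), FLY25 0.4%
call25, put25 = smile_from_rr_fly(0.10, -0.015, 0.004)
print(round(call25, 4), round(put25, 4))  # 0.0965 0.1115
```

A negative risk reversal, as in this made-up example, simply means the market pays more vol for puts than for calls at that delta.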

Interpolation can be any of the usual: linear, spline, polynomial, etc…


I realize that I still have a fair bit to talk about in regards to FX volatilities: cut-off spreads, smile dynamics, short-date smile… So I’ll split this post in 2, with part 2 next Tuesday!

Rates volatility

Following the request from last week, let’s discuss IRO volatility this week.

While the latter can encompass many different volatilities (bond vol, future vol, etc.), I will focus on 2 today: cap/floors and swaptions.

Rates volatility – ATM volatility


Swaptions

The ATM volatility of swaptions already has 2 dimensions: option maturity and underlying (swap) maturity. It makes things a bit more complex than other volatilities when you throw the smile structure on top (more about that later).
The interesting bit about swaption volatility is that you can choose how to interpolate along the underlying maturity. You can interpolate based on time, but this might need correcting if you have an option on an amortizing swap, for instance. As such, you can also choose to interpolate based on BPV, where Murex computes the BPV of the reference swap of the vol group.


Cap/floor

While cap/floor vols are defined on a selected index (and you can link index vols), there is another thing about cap/floor vols: are they forward/forward or for the whole cap (what is called par)? One thing to understand is that a cap or a floor is a series of options rather than a single one. For instance, a cap on EURIBOR 3M means that every 3M you have an option on the EURIBOR 3M. So if you look at a 2Y cap on EURIBOR 3M, you effectively have 8 options.

So when you choose vol nature forward/forward, Murex expects you to provide a caplet volatility for each pillar of the vol curve. Nature cap means that you provide a single volatility applied to every caplet. In the case of our 2Y EURIBOR 3M cap, this means the 8 options share the same volatility. Murex can also calibrate the forward/forward volatilities.

Calibrating the fwd/fwd volatilities means that the 3M fwd/fwd vol is equal to the 3M par vol (as you have only 1 caplet in that case). Then for the 6M pillar, you know the total price of the cap, as the 6M par vol applied to both caplets drives the price. But you have also already found the 0M/3M caplet volatility. You can then backsolve the second caplet volatility so that the sum of the premiums of each caplet using fwd/fwd vols equals the premium using the par vol.

This mechanism is very important, as it explains why in the pricer you see 2 volatilities: one on the main pricer screen (the par vol) and one (well, multiple) in the flows screen: the fwd/fwd volatilities.
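The bootstrap described above can be sketched in a few lines. This is a simplified illustration of my own (flat forward, quarterly caplets, Black pricing, no discounting subtleties), not the actual Murex calibration:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_caplet(fwd, strike, vol, expiry, tau=0.25, df=1.0):
    """Black-76 caplet premium per unit notional."""
    if expiry <= 0.0 or vol <= 0.0:
        return df * tau * max(fwd - strike, 0.0)
    std = vol * math.sqrt(expiry)
    d1 = (math.log(fwd / strike) + 0.5 * std * std) / std
    d2 = d1 - std
    return df * tau * (fwd * norm_cdf(d1) - strike * norm_cdf(d2))

def bootstrap_fwd_fwd_vols(par_vols, expiries, fwd=0.03, strike=0.03):
    """par_vols[i] is the par (flat) vol of the cap ending after caplet i."""
    caplet_vols, known_premium = [], 0.0
    for i, (par_vol, expiry) in enumerate(zip(par_vols, expiries)):
        # total cap premium using the par vol on every caplet up to i
        target = sum(black_caplet(fwd, strike, par_vol, e) for e in expiries[: i + 1])
        residual = target - known_premium  # what the new caplet must be worth
        lo, hi = 1e-6, 5.0                 # backsolve its vol by bisection
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if black_caplet(fwd, strike, mid, expiry) < residual:
                lo = mid
            else:
                hi = mid
        caplet_vols.append(0.5 * (lo + hi))
        known_premium += black_caplet(fwd, strike, caplet_vols[-1], expiry)
    return caplet_vols

# Rising par vols imply fwd/fwd vols rising even faster
vols = bootstrap_fwd_fwd_vols([0.20, 0.22, 0.23], [0.25, 0.50, 0.75])
print([round(v, 4) for v in vols])  # first entry is 0.2: with 1 caplet, par = fwd/fwd
```

A real calibration interpolates between pillars and handles stubs, but the backsolving logic is the same.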

Rates volatility – Volatility nature

Volatility nature has long been lognormal for rates products. Unfortunately, the models consuming lognormal volatility have one major flaw: they do not work with negative rates. And given the current state of rates, this is quite a problem.

So 2 solutions emerged:

– Shifted lognormal: the idea behind this is to shift all your rates by a certain amount when using the model (ideally you ensure that the lower strikes of your smile are far from the 0% boundary). So, for example, with a 10% shift you work as if your strike at 0% were a strike at 10%. The advantage of this method is that the work to move away from lognormal is light.

– Normal volatility: this is actually quite different, and there is a fair bit of work to adapt models to accept normal volatilities. Normal volatility is a volatility that is not correlated to the level of interest rates. Lognormal volatility (and vega by extension) actually changes quite significantly if rates move by a large amount in one direction; normal volatility is very stable. It can also be applied to negative rates without any problem. While it is more work than shifted lognormal, one main advantage for traders is that a hedge based on normal vega should prove very stable.
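To get a feel for the two natures: at the money, the normal vol is roughly the lognormal vol times the forward rate (and, for shifted lognormal, times the shifted forward). A toy helper of mine, just for orders of magnitude, not a proper model conversion:

```python
def normal_vol_from_lognormal(sigma_ln, forward, shift=0.0):
    """Rough ATM approximation: sigma_N ~ sigma_LN * (F + shift).
    The shift keeps F + shift positive even for negative forwards."""
    return sigma_ln * (forward + shift)

# 30% lognormal vol on a 2% forward is about 60bp of normal vol
print(round(normal_vol_from_lognormal(0.30, 0.02), 4))  # 0.006

# A -0.1% forward breaks plain lognormal; a 2% shift still gives a usable number
print(round(normal_vol_from_lognormal(0.30, -0.001, shift=0.02), 4))  # 0.0057
```

This also hints at why normal vega hedges are stable: the normal number barely moves when the forward drifts, while the equivalent lognormal vol has to move with it.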

Rates volatility – Smile


Swaptions

You define a smile curve for each underlying swap maturity (I often see a fair bit of linking between maturities). The interpolation is interesting for swaptions, as you can fall between 4 points rather than 2.


Cap/floor

The smile is defined for each index, pretty standard. You can (should?) use linking for less traded indices.

Rates volatility – Smile dynamics

Alright, this is the interesting bit: smile dynamics.

The smile dynamics is how your smile moves when rates change:

– Lognormal

Lognormal dynamics is basically no dynamics at all: your curve does not change when the rates shift.

– Normal

Normal smile dynamics means that the corresponding lognormal volatilities do change when the rates change (the conversion from normal to lognormal uses the actual rates). So even if your smile is moneyness-based, your lognormal volatility can be different for an at-the-money option.


– SABR

SABR is a parametric volatility calibration model. While SABR deserves a post all to itself, in a nutshell you can assume that the SABR parameters stay constant when rates change, and you can re-calibrate the volatility based on the new rates.


More questions, something I need to dig further into? Let me know!

Murex logs and nothing to do with lognormal

When you work with Murex, it is likely that you will encounter issues (if you don’t, either you haven’t looked very closely or you should definitely buy some lottery tickets).

Anyway, it is very likely that you will soon end up in the logs directory of the app directory, and then what? You have the launcher logs, process pids, many folders, etc. Knowing which logs to open or check is quite daunting, but fortunately Unix/Linux has so many tools to do the searching for you that you’re in for a treat.

When browsing the logs, you have 2 best friends: a more experienced consultant (if he has a PAC background, you definitely need him as a friend) and… Google! Google is my go-to whenever I have a question about how to find something based on any criteria.


Last night I was running our EOD script and wanted to check the answer files which had terminated successfully. A quick Google search (“how to find files containing string unix”) and I got:

find / -type f -exec grep -l "text-to-find-here" {} \;

Transformed it into:

find . -name '*answer*' -exec grep -l "Successfully" {} \;

Well, it worked, but then I realized that I actually only cared about the ones which had failed, and it was getting too painful to sift through myself (I was also getting tired, which does not help). Another quick Google search and:

find . -name '*answer*' -exec grep -H -c "Successfully" {} \; | grep ':0$'

And it gave me the list of logs which had failed, making it much easier to investigate them.

So that was my end-of-day example. But you could also find the files changed in the last 5 minutes, narrowing down a lot the number of items to browse through:

find . -cmin -5

And then you can do another search for OutOfMemory or similar, if the filename/filepath is not enough to tell you which logs you should check.

And I would recommend having a quick go through the logs yourself. If you really can’t find anything, of course you can call a friend, as there is no 50-50 or ask-the-audience kind of joker here. My problem with calling a friend is that it always gets more tempting as a quick way to solve a problem, and you don’t build up your skills as much.

What’s more frustrating than having a problem, calling Murex, and then checking the very first file, <servicenamelogfile>.log, only to see: user XXXX not authorized to log in? (Or any similar error which could have been fixed in 2 minutes once you had the cause.)

What about you, dear reader? Any other commands/tips you would recommend?