Tag Archives: Market data

Realtime market data – Is there such a thing as too much?

When one thinks about realtime market data, spot rates, security prices, swap points, market quotes, etc. come to mind. But there are a couple of things to be aware of when you work with them.

Let’s start with swap points. Swap points are effectively the forward FX rate expressed as a difference: swap points = FX forward – FX spot. In Murex, you can use swap points directly in the curve (and then there is no issue), but you can also use swap points to deduce a zero coupon rate and then a deposit rate.

For instance, consider: FX forward = FX spot + swap points and FX forward = FX spot * Df(USD)/Df(XXX) (XXX being another currency; note that the ratio of discount factors might be inverted depending on the quotation of the pair). You end up with:

FX spot + swap points = FX spot * Df(USD)/Df(XXX), and therefore Df(XXX) = FX spot * Df(USD) / (FX spot + swap points).
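To make the arithmetic concrete, here is a minimal Python sketch of that deduction. The figures are invented, the swap points are assumed to be already scaled from pips into spot units, and (as noted above) the ratio of discount factors may be inverted depending on the pair’s quotation:

```python
# Deduce the XXX discount factor from spot, swap points and the USD discount factor.
# Figures are illustrative only; swap points already converted from pips.

fx_spot = 1.1000          # USD/XXX spot
swap_points = 0.0025      # forward - spot for the pillar
df_usd = 0.995            # USD discount factor for the same pillar

fx_forward = fx_spot + swap_points
df_xxx = fx_spot * df_usd / (fx_spot + swap_points)

print(f"forward = {fx_forward:.6f}, Df(XXX) = {df_xxx:.6f}")

# If the spot used here is stale while the swap points are fresh (or vice versa),
# re-deriving the swap points from the stored Df(XXX) will not give back the value
# that was originally imported -- which is exactly the refresh issue described below.
```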

So if you have the spot, the USD rates and the swap points, you can deduce the currency discount factor. The problem is that you are feeding all parameters at the same time, and depending on your refresh cycle you might end up computing the swap points using an old FX spot (or an old USD rate). Because you are not storing the swap points, the swap points value recomputed by Murex is then different from the one you imported initially. Most of the time it is fine, as all the market data is refreshed every x seconds, so even if you don’t hit the exact initial swap point value, you will be very close.

The problem is actually more vicious when you start combining market quote feeds from multiple sources. I have seen it before and people don’t always realize that it can cause errors:

To feed a market quote into a curve, you can use the market rates sheet, the rate curve directly or the swap points. While I would advise against feeding the rate curve directly through realtime (rtcu node), you need to decide for each instrument/pillar whether you will be using swap points OR the market rates sheet. You cannot have both, otherwise they would supersede each other and results could become inconsistent, as Murex would read the latest updated one while you expect a specific source.

I remember that, in the past, we had a tough time trying to understand why 2 refreshes of market data would give different swap points even though the cache was not changing.

Fx volatility part II

Alright, time to go on about FX volatility. Some of you might have been waiting all week long for this post, while others who already know it all might have to wait another week for another topic.

While last week we covered ATM and smile volatility, this week we are going to cover more advanced functions for FX volatility:

Cut-off spread

The cut-off spread is a spread added to the option volatility depending on which cut-off the option is on. One can also choose to have a time ladder for the spread (with a higher spread on the shorter term and a smaller spread on the longer term, for example). The idea behind the cut-off spread is to say that an option with a NY cut should be worth a bit more than an option with a TKY cut: you do indeed get a couple of hours more.
Since Murex does not account for the time of day when computing t for option valuation, the idea is to increase (or decrease) the volatility to represent the difference in premium.
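Purely as an illustration of the mechanics (the cut-off names, buckets and spread figures below are invented, not a Murex configuration), applying such a spread boils down to a lookup on cut-off and maturity bucket, added on top of the interpolated volatility:

```python
# Hypothetical cut-off spread ladder: vol spread (in vol points) by cut-off and tenor bucket.
# Names and numbers are purely illustrative.
CUTOFF_SPREADS = {
    "NY":  {"1W": 0.40, "1M": 0.20, "1Y": 0.05},
    "TKY": {"1W": -0.40, "1M": -0.20, "1Y": -0.05},
}

def adjusted_vol(interpolated_vol: float, cutoff: str, bucket: str) -> float:
    """Add the cut-off spread on top of the interpolated volatility."""
    return interpolated_vol + CUTOFF_SPREADS.get(cutoff, {}).get(bucket, 0.0)

print(adjusted_vol(10.0, "NY", "1W"))   # 10.40: NY cut worth a bit more
print(adjusted_vol(10.0, "TKY", "1W"))  # 9.60: TKY cut worth a bit less
```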

Short date model

The idea behind this is that each day is weighted differently for volatility purposes. For instance, weekends could have a lower weight (one could argue it should actually be 0), Fridays are usually quieter so you can also define a lower weight for them, and Mondays might have a slightly higher weight. You define the weight you want for each day and you see the impact directly in terms of daily volatility and interpolated volatility.

More importantly, you can give specific days (like Fed announcements or US holidays) a very different weight to reflect how special such a day is.

The short date model has a larger impact on the very short term (<3m); in Murex it goes up to 1 year, but on the later months the impact is minimal. A small sketch of the weighting idea follows.
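As a rough sketch of the weighting idea (the generic approach of scaling variance by weighted versus calendar time, with invented weights and dates, not necessarily the exact Murex formula):

```python
import datetime as dt

# Hypothetical day weights (a plain calendar day = 1.0). Figures are invented.
DEFAULT_WEIGHTS = {5: 0.5, 6: 0.1, 4: 0.8, 0: 1.1}   # Sat, Sun, Fri, Mon; others 1.0
SPECIAL_DAYS = {dt.date(2015, 3, 18): 2.5}            # e.g. a central bank announcement (hypothetical)

def day_weight(d: dt.date) -> float:
    return SPECIAL_DAYS.get(d, DEFAULT_WEIGHTS.get(d.weekday(), 1.0))

def short_date_vol(flat_vol: float, start: dt.date, expiry: dt.date) -> float:
    """Rescale a flat volatility by the weighted vs. calendar time to expiry."""
    days = [start + dt.timedelta(i) for i in range((expiry - start).days)]
    weighted_time = sum(day_weight(d) for d in days)
    calendar_time = len(days)
    # Variance scales with weighted time, so vol scales with its square root.
    return flat_vol * (weighted_time / calendar_time) ** 0.5

# A short-dated option spanning the special day gets a noticeably higher vol.
print(short_date_vol(10.0, dt.date(2015, 3, 16), dt.date(2015, 3, 20)))
```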

Smile dynamics

This can be input in the system or computed using Tremor. The idea behind smile dynamics is to define the convexity of the smile with respect to spot: you define by how much your RR and FLY (if this means nothing to you, you should read FX volatility part 1) will move when the spot moves.

I haven’t seen it used by many people but it is there and ready to use.

I think that’s about it for the extra functions related to FX volatility. If I forgot one you’d like me to cover, let me know!

Forex volatility

Last week, we looked into rates volatility, this time let’s dig into FX volatility.

Forex volatility – ATM volatility

This one is actually quite simple: you simply have a volatility for each pillar and each currency pair. As with other volatilities, you can link them together with spreads and factors. The pillar set tends to be common across multiple currency pairs and is defined under the volatility groups.

But to make this paragraph more interesting: you can define a pair as not liquid and define a split currency. For instance, if you’re heavy into TRY/ZAR (yep, pretty extreme), you’ll struggle to come up with a volatility curve for it. You can define volatility for USD/TRY and USD/ZAR; by also providing a correlation (it can have different values depending on the pillar), Murex can then compute the TRY/ZAR volatility (cross effect). While quite handy, providing the correlation is also quite a difficult task.
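For the cross effect, the underlying formula is the standard one for a cross pair built from two USD pairs; here is a minimal sketch with invented numbers:

```python
from math import sqrt

def cross_vol(vol_usd_ccy1: float, vol_usd_ccy2: float, correlation: float) -> float:
    """Volatility of the CCY1/CCY2 cross implied from the two USD pairs.

    Since the cross is the ratio of the two USD pairs, its log-return variance is
    vol1**2 + vol2**2 - 2 * rho * vol1 * vol2, rho being the correlation between the USD pairs.
    """
    return sqrt(vol_usd_ccy1 ** 2 + vol_usd_ccy2 ** 2
                - 2.0 * correlation * vol_usd_ccy1 * vol_usd_ccy2)

# Illustrative numbers only: 1Y USD/TRY at 12%, USD/ZAR at 15%, correlation 0.5.
print(cross_vol(12.0, 15.0, 0.5))  # ~13.7% for TRY/ZAR
```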

It is getting more frequent for rates too, but for FX you usually interpolate on variance rather than on volatility (vvt interpolation).
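As a sketch of what interpolating on variance means (generic variance-time interpolation between two pillars; the exact Murex vvt implementation may differ in its details):

```python
from math import sqrt

def vvt_interp(t1: float, vol1: float, t2: float, vol2: float, t: float) -> float:
    """Interpolate linearly in total variance (vol^2 * t) rather than in vol."""
    var1, var2 = vol1 ** 2 * t1, vol2 ** 2 * t2
    var_t = var1 + (var2 - var1) * (t - t1) / (t2 - t1)
    return sqrt(var_t / t)

# 1M pillar at 10%, 3M pillar at 12%, interpolated at 2M (year fractions are approximate).
print(vvt_interp(1 / 12, 10.0, 3 / 12, 12.0, 2 / 12))   # ~11.5, above the straight-line 11.0
```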

Forex volatility – Smile volatility

Smile for FX volatility is usually defined on a delta ladder. Usually you have 10, 25, 75 and 90% delta pillars: call 10, call 25, put 25 and put 10 (a call with 90% delta has the same volatility as a 10% delta put).

But more interestingly, the smile is usually quoted in risk reversal (RR) and strangle (STR, or fly):

RR25 = vol(25-delta call) – vol(25-delta put)

(so effectively that’s the difference in volatility between the call and the put at a given delta level)

STR25 or FLY25 = (vol(25-delta call) + vol(25-delta put)) / 2

You can easily switch from one representation to the other within Murex, or even display the smile with 5% delta increments in case you need a finer view of the volatility (Murex can also display the corresponding strikes).
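Using the definitions quoted above (note that some desks quote the strangle as a spread over the ATM vol instead), switching between the two representations is simple algebra; a minimal sketch:

```python
def call_put_vols_from_rr_str(rr: float, strangle: float) -> tuple[float, float]:
    """Recover the 25-delta call and put vols from RR25 and STR25 as defined above."""
    call_vol = strangle + rr / 2.0
    put_vol = strangle - rr / 2.0
    return call_vol, put_vol

def rr_str_from_call_put(call_vol: float, put_vol: float) -> tuple[float, float]:
    """The reverse conversion."""
    return call_vol - put_vol, (call_vol + put_vol) / 2.0

# Illustrative quotes: RR25 = -1.5 vol points, STR25 = 10.75.
call25, put25 = call_put_vols_from_rr_str(-1.5, 10.75)
print(call25, put25)                        # 10.0 and 11.5
print(rr_str_from_call_put(call25, put25))  # back to (-1.5, 10.75)
```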

Interpolation can be any of the usual: linear, spline, polynomial, etc…

 

I realize that I still have a fair bit to talk about with regard to FX volatilities: cut-off spreads, smile dynamics, the short date model… So I’ll split this post in 2, with part 2 next Tuesday!

Rates volatility

Following the request from last week, let’s discuss IRO volatility this week.

While the latter can encompass many different volatilities (bond vol, future vol, etc.), I will focus on 2 for today: caps/floors and swaptions.

Rates volatility –  ATM volatility

Swaptions

The ATM volatility of swaptions already has 2 dimensions: option expiry and underlying (swap) maturity. That makes things a bit more complex than other volatilities when you throw the smile structure on top (more about that later).
The interesting bit about swaption volatility is how you choose to interpolate along the underlying maturity. You can interpolate based on time, but this might need to be corrected if you have an option on an amortizing swap, for instance. As such, you can choose to interpolate based on BPV, where Murex computes the BPV of the reference swap of the vol group.
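To illustrate the principle only (the numbers are invented and this is my simplified reading, not the Murex implementation), interpolating on BPV means using the BPV of the reference swaps as the interpolation axis instead of the maturity in years:

```python
def interp_by_axis(axis_values, vols, x):
    """Plain linear interpolation of vol along a chosen axis (time or BPV)."""
    for (x1, v1), (x2, v2) in zip(zip(axis_values, vols), zip(axis_values[1:], vols[1:])):
        if x1 <= x <= x2:
            return v1 + (v2 - v1) * (x - x1) / (x2 - x1)
    raise ValueError("outside the pillar range")

# Pillars for a given expiry: 2Y and 5Y underlying swaps (hypothetical figures).
maturities = [2.0, 5.0]
bpvs = [1.9, 4.5]        # BPVs of the vol group's reference swaps
vols = [20.0, 24.0]

# An amortizing swap with a 4Y final maturity but a BPV closer to a shorter bullet swap:
print(interp_by_axis(maturities, vols, 4.0))  # time-based: ~22.7
print(interp_by_axis(bpvs, vols, 2.9))        # BPV-based: ~21.5, i.e. a shorter effective maturity
```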

Caps/Floors

Cap/floor vols are defined on a selected index (and you can link index vols), but there is another thing about them: are they forward/forward or for the whole cap (what’s called par)? One thing to understand is that a cap or a floor is a series of options rather than a single one. For instance, a cap on EURIBOR 3M means that every 3 months you have an option on the EURIBOR 3M fixing. So if you look at a 2Y cap on EURIBOR 3M, you effectively have 8 options.

So when you choose the vol nature forward/forward, Murex expects you to provide a caplet volatility for each pillar of the vol curve. The nature cap means that you provide a volatility that is the same for each caplet: in the case of our 2Y EURIBOR 3M cap, the 8 options would share the same volatility. Murex can also calibrate the forward/forward volatilities from the par ones.

Calibrating the fwd/fwd volatilities means that the 3m fwd/fwd vol is equal to the 3m par vol (as you have only 1 caplet in that case). Then for the 6m pillar, you know the total price of the cap, as the 6m par vol can be applied to both caplets to derive the price. But you have also already found the 0m/3m caplet volatility. You can then back-solve the second caplet volatility so that the sum of the premiums of the caplets using fwd/fwd vols is the same as the premium using the par vol.
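To make the stripping concrete, here is a toy Python sketch of that bootstrap: a 3-caplet schedule priced with Black-76, invented forwards, discount factors and par vols, and a simple bisection to back-solve each new caplet vol. It mirrors the mechanism described above, not the exact Murex calibration:

```python
from math import log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def black_caplet(fwd, strike, vol, expiry, accrual, df):
    """Black-76 premium of one caplet on a unit notional."""
    d1 = (log(fwd / strike) + 0.5 * vol * vol * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    return df * accrual * (fwd * N(d1) - strike * N(d2))

def solve_vol(target, price_fn, lo=1e-4, hi=5.0, tol=1e-10):
    """Back out the vol matching a target premium by bisection (price increases with vol)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if price_fn(mid) > target:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Illustrative caplet schedule: (expiry, accrual, forward, discount factor).
caplets = [(0.25, 0.25, 0.020, 0.995),
           (0.50, 0.25, 0.022, 0.990),
           (0.75, 0.25, 0.024, 0.985)]
par_vols = [0.30, 0.32, 0.34]   # par (cap) vol quoted at each pillar
strike = 0.02

fwd_fwd_vols = []
for i, (expiry, accrual, fwd, df) in enumerate(caplets):
    # Cap premium up to this pillar, every caplet priced with the same par vol.
    cap_premium = sum(black_caplet(f, strike, par_vols[i], t, a, d)
                      for (t, a, f, d) in caplets[: i + 1])
    # Premium of the already-stripped caplets, each with its own fwd/fwd vol.
    known = sum(black_caplet(f, strike, v, t, a, d)
                for (t, a, f, d), v in zip(caplets, fwd_fwd_vols))
    # Back-solve the newest caplet vol so the two premiums match.
    fwd_fwd_vols.append(solve_vol(cap_premium - known,
                                  lambda v: black_caplet(fwd, strike, v, expiry, accrual, df)))

print([round(v, 4) for v in fwd_fwd_vols])   # first value equals the first par vol
```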

This mechanism is very important, as it explains why in the pricer you see 2 volatilities: one on the main pricer screen (the par vol) and one (well, multiple) in the flows screen (the fwd/fwd volatilities).

Rates volatility –  Volatility nature

Volatility nature has for a long time been lognormal for rates products. Unfortunately the models consuming lognormal volatility have one major flaw: they do not work with negative rates. And given the current state of rates, this is quite a problem.

So 2 solutions emerged:

– Shifted lognormal: the idea behind this is to shift all your rates by a certain amount when using the model (ideally you ensure that the lower strikes of your smile are far off the 0% boundary). So, for example, you work as if your strike at 0% were a strike at 10%. The advantage of this method is that the work to move away from lognormal is light.

– Normal volatility: this is actually quite different and there is a fair bit of work to adapt models to accept normal volatilities. Normal volatility is a volatility that is not correlated to the level of interest rates: lognormal volatility (and vega by extension) changes quite significantly if rates move by a large amount in one direction, whereas normal volatility is very stable. It can also be applied to negative rates without any problem. While it is more work than shifted lognormal, one main advantage for traders is that when you’re hedged on normal vega, your hedge should prove very stable. (A small sketch of both approaches follows below.)
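Here is a minimal sketch of the two approaches, assuming Black-76 on the shifted rate for the first and the Bachelier (normal) formula for the second, with made-up numbers:

```python
from math import log, sqrt
from statistics import NormalDist

nd = NormalDist()

def shifted_black_call(fwd, strike, vol, expiry, shift, df=1.0):
    """Black call on the shifted rate: both forward and strike are moved by the shift."""
    f, k = fwd + shift, strike + shift
    d1 = (log(f / k) + 0.5 * vol * vol * expiry) / (vol * sqrt(expiry))
    d2 = d1 - vol * sqrt(expiry)
    return df * (f * nd.cdf(d1) - k * nd.cdf(d2))

def normal_call(fwd, strike, vol, expiry, df=1.0):
    """Bachelier (normal) call: works for negative forwards and strikes."""
    d = (fwd - strike) / (vol * sqrt(expiry))
    return df * ((fwd - strike) * nd.cdf(d) + vol * sqrt(expiry) * nd.pdf(d))

# A caplet-style payoff with a slightly negative forward: plain Black would fail on log(fwd/strike).
print(shifted_black_call(-0.001, 0.000, 0.20, 1.0, shift=0.02))  # shifted lognormal, 20% vol on the shifted rate
print(normal_call(-0.001, 0.000, 0.006, 1.0))                    # normal vol of 60 bp
```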

Rates volatility –  Smile

Swaptions

You define a smile curve for each underlying swap maturity (I often see a fair bit of linking between maturities). The interpolation is often interesting for swaptions as you can fall between 4 points rather than 2.

Caps/Floors

The smile is defined for each index, pretty standard. You can (should?) do linking for less traded indices.

Rates volatility –  Smile dynamics

Alright, this is the interesting bit: smile dynamics.

Smile dynamics describe how your smile moves when rates change:

– Lognormal

Lognormal dynamics is basically no dynamics at all: your smile curve does not change when rates shift.

– Normal

Normal smile dynamics means that the corresponding lognormal volatilities do change when rates change (the conversion from normal to lognormal uses the actual rates). So even if your smile is moneyness based, your lognormal volatility can be different for an at-the-money option.

– SABR

SABR is a parametric volatility calibration model. While SABR would deserve a post all to itself, in a nutshell you can assume that the SABR parameters stay constant when rates change and re-calibrate the volatility based on the new rates, as in the sketch below.
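To give a feel for it, here is a compact sketch of the widely used Hagan et al. (2002) approximation of the Black vol implied by SABR, with toy parameters unrelated to any Murex calibration. Keeping alpha, beta, rho and nu fixed while the forward moves reproduces the re-calibration idea described above:

```python
from math import log, sqrt

def sabr_black_vol(fwd, strike, expiry, alpha, beta, rho, nu):
    """Hagan et al. (2002) approximation of the Black vol implied by SABR parameters."""
    eps = 1e-12
    fk_beta = (fwd * strike) ** ((1.0 - beta) / 2.0)
    log_fk = log(fwd / strike)
    # Time correction term common to the ATM and non-ATM branches.
    a = (1.0 - beta) ** 2 / 24.0 * alpha ** 2 / fk_beta ** 2
    b = rho * beta * nu * alpha / (4.0 * fk_beta)
    c = (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2
    corr = 1.0 + (a + b + c) * expiry
    if abs(log_fk) < eps:                       # at-the-money shortcut
        return alpha / fk_beta * corr
    z = nu / alpha * fk_beta * log_fk
    x = log((sqrt(1.0 - 2.0 * rho * z + z * z) + z - rho) / (1.0 - rho))
    denom = fk_beta * (1.0 + (1.0 - beta) ** 2 / 24.0 * log_fk ** 2
                       + (1.0 - beta) ** 4 / 1920.0 * log_fk ** 4)
    return alpha / denom * z / x * corr

# Keep the SABR parameters fixed and move the forward: the whole smile moves with it,
# which is exactly the "smile dynamics" effect described above.
params = dict(alpha=0.035, beta=0.5, rho=-0.25, nu=0.40)
for fwd in (0.025, 0.030):
    print(fwd, [round(sabr_black_vol(fwd, k, 5.0, **params), 4) for k in (0.02, 0.03, 0.04)])
```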

 

More questions, something I need to dig further into? Let me know!

Naughty naughty spreads

It happened to me (again!) yesterday. Curve spreads are naughty, naughty critters!

Let’s explain:

In some cases, the curve spread indicator (the small 1 or 2 on the left of the market quote) is bugged and does not work properly. Usually it is nothing to worry about, as it is quite easy to see if there are spreads or not (you just need to play with the combo box on the far right).

Unfortunately, when you’re working on a new environment or with rates whose source is not clear to you, spreads are not always your first suspect.

So what happened? (I know this story is more thrilling than a Stephen King book (yep, just compared myself to Stephen King just like that))

Basically, a curve was not calibrating and I followed the steps from my previous post to understand what was going on: removing curve spreads, reducing the number of instruments (the quotes were OK). Finally, it was calibrating to a crazy rate (-50%). I could not understand why; the market quote was correct. I tried to overtype the zero rate and got a crazy market quote.

As the instrument had the market quote as a margin on the secondary, I suspected something was amiss there: maybe the pricing was reading the margin incorrectly. I started to look at other curves to see if the problem was there as well, but everything was OK.

So I went back to my curve and somehow (imagine a spotlight above my head) started to check my curve spreads! Et voilà, eureka moment: there were convexity spreads (in the range of -50%). Zeroing them brought everything back in line, the curve calibrated fine, problem solved.

As usual with these kinds of problems, even if you’re proud to have solved the issue, you always feel bad for not having checked the guilty party first!

What about you dear reader? Any similar experience?

Market data interface – How to get them in!

In this post, I won’t cover the different types of market data you can have in the system: think of one and Murex will already have it, either natively or through user-definable market data. Today, we’ll focus on the market data interfaces and the different means to load that data in!

Market data interface – The UIM

UIM does not stand for Ultimate Iron Man but for Universal Input Method, aka manual input (sounds less fancy that way). While many people look down on manual input, it is actually often relevant. If you’re considering a very small amount of data that requires sanitization and is only needed a few times a day, manual input is a good solution, especially when you can set it up so that all you need to do is copy/paste. I don’t say you should always push back on interfaces (well, actually I do), but you should genuinely consider it in many cases.

Market data interface – MDCS

Let’s dive into more acronyms (nothing to do with Marvel DC Comics series): MDCS stands for Market Data Contribution Service. This interface is mostly used for realtime market data feeds.

How does it work? You have a service (RTBS (I’m running out of superhero jokes (already!)), export, or direct publication) which publishes data to a memory cache: MDCS (also called the cache by aficionados). You can query the content of the cache either through XML request scripts or via the monitor.

The data is structured into multiple pages, with multiple nicknames.

From there, you can push that data to the database via a processing script, retrieve it via XML requests (note to all tinkerers: do not use that service as a way of distributing market data to all the bank’s systems, you’ve been forewarned!) or have Murex sessions access it via activity feeders. Activity feeders let you choose which type, page and nickname of market data the session can access.

So as you should have understood by now: MDCS is the realtime service in Murex, and it is best used that way. For end of day, you should rather stick with the next one: MDRS.

In case of issues with realtime, you first need to check via monit or XML request scripts whether the market data in the cache is properly updated. Which one to use should not even be a question, as monit is easy and user-friendly. If you can’t get the monit password, let the one with the password handle the issue!

If the data in the cache is properly updated, your issue comes from the activity feeders. You can restart them if required after trying with a new session (you should get an error message if the session can’t connect to them). If there is no error message and no realtime, check that you have indeed assigned an activity feeder to your session.

That’s the basic debugging of MDCS: check the cache, check the activity feeders and check the user settings.

Market data interface – MDRS

Market Data Repository Service. First of all, some good news: the XML request scripts are almost identical to the MDCS ones.

MDRS lets you retrieve from and update the database directly via XML request scripts. It is effectively more efficient than MDCS plus a processing script, as you do everything in one go without a stopover in the cache.

MDRS is actually pretty robust and I never had too many problems with it, especially if you read the documentation properly AND put the right tags in the right order. The usual trick with MDRS is to first query some market data to get the structure of the XML, then add your Update command and modify the values as desired.

Again, Murex’s role is not to be the market data repository for the whole bank. Murex’s primary role is to provide pricing, risk and processing services, not to centralize the bank’s market data.

Debugging MDRS is usually quite simple, as you get an answer file and a log file whenever you kick off the XML script. Just fix as appropriate and you’ll be good to go!

 

Those are the 3 means of getting data into Murex. Note that the services are constantly improved to cater for newer types of market data, so especially when you’re working on a new type of market data, you need to ensure that MDRS/MDCS properly support it. Otherwise, the UIM will always work!

Rate propagation – Curve relationships

When working on Murex rate curves, one quickly faces the problem of curve relationships and what is called rate propagation. Once understood, it seems very simple. But before you get to that “Eureka” point, it might be confusing. So let’s dig into it and hopefully bring you some “Eureka” moment!

First of all, rate propagation only makes sense when you are working with multiple curves in the same currency. The rate propagation determines how the other curves are going to move when one curve moves. The setting sits under the rates general settings and can take 3 different values:

– Keep market quotes constant (KMQC)
– Keep zero rates constant (KZCC)
– Keep market quotes constant/Impact sensitivities

The first one and the third one give the same results when perturbing one curve, but the third one also tries to show the sensitivities due to the perturbation of other curves. Effectively, you should hesitate between the second and the third one, as the first one does not show you the right sensitivities.

In the mode KMQC, the rate curves are recalibrated after each perturbation. Let’s take the following example:

USD DISC
USD 3M
USD 6M

USD DISC has no dependency on other curves; it can calibrate on its own.
USD 3M depends on USD DISC to calibrate its swap pillars, as they are estimated on USD 3M but discounted on USD DISC. USD 6M depends on the 2 other curves, as it contains basis swaps estimated on both the 3M and 6M curves and discounted on the DISC curve.

Rate propagation mode : KZCC

In the KZCC mode, you basically assume that if any rate changes, the zero rates of the other curves do not change. So if your USD DISC curve changes, the zero coupon rates of both USD 3M and USD 6M will remain the same. It means that the market rates of the USD 3M and 6M curves will change: the ZC rates of these curves do not change, so the estimated rates remain the same, but the discounting rates have changed (USD DISC has changed), so you need to change the fixed leg rate (aka your market quote) or your margin (for basis swap market quotes) so that the NPV of the swaps remains 0.

Rate propagation mode : KMQC

In this mode, you assume that the market quotes remain the same when one curve is perturbed, so the ZC rates have to be recomputed. Let’s see why! Using the same example as above, perturbing the USD DISC curve will yield the following:
– USD 3M ZC rates will change
– USD 6M ZC rates will change

When you’ve changed your USD DISC curve, the discounting rates change. So in the case of the IRS in the 3M curve, the discounting rates change while the fixed leg rate remains constant (Keep Market Quotes Constant!), so your fixed leg has a different NPV. As such, you need to modify your estimation rates (the USD 3M ZC rates) to get back to an NPV of 0.
Similarly for the 6M curve: the 3M leg NPV has changed and the 6M leg has different discounting rates, so you need to adjust the 6M estimation rates to keep an NPV of 0, which means the 6M zero rates will change. (A small numerical sketch of both modes follows below.)
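Here is a toy two-period sketch of both modes (invented numbers, no Murex internals): perturb the discount factors and see what has to move to bring the NPV back to 0.

```python
# A toy 2-period swap estimated on a 3M-style curve and discounted on a DISC curve.
# All numbers are purely illustrative; equal accrual fractions cancel out and are omitted.
fwds = [0.005, 0.030]        # forwards read off the estimation curve
dfs = [0.990, 0.960]         # discount factors read off the DISC curve

def par_rate(fwds, dfs):
    """Fixed rate giving the swap an NPV of 0 (the market quote stored in the curve)."""
    return sum(f * d for f, d in zip(fwds, dfs)) / sum(dfs)

fixed = par_rate(fwds, dfs)
print("initial market quote:", round(fixed, 6))

# Perturb the DISC curve: slightly lower discount factors.
bumped_dfs = [0.985, 0.950]

# KZCC: the estimation zero rates (hence the forwards) stay put, so the market quote must move.
print("KZCC new market quote:", round(par_rate(fwds, bumped_dfs), 6))

# KMQC: the market quote stays put, so the estimation curve must move; here we back-solve a
# parallel adjustment x on the forwards so that the NPV goes back to 0 (a small move here).
x = (fixed * sum(bumped_dfs) - sum(f * d for f, d in zip(fwds, bumped_dfs))) / sum(bumped_dfs)
print("KMQC forward adjustment:", round(x, 8))
```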

STOP there. There are more complexities that you can layer on top of these propagation modes, but the above will always stay true: you need to maintain an NPV of 0 for every instrument in your curves, and that’s the only way to do it.

1 more question:

I can’t reproduce the same behavior with manual shifting, why is it so?

What I wrote above is true and is what should happen, BUT sometimes the variations in values are actually quite small or even 0. For instance, in KZCC, if your USD 3M curve is quite flat, you won’t see much difference after the change in the discounting rates. Why? All flows fall on the same dates for the fixed and floating legs, so you can sum both flows before applying the discount factor. If your estimation curve is flat, then prior to the shift of the discount rate, the sum of the 2 flows was already close to 0. As such, the impact of the discount factor is limited.

Rate relationships have changed a lot in the more recent versions, with bug fixes and enhancements. So if there is something you can’t explain, check with someone with more experience or on a more recent version if you can.

Questions and comments are more than welcome!

Spring cleaning, Purge the database

Spring cleaning! I know I’m a month early, but purging is an important task and sometimes you need to make sure it is adapted to your environment and needs.

Purging the database will let you keep the database growth under control and ensure that you get the maximum performance out of the system. But there’s often a fear that purging will result in data loss, and quickly you find yourself with massive retention periods: 7 years for trades, 2 years of daily market data and all the logs.

The first thing to keep in mind: Murex is a production system for trading and processing, it is not a data repository system. You need to keep it running in top shape to maximize the benefits you get from the system. If you need to retain some data stored in Murex, export it and store it on your own system; it is much cheaper and more appropriate.
This might sound obvious, but when talking about purging, regulation is often the first topic that comes up, and it blocks any further discussion as long as a solution for storing all the data to be purged has not been implemented.

Once everyone is convinced of the importance of the purge, there are multiple items to purge, by order of importance:

– Documents and their entries (usually ranking at number 1 in DB usage)

– Market data (normally ranking at number 2)

– Trades

– Logs

– Static data

– The forgotten ones: views, layouts, filters

Documents

Purging Mxmlexchange is actually quite straightforward and is done through scripts provided by Murex. Just be very careful with the scripts and ensure that proper testing is done on test environments before deploying to production.

But if you test it properly and only purge the intermediate documents, it is quite straightforward and without surprises.

Market data

Market data purging is made of 2 parts. The visible side of the iceberg: purging market data for dates you no longer need (good practice tends to be keeping only month-end dates for the older history and daily market data for a few months, 1 to 3 depending on your aggressiveness). This can be done through the GUI if you want, quite straightforward.

But there’s also a second part of the market data purge which helps a lot: expired instruments (read: bonds and listed options mainly). By default, Murex automatically copies all market data entries from today to tomorrow as part of the EOD. This automatic copy means you also keep entries for expired listed options (ETOs), futures or bonds which keep being rolled. It might not sound like much, but ETOs can quickly snowball, especially if you trade very short dated ones such as intraday and overnight options. Here, Murex can provide you with a script to clean them out. The symptom for this second one is tables such as MP*_GLOB and MP*_PRIC being large in size.

Trades

Trade purging makes sense especially when you do volume trading. The trade purge is done through the GUI (very important) and in such a fashion that all purged positions get aggregated, to avoid any jump in cash balances.

The trade purge occurs in 2 steps. A logical one, where the trade is no longer read for reports and simulation but is still present in the database; all its contributions are stored and aggregated with other purged deals, and it can be undone if required.
The physical purge will effectively remove the trade from the system: you can no longer query it and it cannot be reversed.

Position and cash balance testing needs to be performed after each purge step. After the logical purge it is the most important, as Murex will no longer evaluate the trade but read its stored contribution directly. After the physical purge it could almost be skipped, as it does not affect the aggregated results anymore; it simply removes the unused trade records.

Trade purging also depends on the trade complexity: simple spots and forwards can (and should) be purged much more aggressively than more structured deals.

Logs and audit

Murex will give you the scripts for these; purge as required and make a copy upfront if you feel the need. They don’t consume much space, but clean logs make browsing through them a lot easier!

Static data

I am actually an advocate against purging static data. Murex often references static data under the purged deal contributions or in other places, and removing it will break that link for Murex. One could always try to fix all the problems which ensue from it, but in my opinion it is simply not worth it. The amount of problems generated (some of which could come later, during or after an upgrade) is not worth the small amount of DB space it occupies.

Filter, layouts, views, etc…

These items should not be purged per se but should be kept under control. Restraining users from creating and duplicating them is probably the way to go.
Cleaning them up would probably not have much of an impact on the database, but you risk that an EOD report or a process would fail. Unless you have kept a very precise list of which items are used by which process (and if you did, kudos!), you probably have to leave them where they are or start a massive campaign to identify and decommission the unwanted ones.

 

In summary, if you concentrate on the top 4 items of this list, your DB should grow as expected when the hardware was sized with Murex and performance will remain optimal. Just keep an eye on the DB usage by table, and if something grows too quickly, Murex will always be happy to sort you out!

If I forgot something or if you feel like adding something, please feel free to!

Volatility … going the distance

Today I’ll cover a bit about volatility and the different topics relating to it. If there’s one topic you want me to dive into, let me know and I can then make a full post on it.

1. What’s volatility?

Volatility is a measure of price variation over time: the higher the volatility, the more the price is likely to vary (either up or down). Volatility can be measured historically, by computing the standard deviation of the price returns, or implied, that is to say by solving for the volatility that reproduces the price of a quoted option.
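As a back-of-the-envelope illustration of the historical measure (annualizing the standard deviation of daily log returns; the figures are made up):

```python
from math import log, sqrt
from statistics import stdev

def historical_vol(prices, periods_per_year=252):
    """Annualized historical volatility from the standard deviation of log returns."""
    returns = [log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return stdev(returns) * sqrt(periods_per_year)

# A week of made-up closing prices: roughly 19% annualized.
print(historical_vol([100.0, 101.2, 100.5, 102.0, 101.1]))
```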

2.  Smile

When the volatility is solely time dependent, we call it the at-the-money volatility (or ATM vol if you prefer shortcuts). But often you’ll consider that volatility is not constant across strikes: it will change as you step away from the at-the-money point. Usually volatility increases as you move away from the central point, effectively giving you a smile-shaped curve. The driver behind this is that options further from the ATM point effectively trade at a higher price than if they were priced with the ATM vol.

3. Interpolating and shaping the smile

When working with smile curves, you need to decide on an interpolation method. You can choose between parametric and geometric. Geometric interpolations take into account the pillars that you have provided to interpolate in between. Parametric ones require some parameters to be provided (correlation between spot and vol, shape of the RR, etc.). SABR is being used more and more for IRD products, and traders are also starting to monitor the sensitivity of their positions to the SABR parameters.

4. Dynamic smile

It means that the smile is not constant when the spot rate changes. In terms of total vol, it is like defining a convexity on top of the volatility (the smile being your first level). Murex can produce such effects when calibrating Tremor.

5. Short date model

Very popular in the FX world; the idea is that you can attribute certain weights to specific days (Fridays are less volatile than the rest of the week, weekends have a very low weight, Mondays are a bit higher, etc.) but also to specific events (Fed announcements, big number publications, etc.). The short date model really has an impact on the shorter dates (the shorter the bigger), so while it goes up to 1 year, it is effectively really important up to 3 months.

6. Cut-off spreads

This one is more of a workaround than a real market need. One should consider that an option with a Sydney 5pm cut has less time to maturity than an option with a NYC 5pm cut, so the idea is that the NYC option should have a higher price. Ideally, one would increase the time value of that option (and t would then be able to cater for fractions of days). As this is not currently possible in the system, the volatility is effectively increased to mimic the increased price of later cuts.

That’s all that comes to mind right now, but I’m sure I’ve forgotten a lot. Volatility is a rich topic and I just wanted to give you a flavor of the different functions attached to it.

Comments, requests below!

Volatility news

I was starting to type a post about volatility (you’ve understood that it will come later) but, while gathering some material, I came across quite a few articles about the expected rise of volatility in 2015.

For instance on CNBC, or here.

It’s important in our line of work to keep up to date with these articles, as they often translate into concerns or demands from customers. For instance, interest rates going down led to requests for shifted lognormal and normal volatilities.

I’ll try to keep posting whenever I see articles of interest, and feel free to do so too!