Tag Archives: upgrade

Reconciliation or rec – what should I do for my trades

Whenever you migrate or upgrade, one task will consume a fair bit of manpower: reconciliation.

When doing trade reconciliation, the following is to be checked:

– Financial trade details. This is usually done first, as these details are then used as the key for the other reconciliations. Here you ensure that trade numbers, deal type, price/rate, portfolio and all the other data considered financial are correct.

– Cash flows/P&L/sensitivities. You normally need to rec all three. In my experience, it is better to start with P&L (fewer figures to check: one trade, one number); a sketch of that comparison follows this list. Then move on to cash flows; depending on the volumes, you sometimes need a very good tool capable of handling loads and loads of lines. Finally sensitivities: for those I would recommend against trade-by-trade reconciliation, it is too demanding; use a top-down approach.

– Non-financial data. This one is a bit up to you. If you have reports that extract that data, reconciling those reports can be considered enough. And if you do not reconcile it (or only partially), this is the kind of data that can easily be updated post go-live.
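To make the P&L step concrete, here is a minimal SQL sketch, assuming the P&L figures from both applications have been extracted into two hypothetical tables pl_old and pl_new keyed by trade number (table and column names are illustrative, not Murex ones):

    -- Hypothetical extracts: pl_old / pl_new (trade_number, deal_type, portfolio, pl_value)
    -- Flags trades present in only one system, or whose P&L differs beyond a tolerance.
    SELECT COALESCE(o.trade_number, n.trade_number) AS trade_number,
           o.pl_value                               AS pl_old,
           n.pl_value                               AS pl_new,
           n.pl_value - o.pl_value                  AS difference,
           CASE
               WHEN o.trade_number IS NULL THEN 'MISSING IN OLD'
               WHEN n.trade_number IS NULL THEN 'MISSING IN NEW'
               ELSE 'VALUE BREAK'
           END                                      AS break_type
    FROM pl_old o
    FULL OUTER JOIN pl_new n ON n.trade_number = o.trade_number
    WHERE o.trade_number IS NULL
       OR n.trade_number IS NULL
       OR ABS(n.pl_value - o.pl_value) > 0.01;      -- tolerance to be agreed for the rec

The same shape of query works for cash flows (keyed by trade number, flow date and currency) and, aggregated top-down rather than trade by trade, for sensitivities.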

Reconciliation is a very heavy task. It is demanding in manpower and you need people comfortable with both applications (the old and the new one). It is also essential to have a good tool capable of assigning defects to trades matching a certain pattern and of re-applying the root causes when performing another rec.

While I don’t think anyone really enjoys doing reconciliation and browsing through lines and lines of breaks, it is an excellent exercise. I often included more junior people in the task, as it gave them a chance to go through the different screens and understand how the different figures are calculated. Of course, you cannot leave them alone to do it, but they can contribute to the effort and afterwards they will have built a good understanding of the issues and of the application, which is perfect if they need to support it going forward.

Feel free to share your own reconciliation tips and tricks!

Reconciliation, how to get it right

Reconciliation is the bane of upgrades and migrations. It’s hard work where one needs to go down into details, find patterns and then find solutions. Plus you always have the stress that something new will emerge.

I can’t cover all types of reconciliation, so I will focus (at least in this post) on trade reconciliation between two versions of Murex (be it an upgrade or a migration).

To get valuations (risk and P&L) for transactions, you need static data and market data. Static data is usually very stable and I would recommend not doing any reconciliation on it. If it breaks on an instrument, it will show up clearly in the reconciliation report and you can focus on it straight away.

Market data is a different story. It is usually very easy and quick to check: open both environments and compare the calibrated values for curves (rates, commodities). For volatility, if you’re using a volX, check that the calibrated values are identical. Check that normal/lognormal and price vols match if you input your volatility in one nature but consume it in a different nature in your models.

When rate curves do not return the same values after calibration, you will have a break on every deal using that curve. Usually the difference is actually quite small (lower than 0.0001), so with a sensible tolerance level in your checks you should match results easily.
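As an illustration only, assuming the calibrated values have been exported from both environments into two hypothetical tables curve_old and curve_new (curve name, pillar, calibrated value), the check could be as simple as:

    -- List curve pillars whose calibrated value differs by more than the tolerance.
    SELECT o.curve_name,
           o.pillar_date,
           o.calibrated_value AS value_old,
           n.calibrated_value AS value_new
    FROM curve_old o
    JOIN curve_new n
      ON n.curve_name  = o.curve_name
     AND n.pillar_date = o.pillar_date
    WHERE ABS(n.calibrated_value - o.calibrated_value) > 0.0001;  -- tolerance mentioned above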

Once the market data is OK, you can move on to trade reconciliation. The usual practice is to run dynamic tables on the trade sets (of different types: P&L, cash flows and sensitivities) and set exclusion criteria (for instance, differences of less than $100 or less than 0.01% are ignored). Run the tables on both environments, then compare both result sets with SQL and output the breaks into a table; a sketch of that comparison follows below.
There are quite a few solutions for this job, but it is actually quite straightforward up to this stage.
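As a sketch of that comparison, assuming the dynamic table outputs have been loaded into two hypothetical tables rec_old and rec_new holding one value per trade and measure (P&L, cash or a sensitivity), that the breaks land in a hypothetical rec_breaks table, and that a break must exceed both exclusion thresholds to be kept:

    -- Hypothetical inputs: rec_old / rec_new (trade_number, measure, value).
    INSERT INTO rec_breaks (trade_number, measure, value_old, value_new, abs_diff)
    SELECT COALESCE(o.trade_number, n.trade_number),
           COALESCE(o.measure, n.measure),
           o.value,
           n.value,
           ABS(COALESCE(n.value, 0) - COALESCE(o.value, 0))
    FROM rec_old o
    FULL OUTER JOIN rec_new n
      ON n.trade_number = o.trade_number
     AND n.measure      = o.measure
    WHERE ABS(COALESCE(n.value, 0) - COALESCE(o.value, 0)) >= 100            -- $100 exclusion
      AND (COALESCE(o.value, 0) = 0
           OR ABS(COALESCE(n.value, 0) - o.value) / ABS(o.value) >= 0.0001); -- 0.01% exclusion

Missing entries come out naturally from the full outer join, since a trade present on only one side shows up with a NULL value on the other.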

Now you have a long list of breaks (and missing entries!) that you need to reconcile. There is no shortcut and you need to get moving. I usually break the list down by deal type and by instrument when relevant. Start with the largest breaks and try to understand why the two values differ (market data, change in customization, improved behavior, etc.). The important bit is to find the root cause, and with experience you find it more easily.

Then you check whether your root cause applies to other trades (usually it does). If it’s an isolated issue, bad luck, move on to the next trade. If it is not, and other trades in your list seem to have the same issue, you need to establish a rule. That rule will flag all trades matching the criteria of that root cause; a sketch is shown after the list below. Here a good knowledge of SQL (and of the Murex data structure) helps a lot; if you don’t have it, you have two options:

– Write down the root cause of the issue and move on to another trade. Once you have enough root causes, ask someone more knowledgeable to teach you how to build your rules.
– Run a dynamic table outputting the data you believe isolates the breaking trades from the others. In this latter case, you might waste a significant amount of time building that dynamic table and it might not even work.
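Once the root cause is understood, the rule itself is often a single tagging statement. A sketch, with the hypothetical rec_breaks table from above, a hypothetical trade_details extract and a made-up pattern (the deal type and root cause label are examples only):

    -- Tag every untagged break that matches the identified pattern with the same root cause.
    UPDATE rec_breaks
    SET root_cause = 'RC-012: curve interpolation change on commodity swaps'
    WHERE root_cause IS NULL
      AND measure = 'PL'
      AND trade_number IN (SELECT trade_number
                           FROM trade_details   -- hypothetical extract of trade financial details
                           WHERE deal_type = 'COM_SWAP');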

Finally comes the solving/accepting part. Some issues have to be accepted as they are enhancements or corrections. Others are regressions or changes of behavior that will require a fix. Sometimes a simple configuration change can resolve them; otherwise, you might need Murex’s help to figure it out.

The important part is to automate as much as possible and end up with rules that you can re-use in the next reconciliation run. Ideally, the solutions can also be automated, making the next reconciliation go smoother.
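One way to keep those rules reusable between rec runs, sketched under the assumption that each rule can be reduced to a deal-type/measure pattern stored in a hypothetical rec_rules table:

    -- Hypothetical rules table: rec_rules (rule_id, root_cause, deal_type, measure).
    -- Re-apply the known root causes at the start of the next reconciliation run.
    UPDATE rec_breaks
    SET root_cause = (SELECT MIN(r.root_cause)
                      FROM rec_rules r
                      JOIN trade_details t ON t.trade_number = rec_breaks.trade_number
                      WHERE (r.deal_type IS NULL OR r.deal_type = t.deal_type)
                        AND (r.measure   IS NULL OR r.measure   = rec_breaks.measure))
    WHERE root_cause IS NULL;

Anything still untagged after this pass is genuinely new and deserves the manual drill-down described above.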

Now, maybe the most important bit: never underestimate reconciliation. It is a difficult task, especially if you want it done well. It is hard to estimate how long it will take; the criteria to take into account, in my experience, are the time gap between the two Murex versions, the complexity of the deals, and the presence of exotics and yield-based bonds (lasting scars from these).

Murex go live – High times, fun times

Everyone who has worked on Murex has been exposed to a Murex go live. It is a crucial moment and it can cover many different things:

  • Legacy system replacement by Murex
  • Upgrade (or major version migration)
  • New functions launch

Regardless of the reasons for a go live, they are moments of stress, pressure and (hopefully) joy. But most of them (especially the first ones and the big ones) will leave lasting memories.

A go live usually happens over a weekend. Once New York finishes trading, the EOD runs and, depending on timings, the migration may or may not start right away. The idea is to have a fallback environment ready to go in case the go live does not happen.

Migration, configuration, checks and regression testing happen more or less consecutively, with (and that’s mandatory) at least one thing going wrong. Then there’s the rush to get everything OK before the end users come in to approve the results (if required), and finally the go/no-go decision.

The first few times one goes through this exercise are highly stressful. But with experience, one learns and stresses much, much less. During one of my first Murex go live weekends, I met someone who was really relaxed. He had a few go lives under his belt and was able to take a step back and advise on what to do. I remember being very stressed and stumped on a problem, but it took him two seconds to suggest a report that would help me. You do need hands-on people during these events, but you also need knowledgeable people who can keep a cool head.

More recently, I was only on call for the migration/configuration part (the privilege of experience and seniority) and onsite when the end users were coming in. I have to admit that I missed the long Saturday nights sitting in front of the computer getting it all to work, and catching some Z’s early on Sunday morning to be back for the end users. I think that’s the downside of experience: you get fewer thrills.

And you, dear reader, what’s your experience with Murex go live? Got some great stories or some horror ones you want to share?

Version releases, upgrades and all that jazz

Murex’s release policy is actually quite simple:

Major versions (3.1.x) are released once or twice a year. They contain new developments, data model changes, etc.

Minor versions (3.1.99.x) are released much more frequently. They do not contain new developments or data model changes, only bug fixes.

The idea is that minor versions can be deployed quickly, or even put in a separate launcher running in production, in order to address issues. There is usually no problem having two different minor versions pointing to the same database and in use at the same time, but as always, check with your Murex consultant that it is all good.

Major versions are a bit more work and contain more novelties. Frequent upgrades to major versions are strongly recommended so that the gap between the production version and the latest Murex version remains small. It translates into a shorter turnaround between an official release and production.

The other healthy habit is to have a test environment running a build version. Of course, no one can use a build version in production, but it can be useful for testing new features and working with end users. End users only really care about the version they have in production, so the focus on time to market is quite important.
Take, for example, an end user’s request for a new development. The first steps consist of clarifying the requirements and getting clean specs. Then one needs to liaise with Murex to get a projected timeframe; it is usually best to get a commitment as to which version the development will be available in. Testing on the build prior to the official release is highly recommended in case something is not right or not as expected, as the timeframe between two major releases is actually quite long.
Once the development is all good and the official release has been received, the non-regression testing starts and finally the deployment to production.

End to end, this can mean 18 months, so it is important to nail it the first time and ensure that there is no extra delay. Also, coming back to what was said above, if end users can see a test version with the development, it gives them confidence that they have not been forgotten and that things are moving their way.


Experiences, comments? Feel free to share!

Upgrading from 2.11 to 3.1

I often see posts about migrating from 2.11 to 3.1: how difficult it is, what the real benefits are. So I thought a quick post here to demystify it might be worth it.

First of all, Murex is (as usual) the best source of truth for the migration, and the more time passes, the easier the migration itself becomes, as more and more cases are documented.

2.11 is the previous version of the Murex software. It has reached its end of life and new features are no longer developed for it. All customers are strongly encouraged to move to 3.1.

3.1 is the newest version developed by Murex. Its main advantages over 2.11:

– Better workflows and better consistency of these workflows across the board. There are three workflows during a transaction’s life: the pre-trade workflow (triggered while pricing), the booking sequence (triggered when the trade is being booked) and the post-trade workflow (triggered after the deal is booked).

– Different consolidation process and Livebook as a new functionality

– Stronger rate curves framework/functionalities

– More models for pricing or volatility interpolation

– Better user management

– Improved pricing structure build

– Loads of smaller changes and bug fixes

For a like-for-like migration, the main effort resides in the workflows and in working with Murex and their predefined templates. This is a great opportunity to revise the workflows, but also a significant amount of work.

The trades then need to be reconciled, as 2.11 can sometimes return an incorrect valuation for specific trades (I’m looking at you, buy/sell-backs).

The end of day will need to be revised; reports (extracts) can be carried over from 2.11, but you might want to move them to the Datamart at some stage.

All in all, it is a migration, but a very streamlined one for which most issues have already been handled by Murex before. If you have a pure front-office implementation of Murex, the migration will be much quicker than if processing workflows are involved.

More questions? Forum or comments below!