A great article: video games and the economy

Sadly, this article is in French, but it’s about the new Greek finance minister. He basically worked for Valve (the video game producer also well known for Steam, their digital marketplace) for 2 years and was given the chance to look at an economy where all the data is known. No more statistics: all the information is available and known.

It has been a while since an article struck me as much as that one, so I felt like sharing it here today. And for you non-French readers, Google Translate is your friend.

Link to the Le Monde article (the article was free to read for me at the time of posting)

Comments/debate very welcome 🙂

Sybase vs Oracle

This is the question one often hears once the decision has been made to go with Murex: Sybase vs Oracle. Which one is better? Which one do you recommend, etc…

To first repeat what has been said numerous times: Murex works very well with either, and if you need to use one or the other due to bank policy or any other reason, you can’t go wrong. Murex will deliver results and everything will be A-OK.

But there are differences and both have pros and cons. Historically, Murex only supported Sybase and many customers feel that they will get better support from Murex if they go with Sybase. Oracle is quite well known at Murex nowadays and there is no difference in the quality of support regarding Oracle. The PAC team especially is knowledgeable on both fronts and can provide configuration recommendations for both systems.

Even performance is not where the difference really lies (many people would disagree here and give reasons to go for one or the other). I feel the difference is mostly in the actual usage of each: they each work slightly differently. Not from the Murex front end of course: to the end user, Sybase or Oracle does not make any difference, the system looks the same and functions work the same way. It is really when you start using SQL that you see the differences.

I graduated from SQL school with Sybase as a teacher, so I do know more about Sybase than Oracle.
Sybase-wise, identifiers are directly attributed (the good old M_IDENTITY). When writing SQL, there is no need to take care of that field: it handles itself. With Oracle, it’s a different story: one needs to call the sequence (TABLENAME_DBFS) to retrieve the latest number in order to update it. A bit more painful.
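To make the difference concrete, here is a minimal sketch of the two insert styles, with the SQL held in Python strings. The table and column (TRADE_EXAMPLE, M_LABEL) are invented for illustration; only M_IDENTITY and the TABLENAME_DBFS sequence naming come from the text above.

```python
# Hypothetical table TRADE_EXAMPLE, purely for illustration.

# Sybase: M_IDENTITY populates itself, so the insert simply omits it.
sybase_insert = (
    "INSERT INTO TRADE_EXAMPLE (M_LABEL) "
    "VALUES ('demo')"
)

# Oracle: the sequence (TRADE_EXAMPLE_DBFS, following the TABLENAME_DBFS
# convention) has to be called explicitly to attribute the identifier.
oracle_insert = (
    "INSERT INTO TRADE_EXAMPLE (M_IDENTITY, M_LABEL) "
    "VALUES (TRADE_EXAMPLE_DBFS.NEXTVAL, 'demo')"
)
```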
SQL clients with Oracle are for some reason always more of a pain, especially if you mix direct commands and stored procedures. I used SQL Developer and not seeing the results of my stored procedures is a pain. I also use SQuirreL a lot. The latter works great for everything EXCEPT the initial connection to Oracle servers. When the server is distant, the initial load of tables took a couple of minutes (it started at 15 minutes and went down to 2-3 minutes once the link to the other offices got upgraded). Oracle was also a pain with the username/password for each schema. I am not too sure why it was like that, but while in Sybase one can easily switch from one DB to another with the same user, the way Oracle is configured to work with Murex forces you to log out/log back in (or log in multiple times, once per schema).
But I had my fair share of issues with Sybase. DB corruption happened quite a few times (I suspect it happens with Oracle too, but I did not experience it firsthand). The worst DB corruption was when receiving a dump from a customer which contained a trigger (triggers are not your friends). That trigger was attached to a different user id which we did not have when we loaded the dump. So we had to reset the user id for that trigger before deleting it. Updating that user id caused a DB corruption which could only be solved by stopping/restarting the server. There were other cases but nothing reproducing as easily as that one.

I’d be interested to hear from Oracle experts about all the good sides of Oracle as, from my point of view, I usually found Sybase easier to work with and often wasted a few hours trying to adapt a stored procedure that I wrote in Sybase to work with Oracle. Usually the PAC team were the ones able to set me straight and get the procedure up and running.

Murex performance – the chicken and egg story

Murex performance is often in the spotlight: how quickly can Murex do XXX or create YYY (replace XXX and YYY with your choice of tasks)? The problem is that the list of requirements varies between customers and results in very different timings.

So to take out the main question first (if you’re the sort to prefer a short answer): can you get good performance out of Murex? Absolutely!

How you’re going to achieve it depends on a few things (which makes the question “how long does it take to do something” impossible to answer):

  • Hardware is the first one to come to mind. With great hardware comes great performance. Well, not really: you also need to have it tuned right, but yes, it is a major factor
  • Requirements. This one tends to be overlooked: “I want to get real theta for my whole portfolio over the next 10 days, along with a spot shock, and at any time rewrite the spot levels. And it needs to be fast!” (you have similar questions with trade input, reports, etc…). Of course, if you ask for time-consuming tasks (or put in many consistency checks), you will slow down the processes.
  • Maintenance. If all works fine on day 1 but not 10 days later, clearly some maintenance was not done properly
  • Software. I put this one last as the software is very rarely the issue. Very rarely (it feels good to repeat it)

For most of these issues, the PAC team is the go-to team. They can size the hardware you need based on your system usage, advise you on maintenance procedures and debug if something runs too slowly.

In general, if you believe that a process is taking too long given the configuration (inserting a deal takes 5 minutes, a report is still running after 1 hour, etc…), you need to do the following.
If it is an isolated occurrence, it could well be a lock, either at DB level or at system level. For locks at DB level (rare, but it happens), check with your DBAs and also check that no heavy process is currently running. For locks at software level, Murex has you covered with ipmonit. Log in to ipmonit from the monit tool and you can access a lock report showing you all the locks put in by the system (for example, if someone is editing a trade, it is locked to avoid 2 modifications at the same time). Check the documentation for ipmonit as the screenshots are very helpful when navigating the screens.

If it happens all the time, then it is unlikely to be a lock and you need to generate performance traces. The first ones are generated with the /TIMER slash command. This slash command will generate mxtiming files in your log directory (you can put the slash command in the launchers for services if required). The mxtiming file will show the time spent on CPU and the time spent waiting for the DB. If the time spent on the DB is too high, indexes could be missing on some tables. So you need to run DB traces (shameless link to my older post for how to). These DB traces can be sent to Murex and they will give you the number of logical reads on each table. A number that is too high likely indicates that a table is unindexed. Indexing that table should improve performance.
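The triage logic boils down to flagging tables with outsized logical reads. A sketch of that idea only: the real mxtiming and DB trace formats are proprietary, so the table names, read counts and threshold below are all invented.

```python
# Invented numbers: logical reads per table, as reported by a DB trace.
logical_reads = {
    "TRN_HDR_DBF": 120_000,
    "MY_CUSTOM_DBF": 9_500_000,  # hypothetical custom table
}

# Arbitrary cut-off for "suspiciously high": tune it to your own volumes.
THRESHOLD = 1_000_000

# Tables above the threshold are candidates for a missing index.
suspects = [t for t, reads in logical_reads.items() if reads > THRESHOLD]
print(suspects)
```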

If the system is slow, the reason lies either in the hardware or in the configuration. Rarely is the problem due to a bug.

There are also cases where Murex develops a new feature to speed up a process that is known to always be slow due to the sheer amount of computing/data crunching it requires. Parallelization and pre-crunching are the 2 big methods to do so. But this applies once you start to have volume: inserting a single deal should always be fast!

Comments, experiences are welcome!

Murex database – Hack your problems away!

Alright, today let’s crack open this black box that is the Murex database! While all of you know that Murex doesn’t publish its database organization, sometimes there is no choice but to go directly where the data is.

My rule of thumb is that if one can avoid it, going directly to the database should be avoided. Any problem caused while working there will have impacts and cause problems in the environment. For reporting, dynamic tables or viewer reports are your friends. For filtering, the list of fields is actually quite exhaustive. In many cases, you will find all the information you need without opening a single SQL client. But sometimes, for some filters (back to the RQWHERE post!), for some reporting or for some DB cleaning, you’ll need to go through the database.

Working with the Murex database is the same as working with any other trading system database: backup, test in test environments, test again, backup, and it should work. The problem is that sometimes the roles of some fields are not very clear, and when trying to populate lines (insertion or update), this can turn out to be a real problem. Murex consultants are then the best suited to help you out, especially if you’re not sure your request is safe. In case of migrations, again, Murex consultants should be the ones to provide you with the right scripts; only write yours when you’re absolutely confident in what you’re doing.

Now, from a Murex consultant’s point of view, it is not always easy either to determine which fields have which roles. But the first step is to understand what the other party is trying to do. Maybe SQL is not the best way forward and there could be an easier solution?
Then you can check what other people have done. It is rare for one customer to have a problem that has not been encountered by somebody else.

I learned SQL while working at Murex and many times it actually sped up processes tremendously:

– Inserting in bulk some data (or duplicating records)

– Cleaning up unwanted data. Especially logs (or market data, much much faster)

– Building my own extractions when doing reconciliation reporting

But it also happened that my scripts did not work as expected (and luckily I had a backup and was doing it on a test environment): updates/deletes without a correct WHERE condition. I once removed all records from the transaction header!
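A habit that would have saved me there: run the WHERE clause as a SELECT COUNT before attaching it to a DELETE or UPDATE. A minimal sketch of the pattern, using sqlite3 as a stand-in and an invented table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trn_hdr (id INTEGER, status TEXT)")
con.executemany("INSERT INTO trn_hdr VALUES (?, ?)",
                [(1, "LIVE"), (2, "DEAD"), (3, "LIVE")])

where = "status = 'DEAD'"  # the condition you are about to delete with

# Step 1: check how many rows the condition matches.
(count,) = con.execute(
    f"SELECT COUNT(*) FROM trn_hdr WHERE {where}").fetchone()
print(count)  # a surprising number here means a wrong WHERE clause

# Step 2: only run the destructive statement once the count looks right.
if count == 1:
    con.execute(f"DELETE FROM trn_hdr WHERE {where}")
con.commit()
```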

If you’re working on a limited set of tables and you don’t want to call upon the DBAs to do the backup, then you can use the following tools: Help-Monitor-DPI info-Transfer from RDB to DBF. You will need an authorization code to proceed, but then you can transfer the table from the database to a file on the application server file system. The step Transfer from DBF to RDB does just the opposite. So it gives you the flexibility to back up any table you want from the database to the file system and bring it back whenever required.
Note that you can use wildcards in the name of the table you wish to transfer and that you should not put _DBF but .dbf.

And you? What’s your relationship with SQL? Comments and experiences below if you wish!

End of day – The joy of night shifts!

The terror of everyone working in this industry: end of day runs and especially their crashes. Because end of days happen at night, most of the time everything is automated. Reports, market data copies, accounting generation: all these lovely activities happen during the night and are supposed to run like a well-oiled machine.

Murex has come a long way in terms of end of day issues (EOD for the acronym lovers). When I started, the big fear was the futures rollover date, as the system would often crash and one would need to sort it out manually: undo what was done and redo it by hand. Fortunately, over 15 years, while EODs have become more complex due to stronger requirements, the system has also proven a lot more reliable and resilient.

I won’t cover how to debug these issues (that was an earlier post, the search function is your friend!), but rather the funny cases (well, they look funny now) that I encountered.

Calling my wife. A customer had my wife’s number (I had given it to them when my phone was broken), but a few years later they were still calling her for EOD issues. Quite a classic when it’s 11pm and my wife is woken up with questions she can’t understand.

Missing cutoff times. Such a big stress the first few times. GL entries need to be posted by 12am or… The first few times, you guess it’s the end of the world, the bank will go bust, another black Thursday will happen. So when an issue happens, there is massive panic about how to get everything rolling before the cutoff. Later, you learn that the world will keep going if the data reaches the ledger late. Not ideal, but not a cause for WW3.

Leveraging someone’s visit. This actually happened quite a few times. When I was visiting a customer and the day was coming to a close, I would get a call from the office asking to help someone with a small problem. I then spent a few hours (finishing just before midnight) solving that “small” problem that was end of day related. Other variations include saying bye to everyone and being asked if I could help on this tiny matter.

This being said, that’s an aspect of the job I like (and hate as well). You need to always be on your toes and it’s actually good fun to be at the customer’s that late: very few people in (and most lights turned off) and quite a relaxed atmosphere.

And you? How much did you enjoy your night shifts?

Murex go live – High times fun times

Everyone who has worked on Murex has been exposed to a Murex go live. This is a crucial moment and it can take many different forms:

  • Legacy system replacement by Murex
  • Upgrade (or major version migration)
  • New functions launch

Regardless of the reasons for a go live, they are moments of stress, pressure and (hopefully) joy. But most of them (especially the first ones and the big ones) will leave lasting memories.

A go live usually happens on a weekend. Once New York finishes trading, the EOD runs and, depending on timings, the migration might or might not start right away. The idea is to have a fallback environment all ready to go in case the go live doesn’t happen.

Migration, configuration, checks and regression testing. They happen more or less consecutively with (and that’s mandatory) at least one thing going wrong. Then there’s the rush to get everything OK before the end users come in to approve the results (if required) and finally the go/no-go decision.

The first few times one goes through this exercise are highly stressful. But with experience, one starts to learn and stress much, much less. During one of my first Murex go live weekends, I met someone really relaxed. He had a few go lives under his belt and was able to take a step back and advise on what to do. I remember being very stressed and stumped on a problem, but it took him 2 seconds to suggest a report that would help me. You do need hands for the legwork during these events, but you also need knowledgeable people who can keep a cool head.

More recently, I was only on call for the migration/config part (the privilege of experience and seniority) and onsite when the end users were coming in. I have to admit that I missed the long Saturday nights sitting in front of the computer getting it all to work. And catching some Z’s early on Sunday morning to be back for the end users. I think that’s the downside of experience: you get fewer thrills.

And you, dear reader, what’s your experience with Murex go live? Got some great stories or some horror ones you want to share?

Dealing with angry users

This is actually quite an important topic, especially when one is more junior. Indeed, the business often relies a lot on Murex and it is a major source of revenue for the bank.

So when something goes wrong, given the high stress levels, it sometimes gets quite heated. The first few times it’s a bit surprising, but then you get used to it and start to learn how to deal with it. If you’re in a rush, you can simply read the last paragraph 🙂

In this line of work, you will one day or another be exposed to this sort of behaviour and you can choose how to react to it. As I can’t speak for you (well, I could, but that would be a wild guess), I’ll share how I see things.

My state of mind starts as: we’re all in this together. There is a problem and the end user is frustrated: what can we do to solve the issue? I set my mind to assistance/problem solving. At first, there is no point in taking any side: you need facts, you want to solve the issue. If someone is angry, frustrated, etc., I don’t mind so long as it does not impede solving the issue. I once had a colleague stand through a full hour of complaints about Murex from one person; I think the only positive outcome was that the trader could vent all his frustration (in this case, he was actually quite right to be frustrated). But that did not help one bit in improving the situation.

As we covered in an earlier post, you need to work out your problem solving skills and depending on the criticality, raise the issue higher.

When people are so angry that they stop making sense, I calmly explain that I just want to solve their issues and that I need their help. It works 99% of the time. Once, I had a guy tell me there was no point, all was bad, etc. I asked what help I could provide. He replied none. I then left, talked with other people, and they actually brought him back to reason. We could then work it out.
Most of the time, you need information to do problem solving. Without it, you cannot do anything, so make it work your way.

They usually just want a solution that works. Myself, if I know some solutions that are quick to test with limited impact on other parts of the system, I suggest we try them first. If the solutions are more complicated or harder to find, I then ask for some time. Some people prefer to work it out and fully test it before giving an answer; I’m not the patient type, so I explain that we’ll try some solutions, and that if they prefer I can do it offline and come back with the results. People in this line of work tend to be quite smart and quick thinkers, so leave them the choice (working on their PC or on your own): they’ll appreciate being involved.

In short, be constructive and keep your eyes on the prize (the solution). You’re bound to face someone one day with whom it won’t work. Forget the constructive part with them and work with someone else. As we say in French: “Le con ne perd jamais son temps, il perd celui des autres” (a fool never wastes his own time, he wastes other people’s).

Having fun with the system

For a lighter mood this Friday, let’s talk about the ways to have fun with the system. Murex is a complex system, not always easy to configure or to get familiar with.

But a complex system also means lots of places to put that funny little touch that will bring a smile when spotted.

Here are a few I’ve encountered:

  • Classic but always good: the funny comment in code (pretrade or stored procedure for example). One of the best was /* Added to please Ms Princess while it serves no purpose */. I had to tell that person that this code was going to prod and that taking it off would probably be a good idea
  • UDF consistency rule messages: “Why did you forget to enter XXX” (this was when entering bonds). I could tell the person who wrote that bit must have been so frustrated that they had to vent some anger into the message. I had a smile on that one, piecing the story back together
  • Names of views and filters. One of my ex-colleagues was always putting insults into his filter labels (and normally deleted them after use). Well, let’s say that some DBs still have these words in a few places
  • Description fields. I have to admit that this one is best used on static data that only support people have access to; not everyone might agree on that one!
  • Documentation and labels of objects used. I remember that bond called NOTABOND: classic but gold 🙂

Did you encounter some too? Did you put some yourselves (voluntarily or not)?

Have a good weekend!

Volatility … going the distance

Today I’ll cover a bit about volatility and the different topics relating to it. If there’s one topic you want me to dive into, let me know and I can then make a full post on it.

1. What’s volatility?

Volatility is a measure of price variation over time. The higher the volatility, the more the price is likely to vary (either up or down). Volatility can be measured historically, by computing the standard deviation of price returns. Or it can be measured as implied volatility, that is to say by solving for the volatility that matches a quoted option price.
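As a quick illustration of the historical measure, here is a hedged sketch: the annualized standard deviation of daily log returns, with made-up prices and the usual 252-trading-day convention.

```python
import math

prices = [100.0, 101.5, 99.8, 102.2, 101.0, 103.1]  # made-up daily closes

# Daily log returns, then their sample variance.
rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
mean = sum(rets) / len(rets)
var = sum((r - mean) ** 2 for r in rets) / (len(rets) - 1)

# Annualize assuming 252 trading days per year.
hist_vol = math.sqrt(var) * math.sqrt(252)
print(f"{hist_vol:.1%}")
```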

2. Smile

When the volatility is solely time dependent, we call it the at-the-money volatility (or ATM vol if you prefer shortcuts). But often you’ll consider that volatility is not constant across strikes and that it changes as you step away from the at-the-money point. Usually volatility increases as you move away from the central point, effectively giving you a smile-shaped curve. The driver behind this is that options further from the ATM point effectively trade at a higher price than the ATM vol would imply.

3. Interpolating and shaping the smile

When working with smile curves, you need to decide on an interpolation method. You can choose between parametric and geometric. Geometric interpolations take into account the pillars that you have provided and interpolate in between. Parametric ones require some parameters to be provided (correlation between spot and vol, shape of the risk reversal, etc.). SABR is getting used more and more for IRD products and traders are also starting to monitor the sensitivity of their positions to the SABR parameters.
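As an illustration of the geometric family, here is the simplest possible scheme: linear interpolation in strike between the provided pillars. Real desks use smoother schemes (splines, SABR calibration, etc.), and the strikes and vols below are invented.

```python
# (strike, vol) pillars, invented for illustration.
pillars = [(90.0, 0.14), (100.0, 0.10), (110.0, 0.13)]

def smile_vol(strike):
    """Linear interpolation of the vol between the two surrounding pillars."""
    for (k0, v0), (k1, v1) in zip(pillars, pillars[1:]):
        if k0 <= strike <= k1:
            w = (strike - k0) / (k1 - k0)
            return v0 + w * (v1 - v0)
    raise ValueError("strike outside pillar range")

print(smile_vol(95.0))  # halfway between the 0.14 and 0.10 pillars
```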

4. Dynamic smile

It means that the smile is not constant when the spot rate changes. So in terms of total vol, it is like defining a convexity on top of the volatility (the smile being your first level). Murex can produce such effects when calibrating Tremor.

5. Short date model

Very popular in the FX world, the idea is that you can attribute certain weights to specific days: Fridays are less volatile than the rest of the week, weekends have a very low weight, Mondays are a bit higher, etc., but also to specific events (Fed announcements, big number publications, etc…). The short date model really has an impact on the shorter dates (the shorter, the bigger), so while it goes up to 1 year, it is effectively really important up to 3 months.
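The weighting idea can be sketched as follows: each day contributes weight times daily variance to the total. The weights, the flat 10% base vol and the 252-day convention are all illustrative, not actual Murex calibration.

```python
import math

# Invented day weights: quiet weekends, slightly busier Mondays.
day_weights = {"Mon": 1.1, "Tue": 1.0, "Wed": 1.0, "Thu": 1.0,
               "Fri": 0.8, "Sat": 0.05, "Sun": 0.05}
base_daily_var = 0.10 ** 2 / 252  # daily variance from a 10% flat vol

def effective_vol(days):
    """Annualized vol for an option expiring after the given days."""
    total_var = sum(day_weights[d] * base_daily_var for d in days)
    return math.sqrt(total_var * 252 / len(days))

print(effective_vol(["Mon"]))                # above the flat 10%
print(effective_vol(["Sat", "Sun", "Mon"]))  # over a weekend: well below 10%
```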

6. Cut-off spreads

This one is more a workaround than a real market need. One should consider that an option with a Sydney 5pm cut has less time to maturity than an option with a NYC 5pm cut. So the idea is that the NYC option should have a higher price. Ideally, one would increase the time value of that option (t would then be able to cater for fractions of days). As this is not currently possible in the system, the volatility is effectively increased to mimic the higher price of later cuts.
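The workaround can be sketched numerically: keep t unchanged but bump the vol so that the total variance matches the later cut. The 15-hour gap between the Sydney and NYC 5pm cuts is an assumed, illustrative figure.

```python
import math

t = 30 / 365             # a 30-day option, in years
extra = 15 / 24 / 365    # assumed ~15 hours between the two cuts
sydney_vol = 0.10

# Same total variance as sydney_vol^2 * (t + extra), with t left untouched.
nyc_vol = sydney_vol * math.sqrt((t + extra) / t)
print(f"{nyc_vol:.4%}")  # slightly above 10%
```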

That’s all that comes to mind right now, but I’m sure I’ve forgotten a lot. Volatility is a rich topic and I just wanted to give you a flavor of the different functions attached to it.

Comments, requests below!

XML Macros

If you’ve never heard of them, that’s probably for the best but they’re always worth discussing.

Most actions in the system that need automation can be done through a processing script (changing the date, running a report, checking DB results, etc…) or through a workflow or specific interface (importing trades, market data, etc…). But in some cases, you might need the system to do something automatically when that something is not automated at all.

It usually ends up in a meeting where people look for a solution until someone mentions: XML macros.

What are XML macros? XML macros allow you to automate a certain path in the application, as if someone were entering all the relevant information and clicking on the appropriate fields. Recording your macro generates an XML file that you can then load via a script (on paper, it sounds perfect for automation).
The extra bonuses of XML macros are that you can modify the XML yourself (for instance, username/password groups or any other typed fields) very easily AND that they can open up a session exactly at their very end (for instance, choosing a portfolio and loading the simulation).
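Editing the recorded file is indeed just XML manipulation. The real macro format is Murex proprietary, so the tags and field names below are entirely invented, purely to show the kind of scripted edit:

```python
import xml.etree.ElementTree as ET

# Invented structure standing in for a recorded macro.
macro = ET.fromstring(
    "<macro>"
    "<step field='username'>olduser</step>"
    "<step field='group'>FO</step>"
    "</macro>"
)

# Swap the recorded username before replaying the macro.
for step in macro.iter("step"):
    if step.get("field") == "username":
        step.text = "newuser"

print(ET.tostring(macro, encoding="unicode"))
```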

On paper, it looks great and feels like a bulletproof solution. Unfortunately, I tend to run away when they’re mentioned, for quite a few reasons:
– They each have their own JVM. So if you plan to use them to ease user access into the system, the user’s machine might end up out of memory as each session will eat up 250-500 MB (depending a bit on what the user is doing)
– They’re completely dependent on the path built: if the path changes for whatever reason (change of version, change of configuration), the macro will need to be rewritten to adapt to the new version
– If a username/password changes, one needs to look into it as well and update the macros.

Automation is often a set-and-forget solution, but in the case of XML macros, at each change (and on a regular basis) they need to be checked as they might stop working.
If automation is absolutely required and an XML macro is the only solution, you should consider an extra check after the XML macro run to ensure that the job was done properly (usually a good old DB check processing script is perfect).
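Such a post-run check can be as small as this sketch, here with sqlite3 as a stand-in and an invented status table: verify that the row the macro was supposed to produce actually landed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eod_status (task TEXT, status TEXT)")
# Pretend the XML macro wrote this row when it finished successfully.
con.execute("INSERT INTO eod_status VALUES ('macro_import', 'DONE')")

# The actual check: exactly one completed row for the expected task.
(count,) = con.execute(
    "SELECT COUNT(*) FROM eod_status "
    "WHERE task = 'macro_import' AND status = 'DONE'").fetchone()

if count != 1:
    raise SystemExit("XML macro did not complete - alert support")
print("macro check OK")
```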

If you have stories or tips about using them, feel free to share below!