Tag Archives: database

Murex performance – is the system slow?

Murex performance: Is my system slow or is it just me? It's a question you often hear when presenting the system to new users.

There are many answers which can come up:
– Yes
– No
– Maybe
– Server is a little small
Etc…

But the only good answer is: "It's all relative!". Indeed, what one calls slow or fast mostly depends on the user.
For instance, if opening a trade ticket takes 3s, is that slow? Is that fast? Agreed, if it takes 2 minutes, it is slow!

So the problem with performance is that there is a great deal of subjectivity. But I don't want to play psychologist and work out what the human mind considers slow or fast. On the contrary, I'd like to take precise examples of what users frequently report:

– Actual session start. Since the new GUI, entering login, password and group is almost instant, but the actual session start when you enter a menu usually feels slow
– Opening details: bonds, trades, counterparts, etc…
– The feeling that the system is sometimes a bit sticky, where actions take 1s or so when you expect them to be instant
– Simulation loading
– Accounting generation
– Saving trades

There are, in my opinion, 2 types of Murex performance issues: the user experience ones and the structural ones. One could argue that they're both the same, but let me explain:

The UX ones are those that take a second or two, for which timing is hard or imprecise. In the list above, I'm referring to opening details and stickiness. This type of performance issue is usually ignored, as 1s instead of .1s to open a trade does not seem like a big change to the user. But what the user does not always understand is that when you press the space bar to open a deal, lots and lots of things are happening in Murex: the trade number is sent from the client to the application server. The application server then sends DB requests to get all the information. It is not one single request; we're closer to hundreds than to a handful: get the trade header, get the trade body, get the contract details, get the additional flows, check access rights. If we assume that a round trip between the DB and the application takes 3ms, 100 requests bring that to .3s. Add the CPU time (to decide if the trade is part of a package, or requires opening other tables), add the actual request time on the DB, and then sending all the information back to the client. And there you go: 1s or 2s is very easily reached.
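
To make the arithmetic concrete, here is a purely illustrative budget; only the 3ms round trip comes from the figures above, the other numbers are invented for the example:

  100 DB round trips x 3 ms                = 0.3 s
  application server CPU time              = 0.4 s
  actual execution time of requests on DB  = 0.4 s
  sending the results back to the client   = 0.2 s
  total                                    = 1.3 s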

So the user experience performance issues are also structural, and there's not much to do in terms of code optimization (well, at least not in my non-PAC brain).

The actual structural ones, the ones you expect to be slow, are basically like the above compounded many times, factored with extra calculation for risk. The good thing is that when you get to that level, you can usually find optimizations on the DB side, if not on the code side, to improve speed and load/process more information at once. Usually when you contact support about Murex performance, this is the kind of issue people expect to help with.

So back to our original question: is the system slow? It is indeed relative. There are implementations of Murex where the system is lightning fast. But everything has been optimized with that objective. Getting your ping times from 1ms to .1ms is no small feat, having a large DB with very fast requests requires massive and optimized DB power, etc… So it can be done and has been done, but people tend to be cost sensitive, and doubling (well, I suspect far more than doubling) your hardware/maintenance budget so that a trade opens in .1s instead of 1s does not really make sense.

And Murex also works hard at improving the user experience where they can: Livebook is a great example. Livebook means that data is churned all day long by processes, so that when you enter Livebook in place of a simulation, you don't wait 5-10 minutes but a few seconds to get the information. Screensets are another great example: defining screensets lets you open all the sessions you need in one go rather than starting them one by one.

To conclude, Murex performance is addressed wherever Murex can do something or think of something innovative. But there are some limits due to hardware that are next to impossible to improve from a Murex point of view. So is the system slow? No. Murex worked and is working hard on it, but in the end, it all depends on your implementation and how fast you want (read: can afford) it to be.

Updating SSIs in bulk – How I did it

Practical case: today, someone asked me if I could help. Some trades were imported, and for these, everything has to be considered fine: even if SSIs (standing settlement instructions) are missing, future flows have to be OK and not appear as missing SSIs. New trades on the other hand (even if they are on the same counterpart) have to have proper SSIs or return a missing-SSIs message.

Putting in some dummy catch-all SSIs would not work, as it would affect new trades as well. Even putting a validity up to yesterday would not work either, as future payments from existing trades should not show any missing-SSIs error.

So the only solution I could think of: specific SSIs for the imported trades. I was not completely sure which table holds SSIs (I had to confirm it), so I entered a trade with customized SSIs and put a very specific string in one of the fields (if you're curious: ABCDE, yes I know, very very original). Then I searched through the DB trace file (see earlier posts for how to) to find the table.

The problem I then had was that the table had a trade number field that did not match my trade number (even though I was sure it was the right trade). So I searched the trace file for that unknown trade number and found it within the transaction header. I then had everything: from my deal number to the intermediary reference (in my case there was only one, but there could have been multiple) to the final data.

Turning that into a script was then a breeze (for people curious enough: I used the same specific dummy SSIs for all the imported trades) and voila! You now have specific SSIs for all imported trades, while new ones will fetch automatically from the SSI assignments.
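
For the curious, the script boiled down to something like the sketch below. A minimal sketch only: since the datamodel is not documented, every table and column name in it (SSI_DBF, TRN_HDR_DBF, M_AGENT, M_ORIGIN…) is a placeholder, and the marker string plus the DB trace are what point you to the real ones.

  -- Hypothetical names throughout; replace with the tables found via the DB trace.
  -- 1. Locate the template SSI row entered manually with the marker string.
  select M_REF, M_CCY, M_AGENT from SSI_DBF where M_AGENT = 'ABCDE';
  -- 2. Duplicate that template row for each imported trade, keyed on the
  --    internal reference found in the transaction header.
  insert into SSI_DBF (M_REF, M_CCY, M_AGENT)
  select H.M_INTERNAL_REF, S.M_CCY, S.M_AGENT
  from TRN_HDR_DBF H, SSI_DBF S
  where S.M_AGENT = 'ABCDE'
    and H.M_ORIGIN = 'IMPORT';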

Is that a perfect solution? Of course not, you can have issues upon performing events, but still, it is a very valid one.

Spring cleaning – purge the database

Spring cleaning: I know I'm a month early, but purging is an important task and you sometimes need to make sure it is adapted to your environment and needs.

Purging the database lets you keep database growth under control and ensures that you get the maximum performance out of the system. But there's often a fear that purging will result in data loss, and you quickly find yourself with massive retention periods: 7 years of trades, 2 years of daily market data and all the logs.

The first thing to keep in mind: Murex is a production system for trading and processing, it is not a data repository. You need to keep it running in top shape to maximize the benefits you get from the system. If you need to retain some data stored in Murex, export it and store it on your own system. It is much cheaper and more appropriate.
This might sound obvious, but when talking about purging, regulation is often the first topic that comes up, and it blocks any further discussion as long as a solution for storing all the data to be purged has not been implemented.

Once everyone is convinced of the importance of the purge, there are multiple items to purge, in order of importance:

– Documents and their entries (usually ranking at number 1 in DB usage)

– Market data (normally ranking at number 2)

– Trades

– Logs

– Static data

– The forgotten ones: views, layouts, filters

Documents

Purging Mxmlexchange is actually quite straightforward and is done through scripts provided by Murex. Just be very careful with the scripts and ensure that proper testing is done on test environments before deploying to production.

But if you test it properly and only purge the intermediate documents, it is quite straightforward and holds no surprises.

Market data

Market data purging is made of 2 parts. The visible side of the iceberg is purging market data for dates you no longer need (good practice tends to be keeping only month-end dates for older history and daily market data for a few months, 1-3 depending on your aggressiveness). This can be done through the GUI if you want, quite straightforward.

But there's also a second part of market data purging which helps a lot: expired instruments (read: mainly bonds and listed options). By default, Murex automatically copies all market data entries from today to tomorrow as part of the EOD. This automatic copy means you also keep entries for expired listed options (ETOs), futures or bonds, which keep being rolled forward. It might not sound like much, but ETOs can quickly snowball, especially if you trade very short-dated ones such as intraday and overnight options. Here, Murex can provide you with a script to clean them out. The symptom for this second part is tables such as MP*_GLOB and MP*_PRIC growing large.
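
If you want to check whether you're affected, a quick look at table sizes will tell you. A minimal sketch, one flavour per DB (the exact MP* table names depend on your installation):

  -- Sybase: space used by one market data table (name is installation-specific)
  sp_spaceused MP_PRIC_DBF
  go
  -- Oracle: list the largest MP* tables by allocated space
  select segment_name, round(bytes / 1024 / 1024) as size_mb
  from user_segments
  where segment_type = 'TABLE'
    and segment_name like 'MP%'
  order by bytes desc;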

Trades

Trade purging makes sense especially when you do volume trading. It is done through the GUI (very important) and in such a fashion that all purged positions are aggregated to avoid any jump in cash balances.

The trade purge occurs in 2 steps. The first is a logical one, where the trade is no longer read for reports and simulations but is still present in the database. All its contributions are stored and aggregated with other purged deals. It can be undone if required.
The physical purge will effectively remove the trade from the system: you can no longer query it and it cannot be reversed.

Position and cash balance testing needs to be performed after each purge step. Testing after the logical purge is the most important, as Murex will no longer evaluate the trade but read its stored contribution directly. Testing after the physical purge could almost be skipped, as it no longer affects the aggregated results; it simply removes the unused trade records.

Trade purging also depends on trade complexity: simple spot and forward deals can (and should) be purged much more aggressively than more structured deals.

Logs and audit

Murex will give you the scripts for these; purge as requested and make a copy upfront if you feel the need. They don't consume much space, but clean logs make browsing through them a lot easier!

Static data

I am actually an advocate against purging static data. Murex often references static data in the purged deal contributions or in other places, and removing it will break that link for Murex. One could always try to fix all the problems which ensue from it, but in my opinion it is simply not worth it. The amount of problems generated (which could also surface later, during or after an upgrade) is not worth the small amount of DB space it occupies.

Filters, layouts, views, etc…

These items should not be purged per se but should be kept under control. Restraining users from creating and duplicating them is probably the way to go.
Cleaning them up would probably not have much impact on the database, but you risk an EOD report or a process failing. Unless you have kept a very precise list of which items are used by which process (and if you did, kudos!), you probably have to leave them where they are or start a massive campaign to identify and decommission the unwanted ones.


In summary, if you concentrate on the top 4 items of this list, your DB should grow as expected when the hardware was planned with Murex, and performance will remain optimal. Just keep an eye on DB usage by table, and if something grows too quickly, Murex will always be happy to sort you out!

If I forgot something or if you feel like adding something, please feel free to!

Sybase vs Oracle

This is the question one often hears once the decision has been made to go with Murex: Sybase vs Oracle. Which one is better? Which one do you recommend? Etc…

To first repeat what has been said numerous times: Murex works very well with either, and if you need to use one or the other due to bank policy or any other reason, you can't go wrong. Murex will deliver results and everything will be A-OK.

But there are differences, and both have pros and cons. Historically, Murex only supported Sybase, and many customers feel that they will get better support from Murex if they go with Sybase. But Oracle is quite well known at Murex nowadays and there is no difference in the quality of support. The PAC team especially is knowledgeable on both fronts and can provide configuration recommendations for both systems.

Even in performance, that's not where the difference really lies (many people would disagree here and give reasons to go for one or the other). I feel the difference is really in the day-to-day usage of each: they each work slightly differently. Not from the Murex front end, of course: to the end user, Sybase or Oracle makes no difference, the system looks the same and functions work the same way. It is really when you start using SQL that you see the differences.

I graduated from SQL school with Sybase as a teacher, so I do know more about Sybase than Oracle.
Sybase-wise, identifiers are attributed directly (the good old M_IDENTITY). When writing SQL, there is no need to take care of that field, it handles itself. With Oracle, it's a different story: one needs to call the sequence (TABLENAME_DBFS) to retrieve the next number in order to use it. A bit more painful.
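
A minimal sketch of the difference on a hypothetical TABLE_DBF (the sequence name simply follows the TABLENAME_DBFS convention mentioned above):

  -- Sybase: M_IDENTITY fills itself in, simply leave it out of the insert.
  insert into TABLE_DBF (M_LABEL) values ('MY_LABEL')
  go
  -- Oracle: fetch the next identifier from the table's sequence explicitly.
  insert into TABLE_DBF (M_IDENTITY, M_LABEL)
  values (TABLE_DBFS.NEXTVAL, 'MY_LABEL');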
SQL clients with Oracle are for some reason always more of a pain, especially if you mix direct commands and stored procedures. I used SQL Developer, and not seeing the results of my stored procedures is a pain. I also use SQuirreL a lot. The latter works great for everything EXCEPT the initial connection to Oracle servers. When the server is remote, the initial load of tables took a couple of minutes (it started at 15 minutes and went down to 2-3 minutes once the link to the other offices got upgraded). Oracle was also a pain with the username/password for each schema. I'm not too sure why it was set up like that, but while in Sybase one can easily switch from one DB to another with the same user, the way Oracle is configured to work with Murex forces you to log out and back in (or to log in multiple times, once per schema).
But I had my fair share of issues with Sybase too. DB corruption happened quite a few times (I suspect it happens with Oracle as well, but I did not experience it firsthand). The worst DB corruption was when receiving a dump from a customer which contained a trigger (triggers are not your friends). That trigger was attached to a user id which we did not have when we loaded the dump, so we had to reset the user id for that trigger before deleting it. Updating that user id caused a DB corruption which could only be solved by stopping and restarting the server. There were other cases, but nothing reproducing as easily as that one.

I'd be interested to hear Oracle experts tell me all the good sides of Oracle, as from my point of view, I usually found Sybase easier to work with and often wasted a few hours trying to adapt a stored procedure written for Sybase to work with Oracle. Usually the PAC team were the ones able to set me straight and get the procedure up and running.

Murex performance – the chicken and egg story

Murex performance is often in the spotlight: how quickly can Murex do XXX or create YYY (replace XXX and YYY with your choice of tasks)? The problem is that the list of requirements varies between 2 customers and results in very different timings.

So to take out the main question first (if you're the sort to prefer short answers): can you get good performance out of Murex? Absolutely!

How you're going to achieve it depends on a few things (which makes the question "how long does it take to do something" impossible to answer):

  • Hardware is the first one that comes to mind. With great hardware comes great performance. Well, not really: you also need to have it tuned right, but yes, it is a major factor
  • Requirements. This one tends to be overlooked: "I want to get real theta for my whole portfolio over the next 10 days, along with a spot shock, and at any time rewrite the spot levels. And it needs to be fast!" (you have similar questions with trade input, reports, etc…). Of course, if you ask for time-consuming tasks (or put in many consistency checks), you will slow down the processes.
  • Maintenance. If all works fine on day 1 but not 10 days later, clearly there’s some maintenance that was not done properly
  • Software. I put this one last as it is very rarely the software that is the issue. Very rarely (it feels good to repeat it)

For most of these issues, the PAC team is the go-to team. They can size the hardware you need based on your system usage, advise you on maintenance procedures and debug if something runs too slowly.

In general, if you believe that a process is taking too long given the configuration (inserting a deal takes 5 minutes, a report is still running after 1h, etc…), you need to do the following.
If it is an isolated occurrence, it could well be a lock, either at the DB level or at the system level. For locks at the DB level (rare, but it happens), check with your DBAs and also check whether a heavy process is currently running. For locks at the software level, Murex has you covered with ipmonit. Log in to ipmonit from the monit tool and you can access a lock report showing all the locks put in by the system (for example, if someone is editing a trade, it is locked to avoid 2 modifications at the same time). Check the documentation for ipmonit, as its screenshots are very helpful when navigating the screens.

If it happens all the time, then it is unlikely to be a lock and you need to generate performance traces. The first ones are generated with the /TIMER slash command. This slash command generates mxtiming files in your log directory (you can put the slash command in the launchers for services if required). The mxtiming file shows the time spent on CPU and the time spent waiting for the DB. If the time spent on the DB is too high, indexes could be missing on some tables, so you need to run a DB trace (shameless link to my older post for how to). These DB traces can be sent to Murex, and they will give you the number of logical reads on each table. A number that is too high likely indicates that a table is missing an index. Indexing that table should improve performance.
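
For illustration, once the trace has pointed at the guilty table, the fix itself is usually a one-liner; the names below are hypothetical, and you should confirm any index definition with Murex before touching a production schema:

  -- Index the column that the trace shows being scanned over and over.
  create index TRN_HDR_ND1 on TRN_HDR_DBF (M_NB);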

If the system is slow, the reason lies either in the hardware or in the configuration. Rarely is the problem due to a bug.

There are also cases where Murex develops a new feature to speed up a process that is known to always be slow due to the sheer amount of computing/data crunching it requires. Parallelization and pre-crunching are the 2 big methods to do so. But this applies once you have volume: inserting a single deal should always be fast!

Comments, experiences are welcome!

Murex database – Hack your problems away!

Alright, today let's crack open this black box that is the Murex database! While all of you know that Murex doesn't publish its database organization, sometimes there is no choice but to go directly where the data is.

My rule of thumb is that going directly to the database should be avoided whenever possible. Any problem caused while browsing will have impacts and cause problems in the environment. For reporting, dynamic tables or viewer reports are your friends. For filtering, the list of fields is actually quite exhaustive. In many cases, you will find all the information you need without opening a single SQL client. But sometimes, for some filters (back to the RQWHERE post!), some reporting or some DB cleaning, you'll need to go through the database.

Working with the Murex database is the same as working with any other trading system database: back up, test in test environments, test again, back up, and it should work. The problem is that the role of some fields is sometimes not very clear, and when trying to populate lines (insertion or update), this can turn out to be a real problem. Murex consultants are then the best suited to help you out, especially if you're not sure your request is safe. In case of migrations, again, Murex consultants should be the ones providing you with the right scripts; only write your own when you're absolutely confident of what you're doing.

Now, from a Murex consultant's point of view, it is not always easy either to determine which fields play which roles. But the first step is to understand what the other party is trying to do. Maybe SQL is not the best way forward and there could be an easier solution?
Then you can check what other people have done. It is rare to have a problem with one customer that has not been encountered by somebody else.

I learned SQL while working at Murex and many times it actually sped up processes tremendously:

– Inserting some data in bulk (or duplicating records)

– Cleaning up unwanted data, especially logs (or market data; much, much faster)

– Building my own extractions when doing reconciliation reporting

But it also happened that my scripts did not work as expected (and luckily I had a backup and was doing it on a test environment): updates or deletes without a correct where condition. I once removed all records from the transaction header!
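
For illustration, the classic pattern looks like this (hypothetical table name):

  -- What was intended: remove one specific record.
  delete from TRN_HDR_DBF where M_NB = 12345;
  -- What actually ran, the where clause having gone missing: every record goes.
  delete from TRN_HDR_DBF;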

If you're working on a limited set of tables and you don't want to call upon the DBAs to do a backup, you can use the following tools: Help-Monitor-DPI info-Transfer from RDB to DBF. You will need an authorization code to proceed, but you can then transfer the table from the database to a file on the application server file system. The step Transfer from DBF to RDB does just the opposite. So it gives you the flexibility to back up any table you want from the database to the file system and bring it back whenever required.
Note that you can use jokers in the name of the table you wish to transfer, and that you should not put _DBF but .dbf (e.g. TRN_HDR.dbf, not TRN_HDR_DBF).

And you? What’s your relationship with SQL? Comments and experiences below if you wish!

Having fun with the system

For a lighter mood this Friday, let’s talk about the ways to have fun with the system. Murex is a complex system, not always easy to configure or to get familiar with.

But a complex system also means lots of places to put that funny little touch that will bring a smile when spotted.

Here are a few I’ve encountered:

  • Classic but always good: the funny comment in code (pretrade or stored procedure, for example). One of the best was /* Added to please Ms Princess while it serves no purpose */. I had to tell that person that this code was going to prod and that taking it off would probably be a good idea
  • UDF consistency rule messages: "Why did you forget to enter XXX" (this was when entering bonds). I could tell the person who wrote that bit must have been so frustrated that they had to vent some anger into the message. I had a smile on that one, piecing the story back together
  • Names of views and filters. One of my ex-colleagues was always putting insults into his filter labels (and normally deleted them after use). Well, let's say that some DBs still have these words in a few places
  • Description fields. I have to admit that this one is best used on static data that only support people have access to; not everyone might agree on that one!
  • Documentation and labels of objects used. I remember that bond called NOTABOND: classic but gold 🙂

Did you encounter some too? Did you put some yourselves (voluntarily or not)?

Have a good weekend!

RQWHERE

RQWHERE is probably the most useful filter function in Murex. It basically lets you filter based on SQL statements. This gives you complete freedom as to which criteria you want to use to select a certain result population.

RQWHERE is also a pain in the neck to use the first few times (and the times after!) as it is purely based on the datamodel (which, as we discussed before, is not documented). Being based on the datamodel means that you need to understand how the data is structured and how the different tables relate to each other. It also means that if the datamodel changes, the filter will need to be adapted.

So if you tick the following boxes:
– Know how to make simple SQL statements
– Know how the data you need is organized
– Can’t make the filter you need with existing functionalities

RQWHERE is for you!

Basically, RQWHERE takes 2 arguments: the first one is a string, the actual select statement you wish to use; the second one is also a string, but one which I've never ever used. If someone recommends it, feel free to comment below 🙂

The way you structure your select statement is a bit up to you, and while I can't help you with it (your preferred Murex consultant can and will though 🙂 ), there is one very neat thing that can take your filter from good to very good: parser functions.

Indeed, you can enrich your string statement with interactive variables or parser functions. This means that the filter can prompt the end user for a string/number/date prior to being executed, which is then used when building the string sent to the function.

For example, say you want to retrieve records with a number higher than x. Your first argument will be something like "[start of your statement] where NUMBER>" + <interactive numeral variable> + "[end of your statement]". If you're using strings or dates, make sure that you use single and double quotes correctly: "[start of your statement] where STRING='" + <interactive string variable> + "'[end of your statement]" (yes, that is STRING= followed by a single quote inside the double quotes). The built statement string will then be "[start of your statement] where STRING='<input variable>' [end of your statement]". A perfectly built RQWHERE!
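
To make it concrete, here is a hypothetical example (table, column and variable names are placeholders to adapt to your datamodel):

  First argument as typed in the filter:
    "select M_NB from TRN_HDR_DBF where M_CPART = '" + <interactive string variable> + "'"
  Built statement once the user enters BANKA:
    select M_NB from TRN_HDR_DBF where M_CPART = 'BANKA'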

How to debug RQWHERE?

Sometimes your RQWHERE will not work as expected. Either it returns nothing with no error, or it spams errors. If the latter, Murex will show you the statements which are failing and you can fix your RQWHERE by looking at the built result.
If there is no error, then turn on a DB trace (on screen or in logs) and check whether the built SQL statement is the one you want.

Questions, comments, feel free!

Database traces

Database tracing is very handy when you're trying to grasp what is causing an error and you suspect something wrong in the configuration (an incorrectly set up security, market data, etc…), or when you're looking to build an SQL query and need a somewhat better understanding of the data model (note that I do not encourage you to use SQL; reporting and dynamic tables should be your first stop, but sometimes they're not enough).

In Murex there are a couple of database traces, and some are more useful than others.

The first one I used is the one coming up through the GUI. Every request pops up on the screen and gets executed when you press Escape. It's simple to use, simple to turn on, and there is no need to retrieve a file from the application server. Sadly, it sometimes causes crashes due to the popup windows interfering with the application, or sometimes you simply have too many requests for it to be useful.
To turn it on, you need to go to Help-Monitor-DBX Info-Request. You can then enter criteria to filter the requests: from the action (update, delete, insert, select) to a string search (e.g. TRN_HDR or BOND001).
When I began in the Murex world, I loved that tool as it helped me put the pieces of the puzzle together.

The second one is to activate traces dumped into a log file. To do so, go to Help-Monitor-DBX Info-RDB Statistics. You can choose the level of debugging you need. The basic one is good enough if you need to look at the queries executed. The full verbose one (apart from the plan) is necessary if you're trying to track missing DB indices or need to understand which requests are costing too much time. That trace will by default dump files into logs/mxsession/mx, but the default path can be modified within the launcher.
One can also turn on that tracing with a slash command: /RDXSTATISTICS:<prefix>:<trace level>

You might end up using the information from the traces for:

– Building a report

– Building a RQWHERE filter

– Narrowing down an issue

– Building a SQL procedure (useful when upgrading or reconciling, but always check with Murex that it's all good)

DB traces are not a silver bullet and sometimes won't give you the information you're after. Also, when investigating a crash, keep in mind that the last request might not be the one responsible for the crash and you might need to go back up the logs to find the root cause.


If you have questions, funny stories or feel like it, please comment below!