Inside the trading lab.
In the future, machines will look after day-to-day execution, while humans conduct experiments to further refine the trading process, says Nick Nielsen, head of trading, Marshall Wace Asset Management.
How would you describe Marshall Wace's approach to investment and trading?
There are two parts to our firm: the fundamental side, which is fairly traditional except that we can short stocks and might overlay with futures or ETFs; and the systematic application of fundamental investment through TOPS. Over the years, we've invested heavily in TOPS, in terms of portfolio construction, our relationships with the banks that contribute and execution.
TOPS has one of the biggest equity turnover levels of any strategy in the world. In Europe, on a daily basis, we are generally a very significant volume contributor. Because we're such active participants, we really need to focus on the overall cost of execution. As well as allowing us to capture more of the P&L of the strategy, this focus also lets us pursue other strategies with lower margins in the hope of monetising different types of alpha.
Trading at Marshall Wace was still very manual when I arrived. Over the last four years, we've worked to automate the entire execution process. That change has allowed us to cut our costs and has given us better benchmarking abilities which let us experiment with new ideas about how to add value to the process. As you automate, you find that benchmarking becomes very important; if you can repeat the process, you can understand and improve it.
We're constantly innovating and trying to experiment. For us, the day-to-day is less important than preparing for different things that could occur in the market.
It's been clear to us for the past four or five years that people are not going to be involved in execution for very much longer.
Right now, we're concerned that the quality of the sell-side's electronic trading offerings could degrade in future because of resource cuts, too many people servicing too many different types of requests, and the focus on high-speed technology, i.e. market data consumption and connectivity to different exchanges. Ten years ago, algorithmic trading groups had the quantitative and statistical resources to develop strategies, but political in-fighting at some institutions means that now, when clients are increasingly statistically and quantitatively driven, a lot of these products are much more commoditised. The pace of innovation has slowed.
Now we're taking the execution process and fitting it into portfolio construction. This means we're feeding direct, real-time data on impact costs, negative selection, volume forecasts etc into the decisions being made at the portfolio level about whether to buy or sell particular securities. By leveraging what we've already built for TOPS, we've been able to give our fund managers tools that most discretionary fund managers don't have. So now they can simply submit portfolio slices, for example adding 10% to their holding of airlines. They can add in percentage or notional terms. They can interact very easily with their portfolios and source data from the trading function on marginal risk estimates and cost estimates on how certain events might impact particular securities or change their portfolio.
We also provide expected completion times based on real-time volume forecasting and spread forecasts as well as constant feedback on the results they've achieved.
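A completion-time estimate of this kind can be sketched in a few lines. This is an illustrative calculation under assumed inputs, with hypothetical names and numbers, not the firm's actual forecasting model:

```python
def expected_completion_minutes(order_shares, forecast_shares_per_min, participation_rate):
    """Estimate minutes to finish an order traded at a fixed participation
    rate against a real-time forecast of market volume."""
    fill_rate_per_min = forecast_shares_per_min * participation_rate
    return order_shares / fill_rate_per_min

# e.g. a 50,000-share order, a forecast of 20,000 shares/min, 10% participation
print(expected_completion_minutes(50_000, 20_000, 0.10))  # → 25.0
```

A production version would update the volume and spread forecasts continuously and re-estimate as fills arrive.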
Because we've been able to demonstrate cost savings through the use of quantitative methods, we can go back to the fund managers and have a much bigger influence on how they trade. Now they can focus more on the investment process because they've outsourced trading to a computer that can deliver consistent results.
What are the benefits to PMs of having a more precise idea of trading costs?
We can easily demonstrate to our fund managers how much we've saved - or cost - them; both are important. This information can reshape the type of securities they invest in and the timing of the investment decision. Most investment managers are very good at picking stocks, but they tend to struggle with the sizing and risk management.
If you can make the process more systematic so that people have to make very active decisions when they override a system, it delivers much better investment results. From an execution cost perspective, we've been able to cut costs by roughly 1.5-2.5% of NAV per year for many of our funds, which goes straight to the bottom line for investors.
While someone might not invest with us for that 1.5-2.5%, in a fiercely competitive environment, it takes you from above median to a near-top performer. All areas of our business are constantly trying to find and deliver to the investor an edge that a competitor doesn't have.
We have a very good idea of our costs. We try to hit our advisory targets where possible, often via commission sharing agreements, by delivering the smallest commission in basis point terms and then the least shortfall from any order entered.
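The "least shortfall from any order entered" is conventionally measured as implementation shortfall against the arrival price, in basis points. A minimal sketch, where the function name and sign convention are my own rather than the firm's:

```python
def shortfall_bps(side, arrival_price, avg_exec_price):
    """Implementation shortfall vs the arrival (decision) price, in basis
    points; positive means a cost, negative means price improvement."""
    signed = 1 if side == "buy" else -1
    return signed * (avg_exec_price - arrival_price) / arrival_price * 1e4

# buying at an average of 100.05 against an arrival price of 100.00 ≈ 5 bps of cost
print(round(shortfall_bps("buy", 100.00, 100.05), 6))
```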
In addition, we've embedded into our trading process a number of the techniques that traditional prop firms use to generate P&L. We like to believe we're putting pressure on the prop firms' margins by overlaying those techniques onto our execution process.
Is the contribution of trading transparent to end-clients?
We proactively give high-level information to clients on our innovations and where we think we add value. In due diligence and on request, we provide more specific information, but we tend not to bombard people with information.
For many clients, the all-in number is more important than us trying to attribute performance more precisely.
In automating Marshall Wace's trading, what third-party resources vs in-house resources have you used and why?
We've done most things ourselves. About 95% of our notional traded is fully automated; no one touches it. To enable this, the most important things are automating risk management, transaction cost analysis and market data collection, which we've spent a lot of time on. People often have problems capturing the correct event benchmark pegs and their own trade history, and integrating that with the market data. I've spent a lot of time normalising market data so it can be utilised across any instrument, geography or market, which means we need very good, clean data on the market open and close, and on how condition codes differ across exchanges, for example.
Anytime you want to change something, or want a different way of looking at the data, it's very difficult to achieve if you haven't built it yourself. We also have very, very rich static data and designing the structure of that static data is critical. This means that when you add a new exchange, for example, the system automatically understands whether you can short sell here or you need to report a short sale in a specific way, just from reading the static data.
Broker-specific data on commissions, market coverage, algorithms etc. is only enabled in our systems if all the required static data is populated. We can't send an order to a particular broker for specific types of DMA, for example, if we don't have the static data enabled, which acts as an additional risk control. We have also done most types of connectivity ourselves. But in some cases, we use vendors for normalising FIX connectivity, because it's probably not worth us maintaining all the specific FIX tags for 150 or so brokers. We try to normalise all of our brokers to a generic standard across the algorithmic trading strategies we use so that we have good benchmarking methods between brokers and can make valid apples-to-apples comparisons. We also outsource connectivity to exchanges for the market data that our in-house built algorithms consume.
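Static-data gating of this kind can be illustrated with a minimal check: an order is routable only if every required broker and exchange field is populated. The field names here are hypothetical, not the firm's actual schema:

```python
REQUIRED_FIELDS = {"commission_bps", "dma_enabled", "short_sell_rule",
                   "market_open", "market_close"}

def can_route(broker_static, exchange_static):
    """Allow routing only when all required static fields are populated
    for both the broker and the destination exchange."""
    merged = {**broker_static, **exchange_static}
    populated = {k for k, v in merged.items() if v is not None}
    missing = sorted(REQUIRED_FIELDS - populated)
    return (not missing, missing)

ok, missing = can_route(
    {"commission_bps": 1.2, "dma_enabled": True},
    {"short_sell_rule": "flag_required", "market_open": "08:00", "market_close": None},
)
print(ok, missing)  # → False ['market_close']
```

Failing closed like this turns incomplete reference data into a risk control rather than a silent routing error.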
The common theme is that we outsource when we don't think we can add value to the process.
But we do use our own resources to ensure data relating to the execution process is as comparable as possible; otherwise, there's always an excuse from third parties as to why one set of data can't actually relate to another.
How does quantitative research inform the execution process?
When you're building an automated trading system you have to think about the different algorithmic strategies you're going to employ and how aggressive you want to be, which is a function of your volume forecast, alpha forecast and portfolio risk metric. We're constantly looking at our forward portfolio positions.
If we know, for example, we're going to be 5% of the volume in Vodafone and we expect 100,000 shares to be traded in that stock over a few minutes, we also know we'll be filled on 5,000 shares. We can then estimate the impact from a portfolio risk level in ten minutes' time and use that to change our aggression levels.
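The Vodafone arithmetic is simple enough to state as a one-line sketch, using the numbers from the example:

```python
def expected_fill(participation_rate, forecast_volume):
    """Shares we expect to capture over the forecast horizon at a fixed
    participation rate."""
    return participation_rate * forecast_volume

# 5% participation against 100,000 shares forecast to trade over the interval
print(expected_fill(0.05, 100_000))  # → 5000.0
```

The expected fill then feeds the portfolio-level risk estimate that drives the aggression decision.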
While aggression should be thought of as a cost - i.e. the more aggressive you are, the more impact cost you pay - it's also true that if you have systematic alpha, the less aggressive you are, the more alpha you leave in the market. So our aim is to take as systematic an approach as possible from a portfolio perspective. As well as using real-time optimisation to change participation rates and venues, we are also evaluating broker selection. It's almost like we're running a program desk: we're trading with 20 counterparties at any given point in time, which gives us better anonymity and control over market impact. And on any given day, we can internally cross as much as 14-15% of our book before it even goes to market.
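Internal crossing of the kind mentioned here, netting opposing interest in the same stock before the residual goes to market, can be sketched as follows. This is a simplified illustration, not the firm's actual crossing logic:

```python
def cross_internally(buy_qty, sell_qty):
    """Match offsetting buy and sell interest in-house; return the crossed
    quantity and the residual on each side that still goes to market."""
    crossed = min(buy_qty, sell_qty)
    return crossed, buy_qty - crossed, sell_qty - crossed

# 120,000 to buy vs 45,000 to sell: 45,000 crosses, 75,000 still goes to market
print(cross_internally(120_000, 45_000))  # → (45000, 75000, 0)
```

Every share crossed in-house pays no spread, no exchange fee and no market impact, which is why the 14-15% internal crossing rate matters.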
What value do brokers offer Marshall Wace's execution process?
Over time we have refined the algorithms that execute our flow, increasingly relying on algorithms we've designed ourselves and partnering with brokers for market access and independent risk controls rather than for vanilla algo offerings. We think that all counterparties imposing some sort of independent risk control before the order reaches the exchange is a really important development. We also value the unique inventory in their dark pools. The anonymity and cost savings brokers can provide in a dark pool outweigh any negatives.
Because we're running continuous optimisation, the feedback from the executions can change how aggressive we are, the sizes we place, the price we place at as well as the venues we trade at. From an overall execution perspective, greater control works better for us. If we decide we want to be very aggressive then it's probably ideal for us to be very aggressive very quickly, rather than to try and fiddle with different parameters on a broker-supplied algo.
By taking better control, we can better understand our own trading patterns and better optimise the actual impact cost that we pay.
Are brokers' execution consulting services of value to you?
Execution consulting can be interesting. People realise there is commoditisation and less innovation. Customisation to suit specific clients may add value, but it's not going to be a huge amount of new business.
Execution consulting can be valuable as an exchange of ideas. For example, a number of years ago we looked at whether it was better to trade all orders that were 25% of a stock's average daily volume (ADV) on a high-touch basis. We executed orders in a random selection of stocks that were 10% of ADV via an algorithm while trading another random selection via high-touch channels, such as indications of interest.
We found we were worse off using high-touch services for large ADV orders, which was counterintuitive, but it's an example of where we were able to set up the experiment, keep the data clean and then be grown-ups about it once we saw the outcome.
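The experiment described amounts to random channel assignment followed by a per-channel comparison of realised shortfall. A minimal sketch with made-up data; all names and numbers are illustrative:

```python
import random

def assign_channels(order_ids, seed=7):
    """Randomly assign each qualifying order to an execution channel."""
    rng = random.Random(seed)
    return {oid: rng.choice(["algo", "high_touch"]) for oid in order_ids}

def mean_shortfall_by_channel(assignments, shortfall_bps):
    """Average realised implementation shortfall (bps) per channel."""
    by_channel = {}
    for oid, channel in assignments.items():
        by_channel.setdefault(channel, []).append(shortfall_bps[oid])
    return {ch: sum(v) / len(v) for ch, v in by_channel.items()}

assignments = assign_channels(["ord1", "ord2", "ord3", "ord4"])
# shortfall figures would come from post-trade TCA; hard-coded here for illustration
print(mean_shortfall_by_channel(
    assignments, {"ord1": 12.0, "ord2": 35.0, "ord3": 18.0, "ord4": 41.0}))
```

Randomising the assignment, rather than letting traders choose the channel, is what keeps the comparison clean.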
You hear a lot of generalisations about what could improve performance, but we have enough flow to try things out. Brokers have come up with some new ideas that have worked really well, but some that haven't and we've learned from both.
What is the make-up and skill set of your team?
We tend to hire people that are not traditionally from a financial background. They tend to be younger, very hungry, very bright people from university, often with skill sets from software backgrounds. We've hired guys with PhDs and from the big technology firms.
Given the opportunity, smart people will figure out how to make things work. We can only back test so much, and we do back test a lot of things, but we also need to try and see if we can implement things we found in the back test.
We tend not to hire a lot of experienced people from the financial community, because we want people who haven't been influenced by what they've seen in the past and who can come in with a fresh mind. For the amount of volume we actually trade, our team is very small: apart from myself, we have one trader in the US, one trader here and two traders in Asia.
What is your view of the prospect of new dark pool regulation in Europe?
Any type of regulation of dark pools is a mistake.
There are a lot of lobbying efforts by different market participants, specifically exchanges, to get the regulator to change things that are in their best interests economically. We have a very diverse client base, sometimes managing money for charities, pension funds, endowment funds, and also high net worth individuals.
But our clients aren't only wealthy people; they are just people who need a pension for their future income. We use dark pools to save money for them. We find better execution quality by incorporating dark pools into our trading strategy to hide what we're doing, by not showing our full size and aggression to the marketplace, which is no different from ten years ago when you might hold an order back and trade over the counter on the phone.
Rather than allowing others to take advantage of a large order, people got smarter and decided not to show their full order any more, and that's no different from what's happening now. Going back to just the lit markets would be a mistake because it would cost participants in terms of higher impact by removing a choice for routing. This would force us to reveal our flow more fully and force a change to our process, which would have further unintended consequences.
What are you working on now to try and further add value to clients?
As costs have come down, we've been able to monetize alphas that are smaller than we've ever been able to monetise before. So our research is more and more focused on shorter-term risk management techniques for our traditional portfolios. I don't think our portfolios will necessarily change that much, but we can add things in that will cut our trading costs further. As we take on more assets, we will likely increase turnover, which represents further savings to clients. We'll constantly focus on new and innovative ways to do that, such as internally crossing more stock. Also, adding new strategies that have different correlations and trading patterns will decrease your trading costs.
People might think, 'Oh, you're automated, you've fixed everything', but this is a constant evolution. Things that happened six months ago probably don't happen in the same way any more. You can't just write a machine or an algorithm that just learns everything and automatically does it right. It's a constant evolution to not just get ahead of the game, but to try and stay ahead of the game. We're active in nearly every equity market in the world, so things that are happening in Europe are slightly different from the US, or Japan. Trying to stay ahead of the curve across all regions globally is quite a big ask.
When I was on the prop trading side, strategies were built to capture alpha for three to six months and I'm sure now they're even shorter. Why should it not be true that we should have the same focus on our side?