
Going Live With Automation


Going live with an automated strategy is one of the coolest, but also scariest, things you can do in trading. Here you are, letting a computer trade with your hard-earned money. Theoretically, the computer makes ALL the buying and selling decisions except for rollover trades. Many people simply can’t do it – the stress and pressure of trading decisions being made outside of their control is just too much to bear.

The flipside though, is that automated trading can be extremely liberating. Turning control over to a computer – as long as you trust its decisions – frees you up to do other tasks (like developing more and more strategies!). Adding automated strategies to a portfolio can be fun and exciting, as well as hopefully ultimately profitable.

Of course, it is assumed that you have a properly tested and vetted strategy ready to go; an example is shown in Figure 1. But once you are ready to go, then what? What pitfalls should you look out for? What kind of “tricks of the trade” are available?


Figure 1 – Crude Oil Strategy Walkforward Test Results, Ready to Trade

In this article, I’ll discuss some of the basics, and provide a few handy tools for you to use in your automation. This article is far from complete coverage of automation (the perils of unattended trading, and the use of virtual private servers are aspects of automated trading deserving of their own articles, for example). I’ll use the TradeStation platform for all examples and coding, but the concepts apply to all trading software platforms.

First Things First

When you tested your strategy, you likely used a continuous contract of some sort for your market data. In TradeStation, the back adjusted continuous contract (my favorite type) is denoted with a prefix “@”. So, for example the Crude Oil continuous contract is @CL.

Unfortunately, you cannot trade the live symbol, “@CL” since TradeStation needs to know which contract month you would want to trade. Thankfully, they provide continuous monthly contracts, such as “@CLV14” for October, 2014 and “@CLX14” for November, 2014. Each is identical to @CL when it is the front contract.

So, for actual trading use the symbols such as @CLX14, depending on the current contract month.

Are You Trading The Right Contract?

Since you tested your strategy with the @CL symbol, which always trades the lead contract, you want to make sure your live trading matches up with it. If the current month is November, you don’t want to trade the December contract, since then you are introducing the Nov-Dec spread dynamics into your trading, which is not what you tested. Always trade only what you tested!

So, how do you monitor that you are trading the correct contract? It obviously can get hairy when multiple strategies, multiple instruments, and multiple charts are involved. To assist me with this, I created a simple paint bar indicator which you can find at the end of the article.

All you have to do is have the contract you want to trade (@CLV14) as data1, and the perpetual continuous contract (@CL) as data2 in your chart. Remember the order is important, since you can only trade data1. The indicator monitors for any difference between these two data streams. As long as October “V” is the current month, the indicator will not plot anything. But, when the contract rolls to November “X”, the paint bar will paint a green bar in the data1 portion of the chart, as shown in Figure 2. That is your signal to roll the position.
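The actual indicator is the ELD at the end of the article; purely as a sketch of the logic it implements, here is the comparison in Python. The pandas usage and series names are my own assumptions, not the indicator's code.

import pandas as pd

def contract_mismatch(traded_close: pd.Series, continuous_close: pd.Series) -> pd.Series:
    """True on bars where the traded contract (data1, e.g. @CLV14) no longer
    matches the perpetual continuous contract (data2, @CL). While @CLV14 is
    the front month the two series are identical bar for bar, so any
    difference means @CL has rolled to the next month."""
    return traded_close.ne(continuous_close)

# Hypothetical usage; a True value plays the role of the green paint bar:
# roll_now = contract_mismatch(clv14["Close"], cl["Close"])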

Figure 2 – A Green Bar Indicates Rollover Time

The Match Game

Another critical part of successful automated trading is making sure that the strategy position and the real-life position match. TradeStation provides a decent way to monitor this, as shown in Figure 3. But while staring at charts all day, I found it easy to forget to check the Trade Manager window. So, I created another paint bar indicator, included at the end of the article. It monitors the strategy position and the real-world position, and draws a big red bar when there is a mismatch. An example of this is shown in Figure 4.
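Again, the real indicator is the ELD at the end of the article; the underlying check is just a comparison of two numbers on each bar. A minimal sketch, with the position sources left as assumptions:

def position_mismatch(strategy_position: int, account_position: int) -> bool:
    """True when the strategy's market position (+1 long one contract,
    -1 short, 0 flat) disagrees with the broker-reported position."""
    return strategy_position != account_position

# In EasyLanguage terms, the first argument corresponds to MarketPosition;
# how you obtain the live account position depends on your platform.
# A True result plays the role of the big red bar.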

Figure 3 – Trade Manager Will Alert You To Position Mismatches

Figure 4 – A Red Bar Indicates Position Mismatch

Unfortunately, this “position mismatch” indicator is a little too good, and gives false alarms quite often. I actually like that aspect, as it keeps me constantly checking positions, but it is left to the reader to improve upon the accuracy of the indicator. The good thing about the version given at the end of the article is that it does not seem to miss position mismatches – better to have a few false alarms than a major miss!

Rollover Time!

These indicators are especially useful when you have to rollover. They will tell you when to roll, and will ensure you do it correctly. I’ll show that below with an example.

On September 17th, the CL strategy was long 1 contract of October Crude Oil, @CLV14. At the close of September 17, TradeStation rolled the October contract to November. I was alerted to that fact by the green bar, shown in Figure 5. Remember, the green bar shows a mismatch in continuous contracts. So, that was my cue to roll.

Figure 5 – I Am Trading October – Time To Roll!

Normally, I would use the exchange-supplied spread to roll a position; in this case, I would sell the October-November spread. That would leave me flat in October and long in November. Exchange-supplied spreads are the safest and usually cheapest way to roll. Unfortunately, the TradeStation 9.x platform does not support them, so you have to “leg” in and out of the position in order to roll.

So, my first step was to exit my October long. After I did that, my chart appeared as shown in Figure 6. The big red bar told me there was a mismatch in position.

Figure 6 – After I Exit October, There Is a Position Mismatch

The next step is to change the data1 stream to @CLX14. This will eliminate the green bar (which was hidden behind the red bar), but the red bar persists, as shown in Figure 7.

Figure 7 – Now @CLX14 Is Top Month, And I Need to Buy

That is telling me I need to do something in November, namely buy. I did that in Figure 7, and I was filled in Figure 8. But you will notice the red bar persists. Now, I simply turn the indicator off and then back on, and end up with Figure 9. No red bar, no green bar. Everything is good!

Figure 8 – I Bought November, But Mismatch Still Indicated

Figure 9 – Turn Indicator Off And On, And Now Everything Matches!

Conclusion

Of course, there is a lot more to automated trading than just syncing up continuous contracts and monitoring live vs. strategy positions. But these are two critical aspects that many people never fully get a handle on. Properly managing positions and contract rollovers can be critical to your success. As you trade 5, 10, 50, or more strategies, you’ll be thankful to have these tools to help you monitor your rollovers and positions.

If you would like to learn more about building trading systems be sure to get a copy of my latest book, Building Winning Algorithmic Trading Systems.

Download

MisMatched Indicator (TradeStation ELD)

— Kevin J. Davey of KJ Trading Systems



Battle of the Oscillators…Round 1


In a recent article, Predictive Indicators, John Ehlers highlighted a unique indicator used to time market cycles. This indicator is a heavily modified Stochastic Oscillator and was demonstrated on the S&P. In this article, I want to put John’s Oscillator to the test by comparing it to another popular indicator used for timing stock index markets.

Backtesting Environment 

For this entire article, the backtest will be conducted from January 1, 2000, to December 31, 2016. I will be deducting $5 in commissions and two ticks of slippage per round trip. I will trade one contract per signal on a $100,000 account. Profits will not be reinvested. The backtest will be conducted on a basket of index futures. The markets I will use are:

  • E-mini S&P
  • E-mini Dow
  • E-mini NASDAQ
  • E-mini Russell 2000
  • E-mini S&P MidCap 400

John’s Oscillator Performance

These are interesting results and seem to verify that, for these stock index markets, this indicator is a decent predictor of market turning points. We have a profit of over $434K, which gives us a compounded annual rate of 10.36%. The profit factor is 1.33, and drawdown only exceeded 18% once. Let’s now compare it to another popular indicator used to locate potential turning points.

The 2-period RSI Oscillator

I created a simple strategy that opens long trades when the 2-period RSI crosses below 10 and sells short when the RSI crosses above 90. This is a similar concept to John’s Oscillator in that both strategies are always in the market, either long or short. A minimal sketch of these rules is shown below, followed by the results.
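Here is a rough sketch of those rules in Python with pandas. It is an illustration only: fills, slippage, and commissions are ignored, and the RSI uses Wilder’s smoothing, which is an assumption since the variant isn’t specified above.

import pandas as pd

def rsi(close: pd.Series, period: int = 2) -> pd.Series:
    """RSI with Wilder's smoothing (assumed variant)."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def rsi2_signals(close: pd.Series) -> pd.Series:
    """+1 when RSI(2) crosses below 10, -1 when it crosses above 90,
    holding the prior position in between (always in the market)."""
    r = rsi(close, 2)
    sig = pd.Series(float("nan"), index=close.index)
    sig[(r < 10) & (r.shift(1) >= 10)] = 1.0   # cross below 10: go long
    sig[(r > 90) & (r.shift(1) <= 90)] = -1.0  # cross above 90: go short
    return sig.ffill().fillna(0.0)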




In this case, we can see the 2-period RSI underperform John’s Oscillator. Not only does it underperform in terms of net profit, profit factor, Sharpe ratio, and average annual return, but the drawdown is larger. We have a profit of over $235K, which is about $199K less than John’s Oscillator. The compounded annual rate is 7.37%. The profit factor is 1.24, and drawdown exceeds 20% many times, peaking at around 48%.

John’s Oscillator does appear to pick turning points better than the 2-period RSI on the stock index markets. Using John’s Oscillator combined with these markets might just be a great place to start building a profitable trading system.

In a future article, I’m going to compare it to a few other indicators and then move to other markets such as currency futures, commodities, and bonds.

John Ehlers Workshop!

If you want to get personalized help on how to use advanced cycle and DSP technology in your trading, you’ll want to check out John’s Workshop.



A Complementary Approach To Trading Technical Indicators


In the October issue of Futures magazine, author Jean Folger discusses an important aspect of selecting two or more indicators when developing a trading system. I don’t recommend simply combining indicators to create a trading system, and I don’t think that’s what Folger is suggesting either. But when the time comes to introduce two or more technical indicators into a trading system, Folger’s advice is relevant. The author highlights a common mistake in selecting indicators that can really hinder the performance of your system. Done properly, selecting indicators following Folger’s advice can multiply the effectiveness of your system.

Types of Indicators

When it comes to technical indicators we are talking about mathematical formulas that are applied to price or volume. These technical indicators include MACD, Moving Averages, Stochastics, ADX, ATR, CCI, and many others. Folger first organized these indicators into different categories based upon what they are measuring.

  • Trend – ADX, Moving Averages, MACD, Parabolic SAR
  • Momentum – CCI, RSI, Stochastics
  • Volatility – ATR, Bollinger Bands, Standard Deviation
  • Volume – Chaikin Oscillator, OBV, Rate of Change

Selecting Two Indicators

When selecting two indicators, the mistake is to pick both from the same category. By selecting from the same category you are measuring the same market characteristic (trend, momentum, volatility, or volume), so you’re not getting new information about the market. For example, if you select ADX and a Moving Average, you are simply looking at the trending characteristics of the market twice. I’m a believer in keeping things simple, and if you introduce two indicators that tell you the same thing, it is not helpful and needlessly complicates your trading system. Each indicator should be dedicated to a specific purpose, not telling you the same thing two different ways. The point is to look at different market characteristics to expand your view. This can be done by selecting two indicators from different groups, say from Trend and Momentum. Now you are gathering complementary information about the market and are better prepared to make a decision.

Example

An example strategy will make this concept even clearer. I’ll take Folger’s lead and create a strategy similar to the one used in the original article. Let’s create a simple strategy for the S&P E-mini futures market. We’ll use a daily chart just to keep things simple. No slippage or commissions will be deducted. Entry signals are generated when the stochastic indicator moves out of its overbought/oversold regions. The system simply reverses its current position; thus, we are always in the market.

  • Go Long when the SlowD line crosses above 20
  • Go Short when the SlowD line crosses below 80
Below are the results of this strategy.
Now let’s try a complementary indicator. One technique I use a lot is a simple moving average that divides the market into two regimes: bull market and bear market. Often a 200-period moving average applied to a daily chart works just fine. However, Folger suggested an SMA crossover method to determine the market regime: a 50-period moving average and a 60-period moving average. If the 50-period SMA is above the 60-period SMA, the market is considered to be in a bullish regime; otherwise, it is considered to be in a bearish regime. Let’s apply this filter here.
  • Go Long when the SlowD line crosses above 20 and within Bull Market
  • Go Short when the SlowD line crosses below 80 and within Bear market
With these rules added we have introduced a trend-based filter. It should reduce unproductive trades by only taking trades in the direction of the dominant market regime. As a result, it should reduce the total number of trades and increase the profitability of our strategy. A sketch of the filtered rules appears below.
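A minimal sketch of the filtered rules in Python follows. The stochastic lengths (14/3/3) are my assumption, as they aren’t given above; everything else mirrors the stated rules.

import pandas as pd

def slow_d(high: pd.Series, low: pd.Series, close: pd.Series,
           k_len: int = 14, smooth: int = 3) -> pd.Series:
    """SlowD line of the stochastic oscillator (14/3/3 assumed)."""
    lo, hi = low.rolling(k_len).min(), high.rolling(k_len).max()
    fast_k = 100 * (close - lo) / (hi - lo)
    return fast_k.rolling(smooth).mean().rolling(smooth).mean()

def filtered_signals(high: pd.Series, low: pd.Series, close: pd.Series) -> pd.Series:
    d = slow_d(high, low, close)
    bull = close.rolling(50).mean() > close.rolling(60).mean()  # regime filter
    sig = pd.Series(float("nan"), index=close.index)
    sig[(d > 20) & (d.shift(1) <= 20) & bull] = 1.0    # long only in a bull regime
    sig[(d < 80) & (d.shift(1) >= 80) & ~bull] = -1.0  # short only in a bear regime
    return sig.ffill().fillna(0.0)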
Below are the results of this strategy.

As you can see, using two complementary indicators can really improve the results. Keep this in mind when developing a trading system. The example trading strategy is, of course, not a tradable system. It’s only an example of how applying a complementary indicator to filter trades can improve a trading system’s performance. I personally use this technique a lot; it really can do wonders for a trading system. Below you will find the code used in this article along with a TradeStation workspace.

Download

Complementary Example Strategy Code (text file)
Complementary Example Strategy Code (TradeStation ELD)
TradeStation Workspace (TradeStation TWS)


The Top Three Pitfalls of Stock And ETF System Development


During my nearly 25 years of experience in the trading business, I have talked to many traders about system development. Futures trading systems and back-adjusted contracts are fairly well understood. Recent changes, with the pits closing and markets going fully electronic, have created some issues, but it’s still relatively straightforward to design a trading system which can beat, for example, the Barclay Systematic Traders Index.

I have found the same not to be true for stock and ETF systems, even though both security types are more accessible to the general public. This seems to be for several reasons. First, the standard split-adjusted data series, used by itself in backtesting, has severe limitations when testing on a portfolio. I’ll detail more about this later in the article, but the summary is that you need unrealistic trading assumptions just to get a backtest that is only moderately distorted from what the real results would have been over a long backtest of 15-20 years. Certain trading types are off-limits with standard split-adjusted data in a portfolio. For example, you can’t just exit one of the stocks or ETFs being traded on a protective stop and continue trading the others.

Second, many stock and ETF system developers do not understand how to test a system in order to make valid comparisons to buy and hold. It is indeed hard to develop systems which outperform buy and hold on a return basis, depending on your testing window. For example, if we don’t include the 2008-2009 crash, it is very difficult; if that period is included, the comparison is easier. Outperforming buy and hold is a bit of a myopic goal, however. One thing system designers can do is build a system which makes almost as much as buy and hold without as much risk. In the trading world, reducing risk is always critical.

Please note that I am not saying mechanical trading systems cannot beat buy and hold. I am simply saying that you need a good system to beat buy and hold by a sizable margin over a long time period. In fact, I have designed many excellent strategies which greatly outperform buy and hold with a lot less risk. A later installment of this article series will focus on how to develop these strategies.

For now, let’s look at the beginning of how to develop stock and ETF strategies. Fundamental to designing these systems is understanding the concepts behind split-adjusted data and some other general issues relating to equity data.

Split-Adjusted Data

A major reason why traders believe that they cannot build mechanical trading systems that outperform “buy-and-hold” is that they do not understand the issues in terms of preparing the data and the effects that differences in the data can have on the results.

Most stock traders use what is called “split-adjusted” data, which is similar to “ratio-adjusted” data in the commodities world. The problem with this type of data is that the adjustment destroys the dollar returns, the historical daily ranges, and the original price levels. Its only advantage is that it correctly calculates the percent return.

If the price of a stock moves too high or too low, management of a company can issue splits in the stock to encourage trading. The vast majority of stock splits occur because the price has gotten too high. Consider the case when the price rallies to $50 per share and management decides to split 2 shares for 1 share. This does not affect the company valuation, but there are two shares of $25 stock on the books for every one share of $50 stock that previously existed. If the gap on the chart cannot be smoothed, false trades occur in testing because the unadjusted chart looks like the price dropped from $50 to $25. If you had a protective stop at $40 on a trade, it might trigger in the backtested results. However, this is actually a false order because the company valuation didn’t change with the drop from $50 to $25. “Split-adjusted” data is a way to solve this problem.

The split-adjusted data stream is produced by dividing the prices prior to a stock split by the factor of the stock split. For example, in our case above, the dividing factor would be 2 (for the 2 for 1 split). This sounds like a great solution, right? Not quite. This has the effect of cutting the previous daily ranges in half. Instead of one day being between $45 and $55 per share, that day now traded between $22.50 and $27.50. The problem really occurs when a stock has been split many times. When this occurs, the split-adjusted data can get ridiculous. An extreme example of this is Microsoft. The split-adjusted price is $0.11 per share if stock splits are handled all the way back to 1988. The real price per share at that time was about $34!
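To make the mechanics concrete, here is a small Python sketch of how a split-adjusted series is produced. The dates and ratios are hypothetical, not any stock’s actual split history.

import pandas as pd

def split_adjust(close: pd.Series, splits: list[tuple[str, float]]) -> pd.Series:
    """Divide every price before each split date by that split's ratio.
    close is assumed to be indexed by date. Note the side effect described
    in the text: pre-split daily ranges shrink by the same factor."""
    adjusted = close.copy()
    for date, ratio in splits:
        adjusted[adjusted.index < pd.Timestamp(date)] /= ratio
    return adjusted

# Hypothetical 2-for-1 and 3-for-2 splits; the cumulative factor is 2.0 * 1.5 = 3.
# adjusted = split_adjust(close, [("1994-05-23", 2.0), ("1996-12-09", 1.5)])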

Another problem has to do with the way the major markets have been operated in the past. Stocks were priced in fractions until quotes were changed to the current decimal format. Many of the available public stock databases are set up to calculate to two decimal places. When a stock like Microsoft has had so many splits, a split-adjusted $0.01 move scales out to $3.40! To compensate, a minimum of 4 decimal places needs to be saved and more would be better in many cases. Another problem with split-adjusted data is accounting for the calculation of commissions. The actual dollar values are meaningless and the results of backtesting can only be seen in terms of percent returns.

The core premise of split-adjusted data is that the percent return is correct. The next step in this logic is that always buying the same amount of each stock traded will make percentage return numbers also correct. However, what happens when you don’t want to buy the same dollar value of each stock? What if you want to use a percent risk model whereby position size is based upon how much risk is assumed in a given trade? For example, suppose you decide to risk 1% of your account on a given position. For a $100,000 account, the risk would be $1,000 on a given trade.

Disregarding split-adjusted data for the moment, presume that our system rules are to exit a long position at a 10-day low. If that low is $1 per share away, it is possible to buy 1000 shares of that stock. With split-adjusted data, this is not possible (remember, the Microsoft disparity in prices). The problem becomes amplified when looking at a portfolio of stocks where risk analysis needs to be performed on each of them. Since portfolio-level analysis in backtesting software is a relatively new development, many of these issues have yet to be addressed.

A percent-risk money management strategy might require that you place a large percentage of your account into one stock position. What if the example stock with the $1.00 per-share risk were a $100-per-share stock? In this case, 100% of the account would have to be placed in that one position, even though we might be trying to trade a basket of 100 stocks. We are still within the directives of our system, because we are risking 1% of our account’s value on that trade. In summary, it’s essential to know not only the split-adjusted prices, so trades occur at the correct levels, but also the unadjusted prices, so that dollar returns, percent returns, and the amount of money committed to a position can be accurately calculated.
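A short sketch of percent-risk sizing makes the problem visible. It assumes real, unadjusted prices; run it on a heavily split-adjusted series and both the share count and the dollars committed become meaningless.

def percent_risk_shares(equity: float, risk_pct: float,
                        entry: float, stop: float) -> int:
    """Size a position so (entry - stop) * shares ~= equity * risk_pct."""
    shares = int(equity * risk_pct / (entry - stop))
    return min(shares, int(equity / entry))  # can't buy more than the account affords

# $100,000 account, 1% risk, $1 per-share risk on a $100 stock:
# 1,000 shares = $100,000 of stock, i.e. 100% of the account in one position.
print(percent_risk_shares(100_000, 0.01, entry=100.0, stop=99.0))  # 1000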

Business Survivorship

Another issue in stock system testing is “business survivorship.” For example, how valid is a test on the current S&P 500 in which results are overstated because stocks such as WorldCom and Enron are not included in the backtest? Business survivorship is an issue that many traders choose to completely ignore. However, it is one they will likely have to face in real life, because old data for many de-listed stocks is not readily available. Whether you choose to consider this in your testing is naturally entirely up to you; however, you have to realize that it exists when you are evaluating a trading strategy.

Dividends

Dividends can create problems as well with split-adjusted data. This used to be an issue mostly for Dow 30 companies and utility stocks. Nowadays, changes in tax treatment, under which options must be expensed and dividends receive better tax treatment, have helped make dividends more popular. The price of a stock drops on the day the dividend is assigned to the existing owner, which causes a downtick on the chart. When analyzing mature companies like those in the S&P 500, it must be remembered that dividends can account for as much as one half of the return of a buy-and-hold strategy. When profits from dividends are compounded and reinvested back into stocks, dividends can have a very powerful effect on those compound returns.

Other dividend-related problems could cause major changes to system results even if dividend-adjusted data is used. Assume that our system rules tell us to buy at the highest high of the last 12 months. During that time, the stock has had a high of $40, but has paid a $2 dividend since that high. Most system developers would buy if the price exceeds $40, but since the dividend was subtracted since the high price was made, the real adjusted 12-month high is $38. If the stocks were bought at $38 instead of $40 across hundreds of shares over decades, the results of the system would be drastically different!

Another issue to consider when trading stocks is the use of fundamental data. Most data vendors do not offer fundamental data, and the ones who do typically maintain only one year of it in their databases. This creates a problem, since it is very difficult to backtest a strategy with such a limited amount of data. This is only one of the problems of trading stocks using fundamentals. Long-term histories of fundamental data are available, but they are usually quite expensive.

As you can see, stock traders can have a tendency not to believe in mechanical systems, due to the difficulty of obtaining realistic backtest results that compare with real-world performance. Looking at these problems objectively, it is easy to lose faith in any system: if the data is not clean, how can we realistically test a system and know how it will hold up with real money on the line? A trader must have a tremendous amount of faith in the system, and that only comes after extensive backtesting on clean data, with the right backtesting platform.

How Can We Solve These issues in Backtesting ETF and Stock Systems?

How we solve these issues will be discussed in next week’s article. In that article I’m going to show you how to correct the issues we discussed and, in doing so, create systems that backtest accurately so you can become more successful in trading stocks and ETFs.


Why I Prefer Trailing Stops


Whether you trade stocks, bonds, Forex, commodities, or other financial instruments, and whether you are a short-term, intermediate, or long-term trader, I am sure you will agree with the next sentence: “Being in sync with the market is vital for success.” It applies to both automated and discretionary investors. In this article, I will share how I try to stay in sync with the market in the exit portion of my automated Forex trading systems.

Trailing Stop

There are five different types of strategy exits – opposite trade signal, stop loss, time exit, profit target, and trailing stop. I’d like to elaborate on the last two and how they relate to our topic. In my opinion, using a profit target runs counter to the wisdom cited above. That’s why all my systems use the trailing stop option instead of a target order.

On the chart above I have marked four days of price action in the EUR/USD market during 2015. Over four trading sessions, the market gained 670+ pips. It is a huge short-term move without any visible correction. Moves like this don’t come every month, but when one occurs I’d like to be on the right side of the market for as long as possible.

Profit Target

If I use a profit target of 100 pips, I would miss more than 80% of that move. Even with 200 pips, I would miss 70%. The solution I found for myself is to use a trailing stop. I plot a trend-following indicator such as a Moving Average or Parabolic SAR, or any other tool which works well during trending moves. Then my stop loss order simply follows the indicator according to the settings and time frame I have chosen. For my long-term strategies I use a very wide trailing stop, and for my short-term systems I use a tight one. With this very simple tool, I know that if the market decides to move a lot in a certain direction, I am very likely to catch a big part of it. How big a part depends entirely on the chosen settings, which are a function of the strategy and the personal preferences of each trader. Aggressive traders use tight stops; conservative traders prefer wide ones.
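As a rough sketch of the idea for a long position, assuming a simple moving average as the trailing indicator and pandas data (the length, like every setting here, is a placeholder to be tuned per strategy):

import pandas as pd

def ma_trailing_stop(close: pd.Series, length: int = 20) -> pd.Series:
    """A stop for a single long trade that follows a moving average and only
    ratchets upward; a wider length gives a wider (more conservative) stop."""
    return close.rolling(length).mean().cummax()

# Exit the long when price touches the stop:
# stop = ma_trailing_stop(close, 20)
# exit_signal = close < stop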

It is also possible to combine a target and a trailing stop. Instead of exiting at your predetermined target, you can use the target as the activation point of a new trailing stop, which, as noted above, can be wide or tight. Once the target is hit, we begin moving our stop loss order as the price continues in the desired direction.

Why I Prefer Trailing Stops

If the market advances strongly in our direction, we can use the second, target-activated option presented above; once the target is hit, we lock in more profit than initially planned. The thinking here is that the price has already moved a lot, so the chances of the bull trend continuing are now much lower. It is very convenient for those who don’t like carrying big open profits. It is again a very good combination of both approaches.

Choosing a fixed target for your trading is hard and counterproductive because the market is always changing: a good target of 100 pips today will be worthless in a wild, volatile market, where you could miss very big moves. The opposite applies as well – a 200-pip target during quiet market conditions is a sure path to never reaching the target and missing profits again.

I like to think that if the market is willing to give me only 50 pips, I will gladly take them. If the market is exploding and could give 500 pips or more in just a few trading sessions, then I want to be ready to grab them. I don’t like to force the market to hit my targets. I need to be very flexible, because the market is constantly changing, and an approach that gave excellent results over the past few months may not produce good profits anymore.

Quantitative Proof

As proof of the concept described above, I’d like to present an example of a simple Forex trading strategy designed for the EUR/USD pair. It is a short-term trend-following system based on a daily-bar volatility breakout pattern. I ran four backtests; the only difference in the strategy’s inputs is the exit. I used one trailing stop option and three profit target variations. The backtests were conducted on 15 years of data, from 2001 to 2015. Here are the results:

Why I Prefer Trailing Stops

As you can observe, the trailing stop setting produces the best gain and the lowest max drawdown. The bigger the target, the worse the results.

Below are the equity curves of all backtests:

As expected, the first curve, which represents the trailing stop option, is the best and the smoothest.

I hope I have contributed to your knowledge of staying in sync with current market conditions so you are better prepared to grab the big moves that occur from time to time. I wish you profitable trading!

— By Professional Trading Systems


Simulation: Beyond Backtesting


One problem with traditional backtesting is that it relies on the presupposition that there are repeating predictive patterns in the market. In fact, most trading methodologies rely on this assumption. And yet we know the disclaimer that past performance is not indicative of future results.

Backtesting largely assumes that the future will be similar to the past. Yet we can imagine the possibility of non-repeating but predictable profit opportunities. Even without getting into those possibilities, we can imagine that if we can model the dynamics of the market accurately, we can predict new outcomes that cannot be extrapolated from the past.

The way this is accomplished is simulation. Simulation offers the powerful promise of letting us use historical market data under varying assumptions about how similar the future will be. Massive simulation is also poised to impact every aspect of our lives.

Imagine for a moment that you are a world-class MMA fighter or boxer competing against a similarly well-ranked fighter. What should your strategy be? In the past, you might have studied your opponent and intuited a strategy. Perhaps, if you were more sophisticated, you might have even used crude statistics such as counting to estimate the risk and probability of a given move working. But today, it is surely possible to feed your moves into a computer with precise timing and force calculations. Next, it is possible to infer the same for your opponent by using previous fight videos. In addition, by using the fighter’s height, weight, and other statistics, it is possible to model how well they could perform even moves that were not recorded. Once all the data is in the computer, you can run thousands or hundreds of thousands of fight simulations. The best performing simulations will yield the best strategies. The strategies that are discovered may be non-intuitive and completely innovative. These can be used, with human cognition and consideration, as the basis for your game plan.

Now, imagine how this would work for the trader. It is not just a matter of running thousands of simulations on past data: you must also infer how future traders will react to changing market conditions. This is the difficult part, because you need to know how the combination of variables will impact their behavior.

Even if that level of simulation is beyond the average developer’s capability, or can only provide rough approximations due to the difficulty of modeling, it is still possible to start thinking along the lines of simulation to explore creative opportunity and risk management.

Some ideas for how you might do this:

  • Use random and partially randomized entries and exits to try to find more universal or robust settings for your strategies.
  • Create synthetic market data where you change the amount of volatility, trend, and mean reversion to see how it might impact your strategies.
  • Create models of how traders might act in certain scenarios and look for situations that might offer predictive advantage.
  • Use Monte Carlo analysis with randomized entries to come up with pessimistic capital requirements (see the sketch after this list).
  • Try to find optimal strategies for given market conditions.
  • Build self-learning strategies with limited capacity for memory and try to find the optimal rules for trading.
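
As one concrete example of the Monte Carlo idea above, here is a minimal sketch in Python: random entries with a fixed holding period, resampled many times, give a distribution of drawdowns from which a pessimistic capital requirement can be read. All parameter values are illustrative assumptions.

import numpy as np

def random_entry_drawdowns(returns: np.ndarray, hold: int = 5,
                           n_trades: int = 200, n_sims: int = 10_000,
                           seed: int = 0) -> np.ndarray:
    """Distribution of max drawdowns for a strategy that enters at random
    times and holds `hold` bars, built from historical per-bar returns."""
    rng = np.random.default_rng(seed)
    dd = np.empty(n_sims)
    for i in range(n_sims):
        starts = rng.integers(0, len(returns) - hold, size=n_trades)
        trade_pnl = np.array([returns[s:s + hold].sum() for s in starts])
        equity = np.cumsum(trade_pnl)
        dd[i] = np.max(np.maximum.accumulate(equity) - equity)
    return dd

# A pessimistic capital requirement, e.g. the 99th percentile drawdown:
# capital = np.percentile(random_entry_drawdowns(daily_returns), 99)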

– By Curtis White from the blog Beyondbacktesting


Randomly Pushing Buttons


Before my current circumstances, and before I was a photographer (see above), I used to make music for a living. Specifically, weird-ass techno/electronic music that many people found difficult or annoying. One of the ways I would find sonic inspiration was to use audio software to generate random sounds. I would record this stream of noisy squawkiness, sift through a lot of garbage, and occasionally find a useful gem. I would take these little bits of useful audio and turn them into gritty, weird dance music.

It’s possible to find dedicated software that dives deeply into finding non-obvious, non-linear connections between “features” of price data. For example, we can ask: if today’s high in the price of oil is above its 3-day moving average, and the S&P 500’s closing price is below yesterday’s open, will gold go up the next day? The danger, of course, is that you might – no, you WILL – hit upon a great-looking system that just so happens to look good for your test, but fails in real life. “Curve fitting” is a fact of life when developing trading systems, and you must take steps to reduce it.

Recently, I thought it would be fun to put together a rudimentary script that tests various combinations of the open, high, low, and close of recent days in the past. It can’t be compared to software dedicated to that purpose, of course. We can, for example, explore such questions as: if the open of two days ago is above the high of seven days ago, AND the close of three days ago is lower than today’s low, should I buy at the next open? Who knows! This little optimizer will tell me if there’s anything to it.

Now if you’re like me, you’re thinking what on earth can the comparison between prices 11 and 12 days ago have to do with current prices? Well… maybe something, maybe nothing. Let’s find out.

One thing I did realize quickly is that using a 2-day moving average improved my testing. It seems to filter out some noise and improve the signal (or perhaps just allow me to curve-fit more tightly?). You could add this as another parameter to test, but you increase your processing time by adding another dimension.

I call my rudimentary push-button optimizer the “Comparinator”. If you’ve ever watched the cartoon Phineas and Ferb, the semi-evil Dr. Doofenshmirtz invents all sorts of evil devices with names ending in “-inator”. Now you can evilly compare OHLC data in the comfort of your own secret lair.

The Deflate-inator

“Holy crap-inator, Matt, can you get to the point already? Show us a graph or something!”

OK, here’s a graph. Do you like it?

The gray line is SPY buy-and-hold. The orange is a system that the Comparinator developed. The out-of-sample performance is quite good. Also, note that this system is long-only and seems to love the volatility of bear markets. It doesn’t like political shenanigans such as what happened in 2011, but it still does exceedingly well. Wait until you hear the trading details, which seem a little, well… random.

By the way, I developed this using a super-secret trading system development platform (“SSTSDP”) for which I’m an alpha tester. The good news is that I’ve slapped together some AmiBroker code that does the same thing.

Here’s the pseudo code for the entry (using the SSTSDP syntax). This was created to trade SPY. When the below code evaluates to ‘true’, go long at the open of the next day.

ma(C[1],2)>ma(C[0],2) and ma(C[10],2)>ma(O[11],2)

In plain English: when the 2-day moving average of the close of one day ago is greater than the 2-day moving average of the close of today, AND the 2-day moving average of the close 10 days ago is greater than the 2-day moving average of the open 11 days ago, go long at the open of the next day.

Clear as mud, right? Let’s see if we can simplify it.

ma(C[1],2) > ma(C[0],2)
( C[1] + C[2] ) / 2 > ( C[0] + C[1] ) / 2
C[1] + C[2] > C[0] + C[1]
C[2] > C[0]

The first half of that expression can be simplified to read: today’s close was below the close of two days ago. That’s much easier to figure out. Just don’t forget to include the other part. I attempted to rationalize why that part of the equation improves results, but I’ve decided just to point out that it seems to work.

The exit is simple: when the first part of the entry is no longer true, exit. I.e. when the close of today is NOT below the close of two days ago, exit at the open of the next day. Often this is a 24-hour hold, but the average is two days.

Here’s some AmiBroker code you can play with and come up with your own overly curve-fit systems. Will they be valid in the future? No idea. I do recommend that you find systems that show consistency throughout their equity curves, with lots of trades so your results are statistically significant.

A suggestion for using the code: run this to optimize for one comparison first (which will require 4096 permutations), using whatever characteristic you prefer. I chose the best Sharpe ratio that had at least 200 trades over the period 2000-2010. Then make those values permanent. Next, uncomment the second section in the code to come up with your fine-tuning. Play around with the entries and exits: same day close, next day close, limit orders, etc.

Remember also to only test on a portion of your data, and leave some as out-of-sample data to verify it works. Even then, don’t start trading actively right away. Let your systems stew for a while, or trade with tiny sums.

This code is child’s play compared to a fully-compiled, dedicated application. Use it as a springboard for your own ideas.

#include_once "Formulas\Norgate Data\Norgate Data Functions.afl"
oc = NorgateOriginalCloseTimeSeries();

//The Comparinator by Matt Haines
//www.throwinggoodmoney.com

SetTradeDelays(0,0,0,0);
SetOption("initialequity",100000);
SetOption ("MaxOpenPositions" , 1);
SetOption ("allowsamebarexit",false);
SetBacktestMode(backtestregular);
SetOption("CommissionMode",2);
SetOption("CommissionAmount",0);
SetOption("MCUseEquityChanges",1);

SetOption( "ExtraColumnsLocation", 1 );

Short=Cover=0;

d1 = Optimize("d1",0,0,15,1);
d2 = Optimize("d2",4,0,15,1);
p1 = Optimize("p1",2,0,3,1);
p2 = Optimize("p2",1,0,3,1);

// at any one time, three of these segments below will return zero, and one of them will return
// the price for the OHLC variable.

val1= Ref(O,-d1)*(p1==0) + Ref(h,-d1)*(p1==1) + Ref(l,-d1)*(p1==2) + Ref(c,-d1)*(p1==3);
val2= Ref(o,-d2)*(p2==0) + Ref(h,-d2)*(p2==1) + Ref(l,-d2)*(p2==2) + Ref(c,-d2)*(p2==3);

//these variables are reset when second chunk is uncommented.
//otherwise they result in a pass-thru for the Buy evaluation
val3=1;
val4=0;
//uncomment after optimizing the first chunk
/*
d3 = Optimize("d3",2,0,15,1);
d4 = Optimize("d4",1,0,15,1);
p3 = Optimize("p3",1,0,3,1);
p4 = Optimize("p4",0,0,3,1);
val3= Ref(o,-d3)*(p3==0) + Ref(h,-d3)*(p3==1) + Ref(l,-d3)*(p3==2) + Ref(c,-d3)*(p3==3);
val4= Ref(o,-d4)*(p4==0) + Ref(h,-d4)*(p4==1) + Ref(l,-d4)*(p4==2) + Ref(c,-d4)*(p4==3);
*/

BuyPrice=Open;
Buy= Ref(MA(val1,2)>MA(val2,2) AND MA(val3,2)>MA(val4,2),-1);

Sell=Ref(MA(val1,2)<=MA(val2,2),-1);
SellPrice=open;

— By Matt Haines from Throwing Good Money After Bad


Secret Weapon of Stock & ETF System Development


This is the second article in a two-part series where I discuss the top three pitfalls when backtesting Stock & ETF trading systems. In the first article, The Top Three Pitfalls of Stock and ETF System Development, I highlighted the top three issues system developers face. In this article, I'm going to show you how I fixed these problems to get precise and accurate historical backtest results.

As many of you know, I designed the ETF trading strategies for Tuttle Tactical Management, a firm that manages over $200 million in ETFs. I started working with Tuttle in 2007, and one of the reasons I was hired as a consultant was my expertise in developing realistic equity backtests. Being well aware of the glaring issues of backtesting stocks and ETFs on traditional trading platforms, the only way I could fix them was to create a system development platform of my own.

Behold! The Secret Weapon: TradersStudio

The platform I developed was TradersStudio 1.0, back in 2003-2004. I improved the algorithm in 2005 and it has remained the same since. My algorithm requires three different types of data streams for each equity being traded. In order to generate realistic results, TradersStudio needs split-adjusted data. Many data vendors roll split and dividend adjustments into one series. This is not ideal: over longer historical series it just compounds the precision errors of the split-adjusted data.

Next, TradersStudio uses dividend-only adjusted data. This simply means dividends are subtracted from the series without any further adjustments. CSI Data is the only vendor which allows you to produce this series, because it, in effect, gives away the stock and ETF dividend database. Finally, TradersStudio needs the totally unadjusted data series. My algorithm then allows me to accurately backtest portfolios of stocks and ETFs with any money management strategy you like and get accurate results. One can see the splits and the dividends in the trade-by-trade reports, along with real prices.

TradersStudio is the only product on the market that can produce a table of stock splits and dividends when CSI stock data or data from another vendor allows the data to be outputted in the correct format. Let’s look at the poster child for problems with splits and dividends, Microsoft. This analysis is shown in the splits and dividends report.

This report shows the date of splits and dividends. The number of shares (288) was calculated by multiplying the stock split ratios (2.0 * 2.0 * 1.5 * 2.0 * …). From this report we can also see that Microsoft did not pay dividends until February 19, 2003. This is a valuable report due to the fact that it allows us to audit our testing results to see what has happened. This report is also created when this type of analysis is performed on a portfolio of stocks. For example, if it were run on the S&P500 or NASDAQ 100, you would have this same report, ordered by date, for all stocks included in the portfolio.

Consider now the buy-and-hold calculation. Since the split factor for Microsoft is 288, our formula is 288 * (final price - original split-adjusted price).

Since there are issues with “maxbarsback”, we use the split-adjusted open of the first bar, 4/25/1986, which was $0.11111 (this is the “maxbarsback + 2” bar). Our final closing price is $46.29, the “lastbar - 1” close on 7/29/2015.

288 * ($46.29 [final price] - $0.11111 [split-adjusted open of first bar]) = $13,299.52

In our buy-and-hold calculation, we also need to adjust for dividends. Since Microsoft had already split 288-for-1 by the time it paid any dividends, and has not split during the dividend period, our calculation is easy. Simply add up the dividends from the Splits and Dividends report, $10.28 per share, and multiply by 288. This gives a total of $2,960.64. Add the two numbers together and we get $16,260.16, which matches exactly the buy-and-hold return on the summary report.
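The arithmetic is easy to verify; a few lines of Python reproduce the report’s numbers exactly.

split_factor = 288           # shares one original share became
first_open = 0.11111         # split-adjusted open on 4/25/1986
last_close = 46.29           # close on 7/29/2015
dividends_per_share = 10.28  # all paid after the final split

price_gain = split_factor * (last_close - first_open)  # 13,299.52
dividend_gain = split_factor * dividends_per_share     # 2,960.64
print(round(price_gain + dividend_gain, 2))            # 16260.16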

A TradersStudio stock session has two different Trade-by-Trade Reports. The standard report shows the entry and exit prices with split-adjusted values, so they match the prices you see on the chart; however, the P/L (profit/loss) shown for the trade is not the P/L that would be calculated using only split-adjusted data, but the real P/L on the trade. The Trade-by-Trade Real Price Report gives you real-world results. First, let’s look at the standard report: the split-adjusted price of Microsoft is less than $0.12 on June 5, 1986, while the real price was $34!

The other report is the Trade-by-Trade Real Price Report which shows the real entry and exit prices. For example, on June 5, 1986, Microsoft was purchased at $34.25 and exited at $33.75 on June 6, 1986.

This demonstrates how buy-and-hold is calculated and how TradersStudio gives you actual entry and exit prices as well as a splits and dividend history. There is a problem inherent in this analysis, and it’s the reason it looks like the system made only a tiny fraction of buy-and-hold ($376.99 when buy-and-hold was over $16,000).

The answer is that, to be consistent with our first test, we should be buying the current number of shares that one original share has become, not just one share. The size of what we are buying should equal the current split factor.

To do this, we need to modify our simple system by adding a call to split factor to use for sizing and a flag so that buy-and-hold still represents starting with one share when we started. The code appears as follows.

' Simple ORB system to trade stocks, with the backtest adjusted for
' splits to create a correct comparison to buy-and-hold
Sub QQQBreakOutStockTest(MULT)
    Dim AveTr
    Dim Nxtopen
    Dim NumLots
    ' Size each order by the current split factor so one original
    ' share is always represented
    NumLots = splitfactor
    ' Base the buy-and-hold benchmark on one original share
    BuyAndHoldSingle = True
    If BarNumber < LastBar Then
        Nxtopen = NextOpen(0)
    Else
        Nxtopen = 0
    End If
    If Close > Open Then
        Sell("SellBrk", NumLots, Nxtopen - MULT * TrueRange, Stop, Day)
    End If
    If Close < Open Then
        Buy("BuyBrk", NumLots, Nxtopen + MULT * TrueRange, Stop, Day)
    End If
End Sub

A new function called “splitfactor” is called. This number represents the number of shares that our original single share of stock has become after multiple splits. There is also a flag called “BuyAndHoldSingle”, which is set to true. This bases the buy-and-hold analysis on one original share rather than the original number of shares purchased. For example, if our analysis started with 100 shares and “BuyAndHoldSingle” were set to false, buy-and-hold would be calculated based on 100 original shares. This means the number of shares of Microsoft to buy and sell changes with the split schedule; before September 21, 1987, only one share would be purchased.

By making this significant change, the analysis is now correctly comparing apples to apples and we see how the QQQBreakout system has done.

Indeed, our original system, which looked like a joke at $376.99 versus over $16k for buy-and-hold, now looks more respectable at $8,651.23, a little more than half of buy-and-hold. I am not arguing that this is a good system, but you can see how ensuring a true apples-to-apples comparison can change a system that looks dismal at first into something that actually makes sense.

When working with a stock database, calculations should be carried to 4 decimal places, including for the period before decimalization, to increase accuracy. Some stock database vendors have maintained their database calculations at only two decimal places even after decimalization occurred; this oversight destroys the accuracy of the data. Using CSI Data avoids this issue entirely, as they have done a good job overall of maintaining their data accurately. The purpose of this example was not to give out a great system but to illustrate and explain the issues related to trading stocks with stock splits and dividends.

Learn more about this amazing platform by visiting TradersStudio.com



Getting Started with Neural Networks for Algorithmic Trading


If you’re interested in using artificial neural networks (ANNs) for algorithmic trading but don’t know where to start, then this article is for you. Normally, if you want to learn about neural networks, you need to be reasonably well versed in matrix and vector operations – the world of linear algebra. This article is different. I’ve attempted to provide a starting point that doesn’t involve any linear algebra and have deliberately left out all references to vectors and matrices. If you’re not strong on linear algebra but are curious about neural networks, then I think you’ll enjoy this introduction. In addition, if you decide to take your study of neural networks further, the linear algebra will probably make a lot more sense when you do inevitably start using it, as you’ll have something of a head start.

The best place to start learning about neural networks is the perceptron. The perceptron is the simplest possible artificial neural network, consisting of just a single neuron and capable of learning a certain class of binary classification problems. Perceptrons are the perfect introduction to ANNs, and if you can understand how they work, the leap to more complex networks and their attendant issues will not be nearly as far. So we will explore their history, what they do, how they learn, and where they fail. We’ll build our own perceptron from scratch and train it to perform different classification tasks, which will provide insight into where they can perform well and where they are hopelessly outgunned. Lastly, we’ll explore one way we might apply a perceptron in a trading system.

A Brief History of the Perceptron

The perceptron has a long history, dating back to at least the mid 1950s. Following its discovery, the New York Times ran an article that claimed that the perceptron was the basis of an artificial intelligence (AI) that would be able to walk, talk, see and even demonstrate consciousness. Soon after, this was proven to be hyperbole on a staggering scale, when the perceptron was shown to be wholly incapable of classifying certain types of problems. The disillusionment that followed essentially led to the first AI winter, and since then we have seen a repeating pattern of hyperbole followed by disappointment in relation to artificial intelligence.

Still, the perceptron remains a useful tool for some classification problems and is the perfect place to start if you’re interested in learning more about neural networks. Before we demonstrate it in a trading application, let’s find out a little more about it.

Artificial Neural Networks: Modelling Nature

Algorithms modelled on biology are a fascinating area of computer science. Undoubtedly you’ve heard of the genetic algorithm, which is a powerful optimization tool modelled on evolutionary processes. Nature has been used as a model for other optimization algorithms, as well as the basis for various design innovations. In this same vein, ANNs attempt to learn relationships and patterns using a somewhat loose model of neurons in the brain. The perceptron is a model of a single neuron.

In an ANN, neurons receive a number of inputs, weight each of those inputs, sum the weights, and then transform that sum using a special function called an activation function, of which there are many possible types. The output of that activation function is then either used as the prediction (in a single neuron model) or is combined with the outputs of other neurons for further use in more complex models, which we’ll get to in another article.

Here’s a sketch of that process in an ANN consisting of a single neuron:

Neural Networks

Here, x1, x2, etc. are the inputs. b is called the bias term; think of it like the intercept term in a linear model y = mx + b. w1, w2, etc. are the weights applied to each input. The neuron first sums the weighted inputs and the bias term, S = w1*x1 + w2*x2 + ... + b, represented by S in the sketch above. Then S is passed to the activation function, which simply transforms S in some way. The output of the activation function, z, is then the output of the neuron.

The idea behind ANNs is that by selecting good values for the weight parameters (and the bias), the ANN can model the relationships between the inputs and some target. In the sketch above, z is the ANN’s prediction of the target given the input variables.

In the sketch, we have a single neuron with four weights and a bias parameter to learn. It isn’t uncommon for modern neural networks to consist of hundreds of neurons across multiple layers, where the output of each neuron in one layer is input to all the neurons in the next layer. Such a fully connected network architecture can easily result in many thousands of weight parameters. This enables ANNs to approximate any arbitrary function, linear or nonlinear.

The perceptron consists of just a single neuron, like in our sketch above. This greatly simplifies the problem of learning the best weights, but it also has implications for the class of problems that a perceptron can solve.

What’s an Activation Function?

The purpose of the activation function is to take the input signal (that’s the weighted sum of the inputs and the bias) and turn it into an output signal. There are many different activation functions that convert an input signal in a slightly different way, depending on the purpose of the neuron.

Recall that the perceptron is a binary classifier. That is, it predicts either one or zero, on or off, up or down, etc. It follows then that our activation function needs to convert the input signal (which can be any real-valued number) into either a one or a zero corresponding to the predicted class.

In biological terms, think of this activation function as firing (activating) the neuron (telling it to pass the signal on to the next neuron) when it returns 1, and doing nothing when it returns 0.

What sort of function accomplishes this? It’s called a step function, and its mathematical expression looks like this:

z = 1 if S > 0; z = 0 otherwise

When plotted, it is a flat line at zero that jumps up to one where S crosses zero.

This function then transforms any weighted sum of the inputs (S) and converts it into a binary output (either 1 or 0). The trick to making this useful is finding (learning) a set of weights, w, that lead to good predictions using this activation function.
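To make this concrete, here is a minimal R sketch of a single neuron’s forward pass using a step activation. The input values, weights and bias below are made-up numbers for illustration only, not anything from the article’s data:

# a single neuron: weighted sum of inputs plus bias, passed through a step activation
step <- function(S) ifelse(S > 0, 1, 0) # fires (1) or stays silent (0)

x <- c(1.5, -0.3, 0.8, 2.0) # four hypothetical inputs
w <- c(0.4, 0.1, -0.2, 0.3) # one weight per input
b <- -0.5                   # bias term

S <- sum(w * x) + b # the weighted sum, S in the sketch above
z <- step(S)        # the neuron's output
print(c(S = S, z = z))

Running this prints S = 0.51 and z = 1; that is, the neuron fires for these particular values.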

How Does a Perceptron Learn?

We already know that the inputs to a neuron get multiplied by some weight value particular to each individual input. The sum of these weighted inputs is then transformed into an output via an activation function. In order to find the best values for our weights, we start by assigning them random values and then start feeding observations from our training data to the perceptron, one by one. Each output of the perceptron is compared with the actual target value for that observation, and, if the prediction was incorrect, the weights adjusted so that the prediction would have been closer to the actual target. This is repeated until the weights converge.

In perceptron learning, the weight update function is simple: when a target is misclassified, we simply take the sign of the error and then add or subtract the inputs that led to the misclassification to or from the existing weights.

If the target was -1 and we predicted 1, the error is -1 - 1 = -2. We would then subtract each input value from the current weights (that is, wi = wi - xi). If the target was 1 and we predicted -1, the error is 1 - (-1) = 2, so we add the inputs to the current weights (that is, wi = wi + xi).

This has the effect of moving the classifier’s decision boundary (which we will see below) in the direction that would have helped it classify the last observation correctly. In this way, weights are gradually updated until they converge. Sometimes (in fact, often) we’ll need to iterate through each of our training observations more than once in order to get the weights to converge. Each sweep through the training data is called an epoch.

Implementing a Perceptron from Scratch

Next, we’ll code our own perceptron learning algorithm from scratch using R. We’ll train it to classify a subset of the iris data set.

In the full iris data set, there are three species. However, perceptrons are for binary classification (that is, for distinguishing between two possible outcomes). Therefore, for the purpose of this exercise, we remove all observations of one of the species (here, virginica), and train a perceptron to distinguish between the remaining two. We also need to convert the species classification into a binary variable: here we use 1 for the first species, and -1 for the other. Further, there are four variables in addition to the species classification: petal length, petal width, sepal length and sepal width.  For the purposes of illustration, we’ll train our perceptron using only petal length and width and drop the other two measurements. These data transformations result in the following plot of the remaining two species in the two-dimensional feature space of petal length and petal width:

The plot suggests that petal length and petal width are strong predictors of species – at least in our training data set. Can a perceptron learn to tell them apart?

Training our perceptron is simply a matter of initializing the weights (here we initialize them to zero) and then implementing the perceptron learning rule, which updates the weights based on the error each observation produces under the current weights. We do that in a for() loop which iterates over each observation, making a prediction from the petal length and petal width of each observation, calculating the error of that prediction and then updating the weights accordingly.

In this example we perform five sweeps through the entire data set, that is, we train the perceptron for five epochs. At the end of each epoch, we calculate the total number of misclassified training observations, which we hope will decrease as training progresses. Here’s the code:

# perceptron initial weights
w1 = 0
w2 = 0
b = 0

# perceptron learning
epochs <- 5
errors <- vector()
for(j in c(1:epochs))
{
  for(i in c(1:nrow(iris)))
  {
    yhat <- ifelse(w1*iris$Petal.Length[i] + w2*iris$Petal.Width[i] + b > 0, 1, -1) # prediction of current observation
    error = iris$Species[i] - yhat # will be either 0, 2 or -2
    w1 <- w1 + error*iris$Petal.Length[i]
    w2 <- w2 + error*iris$Petal.Width[i]
    b <- b + error
  }
  # end of epoch
  preds <- ifelse(w1*iris$Petal.Length + w2*iris$Petal.Width + b > 0, 1, -1) # predict on whole training set
  errors[j] <- sum(abs(iris$Species - preds))/2 # calculate errors
}
plot(c(1:epochs), errors, type='l', xlab='epoch') # plot error rate at each epoch

Here’s the plot of the error rate:

We can see that it took two epochs to train the perceptron to correctly classify the entire dataset. After the first epoch, the weights hadn’t been sufficiently updated. In fact, after epoch 1, the perceptron predicted the same class for every observation! Therefore it misclassified 50 out of the 100 observations (there are 50 observations of each species in the data set). However after two epochs, the perceptron was able to correctly classify the entire data set by learning appropriate weights.

Another, perhaps more intuitive, way to view the weights that the perceptron learns is in terms of its decision boundary. In geometric terms, for the two-dimensional feature space in this example, the decision boundary is a straight line separating the perceptron’s predictions. On one side of the line, the perceptron always predicts -1, and on the other, it always predicts 1.

We can derive the decision boundary from the perceptron’s activation function, whose input is z = w1x1 + w2x2 + b.

The decision boundary is simply the line that defines the location of the step in the activation function. That step occurs at z=0, so our decision boundary is given by

w1x1+w2x2+b =0

Equivalently

x2 = -(w1/w2)x1 - (b/w2)

which defines a straight line in x1,x2 feature space.

In our iris example, the perceptron learned the following decision boundary:

IrisDecsinBndry

Here’s the complete code for training this perceptron and producing the plots shown above:

### PERCEPTRON FROM SCRATCH ####

# load data
data(iris)

# transform data to binary classification problem using two inputs
iris <- iris[iris$Species != 'virginica', c('Petal.Length', 'Petal.Width', 'Species')]
iris$Species <- ifelse(iris$Species=='versicolor', 1, -1)

# plot data
plot(iris[, c(1,2)],pch=ifelse(iris$Species>0,"-","+"),
col=ifelse(iris$Species>0,"blue","red"), cex=2,
main = 'Iris Classifications')
legend("bottomright", c("species1", "species2"), col=c("blue", "red"), pch=c("-","+"), cex=1.1)

# perceptron initial weights
w1 = 0
w2 = 0
b = 0

# perceptron learning
epochs <- 5
errors <- vector()
for(j in c(1:epochs))
{
  for(i in c(1:nrow(iris)))
  {
    yhat <- ifelse(w1*iris$Petal.Length[i] + w2*iris$Petal.Width[i] + b > 0, 1, -1)
    error = iris$Species[i] - yhat # will be either 0, 2 or -2
    w1 <- w1 + error*iris$Petal.Length[i]
    w2 <- w2 + error*iris$Petal.Width[i]
    b <- b + error
  }
  # end of epoch
  preds <- ifelse(w1*iris$Petal.Length + w2*iris$Petal.Width + b > 0, 1, -1)
  errors[j] <- sum(abs(iris$Species - preds))/2
}

slope <- -w1/w2
intercept <- -b/w2
abline(intercept, slope)

plot(c(1:epochs), errors, type='l', xlab='epoch')

Congratulations! You just built and trained your first neural network.

Let’s now ask our perceptron to learn a slightly more difficult problem. Using the same iris data set, this time we remove the setosa species and train a perceptron to classify virginica and versicolor on the basis of their petal lengths and petal widths. When we plot these species in their feature space, we get this:

This looks like a slightly more difficult problem, as this time the difference between the two classifications is not as clear-cut. Let’s see how our perceptron performs on this data set.

This time, we introduce the concept of the learning rate, which is important to understand if you decide to pursue neural networks beyond the perceptron. The learning rate controls the speed with which weights are adjusted during training. We simply scale the adjustment by the learning rate: a high learning rate means that weights are subject to bigger adjustments. Sometimes this is a good thing, for example when the weights are far from their optimal values. But sometimes this can cause the weights to oscillate back and forth between two high-error states without ever finding a better solution. In that case, a smaller learning rate is desirable, which can be thought of as fine tuning of the weights.

Finding the best learning rate is largely a trial and error process, but a useful approach is to reduce the learning rate as training proceeds. In the example below, we do that by scaling the learning rate by the inverse of the epoch number.

Here’s a plot of our error rate after training in this manner for 400 epochs:

ErrorRate

You can see that training proceeds much less smoothly and takes a lot longer than last time, which is a consequence of the classification problem being more difficult. Also note that the error rate is never reduced to zero, that is, the perceptron is never able to perfectly classify this data set. Here’s a plot of the decision boundary, which demonstrates where the perceptron makes the wrong predictions:

decisionBoundary

Here’s the code for this perceptron:

# load data
data(iris)

# transform data to binary classification problem using two inputs
iris <- iris[iris$Species != 'setosa', c('Petal.Length', 'Petal.Width', 'Species')]
iris$Species <- ifelse(iris$Species=='versicolor', 1, -1)

# plot data
plot(iris[, c(1,2)],pch=ifelse(iris$Species>0,"-","+"),
col=ifelse(iris$Species>0,"blue","red"), cex=2,
main = 'Iris Classifications')
legend("bottomright", c("species1", "species2"), col=c("blue", "red"), pch=c("-","+"), cex=1.1)

# perceptron initial weights
w1 = 0
w2 = 0
b = 0

# perceptron learning
epochs <- 400
errors <- vector()
for(j in c(1:epochs))
{
  learn.rate <- 1/j # set learning rate
  for(i in c(1:nrow(iris)))
  {
    yhat <- ifelse(w1*iris$Petal.Length[i] + w2*iris$Petal.Width[i] + b > 0, 1, -1)
    error = iris$Species[i] - yhat # will be either 0, 2 or -2
    w1 <- w1 + learn.rate*error*iris$Petal.Length[i]
    w2 <- w2 + learn.rate*error*iris$Petal.Width[i]
    b <- b + learn.rate*error
  }
  # end of epoch
  preds <- ifelse(w1*iris$Petal.Length + w2*iris$Petal.Width + b > 0, 1, -1)
  errors[j] <- sum(abs(iris$Species - preds))/2
}

slope <- -w1/w2
intercept <- -b/w2
abline(intercept, slope)

plot(c(1:epochs), errors, type='l', xlab='epoch')

Where Do Perceptrons Fail?

In the first example above, we saw that our versicolor and setosa iris species could be perfectly separated by a straight line (the decision boundary) in their feature space. Such a classification problem is said to be linearly separable and (spoiler alert) is where perceptrons excel. In the second example, we saw that versicolor and virginica were almost linearly separable, and our perceptron did a reasonable job, but could never perfectly classify the whole data set. In this next example, we’ll see how they perform on a problem that isn’t linearly separable at all.

Using the same iris data set, this time we classify our iris species as either versicolor or other (that is setosa and virginica get the same classification) on the basis of their petal lengths and petal widths. When we plot these species in their feature space, we get this:

Not-Linearly-Separable

This time, there is no straight line that can perfectly separate the two species. Let’s see how our perceptron performs now. Here’s the error rate over 400 epochs and the decision boundary:

errorRate
decisionBoundary

We can see that the perceptron fails to distinguish between the two classes. This is typical of the performance of the perceptron on any problem that isn’t linearly separable. Hence my comment at the start of this unit (see footnote 2) that I’m skeptical that perceptrons can find practical application in trading. Maybe you can find a use case in trading, but even if not, they provide an excellent foundation for exploring more complex networks which can model more complex relationships.

A Perceptron Implementation for Algorithmic Trading

The Zorro trading automation platform includes a flexible perceptron implementation. If you haven’t heard of Zorro, it is a fast, accurate and powerful backtesting/execution platform that abstracts away a lot of tedious programming tasks so that the user can concentrate on efficient research. It uses a simple C-based scripting language that takes almost no time to learn if you already know C, and a week or two if you don’t (although of course mastery can take much longer). This makes it an excellent choice for independent traders and those getting started with algorithmic trading. The software sacrifices little for the abstraction that enables efficient research, but it is not open source, and experienced quant developers or those with an abundance of spare time might take issue with that, so it isn’t for everyone. It is, however, a great choice for beginners and DIY traders who maintain a day job. If you want to learn to use Zorro, even if you’re not a programmer, we can help.

Zorro’s perceptron implementation allows us to define any features we think are pertinent, and to specify any target we like, which Zorro automatically converts to a binary variable (by default, positive values are given one class; negative values the other). After training, Zorro’s perceptron predicts either a positive or negative value corresponding to the positive and negative classes respectively.

Here’s the Zorro code for implementing a perceptron that tries to predict whether the 5-day price change in the EUR/USD exchange rate will be greater than 200 pips, based on recent returns and volatility. Its predictions are tested under a walk-forward framework:

/* PERCEPTRON */

function run()
{
  set(RULES|PEEK|PLOTNOW|OPENEND);
  StartDate = 20100101;
  EndDate = 20161231;
  BarPeriod = 1440;
  LookBack = 100;
  BarZone = WET;
  BarOffset = 9*60;
  asset("EUR/USD");
  if(Train) Hedge = 2; //needed for training trade results

  // set up walk-forward parameters
  int TST = 50*1440/BarPeriod; //number of bars in test period
  int TRN = 500*1440/BarPeriod; //number of bars in training period
  DataSplit = 100*TRN/(TST+TRN);
  WFOPeriod = TST+TRN;

  //data
  vars Close = series(priceClose());

  //signals
  var Sig1 = scale(ATR(10)-ATR(50), 100);
  var Sig2 = (Close[0]-Close[1])/Close[1];
  var Sig3 = (Close[0]-Close[5])/Close[5];
  var Sig4 = (Close[0]-Close[10])/Close[10];

  var ObjLong;
  var ObjShort;
  if(priceClose(-5) - priceClose(0) > 200*PIP) ObjLong = 1;
  else ObjLong = -1;
  if(priceClose(-5) - priceClose(0) < -200*PIP) ObjShort = 1;
  else ObjShort = -1;

  LifeTime = 5;
  var l = adviseLong(PERCEPTRON+BALANCED, ObjLong, Sig1, Sig2, Sig3, Sig4);
  var s = adviseShort(PERCEPTRON+BALANCED, ObjShort, Sig1, Sig2, Sig3, Sig4);
  if(!Train)
  {
    if(l > 0 and s < 0) reverseLong(1);
    else if(s > 0 and l < 0) reverseShort(1);
  }

  plot("l", l, NEW, GREEN);
  plot("s", s, NEW, RED);
}

Zorro firstly outputs a trained perceptron for predicting long and short 5-day price moves greater than 200 pips for each walk-forward period, and then tests their out-of-sample predictions.

Here’s the walk-forward equity curve of our example perceptron trading strategy:

PerceptronEqCurve

I find this result particularly interesting because I expected the perceptron to perform poorly on market data, which I find hard to imagine falling into the linearly separable category. However, sometimes simplicity is not a bad thing, it seems.

Conclusions

I hope this article not only whetted your appetite for further exploration of neural networks, but also facilitated your understanding of the basic concepts, without getting too hung up on the math.

I intended for this article to be an introduction to neural networks where the perceptron was to be nothing more than a learning aid. However, given the surprising walk-forward result from our simple trading model, I’m now going to experiment with this approach a little further. If this interests you too, some ideas you might consider include extending the backtest, experimenting with different signals and targets, testing the algorithm on other markets and of course considering data mining bias. I’d love to hear about your results in the comments.

Thanks for reading!

--by Kris Longmore from blog Robotwealth

The post Getting Started with Neural Networks for Algorithmic Trading appeared first on System Trader Success.

Trading The Equity Curve & Beyond


Some trading systems have prolonged periods of winning or losing trades. Long winning streaks followed by a prolonged period of drawdown.  Wouldn't it be nice if you could minimize those long drawdown periods?

Here is one tip that might help you do that: try applying a simple moving average to your trading system's equity curve. Then use that as an indicator of when to stop and restart trading your system. This technique might change your trading system's performance for the better.

How do you do this? The moving average applied to your trading system's equity curve creates a smoothed version of that equity curve. You can now use this smoothed equity curve as a signal for when to stop or restart trading. For example, when the equity curve falls below the smoothed equity curve you can stop trading your system. Why would you do this? Because your trading system is underperforming; it's losing money. Only after the equity curve starts to climb again should you resume taking trades. This technique is trading the equity curve. You're making trading decisions based upon your equity curve; in essence, the performance of your system is itself an indicator.

Trading the equity curve is like trading a basic moving average crossover system. When the fast moving average (your equity curve) crosses over the slower moving average (your smoothed equity curve) you go long (trade your system live). When the fast moving average crosses under the slower moving average you close your long trade (stop trading your live system).

In the image above the blue line is the equity curve of an automated trading system. The pink line is a 30-trade average of the equity curve. When the equity curve dips below the pink line, such as around trade number 60, you would stop trading the system. Once the equity curve rises above the pink line, around trade number 80, you would start trading.

It's a great idea and with some systems this technique can work wonders. In essence, we are using the equity curve as a signal or filter for our trading system. In the most simple case, it's a switch telling us when to stop trading it and when to resume trading. But you could also use this signal to reduce your risk instead of turning off the system.
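The implementation discussed in this article uses EasyLanguage and the toolkit described below, but the crossover logic itself is easy to see in a few lines of R. This is only a toy sketch with a randomly generated trade list, not the toolkit's actual code:

# toy equity-curve filter: trade live only while equity is above its moving average
set.seed(1)
trade_pnl <- rnorm(200, mean = 20, sd = 300) # hypothetical per-trade P&L
equity <- cumsum(trade_pnl)                  # trade-by-trade equity curve

ma_len <- 30 # smoothing length, e.g. a 30-trade average
smoothed <- stats::filter(equity, rep(1/ma_len, ma_len), sides = 1)

# TRUE = take the next live trade; FALSE = track it in simulation only
trade_enabled <- equity > smoothed
tail(trade_enabled)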

How To Trade The Equity Curve

First, you need to track all trades to generate the complete equity curve and moving average. This is done even if your live system has stopped trading. In other words, you will need to record the theoretical trades it would be taking. This means you will need to have two copies of your system running. One is dedicated to taking every trade in simulation mode. This simulated system tracks the theoretical equity curve and computes the smoothed equity curve. No real trades are taken by the simulated system. Its job is to track the two equity curves.

The second system is dedicated to trading live. This live system will have the ability to trade or not trade based upon the results computed by the simulation system. Think of the simulated system as an indicator. It's always running collecting data and crunching the numbers. This information will then be used by the live system to tell it when to trade and when not to trade.

One method to do this would involve passing data between two charts in TradeStation. Both charts are trading the identical trading system. One trades live while the other trades only in simulation mode. The chart running in simulation mode acts as your indicator. This indicator chart then passes a simple variable to the live chart. The live chart then acts on the live market.

This type of setup produces a dynamic trading system that adjusts its trading behavior based upon the system's performance. It's a simple concept, but it's complex to build in TradeStation. The solutions I've seen are also not very flexible; overall, they have required complex programming skills and tedious setup to get working.

In short, building custom Easylanguage code to trade the equity curve has been very difficult to do. In fact, it has been well outside the ability of most programmers. But, not anymore.

Equity Curve Feedback Toolkit

There is a TradeStation product that can do all the heavy lifting for us: the Equity Curve Feedback Toolkit. This kit allows me to use simple EasyLanguage functions within my code to trade the equity curve. It's super simple and will only take a few minutes to set up. Let me show you.

I took an example trading system that appeared on System Trader Success called A Simple S&P System. I then added the Equity Curve Feedback function to my code. Next, I made a couple of minor adjustments to my code. Once done, the Equity Curve Feedback function now returns the simulated system's equity curve.

No need to use DLLs or other complicated setups! With this information you can determine whether the equity curve is above or below the smoothed equity curve. In other words, you can now turn your live system off or on based upon the equity curve!

First, let's look at the results of the Simple S&P System without the equity curve feedback.

Simple S&P System With No EQ Feedback

You can see by looking at the equity curve that the strategy got off to a bad start (see trades 20-40). Then the strategy had a nice run, followed by a huge drawdown around trade 200.

The graph below is the underwater equity on a weekly basis. You can see at the start of trading and at the most recent trades, about a 12% drawdown.

Finally, here is the performance report of the S&P Simple System.

Adding Equity Curve Feedback

There are some adjustments to the original code required, such as adding numeric variables and arrays. But at the heart of the issue is computing the equity curve of a simulated version of our strategy. That used to be difficult to do in EasyLanguage, but not anymore.

Below is the line of code which does all the magic! That one line is what computes the simulated trades of our system. 

The next line of code is what enables or disables our live trading system. This function calculates the simulated equity curve of all trades and compares this value to the moving average of all simulated trades. It returns true if the simulated equity curve is above the moving average; otherwise it returns false.

I went ahead and added a 30 period moving average to the equity curve. Below are the results.

It's interesting to note the net profit is about the same. However, it generated this profit with fewer trades. That means many of the losing trades were eliminated. In fact, the average trade increased by about $50. Adding equity curve feedback reduced the drawdown by about half! Looking at the equity curve you can see the deep drawdown at the far-right side has been improved. Also, looking at the first few trades the equity curve looks much better.

You don't have to stop trading when the equity curve falls below its moving average. You could adjust your risk instead. That is, if your equity curve begins to fall you can reduce the number of shares or contracts you trade. Or, when the equity curve is climbing, you could increase your risk by buying more shares or contracts. You could also start or stop trading based upon drawdown, or when the percentage of winners falls below a threshold. These are all possible with this kit.
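As a tiny R illustration of that sizing idea (the filter states and contract counts are arbitrary examples, not toolkit output):

# trade full size above the smoothed equity curve, reduced size below it
equity_above_ma <- c(TRUE, TRUE, FALSE, TRUE) # hypothetical filter state per trade
contracts <- ifelse(equity_above_ma, 2, 1)    # e.g. 2 contracts when above, 1 when below
print(contracts)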

Does It Always Help?

The short answer is, no. Trading the equity curve works well on some trading systems. Those systems tend to have prolonged periods of drawdown. Yet, other systems don't benefit from equity curve feedback. This is because the drawdowns are rather shallow and you end up hurting your equity more than anything. But like most things in the world of trading, you'll have to perform some testing. Test different moving averages and test between halting all trading or reducing contract/share size. Remember, some systems do not benefit from this technique at all.

Going Beyond With Multi-Agent Strategies

The Equity Curve Feedback Toolkit can also be used to create even more dynamic systems.

For example, you can simulate many different variations of a single trading strategy, where each of the simulated strategies has different input parameters. Instead of relying on a single set of input parameters giving a buy signal, you require two or more of the simulated strategies to give a buy signal. In effect, you want confirmation from more than one strategy before taking a signal. This type of model is based upon a voting scheme: when enough votes are counted, a trade is taken.

In another example, you have multiple competing strategies. These can simply be the same strategy with different inputs, or they can be completely different types of strategies: a mean-reverting strategy and a trend-following strategy, for example. Based upon equity curve feedback, the system trades the best performing strategy in real time.

These examples are types of multi-agent strategies that can help make more dynamic and/or robust strategies. Such power was not widely available to the retail trader, but now it is.

Learn more about the Equity Curve Feedback Toolkit by clicking here.

The post Trading The Equity Curve & Beyond appeared first on System Trader Success.

Looking Into The Ulcer Index


Investors utilize a variety of performance and risk metrics to evaluate strategies. These numbers provide a summary of what happened to the strategy historically and can be useful to quickly compare different strategies. To use these statistics effectively, it is helpful to look at some of the nuances of those frequently cited and cases where the information they provide could be misleading.

Many of the common metrics can be classified in ways that are similar to quantities we use to describe the world around us: temperature, speed, weight, voltage, etc. These classifications add context to what is being described based on how it is calculated and the information it contains.

In finance, one typical summary statistic is the annualized return of a strategy. To calculate this, all we need is the starting and ending point; what happened in between is irrelevant. Much like average speed simply uses the total time and distance traveled, annualized return smooths over any intermediate details. This is somewhat similar to a state variable in physics, such as temperature change, entropy, and internal energy, which depends only on the initial and final states.

If it were as simple as that, the two strategies shown below would be equivalent, but even a novice investor would likely choose to have owned strategy A.[1]

Equal-Return

 

Therefore, we look at metrics like annualized volatility, which incorporates the individual realized returns over a time period. We could call volatility a path dependent metric, much like mechanical work is in thermodynamics. It is a quantity that is likely to change if your “route” changes. However, annualized volatility only depends on what returns were realized, not in what order they came. This also applies to the Sharpe and Sortino ratios. To illustrate this concept, the following simulated paths both have the same realized volatility.

Same-Vol

To differentiate between these two strategies using summary statistics, we must capture the sequence of the returns. Maximum drawdown does this by measuring the worst loss from peak to trough over the time period. Still, maximum drawdown lacks information about the length of the drawdown, which can have a substantial impact on investors’ perception of a strategy. In fact, Strategies B and C shown previously have the same maximum drawdown of 25%.[2]

Enter the Ulcer Index. It not only factors in the severity of the drawdowns but also their duration. It is calculated using the following formula:

UI = sqrt( (R1^2 + R2^2 + ... + RN^2) / N )

where N is the total number of data points and each Ri is the fractional retracement from the highest price seen thus far. Whereas the maximum drawdown is only the largest Ri and can only increase through time, the Ulcer Index encapsulates every drawdown into one summary statistic that adapts to new data as it is realized.[3] Using the Ulcer Index, we can finally distinguish between strategies that have the same annualized return, annualized volatility, and maximum drawdown: Strategies B and C have Ulcer Indices of 11.2% and 12.8%, respectively.
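Based on that description, a small R function for the Ulcer Index might look like the following; the price series is invented purely to demonstrate the calculation:

# Ulcer Index: root-mean-square of percentage retracements from the running high
ulcer_index <- function(prices) {
  running_max <- cummax(prices)                             # highest price seen so far
  retracement <- (prices - running_max) / running_max * 100 # percent drawdown, <= 0
  sqrt(mean(retracement^2))                                 # RMS over all N points
}

prices <- c(100, 105, 103, 98, 104, 110, 107) # hypothetical price path
ulcer_index(prices)                           # about 2.84 for this series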

As a case study, the following chart shows the return of a 60/40 portfolio of SPY and AGG rebalanced at the beginning of each year from 01/2004 to 12/2013. Along with the true realized path, I have included the path with the returns reversed and five paths with random permutations of the true returns.

60-40

The metrics for each path are shown in the table below:

Table

Only the Ulcer Index can fully differentiate among these paths. Even in cases where the maximum drawdown is similar (e.g. the true path and Random 1), the Ulcer Index shows a sharp contrast between the strategies.

For a more concrete way of picturing the Ulcer Index, imagine driving a car along a 55 mph speed limit road with stoplights spaced every half mile. Traffic is moderately heavy and the lights are poorly timed. As you accelerate, the light down the road turns yellow and then red. Easing off the accelerator will increase the time until you get to that light, perhaps to the point where you won’t have to stop, thus reducing the amount of time spent waiting for the light to change and the subsequent acceleration to approach the speed limit again.

You continue down the road anticipating the lights so that you do not brake unnecessarily or burn needless gas racing toward red lights. This not only reduces the variation in your speed (the volatility) but also the amount you have to slow down (the severity) and the time spent waiting at red lights (the duration). The smoother trip is likely to lead to less stress, not to mention less wear and tear on the car, which can cause further headaches.

Ultimately, evaluating a strategy involves more than simple performance metrics since the methodology driving the strategy is key. But when comparing historical performance, it is helpful to have a toolbox equipped with implements able to measure performance on the bases of profitability and risk in ways that are amenable to our inherent, risk-averse inclinations.

— By  Nathan Faber from Flirting With Models. Nathan Faber is an Associate in Newfound’s Product Development and Quantitative Strategies group. Newfound is a Boston-based registered investment advisor and quantitative asset manager focused on rule-based, outcome-oriented investment strategies and specializes in tactical asset and risk management.

[1] One exception is if you owned another strategy that had the correct characteristics relative to strategy B (negative correlation, positive return, and similar volatility) so that the overall return was even smoother than strategy A. Even so, these trends would not have any guarantee of continuing in the future.
[2] In simulations this is easy to do by reversing the order of the returns.
[3] Perhaps another interesting metric would be an exponentially weighted Ulcer Index that places more weight on more recent observations.

 

The post Looking Into The Ulcer Index appeared first on System Trader Success.

How To Create A Multi Agent Trading System


In a previous article I demonstrated how to build a strategy that will trade the equity curve. That is, a strategy that will stop trading when the equity curve falls below a simple moving average. Let's look at a different technique that not many retail traders are aware of. In particular, this technique is difficult to execute in EasyLanguage.

What do I want to do? I want to create a trading system that tracks multiple copies of a given system. Why would I do that? Well, while each of the strategies are copies, their input values are slightly different. Thus, each version creates slightly different trade performance. Let's look at an example to make this clear.

Let's first take a trading system to work with. Let's use the example strategy called Simple S&P. The rules are straightforward and listed below.

  • If today’s close is less than the close 6 days ago, buy (enter long).
  • If today’s close is greater than the close 6 days ago, sell (exit long).
  • Only take trades when the closing price is above the 40-period SMA

This strategy produces the following results.

Let's now duplicate our strategy and change the inputs slightly. Instead of our entry lookback being six, let's reduce it by one, leaving us with five. We have created a slightly different trading system which produces different results, as seen below.

Let's duplicate the original strategy again and change the inputs. Instead of our entry lookback being six, let's increase it by one. This gives us a value of seven. We have a new strategy which produces the following results.

So, we have three different trading strategies. Each uses a different lookback value to determine when to enter a trade. 

In a traditional trading environment you might optimize a single strategy over your historical data and pick a reasonable set of input parameters. You would then trade that one system. Wouldn't it be interesting if we could monitor the performance of each strategy variation and trade the best performing strategy in real time? In essence, we want the ability to watch a horse race and change our bet as the race progresses. A very interesting idea! But when you start to code such a scheme, it becomes a very, very daunting process.

To pull this off we need the ability to track multiple trading systems within a given strategy. We then simply pick the best performing strategy to trade live. Simple in concept but difficult to do. Natively, there is no method to accomplish this in EasyLanguage. However, that has changed. This tool, the Equity Curve Feedback Toolkit, will allow us to do just that.

Let's create our first multi-agent strategy, which automatically trades the best performing strategy variation.

Environmental Settings

I coded the Simple S&P rules in EasyLanguage and tested it on the E-mini S&P futures market going back to 2000. Before going further with the demonstration let me say this: all the tests within this article are going to use the following assumptions:

  • Starting account size of $25,000
  • Dates tested are from 1998 through December 31, 2016
  • One contract was traded for each signal
  • The P&L is not accumulated to the starting equity
  • No deductions for slippage and commissions
  • There are no stops

Baseline Results

Creating Our Multi-Agent Strategy

There are four key things we have to do to our strategy.

First, we create virtual backtesters to track the performance of our three different strategies. This is accomplished by using the ECF_VirtualBackTester function included with the Equity Curve Feedback Toolkit. The three lines of code to accomplish this are below.

Result = ECF_VirtualBackTester(Orders1, Trades1, True);
Result = ECF_VirtualBackTester(Orders2, Trades2, True);
Result = ECF_VirtualBackTester(Orders3, Trades3, True);

Second, we get the trades and current equity from our backtesters. The ECF_GetEquity function retrieves the equity information, placing it within the CurrentEquityX variables and EquityX arrays.

CurrentEquity1 = ECF_GetEquity(Orders1, Trades1, Equity1, cLongAndShort, UseOpenTrade);
CurrentEquity2 = ECF_GetEquity(Orders2, Trades2, Equity2, cLongAndShort, UseOpenTrade);
CurrentEquity3 = ECF_GetEquity(Orders3, Trades3, Equity3, cLongAndShort, UseOpenTrade);

Third, we must determine whether each strategy is trading above its smoothed equity curve. This is explained in a lot more detail within the previous article, Trading The Equity Curve & Beyond. The following lines determine whether our virtual strategies are trading above their respective smoothed equity curves.

TradeEnable1 = ECF_EquityMASignal(Orders1, Equity1, EquityMALength, cLongAndShort, UseOpenTrade);
TradeEnable2 = ECF_EquityMASignal(Orders2, Equity2, EquityMALength, cLongAndShort, UseOpenTrade);
TradeEnable3 = ECF_EquityMASignal(Orders3, Equity3, EquityMALength, cLongAndShort, UseOpenTrade);

Fourth, we must determine which of the three simulated strategies is performing the best. This is accomplished with these lines of code.

BestSys1 = false;
BestSys2 = false;
BestSys3 = false;
If (CurrentEquity1 > CurrentEquity2) and (CurrentEquity1 > CurrentEquity3) and (CurrentEquity1 > 0) then BestSys1 = true
else If (CurrentEquity2 > CurrentEquity1) and (CurrentEquity2 > CurrentEquity3) and (CurrentEquity2 > 0) then BestSys2 = true
else If (CurrentEquity3 > CurrentEquity1) and (CurrentEquity3 > CurrentEquity2) and (CurrentEquity3 > 0) then BestSys3 = true;

There we have it. A single strategy which tracks the performance of three virtual strategies and only trades the best performing strategy. This all happens in real time. 
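The EasyLanguage above leans on the toolkit's functions, but the selection logic itself is generic. Here is a toy R sketch of the same idea using three invented per-trade P&L streams, purely for illustration:

# track the equity of three strategy variants and flag the current leader
set.seed(42)
pnl <- list(v1 = rnorm(100, 10, 200), # hypothetical per-trade P&L of variant 1
            v2 = rnorm(100, 15, 220), # ... variant 2
            v3 = rnorm(100, 5, 180))  # ... variant 3
equity <- sapply(pnl, cumsum)         # 100 x 3 matrix of equity curves

current <- equity[nrow(equity), ]     # latest equity of each variant
best <- names(which.max(current))     # variant to trade live right now
if (current[best] <= 0) best <- NA    # require the leader to be profitable, as above
best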

Below is a snapshot of the chart showing trades being taken by our multi-agent version of the Simple S&P strategy. You can see on the left-hand side it's trading version 2 of our strategy. Then on the right-hand side of the chart it switches to trading version 1 of the strategy.

Below is the performance report of our multi agent strategy.

Conclusion

This is a very simple example of a multi-agent trading strategy. In our case, we have three similar strategies being simulated in the backtester, but you can create as many virtual copies as you want. You don't even have to use the same strategy. In our example we used the same strategy but changed one of the inputs; you could instead have completely different strategies compete against each other, for example a trend-following model vs a mean-reverting model.

In our example we made trading decisions based upon the equity of each system, but we could use other metrics like drawdown or average profit per trade. The options are really incredible, and I hope this simple example is getting the wheels in your head turning!

What are the best practices for building such a trading model? Good question. This material is rather new to me and I would love to hear what you think. Like any tool, it can be abused. My opinion at this time is that you should correctly build a single, traditional strategy first. That is, follow all the known best practices of strategy development. Once you have a solid system, only then attempt to add the tools available in the Equity Curve Feedback Toolkit.

You might be wondering what inputs you should change in your strategy. Good question. I think the fewer the better. In this example I only changed one of the lookback values. I think it's probably important that you first establish the stable range for each parameter, then use values within that range. However, like I said, this is a new area for me, but it sure looks promising.

If you want a more detailed look at building this strategy I've created a video showing you how I did it. To watch the video simply sign-up below.

Watch Video Demo on Creating A Multi Agent Strategy

The post How To Create A Multi Agent Trading System appeared first on System Trader Success.

Finding Out What Works, And What Doesn’t Work


Many traders who try system trading have previously had difficulty at discretionary or manual trading. Most of these folks eventually recognize the benefit of trading a system with well defined rules – a system that has performed well in the past. It is nice to know a trading approach has historically worked, but as with all things related to trading, past performance is no guarantee of future results.

Unfortunately, many people who try systematic/algorithmic/mechanical/rule-based trading for the first time bring along a lot of the baggage that they have acquired from their previous method. Depending on the pre-conceived notions they bring into mechanical trading, these new systematic traders may run into a lot of frustration and trouble.

Many times, for example, traders will always test with a few core concepts, such as always closing by the end of the day. This is what they were used to as a discretionary or manual trader, and therefore they never even think to test ideas out of their old comfort zone. Perhaps removing these “comfort” rules would dramatically improve performance.

In this article, I will examine three common items that new systematic traders test, and see how these items actually work when they are subjected to rigorous testing.

Ground Rules

In all the testing that follows, I will test in 7 different markets, across a range of commodities:

  • Wheat (W)
  • 10 Year Treasury Notes (TY)
  • Lean Hogs (LH)
  • Australian Dollar (AD)
  • Heating Oil (HO)
  • Cotton (CT)
  • e-mini Nasdaq (NQ)

I picked these at random, one from each major market group. The test period will be from 1/1/2007 to 12/31/2011, a 5-year period that includes some quiet, and some very chaotic markets.

Finally, I will assume $5 round turn commissions, and $30 round turn slippage. The $30 slippage might be excessive, but I’ve always found it is better to be conservative than to underestimate slippage. I’d rather not be disappointed in real time with larger than expected slippage.

The System

I will use an extremely simple strategy for the tests I am running: a simple breakout based on closing prices. The system is always long or short, and will enter at the next bar open: long if the close is the highest close of the past X bars, and short if the close is the lowest close of the last X bars. One version of the system always exits at the end of the day, and the other version is a swing system. X will be varied from 5 bars to 100 bars.

Here is the code for the swing version, Strategy A:

figure01

Figure 1
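Since Figure 1 shows the EasyLanguage only as an image, here is a rough R sketch of the swing breakout rule as I read it from the description above; the lookback X, the function name and the always-in-the-market handling are my own placeholders, not the original code:

# +1 (long) if the close is the highest close of the last X bars,
# -1 (short) if it is the lowest; otherwise keep the prior position
breakout_signal <- function(close, X = 20) {
  stopifnot(length(close) >= X)
  sig <- rep(NA_real_, length(close))
  for (i in X:length(close)) {
    window <- close[(i - X + 1):i]
    if (close[i] >= max(window)) sig[i] <- 1
    else if (close[i] <= min(window)) sig[i] <- -1
    else sig[i] <- sig[i - 1] # stay with the existing position
  }
  sig # positions are entered at the next bar's open
}

The day trading version applies the same entries but flattens the position one minute before the session close.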

Here is the code for the day trading version, Strategy B:

Figure 2

Figure 2

Day, or Night?

Since most markets these days run nearly 24 hours, we can run into issues when testing historically, as many markets traded “pit” hours years ago. Because of that, and because most of the volume is during these traditional pit hours, I will use the old pit session times, but still use electronic trading data.

Exit 1 Minute Before Close

If you use Tradestation to test, you may be familiar with their keyword “setexitonclose.” This is a neat function for backtesting, but in live trading the order is sent after the market is closed, rendering it ineffective. So, I set up the custom sessions above to exit 1 minute before the old “pit” closing time. Then, I know my strategy will exit properly, and will therefore work in backtest and in real time.

Final Pre-Test Info

Now that we have everything properly set up, it is time to run some tests. For each instrument given above, I will run tests with 5, 15, 30, 60, 120, and Daily bars. That gives us 7 instruments x 6 bar sizes = 42 tests. Then, for each test, I will run the breakout length X from 5-100 in steps of 1, which yields 96 iterations per test. Since I am running two strategies overall, I am running 42 x 96 x 2 = 8,064 unique performance sets.

For each of the 42 tests for each strategy, I will record the best Net Profit (out of the 96 iterations) and its corresponding maximum drawdown (if best Net Profit is greater than zero). For each of the 42 tests, I will also record the percentage of iterations that are profitable.

All the tests being run are shown graphically in Figure 3.

Figure 3

Figure 3

Questions To Answer

Once all the tests are run, I will see if I can answer the three questions below. These questions are directly related to the desires of many discretionary day traders.

  1. Many traders love strategies that are the same across multiple instruments. An example of this is price action trading – many feel the principles of price action hold across all instruments. So, when these traders test algorithms, one demand is that a strategy is viable ONLY if it is profitable across many instruments. Question: Is multiple market profitability a reasonable requirement?
  2. Most day traders need to be out by the end of the day. Question: Does forcing an exit at the end of each day improve or decrease strategy performance?
  3. Many day traders try to go for the smallest (shortest) bar size possible. The theory is that trades will be more frequent, losses will be smaller, and exits will be more responsive to market conditions. Question: What kind of impact does bar size have on performance?

Before I get to the results, I should point out that this is one study, with two strategies, across only seven markets. So, the conclusions I reach may not hold up over all markets, and may be different for different strategies. I’m guessing, though, that the conclusions probably do hold in general.

Results

Results of all 8,064 performance tests are shown in Table 1 for the swing Strategy A, and Table 2 for the intraday Strategy B. I will refer to these results as I discuss each of the three questions.

Table 1

Table 1

 

Table 2

Table 2

1. Is profitability across multiple instruments a reasonable requirement?

Figures 4 and 5 show the best case Net Profit for both the swing and intraday version of the strategy. With one notable exception (Heating Oil intraday trading), what is good in one market is generally good in another market. Although the maximum profit varies wildly from market to market, all the swing markets are profitable, and all but one of the intraday markets are unprofitable. This should give you some confidence that the strategy is sound (or not). Of course, it doesn’t mean the strategy is tradable in each market, since I am looking only at maximum net profit based on optimization. But, clearly the intraday Strategy B is not viable in most markets.

Figure 4

Figure 4

Figure 5

Figure 5

So, the results show that profitability across multiple markets is possible, even with markedly different instruments, suggesting that it is indeed a valid requirement.

2. Is exiting end of day (intraday trading) a good idea?

Figure 5 shows the answer loud and clear – the answer is no. You’ll get more profit by swing trading. Unfortunately, it might also mean enduring more drawdowns, but think of it like this: the market “pays” people for holding overnight and weekends, and taking on that risk.

3. Are shorter timeframes better?

Since Strategy A is clearly the better strategy, I will use those results to look at the impact of bar length (timeframe). This is shown in Figure 6, where I look at the percent of iterations that were profitable for each combination of instrument and bar size. So, for example, Wheat with 5-minute bars shows that 62% of iterations were profitable (denoted with a red circle in Figure 6). This means that as I varied the breakout length from 5 to 100 in increments of 1 (96 total iterations), 60 of those runs resulted in a profitable end result. Obviously, the best case is 100% – where no matter what value you use for breakout length, the end result is positive.

Figure 6

Figure 6

Results of Figure 6 show that in general profitability increases as the bar length increases, although this effect is not very pronounced, and it does not hold for all instruments. This is confirmed by the results of Figure 4, where the maximum Net Profit generally increases with increasing bar period, but not dramatically.

Why is this so? Profitability could be better as bar period increases, since random market noise plays a smaller role in longer period bars (i.e., the true trend is more easily seen in larger period bars). Of course, drawdown should also be considered when looking at bar size, but in this study it does not seem to change greatly with bar size.

Conclusion

Results with this simple strategy lead to three conclusions:

  1. Striving for profit in multiple markets, as a confirmation of a strategy, is indeed possible.
  2. Swing trading is likely more profitable than intraday trading.
  3. Longer timeframes are generally superior to short time periods, but this is not a major effect.

Follow On

Of course, I made these conclusions based on one study. What if the strategy was different? What if the timeframes or markets were different? What if different years were used for the test period? Will the conclusions reached here still hold? I’ll examine those questions in Part 2.

If you would like to learn more about building trading systems be sure to get a copy of my latest book, Building Winning Algorithmic Trading Systems.

Download

Breakout Strategies (TradeStation ELD)
Breakout Strategies WorkSpace (TradeStation TWS)

— Kevin J. Davey of KJ Trading Systems

The post Finding Out What Works, And What Doesn’t Work appeared first on System Trader Success.

4 Reasons Why Scalpers Could Be Dangerous


In this article I am going to share with you four reasons why scalpers could be dangerous, and why everyone who would commit money to trading such short-term strategies should be extremely cautious and mindful. Although, potentially, the shorter-term a strategy is, the more profit it can earn, there are reasons why the real-time results could differ a lot from those obtained in backtesting, regardless of the trading platform used. Let's begin with the first reason:

Backtesting Reliance 

Most Forex scalpers rely on tick occurrence in order for their signals to be generated; their logic is based upon every known market movement (tick). The problem here is that Metatrader 4, for example, which is the most popular trading platform, generates ticks in the backtester using its own algorithm based on OHLC data from the lowest timeframe available – usually M1. So the ticks used to backtest your scalper are not 'real'; they are synthesized by MT4. This makes the obtained backtest results susceptible to errors and discrepancies. Personally, I have run a scalper on a live chart for a few days and then backtested the same system in the tester over the same time span. Surprisingly, the trades themselves were a lot different, which led me to ponder how real my backtested calculations were and confirmed my view that 'ticks' on MT4 are not real.

Backtesting reliance

Slippage dependency

Since scalpers are frequent traders, it is reasonable to expect that the average trade measured in pips will be very low in most cases – perhaps 0.5-1 pips at most. Therefore any additional trading cost must be subtracted from this already low average trade. A slippage of 0.4 pips could wipe out almost 80% of the potential profits.
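To make that arithmetic explicit, here is the calculation in R, using the illustrative figures above:

avg_trade_pips <- 0.5 # hypothetical gross average trade of a scalper, in pips
slippage_pips <- 0.4  # assumed slippage per round turn
avg_trade_pips - slippage_pips # 0.1 pips of edge left per trade
slippage_pips / avg_trade_pips # 0.8, i.e. 80% of the edge consumed by slippage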

In order to be sure about profitability, one should run the scalper on a small live account and monitor the slippage; if there is enough profit left, the strategy could then be moved to a well-funded real account.

Spread dependency

An increased spread during important Forex news, or a spread simply bigger than the one used in the backtest, could be another cause of eaten profits. Expecting a lot of trades to be executed during a day makes a small change in spread a potentially big determinant of the final net result of the trading system.

Since during the backtest we can only apply a fixed spread, we should be very careful to choose a broker which has low real-time spreads, which in turn will allow the scalper to be profitable. Another viable option here is to run the backtest with a wider than normal spread in order to get conservative results.

Spread dependency

Broker's side

The final reason why I am staying away from trading with scalpers is the fact that some brokers do not allow trading with systems that trade very frequently. Here we are talking mostly about extreme cases where the average length of a trade is a few seconds. If a trader begins to trade with such a strategy, his account could be suspended from trading, so if you are planning to trade a scalper it is imperative that you first contact the broker and explain your intent; then, if everything is OK, proceed with trading.

Summary

I have shared with you my four reasons why trading with scalping strategies could be dangerous, and why traders who want to try them should be extremely cautious. The difficulties regarding backtest accuracy, slippage and spread costs during real trading, and brokers' unwillingness are more than enough reasons for me to stay out. I hope that I have contributed to your knowledge regarding the matter. I wish you profitable trading.

The post 4 Reasons Why Scalpers Could Be Dangerous appeared first on System Trader Success.

The Internal Bar Strength Indicator


The internal bar strength (IBS) is an oscillating indicator which measures the relative position of the close price with respect to the low-to-high range for the same period. The calculation for Internal Bar Strength is as follows:

IBS = (Close – Low) / (High – Low) * 100

For example, on 13/01/2016 the QQQ ETF had a high price of $106.23, a low price of $101.74 and a close price of $101.90. The value of IBS would be calculated as (101.90 – 101.74) / (106.23 – 101.74) * 100 = 3.56.

Low IBS readings show that a market has closed near the lows of the day; high IBS readings show that a market has closed near the highs of the day. The following image shows the IBS indicator plotted beneath the price series.

IBS-1-630x315
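Here is that calculation as a small R function, reproducing the QQQ example above:

# Internal Bar Strength: where the close sits within the bar's low-to-high range
ibs <- function(close, low, high) (close - low) / (high - low) * 100

ibs(close = 101.90, low = 101.74, high = 106.23) # QQQ on 13/01/2016: about 3.56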

Testing the Internal Bar Strength Indicator

The test period is between 01/01/2005 and today, with a $100,000 hypothetical starting balance and 100% of available equity invested per position. Commissions are $0.01 per share and there is a minimum cost per trade of $1.00. Tests are carried out using Amibroker and the data is provided by Norgate Premium. These are the strategy rules:

  • If IBS < 10
  • Buy the close
  • Exit at close if price > previous day high.

QQQ Results

  • No. of trades = 199
  • % of Winners = 72.36%
  • Average P/L% per trade = 0.62%
  • Average hold time = 4 days
  • Annualized return = 11.32%
  • Maximum draw-down = -16.14%
  • CAR/MDD = 0.70
  • Exposure = 28.47%

The monthly breakdown of returns is as follows…

Internal-Bar-Strength-QQQ-Performance-1

One common trait among many of these types of mean-reversion systems is that they seem to perform best during volatile markets. To illustrate the point, I added a filter to the above strategy which only allows us to buy QQQ if the VIX has closed above the value of its previous 10-day MA. Adding the VIX filter improved the win-rate %, average profit % per trade, and annual % return of the strategy. Here are the results…
  • No. of trades = 158
  • % of Winners = 75.32%
  • Average P/L% per trade = 0.85%
  • Average hold time = 4 days
  • Annualized return = 12.46%
  • Maximum draw-down = -14.83%
  • CAR/MDD = 0.84
  • Exposure = 22.3%

The Equity curve and monthly breakdown of returns are as follows…

Internal-Bar-Strength-and-VIX

Combining the Internal Bar Strength Strategy with a Trend-Following Strategy

Finally, I wanted to see whether combining a trend-following strategy with the Internal Bar Strength strategy could improve the risk-adjusted returns of either stand-alone strategy. The trend-following strategy is as follows…

  • Buy the SPY when the Close > upper Bollinger Band (C,200,1);
  • Sell the SPY when the Close < lower Bollinger Band (C,200,1);

Since 2005, this strategy has the following performance metrics:

  • No. of trades = 5
  • % of Winners = 80%
  • Average P/L% per trade = 19.17%
  • Average hold time = 425 days
  • Annualized return = 7.37%
  • Maximum draw-down = -14.17%
  • CAR/MDD = 0.52
  • Exposure = 76.25%

The Equity Curve and monthly breakdown of returns are as follows…

SPY-trend-Following

For the final test, QQQ is traded using the Internal Bar Strength (IBS) strategy and SPY is traded using the above trend-following (TF) strategy. The first test will allocate 70% of available equity to the SPY strategy and 30% of available equity to the QQQ strategy. Further tests will use different capital allocations. Because the SPY can sometimes be held for a long period, it is necessary to periodically rebalance the size of open SPY positions. For the following tests the rebalance frequency will be monthly and the rebalance threshold will be 2%.
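As a rough R sketch of that monthly threshold-rebalancing check (the sleeve values below are invented; the actual tests handle rebalancing inside Amibroker):

# rebalance back to target weights only when drift exceeds the threshold
target <- c(TF = 0.70, IBS = 0.30)   # target allocations
values <- c(TF = 76000, IBS = 29000) # hypothetical month-end sleeve values
weights <- values / sum(values)      # current weights
threshold <- 0.02                    # 2% rebalance threshold

if (any(abs(weights - target) > threshold)) {
  trades <- target * sum(values) - values # dollar amounts to shift between sleeves
  print(round(trades))                    # here: TF -2500, IBS +2500
}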

70%TF / 30%IBS. Results

  • No. of trades = 163
  • % of Winners  = 75.46%
  • Average P/L% per trade = 1.39%
  • Average hold time = 18 days
  • Annualized return = 9.01%
  • Maximum draw-down = -11.45%
  • CAR/MDD = 0.86
  • Exposure = 60.23%

60%TF / 40%IBS. Results

  • Annualized return = 9.53%
  • Maximum draw-down = -11.09%
  • CAR/MDD = 0.86
  • Exposure = 54.76%

50%TF / 50%IBS. Results

  • Annualized return = 10.04%
  • Maximum draw-down = -10.98%
  • CAR/MDD = 0.91
  • Exposure = 49.35%

40%TF / 60%IBS. Results

  • Annualized return = 10.55%
  • Maximum draw-down = -10.93%
  • CAR/MDD = 0.97
  • Exposure = 43.95%

30%TF / 70%IBS. Results

  • Annualized return = 11.05%
  • Maximum draw-down = -10.87%
  • CAR/MDD = 1.02
  • Exposure = 38.51%

The above results illustrate that combining a trend-following strategy and a mean-reversion strategy within your portfolio has been a useful method for improving risk-adjusted returns. For context, the VIX-filtered IBS strategy traded alone produced a maximum draw-down of 14.83% and a CAR/MDD of 0.84, while the trend-following strategy produced a maximum draw-down of 14.17% and a CAR/MDD of 0.52.

The combined strategy portfolio with a 30% TF and 70% IBS allocation produced a maximum draw-down of 10.87% and a CAR/MDD of 1.02. This is superior to either strategy traded alone. The equity curve and monthly breakdown of returns for the 30% TF / 70% IBS portfolio are as follows…

Figure: Equity curve and monthly returns, 30% TF / 70% IBS portfolio

*Update: As some readers have correctly pointed out, trading on the close when an indicator requires the closing price to be calculated is not necessarily straightforward. With that said, I have included further tests below whereby the trades are executed on the open of the day which follows the signal. The IBS strategy does not perform as well, but the main point of the article (that combining mean-reversion with trend-following can be beneficial) is still valid.
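
Continuing the earlier EasyLanguage sketch, the next-open variant simply swaps the at-close orders for next-bar market orders, something like:

// Next-open execution variant: signals are computed on the close,
// orders fill on the following open (illustrative sketch)
If IBS < 10 And MarketPosition = 0 Then
	Buy ( "IBS Buy" ) next bar at market;
If MarketPosition = 1 And Close > High[1] Then
	Sell ( "IBS Exit" ) next bar at market;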

QQQ Results (Enter on open of day which follows signal)

  • No. of trades = 197
  • % of Winners  = 70.05%
  • Average P/L% per trade = 0.49%
  • Average hold time = 4 days
  • Annualized return = 8.69%
  • Maximum draw-down = -15.99%
  • CAR/MDD = 0.54
  • Exposure = 22.04%

QQQ Results with VIX Filter (Enter on open of day which follows signal)

  • No. of trades = 157
  • % of Winners  = 73.89%
  • Average P/L% per trade = 0.64%
  • Average hold time = 4 days
  • Annualized return = 9.15%
  • Maximum draw-down = -16.10%
  • CAR/MDD = 0.57
  • Exposure = 17.26%

SPY Trend-Following Results  (Enter on open of day which follows signal)

  • No. of trades = 5
  • % of Winners  = 80%
  • Average P/L% per trade = 18.87%
  • Average hold time = 425 days
  • Annualized return = 7.24%
  • Maximum draw-down = -14.15%
  • CAR/MDD = 0.51
  • Exposure = 76.06%

Combined Portfolio (70%IBS / 30%TF)  (Enter on open of day which follows signal)

  • No. of trades = 162
  • % of Winners  = 74.07%
  • Average P/L% per trade = 1.19%
  • Average hold time = 17 days
  • Annualized return = 9.90%
  • Maximum draw-down = -11.47%
  • CAR/MDD = 0.86
  • Exposure = 35.47%

The equity curve and monthly breakdown of returns for the 70% IBS / 30% TF portfolio are as follows…

Figure: Equity curve and monthly returns, 70% IBS / 30% TF portfolio (next-open execution)

— By Llewelyn of Backtest Wizard

The post The Internal Bar Strength Indicator appeared first on System Trader Success.


Market Seasonality Study


With November almost over, I thought it would be a good idea to review our Market Seasonality Study.

If you will recall, back on May 18 of this year we closed out our seasonality trade in anticipation of the traditionally weak period for the stock index markets: the months of May, June, July, August, September and October. So what did the market do during those months?

Below is an image of the graph showing where we exited back in May and where we re-entered this November.

It was up, up and away! Like anything in automated trading, not everything works out 100% of the time. In this case we did not lose money on a losing trade per se. Instead, we lost by not participating in the continued strong bullish move in the stock index market.

We are now entering a traditionally strong season for the stock index market. This is a well-known seasonality period. In this article I would like to take a closer look at the seasonality bias which so many investors and traders talk about.

Seasonality Bias

First, I would like to test the popular trading idea of buying the S&P in November and selling in May. I will test this on the cash market going back to 1960. The number of shares to buy will be adjusted based upon a fixed amount of risk; in this case, $5,000 is risked per trade based upon the volatility of the market. As this is a simple market study, no slippage or commissions were taken into account. The EasyLanguage code looks like this:

// MP holds MarketPosition; iShares holds the volatility-based share count
CurrentMonth = Month( Date );

If ( CurrentMonth = BuyMonth ) And ( MP = 0 ) Then Buy( "Buy Month" ) iShares contracts next bar at market;

If ( CurrentMonth = SellMonth ) Then Sell( "Sell Month" ) iShares contracts next bar at market;

Testing Environment

I decided to test the strategy on the S&P cash index going back to 1960. The following assumptions were made:
* Starting account size of $100,000.
* Dates tested are from 1960 through May 2017.
* The number of shares traded will be based on volatility estimation, risking no more than $5,000 per trade (a sizing sketch follows the list).
* Volatility is estimated with a two times 20-week ATR calculation. This is done to normalize the amount of risk per trade.
* The P&L is not accumulated to the starting equity.
* There are no deductions for commissions and slippage.
* No stops were used.
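
A minimal sketch of that sizing rule, assuming weekly bars for the 20-week ATR (the input and variable names are illustrative):

// Hypothetical sizing sketch: risk a fixed $5,000 per trade against a
// volatility estimate of 2 x the 20-week ATR (assumes weekly bars)
Inputs: RiskPerTrade( 5000 ), ATRLen( 20 );
Variables: Volatility( 0 ), iShares( 0 );

Volatility = 2 * AvgTrueRange( ATRLen );
If Volatility > 0 Then
	iShares = IntPortion( RiskPerTrade / Volatility );

The iShares value is then used as the order quantity in the strategy code above.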

From here we can plug the buy month (November) and sell month (May) into the input values. Doing this we generate the following equity graph:

Figure: Equity curve, buy in November and sell in May

It sure looks like these months have a long bias, as those are some nice results. What would the equity curve look like if we reverse the BuyMonth and SellMonth? That is, let's buy in May and sell in November. Below is the equity curve for this inverted system.

Figure: Equity curve, buy in May and sell in November

During the early years, 1960 to about 1970, the equity curve went lower and lower. Then it started climbing, reaching an equity peak in 1987. During this period, the strategy was producing positive results. That equity peak in 1987 should be familiar: that was the year we had the massive one-day market crash on October 19th, known as Black Monday. The Dow Jones Industrial Average dropped 22% in one day. Since that event the behavior of market participants has been altered. This is not unlike the radical market changes which occurred after the 2000 market peak, where much of the trending characteristics of the major markets were replaced by mean-reversion tendencies.

So far the basic seasonality study looks interesting. However, keep in mind that we do not have any stops in place. Nor do we have any entry filter that would prevent us from opening a trade during a bear market. If the market is falling strongly when our BuyMonth rolls around, we may not wish to purchase right away. Likewise, we have no exit filter to prevent us from exiting when the market may be on a strong rise during the month of May. It's conceivable that the market may be experiencing a strong bull run when our usual SellMonth of May rolls around.

Simple Moving Average Filter

In order to avoid buying and selling at the wrong times, I'm going to introduce a 30-period simple moving average (SMA) to act as a short-term trend filter. This filter will be used to prevent us from immediately buying into a falling market or selling into a rising market. For example, if our SellMonth of May rolls around and the market happens to be rising (trading above the 30-period SMA), we do not sell just yet. We wait until price closes below the short-term SMA. The same idea is applied to the buying side, but reversed: we will not go long until price closes above the short-term SMA. Within EasyLanguage we can create buy/sell confirmation flags called BuyFlag and SellFlag which will indicate when the proper go-long or sell conditions appear based upon our short-term trend filter.

// MinorTrendLen = 0 disables the SMA filter
If ( MinorTrendLen > 0 ) Then BuyFlag = Close > Average( Close, MinorTrendLen )
Else BuyFlag = True;

If ( MinorTrendLen > 0 ) Then SellFlag = Close < Average( Close, MinorTrendLen )
Else SellFlag = True;

The MinorTrendLen variable is actually an input value which holds the look-back period to be used in the SMA calculation. You will notice there is an additional check to see if the look-back period is zero. This is done so we can enable or disable the SMA filter: if you enter zero for the look-back period, the code will always set our BuyFlag and SellFlag to true, effectively disabling the short-term market filter. This is a handy way to enable and disable filters from the system inputs.

Below is the performance of both our baseline system and the system with our SMA filter:

Figure: Performance comparison, baseline vs. SMA filter

We increased the net profit, profit factor, average profit per trade, annual rate of return, and the expectancy score. The max intraday drawdown fell as well. Overall it looks like the SMA filter adds value.

MACD Filter

The MACD is a well-known indicator that may help with timing. I'm going to add a MACD calculation, using the default settings, and only open a new trade when the MACD line is above zero. Likewise, I'm only going to sell when the MACD line is below zero. Within EasyLanguage we can create a MACD filter by creating two Boolean flags called MACDBull and MACDBear which will indicate when the major market trend is in our favor.

If ( MACD_Filter ) Then Begin
	MyMACD = MACD( Close, FastLength, SlowLength );
	MACDAvg = XAverage( MyMACD, MACDLength );
	MACDDiff = MyMACD - MACDAvg;
	If ( MyMACD crosses over 0 ) Then Begin
		MACDBull = True;
		MACDBear = False;
	End
	Else If ( MyMACD crosses under 0 ) Then Begin
		MACDBull = False;
		MACDBear = True;
	End;
End
Else Begin
	// Filter disabled: allow both buys and sells
	MACDBull = True;
	MACDBear = True;
End;

Below are the results with the MACD filter:

Figure: Performance comparison, baseline vs. MACD filter

Utilizing the MACD filter and comparing it to our baseline system, we reduced ever so slightly many of the performance metrics. Overall, it does not appear to be better than our SMA filter.

RSI Filter

For our final filter I will try the RSI indicator with its default look-back period of 14. Again, like the MACD filter, I want price moving in our direction, so I want the RSI calculation to be above 50 when opening a position and below 50 when closing a position.

If ( RSI_Filter ) Then
Begin
   RSIBull = RSI(Close, 14 ) > 50;
   RSIBear = RSI(Close, 14 ) < 50;
End
Else Begin
   RSIBull = true;
   RSIBear = true;
End;

The RSI filter performed better than the MACD filter. Comparing it to the baseline, we see it's very similar, yet the profit factor and the Expectancy Score are lower. In the end, it does appear that applying an SMA filter or an RSI filter can improve the baseline results. Both filters are rather simple to implement and were tested for this article with their default values. You could, of course, take this much further.

Conclusion

It certainly appears there is a significant seasonal edge in the S&P market. The very trading rules we used above for the S&P cash market could be applied to the SPY and DIA ETF markets. I've tested those ETFs and they produce very similar results. The S&P futures market also produces similar results. It even appears to work well on some stocks. Keep in mind this market study did not utilize any stops.

How can this study be used? With a little work an automated trading system could be built from this study. Another use would be to apply this study as a filter for trading other systems. This seasonality filter could be applied to automated trading systems or even discretionary trading. It may not be much help for intraday trading, but it may be worth testing. In any case, just being aware of these major market cycles can be helpful in understanding what's going on with today's markets and where they may be going in the near future.

Hope you found this study helpful. This seasonality filter (with SMA) is what's used on our State of U.S. Markets webpage.

Downloads

Seasonal Strategy (TradeStation ELD)
Seasonality Strategy WorkSpace (TradeStation TWS)
Seasonal Strategy (Text File)

The post Market Seasonality Study appeared first on System Trader Success.

4 Ways to Increase my Discipline


In the following paragraphs I'd like to share with you the ways I use to increase my discipline, which is the cornerstone of every successful trader I know. Discipline is often the missing ingredient for many traders who try to conquer the financial markets. The routines presented below are not hard to do, and they are worth every minute I invest in them. So, let's begin with the first one:

Know your trading method well

I know from my own trading experience that the more knowledge I have about my automated Forex trading strategy, the less worried I am during real-time trading. In my opinion there is a strong correlation between my knowledge of the system, my confidence in it, and my discipline to execute its signals without any second thoughts.

If I know when to expect good trades to occur, how often they occur, and what their characteristics are, such as length and open profits left on the table, then I am prepared in my mind and I don't get caught by any surprises.

The same applies to the system's average annual return, MaxDD, average trade measured in points, percent of profitable months, and every characteristic I think is important for me to know and understand completely.

Trade with less risk per trade

Once I had a very valuable and educational experience trading one system with the same settings on two different accounts. On the first one I applied a conservative money management (MM) rule, allowing small account swings and a more consistent, less volatile equity curve. I traded the second account with a lot more risk per trade than the first one. I reiterate: the system and settings were the same except for the MM.

I noticed over time that I was consistently worried about the results on the second trading account. I couldn't handle the big equity swings well, watching the charts and seeing my profits or losses change significantly over time.

Then I understood that, aside from the trading system itself, the risk I am committing also has a big impact on my trading psychology, and I was far more prone to making mistakes executing my trading signals because I was emotionally burned out.

The same applies to the capital I am using for trading. If I am trading with money I cannot afford to lose, then almost instantaneously I get anxious and my thoughts start to drift toward losing, why I cannot afford to lose my money, and what I would do without it - a very similar situation to the high risk-per-trade example.


Having an exit trading plan

No matter how good my automated trading strategy is and no matter how conservative my MM rules are, I always have an exit plan in case I experience big equity losses. I have prepared in my mind a fixed percentage drop in my account which, if exceeded, means I will simply stop trading and evaluate the situation. It is important that this step is done before starting to trade a particular system live, because in the heat of the moment I know I am more likely to make emotional decisions, and eventually make mistakes.

This stopping point is determined based upon the system's backtesting performance, so that a typical equity drop can occur without my quitting the system too early. For example, if I know from past performance that the MaxDD is 20%, I would not stop trading the system after losing 10% of my account, because it is natural to have bigger drops. Every time, I would choose an exit point which is bigger than the MaxDD from the past.

Knowing that I have a limit on my eventual losses makes me much more confident about the future, and that confidence translates into better discipline in applying my trading plan rigorously.

Summary

I have shown you my personal ways to increase my discipline and emotional health during real-time trading with my mechanical systems. In summary, I have learned through experience that the more prepared I am, the less emotional and more disciplined I am. Understanding my trading method well, risking only a little of my account per trade, and having an exit plan are my ways of coping with the task called trading. I hope that I have contributed to your knowledge with this article, and I wish you profitable trading.

-- By Professional Trading Systems.

The post 4 Ways to Increase my Discipline appeared first on System Trader Success.

Creatures of Habit


"Creatures", in this case =  human or algorithm executing entries and exits.

I was searching for a generic photo showing the different days of the week in different languages when I came across this graph, which shows historical day-of-week averages of patients waiting for CT scans... which you may think has absolutely NOTHING to do with trading or investing strategies... right?!

WRONG!
 
Look at the hours of the day... hmm.


I'm not going to dive deeply into human psychology and behavior, but if you spend some time studying Behavioral Finance, you'll find many real-world justifications for why certain days have certain patterns. Even taking the algorithms out of the equation, the human effect upon the market is quite easily quantifiable, and for the average investor/trader/developer, looking at day-of-week patterns could be a useful way to help improve your outcome without diving deep into BF.

Some simple studies/indicators can be developed to search for each day's (M, T, W, Th, F, S, Su) averages, such as (a sketch follows the list):
  •  Volume  (Volume Total, Up Volume, Down Volume, Volume Ratio)
  •  Volatility & ATR (Total, Upward movement, Downward movement)
  •  Momentum (Total, Upward movement, and Downward movement & applied to Volumes)
  •  Close > open and Open > close  (Up and Down bar ratios)
  •  > XValue   (XValue can be a Moving Average or historical S&R, etc. Ex: Close > Lowest(high,20)[20])
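
As a starting point, here is a minimal sketch of one such study, accumulating the average close-to-close percent change and volume for each weekday (the array names and print format are my own):

// Hypothetical day-of-week study: average % change and volume per weekday
Variables: DoW( 0 );
Arrays: SumChg[5]( 0 ), SumVol[5]( 0 ), Cnt[5]( 0 );

DoW = DayOfWeek( Date ); // 1 = Monday ... 5 = Friday
If DoW >= 1 And DoW <= 5 And Close[1] > 0 Then Begin
	SumChg[DoW] = SumChg[DoW] + ( Close - Close[1] ) / Close[1] * 100;
	SumVol[DoW] = SumVol[DoW] + Volume;
	Cnt[DoW] = Cnt[DoW] + 1;
End;

If LastBarOnChart Then
	For DoW = 1 To 5 Begin
		Print( DoW, " avg % change: ", SumChg[DoW] / MaxList( Cnt[DoW], 1 ),
			" avg volume: ", SumVol[DoW] / MaxList( Cnt[DoW], 1 ) );
	End;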
Then you add some simple contexts like current conditions, trends, directional biases, expanding/contracting volatility or momentum, change of acceleration or deceleration, strength and length of cycle, etc. These very simple methods can be used to "set the stage."

But before we go into specifics, let's talk a little about WHY... WHY do all the days of the week have different averages, patterns, etc.? Well, that's because we as humans have patterns: patterns in our work methods and practices, personality, behavior, actions, and attitude. And when you're dealing with a large industry, these patterns can be measured, and they have an impact on price.


Ex: The majority of humans despise getting up early on Monday to go to work right away after coming home from a long weekend at camp... or the beach... or the ball game. So Mondays start slow for you, until you get back into the "groove" of your work week... which could be after the second or third cup of Joe, and which would lower the number of active discretionary traders. However, many traders, firms, or systems close out positions on Friday to control risk, so they may be jumping back in on Monday... and the humans in the pits on Wall Street or the CME are no different.

When you add the algorithms into the equation, they simply multiply the effect, because that's what they do: they increase volatility, liquidity, etc. So any human behavior pattern that has a small effect on the market is now multiplied. Also, algorithms are developed by humans, so in many instances they reflect the desires of a human. Ex: Human trader Brad is a swing trader, but he closes out all positions on Friday so he doesn't hold any risk over the weekend. Brad also developed a system that trades a different strategy but deploys the same end-of-week exit logic for risk mitigation. So you have End of Day exits (known as the 2 O'clock FU) and End of Week exits. Then you have algorithms that multiply and increase price direction strength/momentum/volatility/etc. So you can get massive price movements at certain times, and on certain days.

I could go on for years detailing every reason why certain times of every day, or certain times of certain days or weeks, or certain minutes of certain hours have these patterns: economic and news reports, earnings releases, session opens/closes, the lunchtime siesta, the "effects of next-bar executions," start of week, end of week, start of day, end of day, start of bar, end of bar, etc.
Basically, you can create a simple strategy that does not require intense quantitative analysis... just some simple run-of-the-mill technical analysis will work fine.

Here are daily equity curves from a long-only strategy that uses four generic lines of code and allows three entries (1 contract per entry).

// DayOfWk is an input holding the target weekday; mp = MarketPosition;
// closeW is assumed to hold the weekly close. The original text omitted the
// comparison in the first line; "close < open" is the natural reading for a
// buy-the-dip long entry.
If DayOfWeek( Date ) = DayOfWk and close < open then Buy next bar at close limit;
If DayOfWeek( Date ) = DayOfWk and close > open then sell next bar at close limit;
If mp = 1 and close > open and openpositionprofit > 0 then sell next bar at close limit;
If mp = 1 and close > open and closeW > closeW[1] then sell next bar at close limit;


Figure: Daily equity curves by entry day (Monday, Tuesday, Wednesday, Thursday, Friday)

So as you can see, if you are using a "value" long-only investing strategy, you could improve performance simply by being a little more selective about which day(s) you allow entries to execute at value.

Now, I realize these equity curves are not super great, but for such a simple statement to offer decent performance, imagine if you dug a little deeper and traveled a little further into the abyss. Or even if you just wanted to add a few statements to help control risk... no problem. The point is, you can use simple methods in a creative way; you can implement the foundations of complex reasoning within a simple structure. You don't always need 1,000 moving parts.

You can implement behavioral finance into your concept with something as simple as analyzing time-based patterns. Remember, most systems are end-of-bar systems (i.e., they execute using "NEXT BAR AT MARKET," "NEXT BAR AT CLOSE LIMIT," "NEXT BAR AT BID or ASK," etc.). Also, the majority of larger firms use daily bars and only trade U.S. sessions, so orders are executed at the market open. Or, suppose they use a smaller bar size, like a 15-min bar: if you had to choose the most important part of the 15-min bar, what would it be? The first 30-60 seconds, because the majority of systematic traders use end-of-bar execution, so their volumes are higher at the beginning of the bar, while discretionary traders' volume is higher afterwards. What's really interesting is breaking down which segment is historically more profitable, by analyzing and comparing future price direction to the volume ratio of each segment and trade type (discretionary and systematic) within the bar, but that discussion is for another post.

For now, let's keep this in the framework of "investing." If you know the market has an upward bias and the uptrend is still strong, and the 2 O'clock FU combines forces with the Friday End of Week exit or the Monday End of Day exit (because many economic reports are released on Tuesday, plus the sluggishness of Blue Monday) to generate a massive downward price movement, you could place your long entry just before the market closes at an extreme/new price low... i.e., value. The graphs above are applied to ES E-mini S&P 500 futures. However, this concept can also be applied to stocks, ETFs, bonds, etc. Some markets will vary, and some may have specific patterns that no other market has, but the concept in general should be applicable.

This is just a single, simple, and common example. When you break down time, price, days, weeks, volatility, momentum, and volumes within the context of human and systematic behavior, there are REAL supporting reasons why these patterns exist and why they are likely to exist in the future, which also makes it easier to identify why and when they will not work in the future.

If you really want to dive deep, you have to break it down into minute-by-minute, day-by-day, etc. analysis... the data is extremely useful, but the work is also very time consuming.

I realize this is all common knowledge to experienced quants/traders, but for the average investor/trader/developer this is something that could be easily implemented into their strategy, even a buy-and-hold, to improve their outcome, which is the objective of this post. In the next post, I'll dive deeper into the quantitative analysis of this concept/analysis method/data. At any rate, I hope some of this is useful, and good luck on your never-ending journey of discovery!


Brian

*Disclaimer: Historical performance, whether live or simulated, may not be indicative of future results. Trading stocks, options, FX, or futures involves significant risks not suitable for all investors. Investors should only invest funds they can afford to lose without impacting their lifestyle. Price activity is not always predictable or repetitive, thus any strategy developed using historical data will be equally unpredictable. We do not guarantee our strategies or analysis methods will perform profitably and/or replicate the results shown above for any length of time. Before leasing/using our products, please consult your investment professionals to discuss whether our products are right for your risk tolerances and objectives.

--By Brian Miller from blog Optimizedtrading

The post Creatures of Habit appeared first on System Trader Success.

Broken Strategy or Market Change: Investigating Underperformance


I recently had someone email me about the performance of a strategy I created back in late 2005/early 2006 and traded for a few years. I remember the strategy being a daily mean-reversion setup with an intraday pullback entry. I figured it probably had not done well over the last decade. I stopped trading it in the middle of 2008 because I did not like how it was behaving: in the backtest it did well in bear markets, but it was not doing so in the middle of 2008.

I ran the strategy from 2007, using the rules as they were published, and was pleasantly surprised by the results: a CAR of 25%. Overall, not too bad. I wish I had still been trading it. This is an 11-year out-of-sample test.

Then I remembered, didn’t this have insane results from 2000 to 2005? Here are those results.

Did you notice that 595% return in 2003? Now I am wondering what the heck happened since 2007. Look at the drop in yearly returns. Was the original strategy overfit and flawed? Did the markets change? If so, what is the difference? What follows is my investigation into determining whether the strategy is broken or the markets have changed.

Rules

I cannot share the rules because of the NDA I signed when it was created. What I can say is that it is a very typical mean reversion strategy that trades stocks.

I will include 2006 in the In-sample period because the return for the year was in line with previous years.

Curve Fit?

Was the initial strategy curve fit? I do not have the original code, so I made a best guess of the parameters I would have optimized and their values. The strategy's CAR was 1.3 standard deviations from the average of all the variations. Even the worst variation still had a very good CAR from 2000 to 2005 and saw a similar drop in performance afterwards. This variation may have been a little overfit, but that is not the entire story.

Number of Trades

Is there a decline in the number of trades? Average number of trades per year:

  • 2000 to 2006: 171
  • 2007 to 2016: 88

Wow, a drop of 50%! That starts to explain a lot. But why fewer trades?

Trading Universe

Are fewer stocks passing our liquidity filter? This is the average per year:

  • 2000 to 2006: 978,239
  • 2007 to 2016: 861,763

A small decline but it does not explain the large drop.

Number of Setups

Since the strategy depends on an intraday sell-off, maybe we are seeing fewer setups. This is the average per year:

  • 2000 to 2006: 5307
  • 2007 to 2016: 2537

That explains the lower number of trades per year: we have fewer setups. This means we are seeing fewer stocks sell off and set up for a mean-reversion trade. This explains a lot, but let us see if it is the entire story.

Avg % P/L per Trade

Is the quality of the trades dropping too? That is, what is the average % profit/loss per trade?

  • 2000 to 2006: 6.25%
  • 2007 to 2016: 2.73%

Another drop of 50%. A double whammy. These last two stats explain the drop in performance. But I wanted to know more.

Volatility

Are the trades less volatile, and is that why they are not as big? These are the average 100-day historical volatilities of the trades (a sketch of the calculation follows):

  • 2000 to 2006: 116%
  • 2007 to 2016: 106%

That is not what is leading to the decrease. So what is causing the decrease in average % P/L? I don't know what else to look at.
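
For reference, here is a minimal EasyLanguage sketch of the kind of 100-day historical volatility figure quoted above, annualized from daily log returns (my own illustration, not the author's code):

// Hypothetical 100-day historical volatility, annualized, as a percent
Variables: HV100( 0 );

If Close[1] > 0 Then
	HV100 = StandardDev( Log( Close / Close[1] ), 100, 1 ) * SquareRoot( 252 ) * 100;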

Final Thoughts

Is this strategy broken? I don't think so. What has changed is the market, which has greatly reduced the returns: fewer setups and smaller gains on the trades caused the reduction. As to why, my theory is that it's because of the popularity of mean-reversion strategies and quantified trading. Do you have any ideas on what is going on, or other tests you want to see? Post them in the comments below.

Remember, we had eleven years of out-of-sample data to work with to determine whether the strategy was originally overfit, had broken, or the markets had changed. Even with all this data it was not easy to figure out what was going on. The next time your strategy starts to perform poorly after 3 to 12 months, remember how hard it can be to tell. I dropped the strategy after 6 months; it was not broken, the markets had changed.

Good quant trading,

-- by Cesar Alvarez from the blog Alvarezquanttrading

The post Broken Strategy or Market Change: Investigating Underperformance appeared first on System Trader Success.

Is The Christmas Season Bullish For U.S. Markets?


With Christmas just a few days away, I thought it would be interesting to see how the S&P behaves in the days just before Christmas. Do the days just before this holiday tend to be bullish, bearish, or neutral?

To test the market behavior just before the Christmas holiday, I will use the S&P cash index going back to 1960. I will create an EasyLanguage strategy that will enter a trade X days before Christmas and close that trade on the opening of the first trading day after Christmas. Each trade will dedicate $100,000 to purchasing shares. Stops, commissions, and slippage are not utilized in this study.
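
A simplified sketch of that entry logic, using calendar days rather than the trading-day counts the optimization below steps through (the input and variable names are my own):

// Simplified sketch: buy DaysBefore calendar days ahead of Christmas,
// exit once the date is past Christmas (the order fills on the next open)
Inputs: DaysBefore( 5 );
Variables: DaysToXmas( 0 );

DaysToXmas = DateToJulian( ELDate( 12, 25, Year( Date ) ) ) - DateToJulian( Date );

If DaysToXmas > 0 And DaysToXmas <= DaysBefore And MarketPosition = 0 Then
	Buy ( "Pre-Xmas" ) next bar at market;
If DaysToXmas < 0 And MarketPosition = 1 Then
	Sell ( "Post-Xmas" ) next bar at market;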

Ten Days Before Christmas

First, let's look at the ten days before Christmas. What happens if we enter a trade X days before Christmas and close that trade on the open after Christmas? By using TradeStation's optimize feature I can systematically test each value of X over the historical data. The result of each test is the generated P&L for that iteration, depicted in the bar graph below. Looking at the graph, each bar on the x-axis represents the number of days before Christmas.

It appears that all ten days before Christmas show positive P&L. In general, the longer your holding period before Christmas, the better.

Ten Days After Christmas

Using a similar trading system, I will look at entering a trade on the open of the trading day following Christmas and holding that trade for X days. Below is a bar graph showing days 1-10 after Christmas. Again, each bar represents P&L and the x-axis is the number of days the trade is held.

Historically, all of the post-Christmas holding periods in our study have returned positive results. Unlike the 10 days before Christmas, in this case it appears there is not much gain in holding beyond five days.

The Christmas Trade

Based on the information above, which seems to show a strong bullish bias for the days immediately before and after Christmas, I'm going to create another strategy that will open a trade five days before Christmas and close that trade five days after Christmas. I picked five days simply because it was the middle value (1-10) of the days before and after Christmas we tested. Last year's Christmas Trade (December 2016) was a scratch trade. It's pictured below.

Christmas Trade 2016

Christmas Trade 2015


When you combine all the trades going back to 1960, we get the following equity curve and performance.

Conclusion

There certainly does seem to be a very strong bullish tendency around Christmas. Can you take advantage of this in your trading? Perhaps. Remember, the code provided below this article is not a complete trading system, but an indicator to help gauge the market behavior around the Christmas holiday. If you have trading systems or trade a discretionary method around the days before and after Christmas, you might use this knowledge to ignore short signals or modify your exits.

Download

Below is the free EasyLanguage code used to generate the Christmas Trade as both a TradeStation ELD file and text file. You will also find a copy of the TradeStation WorkSpace and the performance report as an Excel document.

Christmas Trade (TradeStation ELD)
Christmas Trade WorkSpace (TradeStation TWS)
Christmas Trade Strategy Code (Text File)

The post Is The Christmas Season Bullish For U.S. Markets? appeared first on System Trader Success.
