The nth Order Polynomial Acceleration Strategy, Part 2

This is the second part of a two-part article on testing a polynomial acceleration strategy. You can read part one here.

Finding the Strategy Input Parameters in the Walk-Forward In-Sample/Test Sections

The PWFO generates a number of performance metrics in the in-sample/test section. The question we are attempting to answer statistically is which performance metric, or combination of performance metrics (which we will call a filter), in the in-sample/test section will produce strategy inputs that generate statistically valid profits in the out-of-sample section. In other words, we wish to find a performance metric filter that we can apply to the in-sample/test section to give us strategy inputs that will produce, on average, good trading results in the future. The PWFO produces a total of 32 different performance metrics in the in-sample/test section. If we have 2592 different strategy input combinations, or cases, then the in-sample/test section consists of 32 columns of performance metrics for each of the 2592 input cases or rows.
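As an illustration only, one PWFO in-sample/test section can be thought of as a table with one row per strategy-input case and one column per performance metric. The minimal sketch below loads such a section into a pandas DataFrame; the file name and column labels are placeholders, not the actual PWFO output format.

```python
import pandas as pd

# Hypothetical example: load one PWFO in-sample/test section.
# The file name and column labels are placeholders, not the real PWFO layout.
insample = pd.read_csv("pwfo_insample_oos_end_2012-06-22.csv")

# One row per strategy-input case (2592 in this study); the columns hold the
# strategy inputs, the 32 in-sample/test performance metrics, and the
# out-of-sample net profit (OSNP) realized by that input case.
print(insample.shape)             # expected: (2592, number_of_columns)
print(insample.columns.tolist())  # e.g. ["degree", "N", "aup", "adn", "NT", "LR", "p/l", ...]
```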

Appendix A (PDF download) shows an example of a truncated PWFO file for the out-of-sample end date of 6/22/12.

An example of a simple filter would be to choose the row in the in-sample/test section that had the highest Net Profit, or perhaps the row that had the best Profit Factor. Unfortunately, it was found that this type of simple filter very rarely produces good out-of-sample results. More complicated metric filters can produce good out-of-sample results while minimizing spurious-price-movement biases in the selection of strategy inputs.

Here is an example of a better, more complicated filter that was used in this paper. We require that the number of trades (NT) in the in-sample/test section be at least 10 trades a month. We require this so that we can eliminate strategy inputs that produce very few trades. One calendar month is approximately 21 trading days, so we are requiring at least a trade every other day on average. Not many traders can stay with a strategy that has a large number of losers in a row (LR). For this filter we will choose LR<=3. This choice of LR is completely arbitrary and is what I feel comfortable with. After using an NT and LR filter, as described, there can still be hundreds of rows left in the PWFO file in-sample/test section. The PWFO generates the metric p/l. This metric calculates the ratio of the median winning trade profit divided by the absolute value of the median of the losing trades. We use the median for this metric, rather than the average trade profit, because we do not want this metric distorted by outlier trades, that is, trades that generate very large profits or losses once or twice. Thus, we would want the p/l to be as large as possible. Let us choose the 20 rows that contain the largest (top) p/l values from the rows that are left from the NT-LR screen. In other words, we sort p/l from high to low, eliminate the rows that have NT<10 or LR>3, then choose the top 20 rows of whatever is left. This particular filter will now leave 20 cases or rows in the PWFO file that satisfy the above filter conditions. We call this filter t20p/l|lr3|nt10, where t20p/l means the top 20 p/l rows left after the NT-LR filter.
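A minimal sketch of the t20p/l|lr3|nt10 screen in Python follows. It assumes the in-sample/test section has been loaded into a DataFrame with columns named "NT", "LR" and "p/l"; those names are stand-ins for the actual PWFO metric labels.

```python
import pandas as pd

def t20pl_lr3_nt10(insample: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the t20p/l|lr3|nt10 filter (column names are placeholders)."""
    # Keep only input cases with at least 10 trades and no more than 3 losers in a row.
    screened = insample[(insample["NT"] >= 10) & (insample["LR"] <= 3)]
    # Of whatever is left, keep the 20 rows with the largest p/l ratio.
    return screened.nlargest(20, "p/l")
```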

Suppose for this filter, within the 20 PWFO rows that are left, we want the row that has the maximum median mp-rd metric in the in-sample/test section. The PWFO metric mp-rd is the final trade profit minus the maximum rundown of the trade. That is, for each trade we measure where the trade closed compared to its worst loss from the start of the trade. Each set of strategy inputs in the in-sample/test section has a number of winning trades and losing trades, and each winning trade and losing trade has an mp-rd value. Each row in the PWFO file represents a set of strategy inputs that has an associated median mp-rd value. This would produce a filter named t20p/l|lr3|nt10-mp-rd. For each in-sample/test section, this filter leaves only one row in the PWFO in-sample/test section with its associated strategy inputs and out-of-sample net profit. This particular t20p/l|lr3|nt10-mp-rd filter is then applied to each of the 105 PWFO in-sample/test sections, which gives 105 sets of strategy inputs that are used to produce the corresponding 105 out-of-sample performance results. The average out-of-sample performance is calculated from these 105 out-of-sample performance results. In addition, many other important out-of-sample performance statistics for this filter are calculated and summarized. Figure 3 (PDF download) shows such a computer run along with a small sample of other filter combinations that are constructed in a similar manner. Row 3 of the sample output in Figure 3 shows the results of the filter discussed above.
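Continuing the sketch above, the complete t20p/l|lr3|nt10-mp-rd selection and its walk-forward application might look like the following. The column names ("NT", "LR", "p/l", "mp-rd", "OSNP") and the list of PWFO sections are assumptions used only for illustration, not the author's actual code.

```python
import pandas as pd

def select_case(insample: pd.DataFrame):
    """Sketch of the t20p/l|lr3|nt10-mp-rd selection for one in-sample/test section."""
    # NT/LR screen, then keep the top 20 p/l rows (the t20p/l|lr3|nt10 filter).
    top20 = (insample[(insample["NT"] >= 10) & (insample["LR"] <= 3)]
             .nlargest(20, "p/l"))
    if top20.empty:
        return None  # no input case passes the screen -> no trades that week
    # Of the 20 survivors, take the row with the largest median mp-rd.
    return top20.loc[top20["mp-rd"].idxmax()]

def walk_forward(sections: list) -> pd.DataFrame:
    """Apply the selection to every PWFO in-sample/test section (105 in this study)."""
    picks = [select_case(s) for s in sections]
    return pd.DataFrame([p for p in picks if p is not None])

# Hypothetical usage:
# results = walk_forward(pwfo_sections)
# total_oos_net_profit = results["OSNP"].sum()
```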

Bootstrap Probability of Filter Results

Using modern bootstrap techniques [1], we can calculate the probability of obtaining each filter's total out-of-sample net profit by chance. By net we mean subtracting the cost and slippage of all round-trip trades from the total out-of-sample profits. Here is how the bootstrap technique is applied. Suppose, as an example, we have 100 PWFO files of test/out-of-sample data and we calculate the total out-of-sample net profit (tOnpNet) for 5000 different TopN-Metric-LR-NT filters. A mirror filter is created for each of the 5000 filters. However, instead of picking an out-of-sample net profit (OSNP) from a filter row, the mirror filter picks a random row's OSNP in each of the 100 PWFO files. Each of the 5000 mirror filters chooses its own random row's OSNP in each of the 100 PWFO files. At the end, each of the 5000 mirror filters will have 100 random OSNPs picked from the rows of the 100 PWFO files. The sum of the 100 random OSNP picks for each mirror filter generates a random total out-of-sample net profit (tOnpNet). The average and standard deviation of the 5000 mirror filters' random tOnpNets allow us to calculate the chance probability of each TopN-Metric-LR-NT filter's tOnpNet. Thus, given the mirror filters' bootstrap random tOnpNet average and standard deviation, we can calculate the probability of obtaining the TopN-Metric-LR-NT filter's tOnpNet by pure chance alone. Since for this run there are 15376 different filters, we can calculate the expected number of cases that would match or exceed the tOnpNet of the filter we have chosen by pure chance as (15376) × (tOnpNet probability). For our filter, in row 3 of Figure 3, the expected number of cases that would match or exceed the $72,206 by pure chance is 15376 × 1.22×10⁻⁶ = 0.0187. This is much less than one, so it is improbable that our result was due to pure chance.
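A rough sketch of this mirror-filter bootstrap test is shown below. It assumes the out-of-sample net profits have been arranged in a 2-D array with one row per PWFO file and one column per strategy-input case, and it uses a normal approximation built from the mirror filters' mean and standard deviation; the names, shapes and function are illustrative, not the author's actual code.

```python
import numpy as np
from scipy.stats import norm

def chance_probability(osnp: np.ndarray, filter_tonpnet: float,
                       n_mirrors: int = 5000, seed: int = 0) -> float:
    """Probability of matching or beating filter_tonpnet with random row picks."""
    rng = np.random.default_rng(seed)
    n_files, n_cases = osnp.shape
    # Each mirror filter picks one random row's OSNP from every PWFO file...
    picks = rng.integers(0, n_cases, size=(n_mirrors, n_files))
    # ...and sums its picks into a random total out-of-sample net profit (tOnpNet).
    random_tonpnet = osnp[np.arange(n_files), picks].sum(axis=1)
    # Normal approximation from the mirror filters' average and standard deviation.
    z = (filter_tonpnet - random_tonpnet.mean()) / random_tonpnet.std()
    return norm.sf(z)

# Hypothetical usage: expected number of the 15376 filters that would match or
# exceed the chosen filter's tOnpNet by pure chance alone.
# expected = 15376 * chance_probability(osnp, 72206.0)
```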

The partial run shown in Figure 3 reveals that the following filter in row 3 will produce the most consistent and reliable out-of-sample results.

Filter: #Trds>=10 and LR<=3 and Top 20 p/l, then maximum mp-rd

Where:

NT = The number of trades for a given set of strategy inputs in the in-sample/test section.
LR = Maximum losing trades in a row for a given set of strategy inputs in the in-sample/test section.
p/l = The ratio of the Median Winning Trade Profit divided by the absolute value of the Median of the Losing Trades in the in-sample section.
mp-rd = The median of the final trade profit minus the maximum rundown of the trade in the in-sample/test section.

The first part of the filter chooses those rows, or cases, out of the 2592 rows in each PWFO file test (in-sample) section that satisfy the criteria LR<=3 and NT>=10. After using an NT and LR filter, there can still be hundreds of rows left in the PWFO file. The PWFO generates the metric p/l, which is the ratio of the median winning trade profit divided by the absolute value of the median of the losing trades for any given set of strategy inputs in the in-sample/test section. From the rows that are left after the NT-LR screen, we choose the 20 rows that contain the top 20 p/l values. This particular filter will now leave 20 cases or rows in the PWFO file that satisfy these filter conditions. Within the 20 PWFO rows that are left, we want the row that has the maximum PWFO metric mp-rd in the in-sample/test section. We call this complete filter t20p/l|lr3|nt10-mp-rd, where t20p/l means the top 20 p/l rows left after the NT-LR filter. This filter, or selection procedure, leaves only one choice for the system input values of degree, N, aup and adn. We then use the input values found in the in-sample/test section by the filter on the next week of one-minute EC out-of-sample price bars following the in-sample/test section.

Results

All results, tables and figures are available as a supplemental PDF download.

Table 1 (PDF download) presents the 105 test and out-of-sample windows, the strategy inputs selected by the filter, and the weekly out-of-sample profit/loss results using the filter described above.

Figure 1 presents a graph of the equity curve generated by using the filter on the 105 weeks of 6/25/10 – 6/25/12. The equity curve is plotted from the Equity column in Table 1. Plotted on the equity curve is the least-squares straight line.

Figure 2 presents the out-of-sample 1 minute bar chart of EC for 6/4/12 to 6/5/12 with the Nth Order Polynomial Acceleration System Indicator and all the buy and sell signals for those dates.

Figure 3 presents partial output of the Walk Forward Metric Performance Explorer.

Discussion of System Performance

Row 3 of the spreadsheet filter output in Figure 3 contains some statistics that are of interest for our filter. An interesting statistic is Blw, the maximum number of weeks the OSNP equity curve failed to make a new high. Blw is 17 weeks for this filter, meaning that 17 weeks was the longest stretch in which the equity for this strategy failed to make a new equity high.

To see the effect of walk forward analysis, take a look at Table 1 (PDF download). Notice how the input parameters degree, N, aup and adn take sudden jumps from high to low and back. This is the walk forward process quickly adapting to changing volatility conditions in the test sample. In addition, notice how often degree changes from 2 to 4 and aup changes from 0.4 to 2.4. When the data gets very noisy, with a lot of spurious price movements, the degree has to be lower. During other times, when the noise level is not as high, degree can be higher to get on board a trend faster.

In Figure 1, which presents a graph of the equity curve using the filter on the 105 weeks of out-of-sample data, notice how the equity curve follows the trend line with an R² of 0.94.
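For reference, the R² of an equity curve against its least-squares trend line can be computed as in the short sketch below; the equity array is a placeholder standing in for the Equity column of Table 1.

```python
import numpy as np

def trend_r_squared(equity: np.ndarray) -> float:
    """R-squared of a weekly equity curve against its least-squares straight line."""
    weeks = np.arange(len(equity))
    slope, intercept = np.polyfit(weeks, equity, 1)  # least-squares fit
    fitted = slope * weeks + intercept
    ss_res = np.sum((equity - fitted) ** 2)          # residual sum of squares
    ss_tot = np.sum((equity - equity.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```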

Using this filter, the strategy was able to generate $72,206 in net equity trading one EC contract for 105 weeks. The strategy did not trade in 25 of the 105 weeks; in those weeks either the NT, LR or tnp>0 criterion was not met by any set of inputs in the in-sample section. The period of time from 6/25/10 to 6/22/12 was a very volatile market, yet the Nth Order Polynomial Acceleration strategy was able to adapt quite well. From Table 1, the largest losing week was -$4,813 in the week ending 10/14/11, a very wild week in the financial markets. The largest drawdown was -$6,501, from the week ending 6/17/11 to 10/14/11, during that same wild financial time. However, the strategy did not trade for 13 weeks of this stretch, from 7/15/11 to 10/7/11, and then completely recovered and made a new equity high within two weeks, on 10/21/11. The longest time between new equity highs was 17 weeks.

In observing Table 1 we can see that this strategy and filter made anywhere from no trades per week to a high of 76 trades per week, with an average of 16.6 trades per week in the weeks the strategy traded. The 76 trades/week occurred in the week ending 6/17/11, when degree was 4, N was 60 and aup was an unusually low 0.4. This was a big up week that followed a big down week and caused an unusual number of whipsaw trades on the buy side, although the week ended with a profit of $1,515.

Given the 23-hour trading of the EC, the strategy did not miss any profitable trend opportunities when Asia and then Europe opened trading in the early morning.

– Dennis Meyers

Dennis Meyers has a doctorate in applied mathematics in engineering. He is a former member of the Chicago Board Options Exchange (CBOE), a private trader, and president of Meyers Analytics. His firm specializes in consulting on financial software for traders. He can be reached at his website, MeyersAnalytics.com.

References

  1. Efron, B., Tibshirani, R.J., (1993), “An Introduction to the Bootstrap”, New York, Chapman & Hall/CRC.
