Training method; affects how parameters, factors,
rules, or machine learning models are generated in the training process.
The following flags can be set:
||Ascent parameter optimization. Evaluates the effect
of each parameter on the strategy separately. Walks through all
parameter ranges and seeks 'plateaus' in the parameter space, while
ignoring single peaks. This is normally the best algorithm for a robust strategy,
except in special cases with highly irregular parameter spaces or interdependent parameters.
||Brute force parameter optimization (Zorro S required). Evaluates all
parameter combinations and selects the most profitable combination
that is not a single peak. Can take a long time when many parameters are
optimized or when parameter ranges have many steps. Useful when parameters affect each other in complex ways. Brute force optimization
tends to overfit the strategy, so out-of-sample
testing or walk-forward optimization is mandatory.
||Genetic parameter optimization (Zorro S required). A population of parameter combinations is evolved toward the best
solution in an iterative process. In each iteration, the best combinations
are stochastically selected, and their parameters are then pair-wise
recombined and randomly mutated to form a new generation. This algorithm is
useful when a large number of parameters per component must be optimized or
when parameters affect each other in complex ways. It will likely overfit the strategy, so out-of-sample or walk-forward testing is mandatory.
||Use trade sizes (Lots or Margin)
determined by the script. Large trades then get more weight in the
optimization. Set this flag in special cases when the trade volume matters, for instance for
optimizing the money management or for portfolio systems that
calculate their capital distribution by script. Otherwise trade sizes are ignored in the training process.
||Exclude phantom trades. Otherwise phantom trades are
treated as normal trades in the training process.
||Optimize toward the highest single peak in the
parameter space, rather than toward hills or plateaus. This can generate
unstable strategies and is for special purposes only, for instance when
optimizing not a parameter range, but a set of different algorithms.
||Generate individual OptimalF
factor files for all WFO cycles, instead of a single file for the whole
simulation. This produces lower-quality factors due to fewer trades, but
prevents backtest bias.
||Generate factor files not with the OptimalF algorithm, but by
script with a user-defined algorithm. For this, set OptimalF, OptimalFLong,
and OptimalFShort to a script-calculated value in the
FACCYCLE training run (if(is(FACCYCLE)) ...).
Number of parameters to optimize for the current
asset/algo component that was selected in a
loop (int, read/only,
valid after the first INITRUN).
Current parameter or current generation, runs from 1 to NumParameters
in Ascent mode, or from 1 to Generations in
Genetic mode (int, read/only, valid after the first INITRUN).
Number of the optimize cycle, starting with 1 (int,
read/only, valid after the first INITRUN).
The number of step cycles depends on the number of steps in a parameter
range and of the population in a genetic optimization. Counts up after any
step until the required number is reached or StepNext is set to 0.
Set this to 0 to abort the optimization early (int).
Number of training cycles (int, default = 1)
for special purposes, such as training rules and parameters at the same time when
they depend on each other. In any cycle either RULES or
PARAMETERS are trained, or both, depending on the flags set.
Not to be confused with WFO cycles.
The number of the current training cycle from 1 to NumTrainCycles,
or 0 in [Test] or [Trade] mode (int).
Maximum population size for the genetic algorithm (int, default = 50). Each parameter
combination is an individual of the population. The population
size shrinks automatically as the algorithm converges, until only the fittest
individuals and the mutants remain.
Maximum number of generations for the genetic algorithm (int, default = 50).
Evolution terminates when this number is reached or when the overall fitness does not increase for 10 generations.
Average number of mutants in any generation, in percent (int, default =
5%). More mutants can find more and better parameter combinations, but make the
algorithm converge more slowly.
Average number of parameter recombinations in any generation, in
percent (int, default = 80%).
Highest objective return value so far (var,
starting with 0).
Pointer to a list of PARAMETER structs for the current asset/algo
component. The Min, Max, and Step elements are set up in
the list after the first INITRUN in [Train]
mode. The PARAMETER struct is defined in trading.h.
- Training methods for machine learning or rule generation are set up with the
advise function.
- Alternative optimization algorithms from external libraries or
individual optimization targets can be set up with the
parameters and objective functions.
- Parameter charts are only produced by Ascent optimization.
It is recommended to first run an Ascent training for
determining the parameter dependence of a strategy. Afterwards the final
optimization can be done with an alternative algorithm.
- Percent steps (4th parameter of the optimize function)
are replaced by 10 equal steps for brute force and genetic optimization.
- In genetic optimization, parameter combinations that were already
evaluated in the previous generation are not evaluated again, and are skipped in the log.
This lets the algorithm run faster in later generations.
- Genetic optimization is also possible with the free Zorro version using
the Z Optimizer tool from the Download page.
- When parameters are trained several times by using
NumTrainCycles, each time the start
values are taken from the last optimization cycle
in Ascent mode. This
sometimes improves the result, but takes longer to train
and increases the likelihood of overfitting. To prevent overfitting, use no more than 2
subsequent parameter training cycles.
optimize, advise, OptimalF,
objective, setf, resf