TradingView
Jittra
Jul 30, 2018 4:07 PM

ANN [Original Version Repainting Fixed] 

Gold / U.S. Dollar (FXCM)

Description

It's the ANN, the original version with the repainting issue taken care of. As you can see, the result might not be as good as the original. If anyone wants to put it into a strategy, feel free. I don't recommend using this as a standalone. Trade at your own risk.

Maybe trading the opposite way might do it lol
Comments
wicksell
Sorry, but it really is an ANN. I'm using it on multiple timeframes and reloading to prevent that repainting, and it has worked.
Jittra
@wicksell, a different timeframe, even a higher one, will still repaint.
Rigelintelligence
@Jittra,

May I know how to see that it repaints?
Chasecal
@wicksell, looks good
vladimir_777Qtf
@wicksell,
It is impossible to avoid repainting while the model does not work as it should; in this case it is a pre-trained MLP model, which cannot work properly in Pine Script.
The weights should be adjusted by training the MLP model, but no training happens here; only the fixed weights remain. This is what generates the repainting, and it cannot be avoided...

Retraining the model and its weights will help for a while, but this has to be done in Python at a minimum!
mateolesai
does it repaint??
vladimir_777Qtf
⚠️ Warning ⚠️
I now understand why it is impossible to avoid repainting: it all comes down to the model's weights, which are pre-trained and never adjusted.
This happens because Pine Script does not support creating or training static MLP models (or others). That means the model has to be trained elsewhere, in Python for example, and the trained weights integrated here afterwards; as a result it is very difficult to implement this here and have it work as expected!

So those of you who know how to program can write a simple script for MLP models, or train other models, and use them there rather than here. I don't know when it will become possible to do this here!!!
vladimir_777Qtf
⚠️ Warning ⚠️
This indicator is an example of a simple neural network, where:

- There are 3 layers of neurons (input, hidden and output)
- 15 neurons in the input layer (l0_0 - l0_14)
- 30 neurons in the hidden layer (l1_0 - l1_29)
- 1 neuron in the output layer (l3_0)

The input data for the network is the difference between the closing price today and yesterday (getDiff()). This data is fed to 15 neurons of the input layer.

Next comes a hidden layer of 30 neurons, each of which combines signals from all 15 input neurons with different weights (coefficients).

At the output of the hidden layer there are 30 signals that are fed to the single output neuron l3_0. It also combines them with weights and produces the final prediction signal.
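The 15-30-1 structure described above can be sketched in a few lines of NumPy. The weights below are random stand-ins for the indicator's hardcoded coefficients (the actual Pine Script values are not reproduced here), and the forward pass is purely linear, matching the description:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the indicator's hardcoded coefficients:
# 15 inputs -> 30 hidden neurons (l1_0..l1_29) -> 1 output neuron (l3_0).
W1 = rng.normal(size=(30, 15))   # hidden-layer weights
W2 = rng.normal(size=(1, 30))    # output-neuron weights

def get_diff(closes):
    """Differences between consecutive closing prices (the getDiff() input)."""
    return np.diff(np.asarray(closes, dtype=float))

def forward(x):
    """Linear forward pass with fixed weights, as the description states."""
    hidden = W1 @ x              # 30 hidden signals
    return float(W2 @ hidden)    # single prediction signal

closes = [100.0 + i * 0.5 for i in range(16)]  # 16 closes -> 15 diffs
x = get_diff(closes)
prediction = forward(x)
```

Because every weight is a constant, `forward` is just a fixed linear map of the last 15 price differences; nothing in it ever adapts to the data.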

The problem with this network is that the weights of all neurons are hardcoded and not trained on data. Therefore, the network gives random predictions that do not reflect the real situation.

To fix this, it is necessary to implement training of the network with weight adjustment, for example using the backpropagation algorithm. Then the weights will be tuned according to the data to produce accurate predictions.

It is also possible to increase the number of hidden layers and the neurons in them to build a more complex and flexible model, and to use non-linear activation functions instead of linear ones.
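As a rough illustration of the fix proposed above, here is a minimal NumPy sketch that trains the same 15-30-1 shape with backpropagation and a tanh hidden activation. The data is a synthetic sine-plus-noise series, not market data, and the whole thing is a toy under those assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy series: predict the next diff from the previous 15 diffs.
series = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.normal(size=400)
diffs = np.diff(series)
X = np.array([diffs[i:i + 15] for i in range(len(diffs) - 15)])
y = diffs[15:]

# Same 15-30-1 shape, but with trainable weights and a tanh hidden layer.
W1 = rng.normal(scale=0.1, size=(15, 30))
W2 = rng.normal(scale=0.1, size=(30, 1))
lr = 0.01

def mse():
    pred = (np.tanh(X @ W1) @ W2).ravel()
    return float(np.mean((pred - y) ** 2))

mse_before = mse()
for epoch in range(200):
    h = np.tanh(X @ W1)                     # hidden activations
    err = (h @ W2).ravel() - y              # prediction error
    # Backpropagation: MSE gradients w.r.t. each weight matrix.
    gW2 = h.T @ err[:, None] / len(y)
    gh = err[:, None] @ W2.T * (1 - h**2)   # tanh' = 1 - tanh^2
    gW1 = X.T @ gh / len(y)
    W2 -= lr * gW2
    W1 -= lr * gW1
mse_after = mse()
```

The trained `W1`/`W2` would then have to be exported back into the Pine Script as constants, which is exactly the workflow the comments describe, and exactly why the weights in the published indicator are frozen.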