
Repository graph

Branch: main (default, protected)
Commit messages (newest first):

• Fixed the issue where the tc_values df got values from both the gradient-based learning and the differential evolution.
• Final model. The model now has bootstrap-based confidence testing, statistical testing and trust-indicator scores (see the bootstrap sketch after this list). The calibration falls within the academically motivated thresholds and there are no Cholesky decomposition failures in sight. Trying to get rid of the model outputs from the commit by clearing them and restarting the model.
• Final model. The model now has bootstrap-based confidence testing, statistical testing and trust-indicator scores. The calibration falls within the academically motivated thresholds and there are no Cholesky decomposition failures in sight. Trying to get rid of the model outputs from the commit by clearing them and restarting the model.
• Final model. The model now has bootstrap-based confidence testing, statistical testing and trust-indicator scores. The calibration falls within the academically motivated thresholds and there are no Cholesky decomposition failures in sight.
• Added confidence testing of the model.
• Added a two-step calibration process in order to avoid local minima. This was done via differential evolution, which pushed tc far enough out to be academically viable.
• Added synthetic dataset creation. Omega initialization is based on a stationary data series to ensure proper initialization; added ADF tests to see how well the dataset was stationarised.
• Added Lomb-Scargle spectral analysis on the detrended time series in order to initialise the omega value at the beginning of training (see the omega-initialisation sketch after this list). Added the parameter constraints to be enforced during training rather than after it. Added adaptive regularization based on an exponential decay function in order to decrease bias while training the model. The next (and hopefully last) things to do are multi-window confidence testing and a bootstrap-based trust indicator which assesses the robustness of the fit via resampling methods.
• Added the confidence calculation and the visualisation for it. The new math in TF_boiler now works, as does the training process with the hard callback.
• The improvements made to the explicit matrix calculation model (now in the Tensorflow model), taken from yesterday's build and test. The motivation behind changing the whole math process was that computing the confidence score with academically robust methods required a new look at the training process. Let's see how it goes.
• Model with lengthening datasets. Next step: the confidence indicator, after which comes explicitly restricting the values during training to lie within the academically found threshold windows.
• The shrinking-window logic is fixed and the model now expands from the right direction. Next up: the confidence score calculation.
• The learning ran from the end date onwards, which was wrong. Now the model is trained from the earliest point to the latest. The shrinking-window logic now needs changing, and the confidence score computation still needs to be done as well.
• Added threshold values and a hard-cut system for fits which do not satisfy the threshold criteria proposed by Filimonov & Sornette 2013. Next step is the confidence score calculation = qualified fits / all fits (see the confidence-filter sketch after this list). After this implementation…
• Added L1 and L2 regularizations into the code. Improved visualisation for the training and validation loss plots.
• Added validation loss and cleaned the training-loss visualisation out of the code. Early stopping is now checked against validation loss with a patience of 10 epochs. Next step is to update the visualisation so that all validation and training losses are on the same graph, and to implement the hard/soft limit on parameters when constructing the final fit.
• Added some fixes regarding the front file.
• Tensorflow_test Tf_jupyter
• Tensorflow_test Tf_jupyter
• Tensorflow_test Tf_jupyter
• Model Tensorflow_test Tf_jupyter notebook.
• Managed to break the model once again. Made the tc…
• New Tensorflow Test files have been created.
• Added test files to test different ways to…
• Working basic model for LPPL. Next up:…
• Added LPPL model. Check imports and sklearn.
• Made the Tensorflow model work and do a basic plot.
• Created Tensorflow model and new Jupyter to…
• Added print function to debug in lppls.py. t1 and t2 parameters need troubleshooting.
• Got the model working with the proper dataset.
• Same changes as previously.
• Created a Jupyter notebook file and added code there.
• Added Jupyter to the setup in order to visualise via…
• Added LPPL_model and put together a POC utilising…
• Testi…
• Initial commit
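Omega-initialisation sketch. The commits above describe detrending the log-price series, checking stationarity with ADF tests, and seeding omega from a Lomb-Scargle spectrum. Below is a minimal sketch of that step, assuming a simple linear detrend and an illustrative omega search grid; `init_omega` and its defaults are assumptions for illustration, not the repository's actual code.

```python
import numpy as np
from scipy.signal import lombscargle
from statsmodels.tsa.stattools import adfuller

def init_omega(t, log_price, omega_grid=np.linspace(2.0, 25.0, 400)):
    """Seed the LPPL omega from the dominant Lomb-Scargle angular frequency."""
    # Detrend with a linear fit; the repository may detrend differently.
    trend = np.polyval(np.polyfit(t, log_price, 1), t)
    detrended = log_price - trend

    # ADF test: a small p-value suggests the detrended series is close to stationary.
    _, p_value, *_ = adfuller(detrended)

    # Lomb-Scargle periodogram over candidate angular frequencies; take the peak.
    power = lombscargle(t, detrended - detrended.mean(), omega_grid)
    return omega_grid[np.argmax(power)], p_value
```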
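Confidence-filter sketch. The commits describe a hard cut on fits that fall outside Filimonov & Sornette (2013) style threshold windows, with confidence = qualified fits / all fits over the fit windows. This is a minimal sketch under assumed, illustrative windows for m, omega and tc; the repository's actual thresholds may differ.

```python
def is_qualified(fit, t_last, window_len):
    """Hard cut: keep a fit only if it lands inside the (assumed) threshold windows."""
    return (
        0.1 <= fit["m"] <= 0.9                                # power-law exponent window (assumed)
        and 6.0 <= fit["omega"] <= 13.0                       # log-periodic frequency window (assumed)
        and t_last < fit["tc"] <= t_last + 0.2 * window_len   # tc shortly beyond the window (assumed)
    )

def confidence_score(fits, t_last, window_len):
    """Confidence indicator = qualified fits / all fits across the fit windows."""
    if not fits:
        return 0.0
    return sum(is_qualified(f, t_last, window_len) for f in fits) / len(fits)

# Example: three shrinking-window fits, one of which violates the omega and tc windows.
fits = [
    {"tc": 105.0, "m": 0.45, "omega": 8.2},
    {"tc": 102.0, "m": 0.30, "omega": 11.0},
    {"tc": 140.0, "m": 0.70, "omega": 25.0},
]
print(confidence_score(fits, t_last=100.0, window_len=100.0))  # -> 0.666...
```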
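Bootstrap sketch. The "final model" commits mention a bootstrap-based trust indicator that assesses the robustness of the fit via resampling. Below is a minimal residual-resampling sketch; `fit_lppl` is a stand-in for the repository's TensorFlow fitting routine and its return shape is an assumption, not the real API.

```python
import numpy as np

def bootstrap_tc(t, log_price, fit_lppl, n_boot=100, seed=0):
    """Refit on residual-resampled series and report how stable tc is."""
    rng = np.random.default_rng(seed)
    base = fit_lppl(t, log_price)          # assumed to return {"tc": ..., "fitted": ...}
    residuals = log_price - base["fitted"]

    tcs = []
    for _ in range(n_boot):
        # Resample residuals with replacement and refit on the synthetic series.
        synthetic = base["fitted"] + rng.choice(residuals, size=residuals.size, replace=True)
        tcs.append(fit_lppl(t, synthetic)["tc"])

    tcs = np.asarray(tcs)
    # Trust indicator: a tight bootstrap spread of tc suggests a robust fit.
    return {"tc_median": float(np.median(tcs)),
            "tc_ci_95": np.percentile(tcs, [2.5, 97.5])}
```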