ltvirtav / Gradu: Repository graph
Branch: main (default, protected)
[Commit graph: date ticks run from 7 Apr (newest) back through Mar and Feb, then the previous Jul and Jun (oldest tick: 5 Jun).]
Fixed the issue where the tc_values df got values from both the gradient-based learning and the differential evolution.
Final model. The model now has bootstrap-based confidence testing, statistical testing, and trust indicator scores. The calibration falls within the academically motivated thresholds and there are no Cholesky decomposition failures in sight. Trying to get rid of the model outputs in the commit by clearing them and restarting the model.
Final model. The model now has bootstrap-based confidence testing, statistical testing, and trust indicator scores. The calibration falls within the academically motivated thresholds and there are no Cholesky decomposition failures in sight.
Added confidence testing of the model.
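The log does not show how the bootstrap-based testing is implemented; below is a minimal residual-bootstrap sketch, assuming a hypothetical fit_lppl(t, y) helper that returns the parameter vector and the fitted curve.

    import numpy as np

    def bootstrap_trust(t, log_price, fit_lppl, n_boot=200, seed=0):
        """Residual bootstrap: refit on synthetic series built from resampled
        residuals; the spread of the refitted parameters is the trust indicator."""
        rng = np.random.default_rng(seed)
        params0, fitted = fit_lppl(t, log_price)       # baseline fit
        resid = log_price - fitted
        samples = []
        for _ in range(n_boot):
            y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
            params_star, _ = fit_lppl(t, y_star)       # refit on the resampled series
            samples.append(params_star)
        samples = np.asarray(samples)
        return samples.mean(axis=0), samples.std(axis=0)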
Added a two-step calibration process in order to avoid local minima. This was done via differential evolution, which pushed tc far enough out to be academically viable.
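A minimal sketch of such a two-step calibration with SciPy, assuming a sum-of-squares objective over the usual linearised LPPL form; the bounds and the lppl/two_step_fit names are illustrative, not the repo's actual code.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def lppl(t, tc, m, omega, A, B, C1, C2):
        dt = np.maximum(tc - t, 1e-8)                  # keep (tc - t) positive
        x = omega * np.log(dt)
        return A + (dt ** m) * (B + C1 * np.cos(x) + C2 * np.sin(x))

    def two_step_fit(t, y):
        sse = lambda p: np.sum((y - lppl(t, *p)) ** 2)
        bounds = [(t[-1] + 1, t[-1] + 250),            # tc pushed beyond the window end
                  (0.1, 0.9), (6.0, 13.0),             # m and omega windows
                  (y.min(), y.max()), (-5, 5), (-5, 5), (-5, 5)]
        # Step 1: global search with differential evolution to escape local minima.
        coarse = differential_evolution(sse, bounds, seed=0, tol=1e-7)
        # Step 2: local polish starting from the global optimum.
        fine = minimize(sse, coarse.x, bounds=bounds, method="L-BFGS-B")
        return fine.x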
Added synthetic dataset creation. Omega initialization is now based on the stationary data series to ensure proper initialization; added ADF tests to check how well the dataset was stationarised.
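A small sketch of the stationarity check with statsmodels' ADF test, here applied to first-differenced log prices; the exact series the repo tests is not shown in the log.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    def is_stationary(log_price, alpha=0.05):
        """ADF test on first-differenced log prices; a low p-value rejects the
        null of a unit root, i.e. the series looks stationary."""
        stat, pvalue, *_ = adfuller(np.diff(log_price))
        print(f"ADF statistic {stat:.3f}, p-value {pvalue:.4f}")
        return pvalue < alpha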
Added Lomb-Scargle spectral analysis on the detrended time series to initialise the omega value at the start of training. Parameter constraints are now enforced during training rather than after it, and adaptive regularization based on an exponential decay function decreases bias while the model trains. Next (and hopefully last) things to do are multi-window confidence testing and a bootstrap-based trust indicator that assesses the robustness of the fit via resampling methods.
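A sketch of the omega initialisation with SciPy's Lomb-Scargle periodogram, scanning only the band usually considered admissible for LPPL fits. Note this scans in calendar time; the repo may instead scan in ln(tc - t), which is where LPPL oscillations are strictly periodic.

    import numpy as np
    from scipy.signal import lombscargle

    def init_omega(t, detrended, omega_min=6.0, omega_max=13.0, n_grid=500):
        """Return the angular frequency with the strongest spectral peak
        inside the admissible band as the starting value for omega."""
        omegas = np.linspace(omega_min, omega_max, n_grid)
        power = lombscargle(t, detrended - detrended.mean(), omegas)
        return omegas[np.argmax(power)]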
Added the confidence calculation and its visualisation. The new math in TF_boiler now works, as does the training process with the hardcallback.
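The hardcallback itself is not shown in the log; one plausible reading, given the earlier note about enforcing constraints during training, is a Keras callback that clips the LPPL parameters back into range after every batch. A sketch, with all names and bounds hypothetical:

    import tensorflow as tf

    class HardClipCallback(tf.keras.callbacks.Callback):
        """Clip trainable LPPL parameters into their admissible ranges after
        each batch, so constraints hold during training, not after it."""
        def __init__(self, tc, m, omega, tc_bounds, m_bounds=(0.1, 0.9),
                     omega_bounds=(6.0, 13.0)):
            super().__init__()
            self.clips = [(tc, tc_bounds), (m, m_bounds), (omega, omega_bounds)]

        def on_train_batch_end(self, batch, logs=None):
            for var, (lo, hi) in self.clips:
                var.assign(tf.clip_by_value(var, lo, hi))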
The improvements made to the explicit matrix calculation model (now in the Tensorflow model) were taken from yesterday's build and put to the test. The motivation behind changing the whole math process was that computing the confidence score with academically robust methods required a new look at the training process. Let's see how it goes.
Model with lengthening datasets. Next step: the confidence indicator, after which comes explicitly restricting the values during training to stay within the academically found threshold windows.
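The lengthening-dataset logic is not shown in the log; a minimal sketch of what such expanding windows could look like (min_len and step are made-up values):

    def expanding_windows(t, y, min_len=126, step=5):
        """Yield windows anchored at the earliest observation and growing
        toward the most recent one, so each successive fit sees more data."""
        for end in range(min_len, len(t) + 1, step):
            yield t[:end], y[:end]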
The shrinking-window logic is fixed and the model now expands in the right direction. Next up: the confidence score calculation.
The learning ran from the end date onwards, which was wrong. The model is now trained from the earliest point to the latest. The shrinking-window logic still needs changing, and the confidence score computation needs to be done as well.
Added threshold values and a hard-cut system for fits which do not satisfy the threshold criteria proposed by Filimonov & Sornette (2013). Next step is the confidence score calculation = qualified fits / all fits. After this implementation
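A sketch of the hard cut and the qualified-fits ratio; the numeric bounds below are typical of the LPPLS filtering literature but are assumptions here, since the repo's exact thresholds are not in the log.

    def passes_thresholds(tc, m, omega, B, t1, t2):
        """Hard cut in the spirit of Filimonov & Sornette (2013): reject fits
        whose parameters fall outside the qualification windows."""
        return (B < 0                        # bubble regime needs negative B
                and 0.1 <= m <= 0.9          # power-law exponent window
                and 6.0 <= omega <= 13.0     # log-frequency window
                and t2 < tc <= t2 + 0.5 * (t2 - t1))  # tc not far past the window

    def confidence(fits):
        """Confidence score = qualified fits / all fits."""
        qualified = sum(passes_thresholds(**f) for f in fits)
        return qualified / len(fits) if fits else 0.0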
Added L1 and L2 regularizations into the code. Improved visualisation for the training and validation loss plots.
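In Keras this is a per-layer setting; the architecture, input width, and penalty weights below are purely illustrative, since the repo's model is not shown.

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    model = tf.keras.Sequential([
        layers.Input(shape=(8,)),                      # hypothetical input width
        layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l1_l2(l1=1e-5, l2=1e-4)),
        layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")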
Added validation loss and cleaned the training-loss-only visualisation out of the code. Early stopping is now checked against the validation loss with a patience of 10 epochs. Next step is to update the visualisation so that all validation and training losses are on the same graph, and to implement the hard/soft limit on parameters when constructing the final fit.
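Assuming a Keras training loop, the described behaviour maps directly onto the stock EarlyStopping callback:

    import tensorflow as tf

    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",          # checked against validation, not training, loss
        patience=10,                 # the ten epochs from the commit message
        restore_best_weights=True,   # roll back to the best validation epoch
    )
    # history = model.fit(x, y, validation_split=0.2, epochs=500,
    #                     callbacks=[early_stop])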
Added some fixes regarding the front file
Tensorflow_test Tf_jupyter
Model Tensorflow_test Tf_jupyter notebook.
Managed to break the model once again. Made the tc
New Tensorflow Test files have been created.
Added test files to test different ways to
Working basic model for LPPL. Next up:
Added LPPL model. Check imports and sklearn.
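For reference, the standard LPPL form the model presumably fits; the repo's exact parametrisation is not shown in the log.

    import numpy as np

    def lppl(t, tc, m, omega, A, B, C, phi):
        """ln p(t) = A + B*(tc - t)^m * (1 + C*cos(omega*ln(tc - t) + phi)),
        valid for t < tc, where tc is the critical time of the bubble."""
        dt = tc - t
        return A + B * dt**m * (1 + C * np.cos(omega * np.log(dt) + phi))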
Made the Tensorflow model work and produce a basic plot.
Created Tensorflow model and new Jupyter to
Added print function to debug in lppls.py
t1 and t2 parameters need troubleshooting.
Got the model working with proper dataset.
Same changes as previously.
Created jupyter notebook file and added code there
Added Jupyter to setup in order to visualise via
Added LPPL_model and put together a POC utilising
Test
Initial commit