Diffstat (limited to 'README.md')
 README.md | 34 ++++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 6e06a74..a2a4a41 100644
--- a/README.md
+++ b/README.md
@@ -28,8 +28,40 @@ make download
make generate
```
+## Train
+
+Use, modify, or create an experiment config in `training/conf/experiment/`.
+To run an experiment, first enter the virtual environment by running:
+
+```sh
+poetry shell
+```
+
+Then we can train a new model by running:
+
+```sh
+python main.py +experiment=conv_transformer_paragraphs
+```
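+
+New experiments follow the same pattern: add a YAML file under `training/conf/experiment/` and pass its name via `+experiment=`. The `+experiment=` flag suggests Hydra-style configs, so a new experiment file might look like the following sketch (all keys below are hypothetical and depend on this project's actual config schema):
+
+```yaml
+# training/conf/experiment/my_experiment.yaml -- hypothetical example
+# @package _global_
+defaults:
+  - override /network: conv_transformer  # assumed config group name
+trainer:
+  max_epochs: 100  # assumed trainer key
+```
+
+which would then be run with `python main.py +experiment=my_experiment`.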
+
+## Network
+
+TODO: create a diagram of the network architecture and place it here.
+
+## Graveyard
+
+Ideas of mine that unfortunately did not work:
+
+* Use a VQ-VAE to pre-train a good latent representation
+  - Tests with various compression rates showed no performance increase over training directly end-to-end; if anything, performance decreased
+  - This is very unfortunate, as I really hoped this idea would work :(
+  - I still really like this idea, and I might not have given up just yet...
+
+* Axial Transformer Encoder
+  - Added a lot of extra parameters with no gain in performance
+  - Cool idea, but on a single GPU, nah... not worth it!
-## TODO
## Todo
- [ ] remove einops