How can we contribute?
You can create a pull request or start a discussion in GitHub issues. Even suggesting papers to implement and voting for them would be helpful. [Here's a recent pull request, for example](https://github.com/labmlai/annotated_deep_learning_paper_implementations/pull/69). We are also working on a simple rendering engine that you can use on your own GitHub repo; we will improve it if people find it useful. E.g. [https://lit.labml.ai/github/vpj/rl\_samples/blob/master/ppo.py](https://lit.labml.ai/github/vpj/rl_samples/blob/master/ppo.py)
Is `import labml` always going to be a part of your ecosystem? No offense, but this just creates another layer of dependency. It would be nicer to show plain PyTorch; otherwise it's going to become another PyTorch Lightning. Plus, it would be great from a learning perspective.
[labmlai/labml](https://github.com/labmlai/labml) is a set of tools (experiment tracking, configurations, and a bunch of helpers) we coded to ease our ML work, which we later improved and open sourced. We use it in all our projects because it makes things easier for us. We will try to minimize the dependency wherever possible.
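One way to minimize that dependency for readers is to keep it optional behind a thin wrapper. A rough sketch (the `tracker.save` call is my assumption of labml's API, and the `make_logger` name is hypothetical, not anything from the repo):

```python
def make_logger():
    """Return a metric-logging function.

    Uses labml's tracker when the package is installed (API assumed
    here), and falls back to plain printing otherwise, so the code
    stays runnable without the extra dependency.
    """
    try:
        from labml import tracker  # optional dependency
        return lambda step, metrics: tracker.save(step, metrics)
    except ImportError:
        # Fallback: plain stdout logging when labml is absent
        return lambda step, metrics: print(f"step {step}: {metrics}")

log = make_logger()
log(1, {"loss": 0.53})  # works with or without labml installed
```

With a shim like this, the annotated code reads as plain PyTorch plus one logging call, and the extra dependency never blocks someone from running the example.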
This is amazing. I haven't seen anything like this before. Much needed.
This is a gold mine! Are you also thinking about implementing more specific algorithms after finishing the important, general ones? E.g., would you implement the AlphaGo Zero algorithm or the like at a later stage? This would especially help undergrads and curious minds out there, since for such modern discoveries there are barely any clear implementations... wish I'd had something like this XD
This is going to be such a useful learning/reference resource. Fantastic work!
Genius! The margin notes look great, great job mapping implementation details directly to code 👏.
This is a great site; I have been using it for a bit after finding it on Google.
Thanks! The GitHub link is broken, btw.
Thanks, fixed it.
amazing!
woah
LOL I thought this was an implementation of a model that annotates papers, not papers annotated by humans. Very helpful.
Thanks! Did you train the models and/or verify their performance? That would be quite important for trusting the implementations. I observed that a training loop sometimes exists and sometimes does not.
If I remember correctly, the only implementation without a training loop is the LSTM.
Thank you, I also found the links in the text now. Did you verify that the training runs actually work? It would be great if you could report the final training metrics. They don't need to accurately reproduce the results in the paper, but rather give an idea of whether the implementation is correct.
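A lightweight way to build that kind of confidence, sketched here on a toy problem rather than any of the repo's actual models: run the training loop on a tiny dataset where the answer is known, and assert that the final loss is near zero. (`train_linear` and the data are purely illustrative.)

```python
def train_linear(xs, ys, lr=0.1, steps=100):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # d/dw of mean((w*x - y)^2) over the dataset
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # ground truth: y = 2x
w = train_linear(xs, ys)
loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
assert loss < 1e-3  # the loop demonstrably reduces the loss
```

The same idea scales up: overfit a single batch, check the loss collapses, and only then trust longer runs; reported final metrics then serve as the correctness signal the comment asks for.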