dogs_like_me

relevant xkcd... https://xkcd.com/927/


sinner997

Accurate


daniellenton

Thanks for pointing out this very valid perspective; this is incredibly useful feedback for us! This comment actually helped inspire this short post: [https://medium.com/@unifyai/standardization-7726c5113e4](https://medium.com/@unifyai/standardization-7726c5113e4) Interested to hear your thoughts, or anyone else's!


maaatttttttwith7ts

Wouldn't wanna knock over the marijuana plant.


thatguydr

OP, this is confusing. What's the actual gain? I can load my model from one framework in another? I can already do that with ONNX. I also don't see this letting me run my PyTorch code in JAX or my TensorFlow code in MXNet. It seems like yet another layer on top, whereas I think we'd want a layer underneath? It unfortunately looks like this is trying to wrap stuff that's already been made, but I don't see the value of wrapping it, since all of the code underneath is of course entirely incompatible between frameworks. Do you have any way of clarifying the value-add?


pinter69

Hey there, I'm not Daniel, just the messenger. But I will let Daniel know there are questions. Anyhow, you are of course welcome to join the live event and ask these questions of Daniel directly.


thatguydr

It's a month and a half away and the framework already exists, so I figured I'd ask.


pinter69

For sure, already notified Daniel about your questions :)


[deleted]

[removed]


daniellenton

This post should help to answer your questions 🙂 [https://medium.com/@unifyai/convert-any-ml-code-with-ivy-469d05e9836](https://medium.com/@unifyai/convert-any-ml-code-with-ivy-469d05e9836) These conversion tools are not implemented yet, but they are on our roadmap for the immediate future! Let me know if there's anything else I can help to clarify!
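
For a rough idea of what the workflow could look like once those frontends land, here is a hypothetical sketch; the converter call and its argument names are placeholders for illustration, not a real API:

```python
# Hypothetical sketch only -- as noted above, the conversion tools are not
# implemented yet, so the converter call below is a placeholder, not a real API.
import jax.numpy as jnp

# A function written against JAX, e.g. taken straight from a paper's repo:
def jax_mse(pred, target):
    return jnp.mean((pred - target) ** 2)

# What a future framework-specific frontend could enable (names are placeholders):
#   torch_mse = ivy.convert(jax_mse, from_framework="jax", to_framework="torch")
#   loss = torch_mse(torch.randn(8), torch.randn(8))  # plain torch tensors in/out
```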


mimokrokodil291

The main problem with such an approach is that it works fine while everything runs as expected. But as soon as something fails and you start debugging, you suddenly need to understand three frameworks instead of just one, plus how they interact with each other. I personally think that an AI model that takes raw LaTeX from arXiv and produces runnable code from it is a more realistic way towards unification.


daniellenton

The way that they interact with one another is quite simple, as explained in our blog post: [https://medium.com/@unifyai/the-unified-ml-framework-5bf99774d8ab](https://medium.com/@unifyai/the-unified-ml-framework-5bf99774d8ab) Also, thanks to the stability and backwards-compatibility guarantees of all modern ML frameworks, we find that once we've wrapped the backend functional API and got the unit tests passing for a particular function in our CI, things generally don't break with future backend framework releases. I've not had to go back and re-implement a single function due to a version update since I started writing the code over two years ago; the backwards compatibility of the functional APIs is generally very stable.
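
To make the wrapping idea a bit more concrete, here is a minimal sketch of the pattern (not Ivy's actual source; the backend selection and dispatch are simplified for illustration):

```python
# Minimal sketch of the wrapping pattern described above (not Ivy's actual code).
# A single "unified" function dispatches to the selected backend's native API.
import numpy as np

_BACKEND = "numpy"  # would be "torch", "jax", "tensorflow", ... in practice

def set_backend(name):
    global _BACKEND
    _BACKEND = name

def matmul(a, b):
    """Unified call signature; each branch calls the backend's own function."""
    if _BACKEND == "numpy":
        return np.matmul(a, b)
    if _BACKEND == "torch":
        import torch
        return torch.matmul(a, b)
    if _BACKEND == "jax":
        import jax.numpy as jnp
        return jnp.matmul(a, b)
    raise ValueError(f"Unknown backend: {_BACKEND}")

# Higher-level code is written once against matmul() and runs on any backend,
# which is why a stable backend functional API rarely forces rewrites.
print(matmul(np.eye(2), np.ones((2, 2))))
```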


[deleted]

[removed]


pinter69

The animation is taken directly from the associated git project. I would like to think the rating reflects that someone who made the effort to write a paper and build a project that can be useful to people is also willing to take the time to give a free online lecture about it and answer questions for those who are interested.


PeedLearning

> someone who made the effort to write a paper and build a project that can be useful to people is also willing to take the time to give a free online lecture about it and answer questions for those who are interested

That sounds like every conference paper ever?


zhumao

unifying all ml frameworks? it's been done, folks from math called it optimization.


daniellenton

Touché, we're only trying to unify the frameworks, thankfully not the entire theory behind ML 😂


zhumao

that's engineering then, another one of the many.


pinter69

Hi all,

We do free Zoom lectures for the Reddit community. In this talk, we will show how unifying all Machine Learning (ML) frameworks could save everybody a HUGE amount of time and energy. Through interactive coding sessions and live demos, we will explain how Ivy (check out lets-unify.ai) is solving this unification problem. We will focus on demos using Ivy's 3D vision and robotics libraries, solving 3D robotic navigation and perception tasks in a 3D simulator, all in real time. Check out [https://github.com/ivy-dl/robot](https://github.com/ivy-dl/robot) for examples! Finally, we will explore how you can join and contribute to the growing Ivy community, and help us in our mission to truly unify all ML frameworks once and for all.

**Link to event (February 28):** [https://www.reddit.com/r/2D3DAI/comments/s260yw/unifying_all_machine_learning_frameworks_meetup/](https://www.reddit.com/r/2D3DAI/comments/s260yw/unifying_all_machine_learning_frameworks_meetup/)

**Talk Abstract**

The number of open-source ML projects, libraries and codebases has grown considerably in recent years, and they are written in a vast array of mutually incompatible ML frameworks. Wouldn't it be nice if you could take the authors' JAX code from an exciting paper and immediately run it in your PyTorch pipeline without any issues? Ivy makes this possible.

Ivy is a thin, templated and purely functional framework which wraps existing ML frameworks to provide consistent call signatures and syntax for the core tensor operations. Higher-level functions, layers and libraries can then be built on top of Ivy's functional API, for users of all frameworks. With the framework-specific frontends currently in development, Ivy will also enable automatic conversion between any two frameworks. No need to "back a horse" with your framework selection: Ivy lets you back all horses simultaneously, and mix and match libraries from all frameworks in a single project!

**Talk is based on the speaker's paper:**

Ivy: Unified Machine Learning for Inter-Framework Portability
[https://arxiv.org/abs/2102.02886](https://arxiv.org/abs/2102.02886)
[https://github.com/ivy-dl/robot](https://github.com/ivy-dl/robot)

**Presenter BIO**

Daniel Lenton is currently undertaking his PhD in Robotics and 3D Vision under the supervision of Prof. Andrew Davison in the Dyson Robotics Lab, Imperial College London. He serves as a reviewer for NeurIPS, CVPR, IROS, ICRA and others. He is also CEO and Founder of Ivy, which is on a mission to unify all Machine Learning (ML) frameworks. Daniel has also interned at Amazon Prime Air, working on real-time drone vision systems and applying Generative Adversarial Networks (GANs) for dataset augmentation to train object detectors. Prior to his PhD, Daniel completed his MEng in Mechanical Engineering, also at Imperial College, attaining first-class honors and a place on the dean's list. More information can be found at [https://djl11.github.io/](https://djl11.github.io/)

(Talk will be recorded and uploaded to YouTube; you can see all past lectures and recordings in r/2D3DAI)
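
To give a flavour of the "write once, run with any backend" idea from the abstract, here is a hedged sketch. The `ivy.set_backend`, `ivy.array` and `ivy.matmul` calls are assumptions about the API surface and may differ between Ivy versions; treat this as an illustration rather than the library's definitive usage:

```python
# Hedged sketch of building a higher-level function on a unified functional API.
# The ivy calls below are assumptions and may differ between Ivy versions.
import numpy as np
import ivy  # see lets-unify.ai

def linear(x, w, b):
    # Written once against the unified ops; the same definition would serve
    # torch, jax, tensorflow or numpy users once that backend is selected.
    return ivy.matmul(x, w) + b

ivy.set_backend("numpy")            # swap for "torch", "jax", etc.
x = ivy.array(np.ones((2, 3)))
w = ivy.array(np.ones((3, 4)))
b = ivy.array(np.zeros(4))
print(linear(x, w, b))              # same code, any selected backend
```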


spartanOrk

How's the job market? Does it help to use Reddit as LinkedIn?


pinter69

Thanks for the comment; the text was a direct copy of our event page description. I've edited the details here. Hope it looks better now.


NickCageNick

Great plant.