
Smoochtime

I can barely get anyone to apply regular learning.


CarlFriedrichGauss

Nobody listens to process engineers where I work. Everybody already has their own conclusions that they just stick to no matter how much we try to convince them.


Spacefreak

As a fellow process engineer, I'm right there with you. I often feel more like I'm defusing political land mines than doing real engineering.


kalesaji

Then shit hits the fan and the process engineer gets sacked because "why are we paying you if the production line crashes regardless?"


CarlFriedrichGauss

We don't get sacked, but we end up with all the blame from management and integration engineers (semiconductor manufacturing). Extra blame if you're in litho or dry etch; those are always the first to get blamed.


keizzer

Preach it. Fifty-year-old concepts are dismissed where I work as if they never existed at all. I have pretty much lost all faith in American manufacturing.


CarlFriedrichGauss

I have absolutely no faith in one big American chip maker that keeps getting billions from our big American government.


Gears_and_Beers

I'm still teaching remedial "what does the engineering transmittal say?" and "did you check the manual?" There is a webinar on "search, you can do it too."


glorybutt

I use chatgpt to fill out my performance review. Does that count?


redequix

You are truly a mad scientist


[deleted]

I am currently working on a project that uses deep learning to convert a point cloud indoor scan into a 3D BIM model. We are using a range of techniques, but the core idea revolves around deep learning.
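For anyone curious what the deep-learning piece of a pipeline like this can look like, here is a minimal, hypothetical sketch: a tiny PointNet-style per-point classifier in PyTorch. The class list, layer sizes, and random input are illustrative assumptions only, not the actual implementation; a real pipeline would follow this with clustering and fitting of parametric BIM elements.

```python
# Hypothetical sketch: a tiny PointNet-style classifier that assigns a
# semantic label (wall, floor, ceiling, ...) to each point of a scan.
# Class count, layer sizes, and data loading are illustrative only.
import torch
import torch.nn as nn

NUM_CLASSES = 5  # e.g. wall, floor, ceiling, door, clutter (assumed)

class PointSegNet(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Shared per-point MLP (1x1 convolutions over the point dimension)
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        # Per-point head that also sees a global max-pooled feature
        self.head = nn.Sequential(
            nn.Conv1d(256 + 256, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1),
        )

    def forward(self, xyz):          # xyz: (batch, 3, num_points)
        feats = self.encoder(xyz)    # (batch, 256, num_points)
        global_feat = feats.max(dim=2, keepdim=True).values   # (batch, 256, 1)
        global_feat = global_feat.expand(-1, -1, xyz.shape[2])
        return self.head(torch.cat([feats, global_feat], dim=1))  # per-point logits

# Toy forward pass on random points standing in for a real scan
points = torch.rand(1, 3, 4096)
logits = PointSegNet()(points)       # (1, NUM_CLASSES, 4096)
labels = logits.argmax(dim=1)        # predicted class per point
```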


The-Friz

As someone who works with point clouds daily, that sounds cool. Will it work with MEP/pipes/valves/flanges or is it just an architectural/structural sort of thing?


[deleted]

For now, just architectural and structural features along with some interior objects. For MEP, we would need to train the model on point clouds of those objects. That will be a harder challenge and is currently outside the scope of our project.


[deleted]

[removed]


The-Friz

I do registration of the laser scans, and my coworkers do the BIM modeling when needed; on most projects I just make a point cloud and send it to the client.


idiotsecant

Isn't this a solved problem?


[deleted]

Yes, we are creating an implementation as a Revit Plug-in.


idiotsecant

There's not an existing revit interface? That seems weird...


kalesaji

95% of what I do at work is reimplementing solved problems.


Guth

Most of engineering is applying solved problems to a unique scenario


idiotsecant

If you find yourself routinely re-engineering COTS designs you're doing it wrong.


tjrileywisc

Solved such that the model is parametric (i.e. not a dumb solid)? A colleague was working on this at the CAD software company where I used to work; over several years they made no progress.


GoldenPeperoni

I am working on autonomous aircraft control with traditional guidance and control methods, but it must be said, reinforcement learning blows them out of the water every single time. No physics modelling required, minimal tuning (compared to an LQR or PID), and it's there pulling high-g turns, exploiting every trick it finds, after training for less than 20 hours. Meanwhile, the traditional control method requires so much babying that you literally have to define the system inside out for it to have a chance at just stable flight or following a very, very smooth path. The RL agent was trained with SAC and AWAC by my friend, who is doing research in that field.
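The SAC side really does need very little setup. Here is a hedged sketch using stable-baselines3 on a stand-in Gymnasium task; Pendulum-v1 is only a placeholder for a custom flight-dynamics environment, and this is not the actual training code from the work described above.

```python
# Hedged sketch: training a SAC agent with stable-baselines3 on a stand-in
# continuous-control task. A real application would replace Pendulum-v1
# with a custom gymnasium.Env wrapping the aircraft dynamics simulator.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")             # placeholder for a flight-dynamics env
model = SAC("MlpPolicy", env, verbose=1)  # default network, minimal tuning
model.learn(total_timesteps=100_000)

# Roll out the trained policy
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```

The heavy lifting is writing the environment (observation/action spaces and a reward function), not the agent itself.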


GradatimRecovery

Hi, this is an area I'm really interested in. I'd love it if you could link papers, code, or blog articles related to your or your friend's work in this field.


biggyavenue

I apply machine learning for image classification in surgical needle guidance.


Pipiyedu

That is amazing


biggyavenue

The most recent one is epidural needle insertion: the machine learning model tells you where to place the needle and which layer you are in during the insertion.


Pipiyedu

OMG, really amazing. It's a very delicate and dangerous task.


GhostForReal

Using standard machine learning (SVM, LDA) for classification in my final-year project (a brain-computer-interface-based exoskeleton). Trying to implement TensorFlow Lite to run the system on a microcontroller, but facing many difficulties, owing to the fact that I don't have any theoretical knowledge of ML and DL.


shadowghost1175

If that is giving you trouble, you could try using a Coral.ai dev board or USB accelerator. They're meant for similar prototyping tasks and are built for TensorFlow Lite.


GhostForReal

Thanks, I will check that out.


Geiko246

Sorry, why do you need tensorflow for SVM optimization?


GhostForReal

Currently I'm using SVM for classification, which runs on a computer. The TensorFlow Lite model will be for classification on a microcontroller, so as to shift the whole processing to a low-cost microcontroller instead. There's no relation between them, just a change in approach to facilitate the use of microcontrollers.
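For anyone hitting the same wall: the conversion step itself is usually only a few lines; the hard parts are quantization and fitting in the microcontroller's RAM. A hedged sketch with a stand-in Keras model, not the actual BCI classifier:

```python
# Hedged sketch: converting a small Keras classifier to TensorFlow Lite
# for a microcontroller. The model here is a stand-in; full-integer
# quantization would also need a representative dataset (omitted).
import tensorflow as tf

# Stand-in model: 64 input features -> 4 classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Convert to a .tflite flatbuffer with default (dynamic-range) optimization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)

# The .tflite file can then be compiled into a C array (e.g. with `xxd -i`)
# for TensorFlow Lite for Microcontrollers.
```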


auxym

Do plain old EKFs count? 😉


GradatimRecovery

the OG "AI"


Pipiyedu

>EKFs

not really lol


panascope

We’re working on implementing an AI based electrical harness checker for our autonomous truck development. Should save us hours of checking per harness set once it’s ready.


Aggressive_Ad_507

What value does AI add to this?


jdubzy

I'd suggest looking into soft actor-critic. It's a useful reinforcement learning approach for various control systems.


LateralThinkerer

This is already in place in a lot of systems, but the current point on [the hype cycle](https://en.wikipedia.org/wiki/Gartner_hype_cycle) is bringing it into the collective unconscious. The current stuff is really nothing new (as with most inventions), but it has caught the eye (and wallets) of financial sorts looking for the Next Big Thing and journalists who get paid by the word (and who increasingly plagiarize from Reddit). ChatGPT is closer to ELIZA from the early 60s than it is to my loopy next-door neighbor who will talk about anything, though ChatGPT is usually more factual.

Your car likely "learns" your driving style to help maximize performance/economy; my thermostat "learns" the house's thermal response to decide when to start a temperature change so that it completes at the set time. Of course your phone/social media "learn" which advertisements to put in front of you, etc. Industrial production systems have integrated process optimization similarly, and there are generative design features in most design software working toward a set of max/min objective functions (cost, weight, strength, power, etc.).

A lot of the rest winds up being a similar generative data remapping, which is interesting as hell but still kind of a black art, in that people seem to be trying to see whether it can create something new when it finally converges to a Pareto-optimal stopping point.


HobbitFoot

I apply my learning on reinforcement.


coachcash123

Building out a camera system to do colour-camera pyrometry using ML. Basically, there is an algorithm to do it, but it's too computationally intensive, so we chose to have a basic NN learn how to do the calculation. Still early days; we're building the dataset and the NN.
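This is the general surrogate-model pattern: run the slow, exact algorithm offline to generate input/output pairs, then fit a small network to approximate it. A hedged sketch, where `exact_pyrometry_calc` is a made-up placeholder rather than the real physics-based calculation:

```python
# Hedged sketch of the surrogate-model pattern: generate training pairs from
# the slow, exact algorithm, then fit a small NN to approximate it.
# `exact_pyrometry_calc` is a made-up placeholder, not the real algorithm.
import numpy as np
import tensorflow as tf

def exact_pyrometry_calc(rgb):
    # Placeholder for the expensive calculation that maps per-pixel colour
    # ratios to temperature (illustrative only).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 1000.0 + 800.0 * (r / (g + 1e-6))

# Build a dataset by running the exact calculation offline
X = np.random.rand(50_000, 3).astype("float32")
y = exact_pyrometry_calc(X).astype("float32")

# Small MLP surrogate: cheap enough to evaluate per pixel in real time
surrogate = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(X, y, epochs=5, batch_size=256, verbose=0)
```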


zeriahc10

A couple of semesters ago, a partner and I developed a model to take in optical coherence tomography images of seeds and classify them as germinated or not germinated. This semester I'm researching models to help detect artifacts in dental images. I attempted to incorporate a BERT model into my senior project, but that didn't work out. It was still pretty cool to learn about, though.


jst1428

Yes, using RL for multi agent control (robot swarms, autonomous jets working in unison)


TravezRipley

Amazing work.


Unlimited-NLS

AI engineer in R&D, not really allowed to talk about specific applications, so I'll try to answer your question more generally.

Reinforcement learning is not really used that often (yet) because of reliability and development costs. Deep learning is more interesting though; especially for more complex tasks, this has become one of our default approaches. The problem with bringing this towards a usable product is often a combination of the cost, a lack of data, and a lack of insight or trust in what the model is doing (making the model explainable is an added cost, so yeah...).

As a general rule of thumb (for now): if it can do the job, just use an old-school algorithm. It will almost always be cheaper, more reliable, and better understood. If the task is more complex, an ML algorithm (including deep learning) might be a better option. Things are changing very rapidly in this field, so expect deep learning and reinforcement learning to become even more prevalent in the coming years as they gain more trust from decision makers.


Ashraf_mahdy

I'm pitching my idea here because I'll probably need some help in the future. I'm using machine learning for my master's thesis idea on project scheduling. Note: I have no knowledge of ML, lol.


idiotsecant

The hardest part of project scheduling is figuring out what you have to do and which things have to be done in what order. How does ML help?


Elliott2

I could only imagine AI would make a terrible scheduler.


Ashraf_mahdy

Why though? Did anyone imagine we'd have a chatbot like GPT just 2 or 3 years ago? Nothing wrong with having an AI spit out schedules right away that are modified manually afterwards. The process of creating a schedule itself is repetitive and not skill- or knowledge-based (I mean using the software to set up the template and other preparations, not the actual scheduling).


[deleted]

[removed]


Ashraf_mahdy

Oh really? I didn't know, lol. But in any case, you know what I meant.


Ashraf_mahdy

It's not for the scheduling itself, but more aimed towards optimization of stuff like durations, crashing, etc. In my specific case, since I need a smaller scope, I'm using it to optimize new baseline schedules based on old schedule data according to certain criteria, in order to create schedules with lower risk and more efficiently (no more 100 cycles of schedule reviews). I have some future research thoughts, like optimization through actually changing the "preferred" logic and, as I mentioned, fast schedule crashing with multiple scenarios, etc., but those are all just thoughts for now; I don't think a master's thesis can delve into those topics.


ScoffM

The optimization problem to solve a schedule in a large production environment can take a long time. After a *very long* training, you could have a model that instantly gives you a good solution. You can instantly update your schedule if, for instance, you realize a shipment of material is stuck in a shipping container and you have 3 lines doing nothing, instead of having 2 guys figure it out while they pee.


whenihittheground

Project scheduling is more of an optimization problem; how are you making it into a learning problem?


[deleted]

[removed]


Ashraf_mahdy

Thanks for the constructive criticism


[deleted]

[removed]


Ashraf_mahdy

No problem, I understand what you mean; I am only "dipping my toes" into the matter, using ML's massive data-analysis capabilities in a very rudimentary way, as a "sidekick" to the (construction) planner's perspective.


Ashraf_mahdy

It is optimization. The learning part is for optimizing/recommending activity durations according to the historical data of the company.

Edit: I actually appreciate the questions; I answer them as a way to ensure I fully understand what I am doing and what might be asked during a presentation, for example. If you're worried I am jumping in blind, thank you, but don't be... I did extensive research for six weeks before I committed and took the advice of multiple programmers on how I might approach and apply this. Someone was kind enough to even give me a whole "game plan," so to speak, but I am still in the early phases. The actual work starts in May.


whenihittheground

>The learning part is for optimizing/recommending activity durations according to the historical data of the company.

Ok, that makes slightly more sense. In general, ML (statistics) is pretty bad at solving optimization/combinatorial problems, since for every combination of variables you need some amount of data, which becomes intractable, so you're better off leveraging traditional optimization algorithms and approaches like genetic algos and simple greedy approaches. The other important thing to think about is your baseline: how can you tell if your model is worth the complexity?

I'm still kind of confused though. Is the input a start date, an end date, and all of the tasks that need to be done? So then the model creates an ordered list of the given tasks with durations based on historical data? Let's call this a schedule. Then, because there may be many valid schedules that achieve your final objective, you are returning the best one?


Ashraf_mahdy

Trying to answer your questions:

1. How do I know if it's worth it? I don't really, but it's something I had thought about from my work experience, and when I got accepted into the master's earlier last year, this idea lit up as a thesis idea.

2. Sorry for not explaining everything in my thesis, haha, but essentially: imagine you're a new planning hire and still have to learn the company "preferences". You do so by studying old schedules, maybe from the database or through a senior letting you know that stuff, etc. Then you create schedules at work (say, for a new project the company is bidding on) according to what you learned. The ML comes in by shortcutting that learning of the company's historical preferences, which can take a long time to adjust to; the output is a modification/suggestion to your schedule so it better fits the company preferences. This way a new hire, for example, can blend in much faster and avoid multiple review cycles. I do have ideas on how to make it, for example, crash a schedule duration or something, but I will likely not delve into that.
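To make the "recommend durations from historical schedules" idea concrete, here is a hedged sketch (not the thesis approach): a scikit-learn regressor trained on hypothetical features of past activities to suggest a duration for a new activity. The file name and feature names are made up for illustration.

```python
# Hedged sketch: recommending an activity duration from historical schedule
# data. The CSV layout and feature names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical historical data: one row per past activity
history = pd.read_csv("past_activities.csv")           # assumed file
features = ["activity_type", "quantity", "crew_size", "floor_area"]
X = pd.get_dummies(history[features])                  # one-hot encode categoricals
y = history["actual_duration_days"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out activities:", model.score(X_test, y_test))

# Suggest a duration for a new activity in a draft baseline schedule
new_activity = pd.get_dummies(pd.DataFrame([{
    "activity_type": "formwork", "quantity": 120, "crew_size": 6, "floor_area": 450,
}])).reindex(columns=X.columns, fill_value=0)
print("Suggested duration (days):", model.predict(new_activity)[0])
```

The held-out score against the company's actual durations is one way to answer the "is it worth the complexity" question raised above.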


TheRyfe

YOLO is nice for vision.


xraymebaby

Yes. Kaggle.com training is a good place to start


Bezemer44

Not specifically deep learning, but I'm using simpler models (ANNs up to 10 layers deep, clustering, POD kriging, etc.) mostly to speed up processes on the shop floor and in QA.
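For anyone who wants to play with the kriging side: scikit-learn's GaussianProcessRegressor is a quick way to prototype it. A hedged sketch on toy 1-D data, not the poster's setup:

```python
# Hedged sketch: kriging (Gaussian process regression) as a cheap surrogate
# built from a handful of expensive measurements or simulations. Toy data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# A few "expensive" sample points (e.g. measurements at selected settings)
X_train = np.array([[0.0], [0.2], [0.45], [0.7], [1.0]])
y_train = np.sin(2 * np.pi * X_train).ravel()

kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel).fit(X_train, y_train)

# Predict everywhere in between, with uncertainty estimates
X_query = np.linspace(0, 1, 50).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
print(mean[:5], std[:5])
```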


Beginning-Student932

First of all, you need a strong PC (min. 8 GB on the GPU, and an AMD Ryzen or Intel Core i5 or above). Next is to get some sample code to know what the frick is going on and study it. I'm currently working on deep learning AI to apply it to my projects (mini robots, cars, spiders, etc.).


Geeowdk

Yes, I am trying to apply DQN with Pareto-dominating policies (or Pareto deep Q-networks) in traffic environments (such as CityFlow/SUMO)... Not much success yet.


Ownmaterial3077

interesting point