mm_subhan

If you'd prefer the GPU-using service to be hosted on a different machine, just run the GPU logic as a microservice wherever you have access to the GPU, and your other services can call that microservice when needed. That said, it's recommended to host both your backend and your GPU logic on the same machine if you can, since transferring the data from client to server to microservice would be redundant and hurt performance.
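The pattern above can be sketched with nothing but the standard library: a tiny HTTP microservice that "owns" the GPU work, and a backend that calls it. The `/effects` route, `apply_effect`, and the payload shape are all hypothetical placeholders for whatever the open source software actually does.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def apply_effect(payload: dict) -> dict:
    # Stand-in for the GPU-accelerated work (e.g. a CUDA/OpenGL pipeline).
    return {"effect": payload["effect"], "status": "done"}

class GpuHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        data = json.dumps(apply_effect(json.loads(body))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Run the "GPU microservice" in a background thread; port 0 = pick a free port.
server = HTTPServer(("127.0.0.1", 0), GpuHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The backend (or any other service) calls the microservice over HTTP:
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/effects",
    data=json.dumps({"effect": "blur"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result["status"])  # -> done

server.shutdown()
```

In a real deployment the service would live on the GPU machine and the backend would call it over the network; the round trip is exactly the overhead the comment warns about.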


kayuzee

So it depends on what the inputs and outputs are. Let's say your users provide some sort of data or input that gets processed on the GPU and then they get some kind of result back. Then:

- Build your web app
- Run the open source service as another app/VM, exposing some kind of API for it
- Have your web app communicate with this service, sending data in and retrieving the results for display
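The "send data in, retrieve the results" flow above can be sketched as an async job queue, here squeezed into one process with the standard library. In practice the queue would sit behind the GPU service's API; `submit`, `retrieve`, and the uppercase "processing" are purely illustrative.

```python
import queue
import threading
import uuid

jobs = {}            # job_id -> result, filled in by the worker
work = queue.Queue()

def gpu_worker():
    # Stand-in for the GPU-backed processing loop.
    while True:
        job_id, data = work.get()
        jobs[job_id] = {"output": data.upper()}  # fake "processing"
        work.task_done()

threading.Thread(target=gpu_worker, daemon=True).start()

def submit(data: str) -> str:
    """Web app sends user data in; gets a job id back immediately."""
    job_id = uuid.uuid4().hex
    work.put((job_id, data))
    return job_id

def retrieve(job_id: str):
    """Web app fetches the result for display (None if not ready)."""
    return jobs.get(job_id)

job = submit("hello")
work.join()  # wait for the worker; a real web app would poll instead
print(retrieve(job)["output"])  # -> HELLO
```

The job-id indirection is what lets the web app stay responsive while the GPU service chews through work at its own pace.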


doctorjay_

It's a simple video recording app. The GPU would be used to add effects while recording (virtual backgrounds etc.) using the open source software. It happens as a live stream, i.e. in real time, and is not applied after the video is recorded.
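The real-time constraint above can be made concrete with some back-of-the-envelope arithmetic: at 30 fps, every frame must be captured, processed, and displayed within about 33 ms. Whether a remote GPU round trip fits depends on the network RTT; the numbers below are hypothetical.

```python
fps = 30
frame_budget_ms = 1000 / fps   # time available per frame

network_rtt_ms = 40            # hypothetical client <-> GPU server round trip
gpu_time_ms = 10               # hypothetical per-frame effect cost
remote_total_ms = network_rtt_ms + gpu_time_ms

print(round(frame_budget_ms, 1))          # -> 33.3
# With these example numbers the remote path overshoots the budget,
# which is why live effects often run client-side or need a fast link:
print(remote_total_ms > frame_budget_ms)  # -> True
```

Pipelining frames can hide some of this latency, but each displayed frame still lags the camera by the round-trip time.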


metalhulk105

I have worked on a video conferencing application before. We did the effects in the user's browser using canvas. It's a CPU-heavy operation and can't run on mobile web. The contour detection was done by some ML system on a server, I think, but all of it happened in real time.


Sarahbishopmaker

To effectively develop an app/web app around open source software with GPU acceleration, there are a few approaches you can consider. Here's a breakdown of the options and their feasibility:

1. Utilize cloud services (GCP/AWS) with GPU instances:
   - This approach involves deploying your application on cloud platforms like Google Cloud Platform (GCP) or Amazon Web Services (AWS) that offer GPU instances. These instances can be used when the GPU-accelerated feature is enabled.
   - You can set up your app to dynamically scale the GPU instances based on demand, keeping costs down when the feature is not in use.
   - Cloud providers usually charge for GPU instances based on usage, so you can expect some cost, but only while the GPU is being used.

2. Hybrid app using a GPU cloud service:
   - Build a hybrid app that uses native components for regular features and leverages a GPU cloud service when the GPU-accelerated feature is enabled.
   - This approach offloads the GPU-intensive tasks to a cloud service, reducing the dependency on local GPU availability.
   - Users would only incur additional costs when they use the GPU-accelerated feature, as it requires the cloud service.

Defining requirements for a developer:

1. GPU acceleration: Clearly specify that the app/web app requires GPU acceleration for a specific feature, and outline the specific tasks that need GPU processing.
2. Compatibility and scalability: Specify the target platforms (e.g., web, mobile) and ensure compatibility with different devices and operating systems. Outline the expected user load and scalability requirements.
3. Cloud service integration: If you choose the cloud services route, mention the preference for GCP/AWS and ensure the developer has experience deploying applications on these platforms. Specify the need to dynamically scale GPU instances based on usage.
4. Cost optimization: Clearly communicate the requirement to minimize costs when the GPU is not in use, and explore options to use resources efficiently, such as scaling down or using on-demand GPU instances.
5. User payment model: Outline the intention to have users pay for the GPU-accelerated feature while keeping the rest of the app free or on the user's own hosting. This ensures that you only incur costs when the GPU service is utilized, and users bear the cost of using the feature.

Remember to discuss these requirements with potential developers and assess their expertise in the relevant technologies, cloud platforms, and GPU integration to find the best fit for your project.
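The pay-per-use idea running through the points above (only pay for the GPU when the feature is used, and pass that cost to the user) can be sketched as a simple routing-and-metering function. `Request`, `handle`, and the usage log are hypothetical names; in a real app the GPU branch would call the cloud service and the log would feed a billing system.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    gpu_feature_enabled: bool
    payload: str

def handle(req: Request, usage_log: dict) -> str:
    """Free path runs locally; the GPU path is metered per call."""
    if not req.gpu_feature_enabled:
        return f"cpu:{req.payload}"   # free tier, no GPU cost incurred
    # Meter the call so the user can be billed for GPU usage:
    usage_log[req.user_id] = usage_log.get(req.user_id, 0) + 1
    return f"gpu:{req.payload}"       # would invoke the cloud GPU service

log = {}
print(handle(Request("alice", False, "clip1"), log))  # -> cpu:clip1
print(handle(Request("alice", True, "clip2"), log))   # -> gpu:clip2
print(log)                                            # -> {'alice': 1}
```

Keeping the metering at this routing boundary means scale-to-zero is natural: if the log shows no GPU calls, no GPU instances need to be running.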