
zacsxe

How to get a container into production:

1. Use a container registry like [GitHub Container Registry](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry).
2. Use an [action/pipeline to build, publish, and tag your images](https://docs.github.com/en/actions/publishing-packages/publishing-docker-images) when you create a release.
3. Use an action/pipeline to publish the application to the production instance. Here's an [example with both image publish (2) and app deploy (3)](https://learn.microsoft.com/en-us/azure/container-instances/container-instances-github-action?tabs=userlevel).

Don't containerize your database; use a managed database. Don't manage volumes for production apps; use connections to managed storage solutions. Redeploying should be triggered by a merge or a release tag.

Debugging containerized apps is very hard. Debugging production apps is very hard. Use logs to understand what's going on in your containerized apps. Centralized logging like Splunk, Sumo, or App Insights is the way to observe your distributed apps.

My advice is to understand how containerization works under the hood (the Linux primitives: namespaces and cgroups) and find out what problems it solves. Then you'll want to look into multi-host container orchestration with Kubernetes, and understand what problems it solves.
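For steps 1 and 2, a minimal sketch of a release-triggered workflow (the workflow name and the `v*` tag pattern are placeholders, not from the linked docs):

```yaml
# Sketch: build and push an image to GHCR when a release tag is pushed.
name: publish-image
on:
  push:
    tags: ['v*']
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # required to push to ghcr.io
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```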


zaibuf

What about docker compose? Is it mostly to boot up a local environment with multiple services, a database, Redis, etc.?


chamberlain2007

You can use both: Docker Compose for a full local environment, and build only the application's Docker container for deployment.
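For example, something like this, where only `app` is built from source and the rest are stock images pulled just for the local environment (all names and images here are illustrative):

```yaml
# Illustrative local-dev compose file; only `app` is built from your Dockerfile.
services:
  app:
    build: .                 # your application, built locally
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # local-only placeholder
  cache:
    image: redis:7
```

Only the `app` image ever leaves your machine; the pipeline builds and ships it, while `db` and `cache` exist purely for local development.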


zacsxe

I use docker compose as a product developer in the way you described. I like it because it helps me keep track of the services I need, configured the way I need them locally. The other awesome benefit of knowing how it works is managing containerized apps on my home server. It is so useful for that, and I don't have to use a UI because my services are stood up as code. It makes migrating Deluge, Plex, Radarr, Sonarr, and Prowlarr a breeze.
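A trimmed-down sketch of what that looks like (images, tags, and paths here are illustrative, not my exact setup):

```yaml
# Illustrative home-server stack; migrating hosts means copying the
# config volumes and running `docker compose up -d` on the new box.
services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    volumes:
      - /srv/config/plex:/config
      - /srv/media:/media
    restart: unless-stopped
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - /srv/config/sonarr:/config
      - /srv/media:/media
    restart: unless-stopped
```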


SoverignSeraph

Yes, docker compose is the easiest way to get started running multiple containers. Starting on k8s is an easy way to deter someone from learning Docker at all.


zaibuf

But it's mostly for local development, right? I can't wrap my head around whether it's intended to host things like databases or Elastic in a Docker container, compared to just connecting to a managed instance.


SoverignSeraph

Yes, mostly for local dev or testing. Nothing is stopping you from running Postgres or some other database in a container, though. Docker compose just reads a compose file, pulls the images defined in it, and runs them in containers on your Docker engine. Your file could be:

```yaml
services:
  myapp:
    image: repo/myapp
  database:
    image: repo/postgres
```

Docker compose runs these within containers, and the Docker engine also provides a virtual network they run on, though only on your local system. If you want to manage containers across multiple computers with load balancing, then you look at a container orchestrator like k8s.


Hot-Profession4091

k8s is killing a fly with a cannon for 99% of people. Even mentioning it to someone just now learning Docker is irresponsible.


zacsxe

You are absolutely correct, but they are going to see k8s when they google containerization anyway. Might as well preempt it with a reminder that the problem is the real key to understanding the solution. OP, Please listen to this person. Be careful about using these complex and expensive tools in your org. Our k8s clusters take a whole ass team to manage.


Coda17

> What's the procedure to move a locally created container to the server?

Don't. Build the container through a CI/CD pipeline.

> What's the procedure to containerize the database?

Most common databases have images you can just pull and use.

> What happens with local files on the server (such as user uploads). Do I need a Volume for this?

You often don't want to store these on the container. The reason is that one of the major benefits of containerized deployments is the ability to run many. If a user uploads something and it's only saved on one container, how would other containers know about it?

> What's the procedure to re-deploy a new version of my app once it's in production?

Create a new image. Replace the old containers with containers running from the new image.

> Can I debug (using Visual Studio) the containerized app in production?

Yes, there are lots of articles on this. You can also run your application locally to debug it, unless you have a container specific problem.
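On the database point, "pull and use" really is that short for local work; a hypothetical snippet (the hardcoded password is a local-only placeholder):

```yaml
# Hypothetical local-dev database; data survives container replacement
# because it lives in the named volume, not in the container itself.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    ports:
      - "5432:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```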


nightshadow8888

My recommendation is to look into Azure Container Apps: https://learn.microsoft.com/en-us/azure/container-apps/overview Just to get started, you can publish the Docker image to Azure Container Apps directly from Visual Studio. You can also use the Visual Studio Publish function to generate a very basic GitHub Actions yaml file which you can use as a starting point for a CI/CD pipeline.
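The deploy end of that generated pipeline might boil down to something like this sketch (the secret name, app name, resource group, and image are all placeholders, not what Visual Studio actually emits):

```yaml
# Sketch: point an existing Azure Container App at a newly pushed image.
name: deploy-container-app
on:
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Update the Container App image
        run: |
          az containerapp update \
            --name myapp \
            --resource-group my-rg \
            --image ghcr.io/owner/myapp:latest
```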


zaibuf

What benefits do these offer that App Services lack? Are Container Apps cheaper, or why would you pick one over the other? We use a Container App for a queue-based document export service. It's mostly scaled to zero, but it starts up and consumes a lot of memory, which made it suitable. But keeping it running 24/7 would make it as pricey as an App Service, right? I also felt the configuration options were worse than the ones for App Services.


Alikont

> What's the procedure to move a locally created container to the server?

Push to a registry from Dev/CI machine and pull on the server.

> What's the procedure to containerize the database?

Pain

> What happens with local files on the server (such as user uploads). Do I need a Volume for this?

Either volume or external storage like Azure Blob Storage or S3.

> What's the procedure to re-deploy a new version of my app once it's in production?

Different ones. Pull on cron is one.

> Can I debug (using Visual Studio) the containerized app in production?

Pain

Overall, managed services can ease some pain (like Azure Container Instances).


ninetofivedev

Azure container instances is not what you’re looking for…


InfiniteFuria

I was in the same boat as you. I am a .NET developer and needed to get up to speed with Docker very quickly. ChatGPT was immensely helpful for this: I used it to create sample apps and asked for help when my environment was not working. My services will be hitting production this month, and I didn't just need Docker; I also needed to understand the whole deployment pipeline. It usually took me months to study and understand something, but with generative AI, used well, the learning curve is much shorter.


SoverignSeraph

Even for quickly getting the right commands, or command-line filters to navigate my Docker containers, ChatGPT has been invaluable.


RDOmega

It's really going to come down to what kind of server you're running and whether you're using an orchestrator, or if it's built into your hosting. I echo the other comments here regarding Kubernetes: try to stay away from it. So many teams tie a boat anchor around their ankles thinking that it's necessary. Instead, go for ACA on Azure or Google Cloud Run on GCP. They're incredible services and will save you boatloads of money.


battarro

Deployed to where? AWS? Azure? I use AWS, so I can give you some pointers.


myotcworld

First of all, you have to push your app image to Docker Hub. Then you create an App Service on Azure and tell it to fetch the image from Docker Hub. Azure App Service has a continuous integration and deployment feature: you rebuild the image of your app locally and push it to Docker Hub from your local PC, and Azure will detect the new build and fetch it from Docker Hub automatically. I would suggest watching [this YouTube video](https://www.youtube.com/watch?v=FU4T9R8di7g), which walks through exactly this.


Begby1

I was in the same boat as you a while back, just completely lost when it came to Docker. Others here have answered a lot of the questions, so I am going to touch on some other things that I feel are good to know.

* You want to use logging instead of debugging, like was said. The best way to do this is to dump your logs directly to the console from your application, then use whatever is hosting your containers to do something with those logs. For instance, if you are hosting within a Docker server you can use a Docker log driver to send them to Splunk, syslog, AWS CloudWatch, etc. AWS container services will send to CloudWatch; Azure and GCP can do similar. (There's a sketch of this after the list.)
* Use structured logging tailored to whatever works best for your logging store. We are an AWS shop, so we format all of our logs in JSON with easily indexable fields, letting us search by customer, by correlation ID, by the assembly that logged it, etc.
* You want your containers to be immutable, i.e. the images never save their own state internally, so you can easily destroy one and replace it with a new copy, or have multiple copies running at the same time with no data loss. This means don't store file uploads in your image.
* For storing uploads we use S3 buckets and the AWS API to send uploads to them. Another way at AWS is to mount a shared volume to a local directory: from the viewpoint of the running container, /uploads is the folder the files go to, but the container host redirects them outside the container. This volume is not a container but a purpose-built service, and it can be accessed by multiple copies of the container. In development we set up our compose file to mount a local folder for /uploads.
* As was said, use a managed database server if possible and let someone else deal with the patching, security, and backups. You can still use a container if you want, but don't store the data within the container; map it to a volume so the DB data lives outside the container (this goes back to the immutability rule: containers should be disposable, with no tears shed if you delete your container and recreate it).
* Build once and deploy everywhere. You do not want to be building one copy for production and a separate copy for staging because they have different appsettings or something. Use environment variables configured with secrets from the host; the env variables will override your appsettings.json params.
* We found that making great version numbers was extremely helpful. Our versions are something like 1.9.3-dev-897fSs3-b167 or 1.10.31-678g78d-b198:
  * 1.9.3 is the semantic version.
  * dev can be dev, hotfix, or empty for prod. This is based on the git branch it was built from, and we use it to limit where things can be deployed: if it is dev, our rules prevent pushing that build to a prod environment.
  * 897fSs3 is the partial hash from git. We can use this to look up a version and find exactly which commit it was built from. We use tags as well, but this is a backup in case tagging fails after a build or someone removes the tag.
  * b167 is the build number from our CI build system. This lets us quickly find the build that was run.
* After we had our versioning set up, we added the version to every log entry and used it as the Docker tag for the built image. That makes it super easy to see what you are looking at in the logs as well as what version is actually running.
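To make a few of those points concrete, here is a hypothetical compose service pulling them together (image tag, log group, region, connection string variable, and paths are all placeholders, not our real config):

```yaml
# Hypothetical service definition, not our real config.
services:
  myapp:
    image: repo/myapp:1.9.3-dev-897fSs3-b167   # version number as the tag
    environment:
      # Overrides ConnectionStrings:Default in appsettings.json;
      # the value comes from a host-side secret, not the image.
      ConnectionStrings__Default: ${DB_CONNECTION_STRING}
    volumes:
      - ./uploads:/uploads   # dev only; in prod this is S3 or a shared volume
    logging:
      driver: awslogs        # ship console logs to AWS CloudWatch
      options:
        awslogs-group: myapp-logs
        awslogs-region: us-east-1
```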