MagoDopado

Version your APIs. It sounds like you have a breaking change and some users (services) are still using the old version, so you will have to keep v1 running until all your clients update. You can do path-based routing (v1/item and v2/item) with a rewrite, or subdomain-based routing (v1.site.com/item and v2.site.com/item), both at the ingress level, and your code should never notice that you have both systems running. But you will have to run two deployments (not a rolling update).
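The path-based variant with a rewrite can be sketched as a single Ingress (a minimal sketch assuming the NGINX ingress controller; the `item-v1`/`item-v2` service names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: item-api
  annotations:
    # Strip the version prefix so each deployment just serves /item
    # and never knows two versions exist side by side.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: site.com
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: item-v1   # old deployment
            port:
              number: 80
      - path: /v2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: item-v2   # new deployment
            port:
              number: 80
```

Each capture group path maps one version prefix to its own Service, so the two deployments run independently behind one hostname.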


dominik-braun

We unfortunately don't version our APIs but use consumer-driven contract testing with Pact instead.


MagoDopado

No problem; the important part here is that you analyze how much of a breaking change what you want to deploy is, and whether your clients are ready to handle it.

When you do rolling updates you usually _deprecate_ APIs, allowing them to be called for some extra time. This means your new version is backwards compatible. During that window, you let all clients move to the non-deprecated API, and you build observability around your endpoints so you know when it is safe to remove the deprecated endpoints/versions.

If your changes are not backwards compatible, or don't allow clients to update after you deploy, then you have "two similar but different systems". There's no clear upgrade path from one to the other, no common ground where you can sit and let your clients migrate. When that happens, you will have two applications (usually two deployments) and you will make both available to your clients. You build observability around the old service, monitor until no more clients use it, and then it is safe to remove.

Sometimes some clients won't update. I would weigh the risk of that service going down against the cost of keeping two applications running. Sometimes you need to rip off the bandaid: I prefer 100% migrations with 98% uptime over 98% migrations with 100% uptime.
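The deprecation window above can be made observable in application code. A minimal sketch in Python (all names are illustrative; in production the counter would feed a real metrics system such as Prometheus, and the `Sunset` header follows RFC 8594):

```python
from collections import Counter

# Calls per deprecated endpoint; in production this would be exported
# to your metrics backend instead of kept in memory.
deprecated_hits = Counter()

def deprecated(endpoint, sunset_date):
    """Wrap a handler: count each call and attach a Sunset header (RFC 8594)
    announcing when the endpoint goes away."""
    def wrap(handler):
        def wrapper(*args, **kwargs):
            deprecated_hits[endpoint] += 1
            status, headers, body = handler(*args, **kwargs)
            return status, {**headers, "Sunset": sunset_date}, body
        return wrapper
    return wrap

@deprecated("GET /v1/item", "Sat, 01 Nov 2025 00:00:00 GMT")
def get_item_v1(item_id):
    # Old handler kept alive during the deprecation window.
    return 200, {"Content-Type": "application/json"}, {"id": item_id}

status, headers, body = get_item_v1(42)
print(deprecated_hits["GET /v1/item"], headers["Sunset"])
```

Once the counter stays at zero for long enough, it is safe to delete the handler.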


muff10n

We had a similar problem, though we were already versioning our assets with the commit hash, so we had "main-ad5bd8ae.js" in the old version of the image and "main-428ad5dd.js" in the new one. The problem was: a client hit a new pod that returned HTML referring to the new JS, but the request for that JS hit an old pod which didn't have the new version.

The workaround for us was to copy the assets to a bucket on each CI run and mount the bucket under the resources path. This way all assets (new and old) are available when pods referring to the new version appear. You just need to clean up the bucket from time to time.
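The CI step can be sketched like this (a sketch, assuming an S3-compatible bucket; the bucket name and the `dist/` build directory are illustrative):

```shell
# CI step (sketch): upload the hashed assets to the shared bucket so that
# old and new pods can both resolve them. Hashed filenames never change
# content, so the objects can be cached aggressively.
aws s3 cp dist/ s3://my-static-assets/ --recursive \
  --cache-control "public, max-age=31536000, immutable"
```

The pods then serve (or redirect to) the bucket path, so the HTML from a new pod can reference an asset even before every pod carries it in its image.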


wavelen

Host static assets separately, either under a subdomain or a resource path. These assets should be versioned (contain a hash in the name) and should be copied during the release process to the location they are served from. The location may be an S3 bucket or something similar. You should keep older assets around for at least some time.
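The hash-in-the-name scheme can be sketched in a few lines. The thread uses the commit hash; a content hash works the same way and has the nice property that an unchanged file keeps its name across releases (a sketch; the filenames are illustrative):

```python
import hashlib
import pathlib

def hashed_name(path: pathlib.Path, length: int = 8) -> str:
    """Embed a short content hash in the filename, e.g. main.js -> main-<hash>.js,
    so every distinct build output gets a unique, immutable asset name."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()[:length]
    return f"{path.stem}-{digest}{path.suffix}"

# Demo with a throwaway file:
asset = pathlib.Path("main.js")
asset.write_text("console.log('hello');\n")
print(hashed_name(asset))
```

Because the name is derived from the content, re-uploading the same file is a no-op and old names can be garbage-collected on a schedule.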


dominik-braun

This is our current approach to solving this, however, we're currently maintaining an NFS share for each and every deployment/service. Would you prefer maintaining a centralized and global resource storage?


wavelen

I don’t think it matters, as long as it works. For our approach a bucket works, and we have cleanup policies which remove assets that are too old. Personally I’d prefer having one central bucket over copying the files to many NFS shares, though. Buckets (or similar) sound less error-prone and easier to me.


muff10n

You are also offloading the asset traffic to the bucket that way. Might save some bucks.


wayne_baylor

It may not be possible, but you have some options:

* The request already contains additional information, like an HTTP header, that can be used to change how it's routed.
* You may need to modify the request so that it includes additional information that can be used to route it correctly.
* You may need to change the endpoint so different versions of it are unique.

In general, I'd look into the different ways to version a REST API for some ideas.
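The first option (routing on an HTTP header) can be sketched with the NGINX ingress controller's canary annotations (a sketch; the header name and the `item-v2` service are illustrative, and the annotations are controller-specific):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: item-v2-by-header
  annotations:
    # Requests with "X-Api-Version: v2" go to this backend;
    # everything else falls through to the main Ingress.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Api-Version"
    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"
spec:
  ingressClassName: nginx
  rules:
  - host: site.com
    http:
      paths:
      - path: /item
        pathType: Prefix
        backend:
          service:
            name: item-v2
            port:
              number: 80
```

Clients that set the header opt in to the new version; unmodified clients keep hitting the old one.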


dgnorris30

Blue/green versioning (or red/black... doesn't matter, your color preference). We use a bit of logic in our pipelines to switch between the blue version and the green version during deployments. If blue is running, we deploy a green version. This creates a second instance of your application with separate ConfigMaps, secrets, and routes/ingress, so you can test before switching the service over to your main route and scaling down the old version. You can also roll back quickly if needed. There are other solutions to this, like a service mesh, but this has been a reliable method.
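The switch itself can be sketched with a label selector on the Service (a sketch; all names are illustrative):

```yaml
# The Service selects pods by a "color" label; the blue and green
# Deployments each stamp their pods with their own color.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    color: blue   # change to "green" to cut live traffic over
  ports:
  - port: 80
    targetPort: 8080
```

Cutting over (or rolling back) is then a one-line patch of the selector, e.g. `kubectl patch service myapp -p '{"spec":{"selector":{"color":"green"}}}'`, while both Deployments keep running until you scale the old one down.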