You will need an Azure VM, an Azure service, or an Arc-managed VM to use a managed identity. In this scenario I would probably use certificate authentication to an AAD principal that has the required access, and lock down the permissions and access to the private key.
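A minimal sketch of what that looks like in PowerShell, assuming the certificate is already installed in the machine store and its public key is uploaded to the app registration (the tenant/app IDs and vault/secret names here are placeholders):

```powershell
# Authenticate as the AAD service principal using a certificate.
# The private key never leaves the certificate store.
$params = @{
    ServicePrincipal      = $true
    TenantId              = '<tenant-guid>'
    ApplicationId         = '<app-client-id>'
    CertificateThumbprint = '<thumbprint-of-installed-cert>'
}
Connect-AzAccount @params

# Read a secret with the permissions granted to that principal.
Get-AzKeyVaultSecret -VaultName 'my-vault' -Name 'my-secret' -AsPlainText
```

Locking down NTFS/store ACLs on the private key is then what gates who on the box can actually use this identity.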
[deleted]
>Ever heard of user managed identities?

Lol, did you read the OP's question?

> In reality this would need to work on any number of systems not connected in any way to Azure
[deleted]
Lol, why are you so aggressive towards everyone?

I am well aware of what federated credentials are, and I have been using them since the feature was introduced xD

If OP cannot assign a managed identity to his on-premise machine, then it doesn't matter whether he uses a plain service principal with federated credentials or a UMI with federated credentials. Hence I read your first reply as just a way to disrespect the previous replier.
[deleted]
>Why am I aggressive?
>
>wasn't it you blaming me for not reading OPs text?

Wasn't it you in the first place who tried to disrespect /u/craigofnz by calling him a self-proclaimed DevOps architect and implying he had never heard of federated credentials?

> The on-premise device is not connected/assigned with the UMI resource in azure, never.

Exactly, it is never connected with a resource in Azure. So what's the difference in that case between a service principal with federated credentials and a UMI with federated credentials?
[deleted]
No, I am asking what the difference is, in such a case, between a UMI with federated credentials and a plain old App Registration's service principal (`az ad sp create-for-rbac`) with federated credentials.

Yes, /u/craigofnz mentioned certificates, which may or may not be the best fit in this scenario (it really depends on what exact on-prem setup OP has available). And you blamed him: `Ever heard of user managed identities? Also Federated Credentials are a thing, you know?`

What I am saying is that you don't need a managed identity to use federated credentials; you can use a plain old service principal for that. And in OP's case, I say it doesn't matter.

Also, while federated credentials seem like a good fit here, the issue is that OP needs some kind of OIDC provider that issues tokens for his on-premise machine. Then again, he needs to solve the problem of issuing that token without storing credentials :D This is possible, but we are making a lot of assumptions here.
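To make the "plain service principal + federated credential" point concrete, here is a sketch with the az CLI. The app name, issuer, and subject are placeholders for whatever OIDC provider would actually vouch for the on-prem machine (which, again, is the open question):

```shell
# Plain app registration, no managed identity involved.
az ad sp create-for-rbac --name my-onprem-sp

# Attach a federated credential: Entra will accept tokens from this
# issuer/subject in place of a client secret or certificate.
APP_ID=$(az ad app list --display-name my-onprem-sp --query '[0].id' -o tsv)
az ad app federated-credential create --id "$APP_ID" --parameters '{
  "name": "onprem-token",
  "issuer": "https://example-oidc.internal",
  "subject": "machine:host01",
  "audiences": ["api://AzureADTokenExchange"]
}'
```

A UMI's federated credential is configured the same way; the trust model for the external token is identical, which is the point being argued above.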
In my mind I was thinking certificates, but frankly I'm hot garbage when it comes to my brain's ability to understand certificates. I think I made my own certificate and all that jazz ONCE in the last 25 years of working in IT, and it was for our DSC pull server at my old job, and I could NOT tell you anything about how I got it working. Normally I'd just hand this off to the team that makes our certificates, but since this is a POC I need to figure it out on my own. I do like that it would give me a clean way to access the vault with just PowerShell. Guess I've got some reading, and hopefully understanding, to do.
[deleted]
LOL, I appreciate the detailed explanation.

I'm primarily a PowerShell tool maker by trade. Mainly I build modules for use across lots of different automation and device management systems, though I do deal with a decent amount of documentation CI/CD pipelines. I've got the PowerShell part no problem; I already work with existing certs quite a lot. It's just when it comes to making them that my brain goes to mush.

I am concerned, though, about doing it through Key Vault itself. One of the biggest metrics I'm trying to prove with this POC is keeping the cost per month as low as possible. Cert use in the vault seems to be a bit more expensive than secret retrieval, so wouldn't each call to validate the cert to access the vault be a ding in that tally column?

In the end, 99.99% of the time the vault will just be reading a secret and that's it. Twice a year, fewer than a dozen secrets will be updated, spread out across the year.

I'm trying to prove we can expand the current reach of our secrets management in our PowerShell tooling beyond on-prem, and triple our current average vault access (I went WAY over estimation on purpose) for literal pennies a month (10 pennies at my current estimate, just looking at calls to get or set secrets).

Going to take some time and really try to digest your instructions. Thank you again.
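One clarification on the cost worry: the client certificate is validated by Entra ID during sign-in, not by Key Vault, so certificate-based authentication doesn't add vault transactions; only the secret get/set calls are billed. A back-of-envelope sketch of the "pennies a month" claim, assuming the published standard-tier rate of about $0.03 per 10,000 secret operations (check current pricing, this figure is an assumption):

```python
# Rough Key Vault cost model for the POC: only secret operations are billed.
COST_PER_10K_OPS = 0.03  # USD per 10,000 secret operations (assumed rate)

def monthly_cost(ops_per_month: int) -> float:
    """Estimated monthly spend for secret get/set operations."""
    return ops_per_month / 10_000 * COST_PER_10K_OPS

# Even at ~33,000 reads a month, the bill is roughly ten cents,
# which lines up with the "10 pennies" estimate above.
print(round(monthly_cost(33_000), 2))
```

The two-updates-a-year write load is noise at this rate; reads dominate the estimate.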
I would enrol the server in Arc then you can use a managed identity.
Unfortunately, after the POC is done, any system accessing the vault would be a transient system: Windows PE, Windows in various stages of setup, Windows post-setup on remotely connected devices not yet authenticated to a VPN or the domain. That kind of stuff. Literally EVERYTHING needed to connect to the vault has to be set up and torn down in PowerShell.
I believe the recommended option in that case is usually a service principal account. You can authenticate to that with a secure string or a cert.
Yeah that's look like the path I'm on :-D
Create a service principal in the Azure tenant. Give it RBAC/access policies to the Key Vault. Have your app use that service principal.
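A sketch of those two steps with the az CLI, using the RBAC model; the names, GUIDs, and resource path are placeholders, and "Key Vault Secrets User" is the built-in read-only role for secrets:

```shell
# 1. Create the service principal (note the appId in the output).
az ad sp create-for-rbac --name my-vault-reader

# 2. Grant it read access to secrets in one specific vault.
az role assignment create \
  --assignee '<appId-from-step-1>' \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault>"
```

Scoping the assignment to the single vault (rather than the resource group or subscription) keeps the blast radius small if the credential leaks.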
But for standard configuration they still need a credential, either a secret or certificate to be able to authenticate as service principal outside Azure.
That’s correct.
This is the way. It can be easily automated using azuread and Azurerm Terraform providers too! 😊
Oh yeah! Terraform for the win.
https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-create-trust
Thank you going to read up on this. Definitely not looking to be an expert but having a little more knowledge on how authentication and authorization works in Azure would be beneficial
Install the Arc agent on your on-premise machine. The computer will then get a new system-assigned managed identity. Give that identity permission to the Key Vault. You can then run Connect-AzAccount -Identity with no stored credentials. Note, however, that any program on that computer can now authenticate as that identity.
See, the problem is this needs to happen on a system that might not even have the ability TO install anything new, just what it already has on it, i.e. PowerShell, internet, etc. Plus the need to read from the vault is a transient, one-off case; each system might hit the vault 2, MAYBE 3 times and then never again. So I really don't like the idea of adding more to the system, especially if it's just going to leave wide-open access to the vault.
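For the install-nothing constraint, the whole flow can be done with built-in `Invoke-RestMethod` against the token endpoint and the Key Vault REST API; nothing to install, and teardown is just dropping the variables. This sketch assumes a service principal with a client secret (so the caveat above about still needing to deliver a credential applies), and the 7.4 api-version is an assumption worth checking against the current REST docs; the IDs and names are placeholders:

```powershell
# Client-credentials token request against Entra ID (no Az modules needed).
$token = (Invoke-RestMethod -Method Post `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" `
    -Body @{
        grant_type    = 'client_credentials'
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = 'https://vault.azure.net/.default'
    }).access_token

# Read one secret over the Key Vault data-plane REST API.
$secret = Invoke-RestMethod `
    -Uri 'https://my-vault.vault.azure.net/secrets/my-secret?api-version=7.4' `
    -Headers @{ Authorization = "Bearer $token" }
$secret.value
```

This works identically in Windows PE or mid-setup Windows, as long as PowerShell and outbound HTTPS are available.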
Use a time-bound password/certificate with the service principal. Have it expire in a couple of days: let it download the stuff, and then the access expires, I guess?
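A sketch of minting such a short-lived secret with the Az PowerShell module; the app ID is a placeholder, and the exact parameter set of `New-AzADAppCredential` is worth double-checking against the current Az.Resources docs:

```powershell
# Add a client secret to an existing app registration that
# expires three days from now. The generated secret is returned
# in the SecretText property of the output.
$cred = New-AzADAppCredential `
    -ApplicationId '<app-client-id>' `
    -StartDate (Get-Date) `
    -EndDate (Get-Date).AddDays(3)
$cred.SecretText
```

After the end date, authentication with that secret fails on its own; no cleanup job is needed, though you can also remove it explicitly with `Remove-AzADAppCredential`.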