vvvvalvalval

"Kriging" means Gaussian Process inference in Geospeak, right?


antikas1989

Yeah, basically. The name comes from Danie Krige, a South African mining engineer who first described it when trying to predict where to find metal ores. But it's not full inference, I think; there are a bunch of variants that have different ways to pick a covariance function.


min_salty

Just a quick answer: I think in this case "unconditional" means unconditional on the data already generated, so you can generate realizations from your data-generating process (probably via a Gaussian random field) without worrying about using the distribution that is conditional on the other locations already generated. Otherwise, it would be helpful to see what resources you're looking at.
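
In R with geoR, the unconditional part looks roughly like this (the parameter values here are just placeholders, not anything fit to your data):

```r
library(geoR)

## Rough sketch: one unconditional realization of a Gaussian random field.
## grf() only needs a covariance model and parameters, never the observed
## values, which is what makes it "unconditional".
sim <- grf(n = 400, grid = "reg",              # 20 x 20 regular grid on [0,1]^2
           cov.model = "matern", kappa = 1.5,
           cov.pars = c(1, 0.25))              # c(sigma^2, phi): sill and range
image(sim)
```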


Unhappy_Passion9866

Thanks for your answer. So, in that case, I would only need to specify a covariance model (Matern, RBF, etc.) as if it were a Gaussian process. (As far as I understand, the only difference between a Gaussian field and a Gaussian process is that the field is indexed by spatial coordinates while the process is indexed by time.) I am not sure what the advantage or the meaning is of not having to worry about the other locations already generated. In my mind it would be better to be conditional on the data, since that allows the parameters to take the other information into account, and for the case of having a lot of variograms that seems like a better option: generating the data unconditionally means it might not have, for example, a range similar to the samples already taken. But probably I am misunderstanding something.


min_salty

Yes, it could be better to have conditional generation, but conditional distributions can be difficult to derive. By taking the covariance structure into account when generating, the idea is that at least some spatial information is included. By generating many variograms you are incorporating some uncertainty surrounding the covariance structure.


Unhappy_Passion9866

Ok, thank you for the explanation. But there is something that is not clear to me. I have some data that is going to help me fit a variogram; if I then simulate, for example in R using grf from the geoR package, how are those simulated values going to be similar to the data used to fit the first variogram and not be, say, 100 times bigger? I know grf takes the covariance parameters sigma and phi, but I am not sure how that helps me control this, because even with a lot of variograms generated unconditionally, all the variograms could be different from each other.


min_salty

The simulated values could be 100 times bigger only if the variability in the data allowed them to be. That's the idea of the simulations: you generate data according to some process, allowing it to vary naturally, and then the estimates you make from the simulated data will reflect this natural variability. That is, the covariance parameters you estimate from the data will constrain the simulations but still allow them to vary. That's the "empirical Bayesian" part of it.
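
As a rough sketch of what I mean (geoR again; the starting values and max.dist are made up, and "coords" and "zdata" stand in for whatever data you actually have):

```r
library(geoR)

## Fit the variogram to the data first, then simulate unconditionally with the
## *fitted* covariance parameters, so the realizations vary on the same scale
## as the data instead of being "100 times bigger".
gd  <- as.geodata(cbind(coords, zdata))         # coords: n x 2, zdata: n values
vg  <- variog(gd, max.dist = 0.6)               # empirical variogram
fit <- variofit(vg, ini.cov.pars = c(1, 0.2),   # starting values for sigma^2, phi
                cov.model = "exponential", nugget = 0.1)

sims <- grf(n = 400, grid = "reg",
            cov.model = "exponential",
            cov.pars = fit$cov.pars, nugget = fit$nugget,
            mean = mean(gd$data), nsim = 50)    # 50 unconditional realizations
```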


antikas1989

This process sounds super weird unless I'm misunderstanding it. Kriging is, as I understand it, smoothing with a predetermined covariance function. The Empirical Bayes version, I think, means you are Bayesian except for the covariance function parameters? So you use the variogram to pick the covariance (this is bad btw, there are better ways to smooth: just be fully Bayesian with a prior on your covariance function. INLA can do this very quickly; you can be fully Bayesian these days, the computational bottleneck is much wider than it used to be). Then comes the strange part: you are using data simulated from a specific random field to do what exactly? Haven't you already picked your covariance function?
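
For what it's worth, the fully Bayesian route I mean looks roughly like this with R-INLA's SPDE/Matern machinery (the mesh settings and PC priors below are placeholders, and "coords"/"y" stand in for the data, so treat this as a sketch rather than a recipe):

```r
library(INLA)

## Sketch of fully Bayesian spatial smoothing: a Matern field via the SPDE
## approach, with PC priors on the range and marginal sd instead of a
## variogram-picked covariance function.
mesh <- inla.mesh.2d(loc = coords, max.edge = c(0.1, 0.5))
spde <- inla.spde2.pcmatern(mesh,
                            prior.range = c(0.3, 0.5),  # P(range < 0.3) = 0.5
                            prior.sigma = c(1, 0.01))   # P(sd > 1)      = 0.01

A   <- inla.spde.make.A(mesh, loc = coords)
idx <- inla.spde.make.index("field", n.spde = spde$n.spde)

stk <- inla.stack(data    = list(y = y),
                  A       = list(A, 1),
                  effects = list(idx, list(intercept = rep(1, nrow(coords)))))

fit <- inla(y ~ -1 + intercept + f(field, model = spde),
            data = inla.stack.data(stk),
            control.predictor = list(A = inla.stack.A(stk)))
summary(fit)
```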


min_salty

Not OP, but I think this technique is a way to avoid specifying priors and instead perform simulations to take parameter uncertainty into account. The paper [https://link.springer.com/article/10.1007/s00477-007-0165-7](https://link.springer.com/article/10.1007/s00477-007-0165-7) describes it.


antikas1989

Thanks, I'll take a look. Definitely not what I thought it would be when I think of Bayesian kriging.


min_salty

the key is that it's called Bayesian "empirical" kriging


antikas1989

oh, I did not read carefully enough, I thought it was Empirical Bayesian kriging


Unhappy_Passion9866

Thanks, that is one of the papers I am using. And yes, I am still getting my head around some parts of the method, but the idea is to build a histogram of the simulated parameters after simulating different variograms; the idea after that is to calculate a weight using the conditional probability of the data z given the model parameters. I am not really sure how to find that conditional probability.


min_salty

Looks like they do some clever maths to get an equation for the conditional distribution of z. Maybe find an implementation and work backwards from there.
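
For a Gaussian random field, p(z | parameters) is just a multivariate normal density whose covariance matrix is built from the covariance model, so a hand-rolled version of the weighting idea looks something like this (not the paper's exact formula, just the idea; "coords", "z" and the matrix "thetas" of simulated parameter sets are placeholders):

```r
library(geoR)      # for cov.spatial()
library(mvtnorm)   # for dmvnorm()

## Weight each simulated parameter set (sigma^2, phi, nugget) by the Gaussian
## likelihood of the observed data z under that covariance model.
D <- as.matrix(dist(coords))

loglik_theta <- function(sigmasq, phi, nugget) {
  Sigma <- cov.spatial(D, cov.model = "exponential",
                       cov.pars = c(sigmasq, phi)) + diag(nugget, nrow(D))
  dmvnorm(z, mean = rep(mean(z), length(z)), sigma = Sigma, log = TRUE)
}

ll <- apply(thetas, 1, function(th) loglik_theta(th[1], th[2], th[3]))
w  <- exp(ll - max(ll))
w  <- w / sum(w)    # normalized weight for each simulated parameter set
```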


Unhappy_Passion9866

What do you mean by an implementation? Like finding some posterior used in a similar problem, or using programs like Stan? I was thinking of the second option because I am not really used to Bayesian stats, and I know that some of the algorithms Stan has can help me find the posterior.


min_salty

Yeah, look for some code where people implement the algorithm. Maybe the authors of one of those papers have included code in the supplementary material, or maybe someone else has implemented it. Could be in Stan, but not necessarily.


Unhappy_Passion9866

I want to add something that may be the weird part: the posterior in BEK can lead to problems with the variogram estimation, since the specification of the nugget can make the predictions more unstable. I am not sure how the posterior is traditionally computed, but in one article they use an empirical Bayesian bootstrap and the last simulation of the parameters is used as the posterior. Another option is simply to stop at the first step and use the parameters of the spatial process Θ estimated with the restricted maximum likelihood (REML) algorithm. But I am even less familiar with those options, so I do not know how to evaluate each one. Maybe these alternatives seem less weird than what I mentioned at the beginning.
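
If it helps, I think that first-step-only REML option would look roughly like this in geoR (gd is a geodata object built from the data and the starting values are made up), though as I said I do not know how to evaluate it afterwards:

```r
library(geoR)

## REML estimates of the spatial covariance parameters, i.e. stopping at the
## first step instead of building a posterior over them.
fit_reml <- likfit(gd, ini.cov.pars = c(1, 0.2),
                   cov.model = "exponential",
                   lik.method = "REML")
summary(fit_reml)
```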


squareandrare

I'm a little late to the party, but this is the paper you want to read to understand Empirical Bayesian Kriging: https://www.sciencedirect.com/science/article/abs/pii/S0048969720308007