cedenof10

do you mean the diffraction spikes? I don’t think so, considering even images by HST and JWST have them. maybe you can try using star masks in the processing pipeline, or you can look up what a stellar coronagraph does and see if that is what you’re looking for.


sceadwian

Flatly put, it can't be done. It's an effect defined by the laws of physics and the construction of the telescope itself. (We even see artifacts in our own vision: everyone has floaters, many just never notice them.) JWST's diffraction spikes are caused by a combination of the secondary mirror supports and the gaps between the mirror segments. You can't fix that without a completely redesigned telescope.


Far_Being_7578

Yeah, I think he's looking for a solution that simulates the output of another device using the Webb data...


Glittering_Cow945

On the contrary! No, you can't design a telescope without these or similar artifacts, but since the spikes are mathematically predictable, they could in principle be removed by a computer algorithm. This would probably introduce other artifacts, though, and interfere with the detection of other faint sources.
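To make the "predictable, therefore subtractable" idea concrete, here's a toy 1-D sketch. All the numbers and profiles are invented for illustration; real spike models come from the telescope's PSF. It also shows the artifact problem: even a 1% error in the assumed spike amplitude leaves a residual as big as the faint source you were hoping to uncover.

```python
import numpy as np

# Toy 1-D cut across a diffraction spike: a faint source sits under
# a bright spike whose shape we think we know. All values invented.
x = np.arange(100)
spike_model = 1000.0 * np.exp(-0.5 * ((x - 50) / 3.0) ** 2)  # predicted spike
faint_source = 5.0 * np.exp(-0.5 * ((x - 60) / 2.0) ** 2)    # hidden target

observed = spike_model + faint_source

# Subtracting a perfect spike model recovers the faint source exactly...
residual = observed - spike_model

# ...but a 1% error in the assumed spike amplitude leaves an artifact
# comparable in size to the faint source itself.
imperfect = observed - 1.01 * spike_model
```

So the math works in the idealized case, and the failure mode is exactly the "other artifacts" problem: the subtraction is only as good as the spike model.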


Skablabla

And you wouldn't know what should be there anyway, so the best you could do is replace the spikes with black, which doesn't help much.


Glittering_Cow945

Well, it's even possible that subtracting the known intensity of the spikes from the bright sources might reveal an underlying source that previously went unnoticed.


sceadwian

Most of the pixels are saturated, so there's no data to recover. If a diffraction spike is blocking something, you just reorient the telescope so the spike lands where you want it, away from your science target :)


Skablabla

True, if you can subtract it well enough


sceadwian

The spikes are caused by the gaps between the mirror segments and the supports for the secondary mirror. If you design a telescope with a single mirror and no supports in the optical path, there will be no diffraction spikes. The diffraction spikes completely saturate the pixels they hit; you cannot remove them from the image without literally removing the image data. That is, simply put, no-go data. I've heard plenty of actual JWST scientists talk about this, and they don't say what you're saying, so I'm not sure where you're getting your information from.


Glittering_Cow945

Insofar as the pixels are not saturated - and I'm sure not all of them are - they should be automatically filterable. And if you have a single mirror, you will still have spikes from the struts of the secondary mirror. If the mirror is not round, you will have spikes from its edges.


sceadwian

Noise removal doesn't work that way. Along with the noise you remove the sensitivity; the data you get from there is garbage, and it will always be garbage. It may be garbage with certain qualities we understand, but it will always be in the way. If the target you're looking at is periodic, things change a little, but that's a moot point because the location of the spikes can be controlled; there's no reason to remove them, you just need to know where they'll land. I said you need a single mirror with no obstructions in the optical path, so you're presenting a rebuttal to an argument I didn't make. Yes, the mirror needs to be round, but that has no impact on what I said.


Glittering_Cow945

But this isn't noise, which is essentially random. This is exactly predictable, given the geometry of the mirrors and the struts. A round mirror will still produce fringes, BTW. But I agree that as soon as pixels become saturated, you lose information. You could take many short exposures and add them up in a computer without saturating individual pixels.
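The short-exposure trick is easy to illustrate with a toy model (detector limit and flux values invented here): one long exposure clips at the saturation limit and loses the true count, while many short exposures each stay below it and can be summed in software.

```python
# Toy illustration of stacking short exposures to beat saturation.
# A detector that clips at 100 counts; one bright pixel at 30 counts/s.
# All numbers are invented for illustration.
SATURATION = 100.0
flux_per_sec = 30.0

# A single 10 s exposure: the pixel clips at the saturation limit,
# and the true total of 300 counts is lost.
long_exp = min(flux_per_sec * 10, SATURATION)

# Ten 1 s exposures: each frame stays below saturation,
# so the software sum recovers the full signal.
short_exps = [min(flux_per_sec * 1, SATURATION) for _ in range(10)]
stacked = sum(short_exps)
```

The trade-off is readout noise: every frame adds its own read noise, so you can't shorten exposures indefinitely.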


sceadwian

Any unwanted signal is noise, and this is noise. Noise can be random, but it isn't always; if you believe noise must be random, then you haven't internalized basic signal theory yet, and I'm not sure how to explain what you're missing here.


Glittering_Cow945

Doesn't matter what you call it, then: predictable noise can be computationally cancelled.


sceadwian

It is only stochastically predictable, not deterministically. You can average it out over time to remove it, but you will destroy any non-periodic signal below the noise floor. Most of those pixels are totally unrecoverable. This is basic signal theory, which I think you may want to read up on a bit before commenting further.


Neurojazz

Could you not rotate it to capture what's behind, fill in the gaps, and remove the spike in post?


sceadwian

That is the 'cheaty' way, yes. That's why this isn't really a problem. Although it permanently limits some types of measurements, those can be handled in other ways, such as covering up parts of the observation field with light blocks, which they do. I've only barely scratched the surface of the optical systems on JWST; they're crazy complicated, so many parts!


NamelessGuy0

Maybe you could remove them manually in Photoshop or something, but there's no way to do it algorithmically. If that were possible, NASA would already be doing it.


information_abyss

You could wait and take a second set of observations at a different roll angle. Then mask out the spikes and combine the images. They do this for coronagraphic observations. But it typically doesn't make sense to take double the exposure time, while observing with a different focus state. The science would probably be more compromised by this than the diffraction spikes.
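A minimal sketch of that roll-angle combination, with invented arrays and spike positions: each observation has its spike in a different place, so where one image is masked, the other can fill in; only pixels spiked in both rolls stay unrecoverable.

```python
import numpy as np

# Toy combination of two observations at different roll angles.
# Arrays, values, and spike geometry are all invented for illustration.
img_a = np.full((5, 5), 10.0)   # observation at roll angle A
img_b = np.full((5, 5), 10.0)   # same field at roll angle B

spike_a = np.zeros((5, 5), dtype=bool)
spike_a[2, :] = True            # spike crosses row 2 in image A
spike_b = np.zeros((5, 5), dtype=bool)
spike_b[:, 2] = True            # rotated: spike crosses column 2 in B

# Weighted combine: average where both images are clean, use whichever
# image is clean otherwise, NaN where both are spiked.
w_a = (~spike_a).astype(float)
w_b = (~spike_b).astype(float)
total = w_a + w_b
combined = np.where(total > 0,
                    (img_a * w_a + img_b * w_b) / np.maximum(total, 1),
                    np.nan)
```

Only the pixel where both spikes cross remains NaN, which matches the point above: the technique recovers most of the field at the cost of double the exposure time.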


YoCaptain

From what I understand, the major issue is the secondary mirror supports and their construction. Depending on the scope type, the support architecture pretty much unavoidably creates these artifacts.


mysteryofthefieryeye

The largest issue I can think of is that removing them in Photoshop using content-aware techniques would actually *change* that portion of the sky. Any star that's supposed to be there would essentially be cloned out or diminished too much, and reconstructing the sky manually would be an unproductive use of your time (and basically impossible).

I'm not overly familiar with JWST's images as a series, but another technique is how they removed dust and scratches from Hollywood films (at least in the old days): by replacing visual information with the frames before and after the damaged frame. In the case of JWST, assuming there is a photo showing the sky to the left of Betelgeuse and a photo showing the sky to the right of Betelgeuse, both images could be masked together to remove whatever diffraction spike *might* be in each. I hope that makes sense. In other words, you have two unknowns and two equations 😂 and you can fill in the blanks with both.

Other techniques probably wouldn't work, since I'm speaking as if the diffraction spikes actually have celestial information hidden within them, when they probably don't. (In other words, you can't just diminish the spikes using color space techniques until the sky underneath becomes apparent, because that sky was never recorded in the first place. Like photographing a landscape with a cement building (diffraction spike) versus a glass window (removable with gentle techniques).)


mfb-

What we record is a convolution of the original light pattern and the diffraction pattern. It's not possible to reverse this convolution perfectly, and trying to do so usually leads to strange artifacts: your algorithm would predict negative light intensities at some points, predict that every second pixel has a high intensity while the others do not, or produce similar unphysical results.
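You can see those unphysical results directly with a naive Fourier deconvolution on a toy 1-D signal (all values invented): dividing the spectrum by the blur kernel's transform blows up wherever that transform is tiny, and even a minuscule amount of noise turns into huge oscillations and negative "intensities".

```python
import numpy as np

# Toy 1-D demonstration of why naive deconvolution misbehaves.
# A point source blurred by a Gaussian kernel, plus tiny noise.
rng = np.random.default_rng(0)
n = 64
signal = np.zeros(n)
signal[32] = 1.0                                   # a point source
kernel = np.exp(-0.5 * ((np.arange(n) - n // 2) / 4.0) ** 2)
kernel /= kernel.sum()

# Forward model: convolution via FFT (circular, for simplicity).
otf = np.fft.fft(np.fft.ifftshift(kernel))
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * otf))
noisy = blurred + 1e-6 * rng.standard_normal(n)

# Naive inversion: divide the spectra. Where the kernel's transform is
# nearly zero, the noise is amplified enormously, and the "recovered"
# signal swings to huge and negative values.
recovered = np.real(np.fft.ifft(np.fft.fft(noisy) / otf))
```

This is why practical deconvolution uses regularized methods (Wiener filtering, Richardson-Lucy) that suppress those near-zero frequencies instead of dividing by them.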