
The sample images show significant vignetting and spherical aberration, and the focal plane is curved rather than flat across the field of view. There is indeed no obvious chromatic aberration, but I suspect that's because the other aberrations dominate; there's no mention of the lens's MTF/OTF or any meaningful tests.

One of the other commenters has pointed out that the lenses in question come from this telescope kit (https://www.surplusshed.com/pages/item/T1686.html), which is consistent with a reasonably good on-axis (paraxial) design but poor off-axis performance, like most telescopes. I presume these are two apochromatic doublets [1], but it's a bit hard to tell.

[1] https://en.wikipedia.org/wiki/Apochromat


I wonder how well software could fix that. Could one of the firmware hacks be modified to correct for the lens in-camera? Software correction seems like it would open up a lot of possibilities for DIY lenses.


If you can take some pictures of a known reference field, then you can certainly get a significant improvement [0].

[0] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.37...
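
For the vignetting part specifically, the simplest version of this is a flat-field calibration. A minimal sketch in Python (the file names are placeholders; it assumes you can shoot a uniformly lit white card through the lens):

    import numpy as np
    import imageio.v3 as iio

    # A shot of a uniformly lit white card records the lens's
    # vignetting: bright in the centre, darker at the edges.
    flat = iio.imread("white_card.png").astype(np.float64)
    img = iio.imread("photo.png").astype(np.float64)

    # Turn the flat frame into a per-pixel gain map
    # (the clip avoids division by zero in dead pixels).
    gain = flat.mean() / np.clip(flat, 1e-6, None)

    # Divide the vignetting out of the photo.
    corrected = np.clip(img * gain, 0, 255).astype(np.uint8)
    iio.imwrite("corrected.png", corrected)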


That information is completely lost. It could be guessed at by something like DALL-E, but I don't think that's helpful for assessing a lens.

A spherical lens produces a curved (roughly spherical) surface of focus rather than a flat one. The best way to correct for this is a correspondingly curved sensor.

Canon and Sony have indeed been working on curved sensor manufacturing, but it seems like a moonshot.


I don't think this is correct. The lens convolves the image with a PSF (point spread function). If you have a known target image you can estimate the PSF, then apply its inverse as a correction (deconvolution).
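
As a toy sketch of that idea (the Gaussian PSF here is a stand-in for whatever you'd actually measure from a known target; scikit-image's Wiener filter does the deconvolution):

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage import restoration

    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))  # stand-in for the sharp scene

    # A small Gaussian PSF playing the role of the lens blur.
    x = np.arange(-7, 8)
    g = np.exp(-x**2 / (2 * 2.0**2))
    psf = np.outer(g, g)
    psf /= psf.sum()

    blurred = fftconvolve(scene, psf, mode="same")

    # Wiener deconvolution: `balance` trades sharpness against
    # amplified noise; in practice you tune it.
    restored = restoration.wiener(blurred, psf, balance=0.01)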


Spherical aberration means the point of best focus deviates across the image, so there is only one point in the centre where the image can truly be in focus. So although the image can be improved in the way you describe, you can't recover the information that's out of focus.


There's a bit of literature out there claiming the opposite. Certainly if you can capture multiple images at different focus (Z) positions you can correct it that way, but there are also methods that work directly on a single image to deblur it (i.e. bring it back into focus).
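
For the multi-Z route, the usual trick is focus stacking: keep each pixel from whichever slice is locally sharpest. A rough sketch (the function name and smoothing radius are my own choices, not from any particular paper):

    import numpy as np
    from scipy import ndimage

    # `stack` is (n_slices, H, W) grayscale, one slice per focus
    # position; capturing and aligning the slices is up to you.
    def focus_stack(stack):
        # Local sharpness: magnitude of the Laplacian, lightly
        # smoothed so single noisy pixels don't win.
        sharpness = np.stack([
            ndimage.gaussian_filter(np.abs(ndimage.laplace(s)), 2.0)
            for s in stack
        ])
        best = np.argmax(sharpness, axis=0)  # sharpest slice per pixel
        return np.take_along_axis(stack, best[None], axis=0)[0]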


If these methods work, they could make you billions. I’d say go after it!


I know next to nothing about how sensors are manufactured, but we do have the technology to build flexible displays. So what are the major challenges in building "flexible" (for lack of a better word, obviously they would be fixed in place) sensors?



