Better Quality, Quicker
As the old adage goes, you can do something quickly, or you can do it well. The point is well taken, but here at DT, we’ve made our name providing clients both. Today we’re going to explore Z-Stacking, one of the many advanced imaging techniques that we’ve made fast and easy, with the same uncompromising quality our clients have come to expect.
One of the most challenging tasks in precision imaging today is capturing detailed subjects with significant depth. Any high-end camera can render sharp images in the X and Y dimensions (along the length and width of an object), but maintaining a sharp image along the Z axis (its depth dimension, into and out of the subject) pushes the limits of even the best optical systems (we’ll discuss why in just a minute).
Just take a look at the image below – the base of the antennae and some of the scales here and there are nice and sharp, but much of the wing and eye detail isn’t in focus. Don’t just take my word for it though – click on the image to view it in full, zoomable 100-megapixel form and see for yourself.
So there we have it – with conventional imaging, only a small section of the field is sharp.
Before we move on, you might be thinking that this is an intentionally bad “before” image, designed to exaggerate the problem, but I promise it’s not. In fact, this EXACT image you’re looking at right here is used to help generate our final result. You’ll see how in a bit.
Today, we’re going to show you how to overcome optical limitations using a combination of DT’s cutting-edge hardware and powerful computational imaging techniques.
And better yet, we’re going to show you how to automate the whole process, while actually IMPROVING your overall image quality.
(For those of you to whom it’s relevant, we’re talking about automated FADGI 4-star and Metamorphoze Strict compliant imaging. Do I have your attention now?)
Now how are we going to do all this? Through the magic science of Z-Stacking, that’s how!
Why can’t we just do it using normal photography methods?
Before we start talking about the solution, let’s talk about the physics that make this process so difficult. If you’re familiar with depth of field and diffraction, feel free to skip ahead to the next section!
A large part of what makes imaging subjects with depth difficult at high magnifications comes from the competing effects of two phenomena, known as depth of field and diffraction.
When capturing an image, depth of field describes the thickness of the slice along the Z (depth) axis that appears acceptably sharp. This depth of field becomes thinner and thinner as magnification increases, meaning that less and less of our subject is in focus.
We can increase our depth of field by decreasing the size of our aperture, but beyond a certain point, a strange thing starts to happen – our image quality begins to degrade and our subject looks blurry!
This loss of image quality is the result of our second effect, diffraction – the spreading out of light as it passes through a small aperture. While stopping down the aperture cuts out light coming in at off-axis angles, rendering more of the image in focus, making the aperture too small causes light that would otherwise travel in a straight line to spread out into a cone, similar to how placing your finger over the end of a hose makes the water fan out into a wider spray.
These competing effects mean that one has to choose between a sharp, thin slice of an image and a blurrier, thicker section. Neither is acceptable for critical imaging, so a more clever solution is necessary.
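The tradeoff above is easy to see with the standard close-up approximations. A quick sketch in Python – the specific numbers here (1:1 magnification, a 3.76 µm pixel pitch used as the circle of confusion, green light) are illustrative assumptions, not measurements from any particular DT setup:

```python
def depth_of_field_um(f_number, magnification, coc_um):
    """Approximate total depth of field: 2 * c * N * (m + 1) / m^2."""
    return 2 * coc_um * f_number * (magnification + 1) / magnification**2

def airy_disk_um(f_number, magnification, wavelength_um=0.55):
    """Diffraction blur (Airy disk diameter) at the effective aperture."""
    n_eff = f_number * (magnification + 1)  # bellows-corrected f-number
    return 2.44 * wavelength_um * n_eff

m = 1.0          # 1:1 reproduction ratio
pixel_um = 3.76  # assumed pixel pitch, doubling as the circle of confusion

for n in (4, 8, 16, 32):
    dof = depth_of_field_um(n, m, pixel_um)
    airy = airy_disk_um(n, m)
    print(f"f/{n:<3} DoF ~{dof:6.0f} um   diffraction blur ~{airy:5.1f} um")
```

Stopping down from f/4 to f/32 grows the sharp slice roughly eightfold, but the diffraction blur grows right along with it, ballooning far past the pixel pitch – exactly the degradation described above.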
As imaging techniques have continued to improve in the digital age, new computational methods have made it possible to extract content from a set of source images and produce composite images that could never be captured naturally. Z-Stacking (also called Focus Stacking, or Extended Depth of Field) is one example of these powerful computational imaging methods.
There are a handful of complex Z-Stacking algorithms out there (with all sorts of intimidating names), but the basic concept is relatively straightforward: a stack of images focused at different locations of an object can be mathematically combined into a single, fully focused image.
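To make that basic concept concrete, here’s a deliberately naive sketch of the idea in Python with NumPy. This is not Helicon Focus’s algorithm (or any commercial one) – just the core intuition: measure local contrast in every frame, and for each pixel keep the value from whichever frame is sharpest there.

```python
import numpy as np

def focus_stack(frames):
    """Toy Z-stack merge: for each pixel, keep the value from the frame
    where local contrast (absolute discrete Laplacian) is highest.
    `frames` is a list of aligned 2-D grayscale arrays of equal shape."""
    stack = np.stack(frames)  # shape (n_frames, height, width)
    # Discrete Laplacian of each frame as a per-pixel sharpness measure
    lap = np.abs(
        np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1) +
        np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2) -
        4 * stack
    )
    best = np.argmax(lap, axis=0)  # index of the sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real implementations add frame alignment, scale compensation, and smarter blending to avoid halos at depth boundaries, but the select-the-sharpest-source principle is the same.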
Below you can see one of these Z-Stacking programs (in this case, one called Helicon Focus) in action: 25 frames (including the one you saw above) are combed through bit by bit to detect in-focus portions, resulting in the black-and-white model being developed on the right. Eventually, these will be compiled into a single image.
This incredibly powerful method makes generating razor-sharp, completely-in-focus images of objects with significant depth possible, overcoming the limitations of diffraction and narrow depth of field!
Problem solved, right?
Unfortunately, it’s not quite as simple as it sounds. The quality of the final image depends highly on the quality of the input images, and generating a good set of input images involves taking dozens of shots at focal planes that are sometimes only microns apart. This requires equipment with an extraordinary amount of stability and precision to ensure proper adjustment between shots, as even the slightest misalignment or vibration can lead to bad composites or blurry images.
As it is with many advanced algorithms, if we don’t have good quality images to begin with, we won’t get good quality results. In short, Z-Stacking is what we lovingly refer to as a “GIGO” process:
Garbage In, Garbage Out.
So while Z-Stacking holds great promise for producing useful images that would otherwise be impossible to capture, doing it effectively requires an exceptional camera on a stable platform, with a precise way of adjusting the camera position to sweep the plane of focus across the subject.
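How many shots does “dozens” actually mean? A back-of-the-envelope planner makes the requirement concrete – the numbers here (a 5 mm deep subject, a 250 µm sharp slice, 25% overlap between slices) are assumptions chosen for illustration:

```python
import math

def plan_sweep(subject_depth_um, dof_um, overlap=0.25):
    """Plan focus positions to sweep a subject along Z: step by the depth
    of field minus a safety overlap so adjacent slices share sharp detail."""
    step = dof_um * (1 - overlap)
    n = math.ceil(subject_depth_um / step) + 1
    return [round(i * step, 1) for i in range(n)]

# A 5 mm subject with a 250 um sharp slice needs 28 frames, 187.5 um apart
positions = plan_sweep(subject_depth_um=5000, dof_um=250)
print(len(positions), "frames, step =", positions[1] - positions[0], "um")
```

Every one of those sub-millimeter steps has to land without disturbing the camera – which is exactly why the hardware below matters.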
That’s a pretty tall order.
It turns out that there are a few commercially available solutions out there already, but they all have some sort of shortcoming – whether it’s a lack of precision, limited automation options, or poor software integration, there’s always a compromise.
And if you’ve worked with us before, you know we don’t do compromise.
It was clear that we needed something better for our clients – something more precise, robust, and easy to use. So we took our technology and expertise, and set out to make a better Z-Stacking system.
Here’s what we came up with:
For stable support, the DT Atom Rapid Capture Station was the obvious starting point. The Atom is DT’s most versatile imaging station, with a machined-flat, inert tabletop for handling sensitive materials, broad-spectrum Photon lights, and a light box configuration for transmissive imaging. Custom built in the US from aerospace-grade aluminum, the Atom provides the critical stability necessary for high resolution imaging at extreme magnifications.
Then we needed to choose a camera, and we knew that only the best would do. The iXG 100MP is DT’s flagship camera, developed in conjunction with Phase One, the world leader in high quality camera solutions. With a 100MP sensor 2.25x larger than full-frame “professional” DSLRs, razor sharp Schneider-Kreuznach lenses, and motor-driven contrast detection autofocus, no other reprographic camera provides such a high quality, easy to use solution.
Of course, for our loyal legacy users, this entire setup remains backward compatible with the DT R-Cam, iXR, and Phase One XF, as well as all digital backs, letting you make the most of the gear you currently own!
The key to this setup, though, is the newly developed DT AutoColumn. A precise, remote-controlled, robotic camera-positioning system, the AutoColumn allows minute, 300-micrometer adjustments of the iXG (or another camera), delicately sweeping even the thinnest depths of field across deep specimens.
We’re biased, but with a grand total of just three components, we think it’s a pretty elegant system. Minimal parts, maximum functionality.
But the hardware is just half the battle. Perhaps even more critically, the AutoColumn and the iXG are seamlessly integrated into Phase One’s powerful Capture One software, and can be remotely focused and positioned hands-free.
As you can see in the image below, the controls to nudge the camera a few microns up or down are built directly into Capture One, making vibration-free camera adjustment easy. With fine adjustment commands, we can repeat the process time and time again, until we’ve generated dozens, or even hundreds, of images for Z-Stacking! And a single-button, fire-and-forget solution is available for alpha testing, making the process as simple as possible.
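To give a feel for what that fire-and-forget automation has to do, here’s a sketch of the move/settle/capture cycle in Python. Every name in it (`session`, `move_column_um`, `trigger`) is hypothetical – this is not the actual Capture One or AutoColumn API, just the loop described above.

```python
import time

def capture_z_stack(session, n_frames, step_um, settle_s=1.0):
    """Hypothetical fire-and-forget sweep: nudge the column, wait for
    vibrations to settle, capture a frame, repeat."""
    frame_names = []
    for i in range(n_frames):
        if i > 0:
            session.move_column_um(step_um)  # relative move toward the subject
        time.sleep(settle_s)                 # let the platform settle
        frame_names.append(session.trigger())  # capture one frame
    return frame_names
```

The settle delay is the important detail: after each micro-adjustment, the system pauses before triggering so that no residual vibration makes it into the exposure.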
So now that we have our system, and we’ve taken our input images, what does this all look like?
Without further ado, here’s our final result:
Voila! Our entire sample, in razor sharp detail all across the frame. But don’t take our word for it – click on the image to view the full, zoomable 100-megapixel version and judge for yourself.
We hope that this overview has been informative, and that you now know how to accomplish quick, easy Z-Stacking. If you have any questions, comments, or concerns, feel free to reach out and contact us at … [insert stuff here]