Next-generation high-speed X-ray imaging with Dynamic Vision Sensing
The beamline for TOmographic Microscopy and Coherent rAdiology experimenTs (TOMCAT) currently holds world records in several disciplines. Highlight experiments include the first in vivo tomographic imaging of the blowfly flight motor [1], the realization of in vivo microscopic tomography of the lung at the alveolar level [2], and the demonstration of 3D imaging at 1000 tomograms per second [3]. The backbone of these developments, among others, has been the in-house developed GigaFRoST detector, which allows continuous data streaming to a dedicated backend server at rates of up to 8 GB/s [4]. Nonetheless, despite state-of-the-art CMOS technology, the temporal resolution in many applications remains limited, while the sheer volume of data poses challenges of its own.
Project overview
The goal of this project is to develop X-ray-based neuromorphic imaging and to demonstrate high temporal super-resolution and reduced data production rates in dynamic experiments. To this end, we will develop an event-assisted temporal super-resolution pipeline for 3D object imaging, to be tested at the TOMCAT beamline. We will use an event-based sensor and fuse its output with that of a traditional image sensor to: (a) temporally super-sample frame data to artificially de-blur images; and (b) reduce streaming data rates by limiting image-frame acquisition and replacing it with more cost- and power-effective event acquisition. As a first example, we would like to study the in situ properties of additively manufactured materials, whose behavior depends on strain rate and which are typically only observable in quasi-static experiments.
There are existing lines of research on augmenting frame-based visual information with events in this way, to gain insight into what happens in the scene between frames, when a conventional sensor is essentially ‘blind’. Certain event cameras (such as the iniVation DAVIS 346) can acquire both frames and events from a single sensor, and the interpolation of intermediate frames from events is already implemented in the accompanying software [6]. This approach has also been investigated with a setup consisting of two separate, synchronized cameras [7], where the temporal resolution of the resulting video was successfully improved by retracing the events between two frames to produce intermediate frames.
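The core idea of event-assisted interpolation can be illustrated with the standard event-camera model: each event at a pixel signals that the log-intensity there changed by a fixed contrast threshold C, so integrating event polarities between two key frames yields an estimate of an intermediate frame that was never acquired. The following is a minimal sketch under that assumption; the function name, the value of C, and the event tuple layout are illustrative, not taken from any specific camera SDK:

```python
import numpy as np

# Assumed contrast threshold: one event corresponds to a +-C step
# in log-intensity (sensor-dependent; illustrative value only).
C = 0.2

def interpolate_frame(key_frame, events, t_mid):
    """Estimate the scene at time t_mid from the preceding key frame
    plus the events recorded up to t_mid.

    key_frame : 2D float array of intensities (> 0)
    events    : iterable of (t, x, y, polarity), polarity in {-1, +1}
    """
    log_img = np.log(key_frame)          # work in log-intensity space
    for t, x, y, p in events:
        if t <= t_mid:                   # only events before the target time
            log_img[y, x] += C * p       # each event adds +-C
    return np.exp(log_img)               # back to linear intensity

# Usage: two positive events at pixel (x=1, y=2) brighten it by exp(2C);
# the event at t=0.003 lies after t_mid and is ignored.
frame = np.full((4, 4), 100.0)
events = [(0.001, 1, 2, +1), (0.002, 1, 2, +1), (0.003, 3, 0, -1)]
mid = interpolate_frame(frame, events, t_mid=0.0025)
```

A learned interpolator such as Time Lens [7] replaces this naive integration with a network that also handles noisy thresholds and motion, but the underlying fusion of sparse events with dense frames is the same.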
The benefit of this approach is that it can substantially increase the temporal resolution of a scan without substantially increasing the amount of data that must be captured and stored. For high-speed tomographic imaging, the proposed method could be a game-changer, enabling improved temporal resolution and/or a significant reduction in data volume in dynamic imaging experiments. From an ESA perspective, the findings would serve as a proof-of-principle application and feasibility evaluation of event-assisted temporal supersampling for fast dynamic tracking. Fields of application of the proposed technology include in-orbit servicing and health-monitoring operations, X-ray astronomy, and navigation during critical operations in complex dynamical environments such as comet interception, planetary landing and fly-bys.
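The data-rate argument can be made concrete with a back-of-envelope comparison. The frame-side numbers below are chosen to reproduce the ~8 GB/s GigaFRoST figure (assuming a 2016 × 2016 sensor at 16 bit and 1000 fps); the event rate and per-event encoding size are assumptions, not measured values:

```python
# Frame streaming: full frames at high speed (assumed sensor geometry).
frame_px = 2016 * 2016      # assumed resolution, GigaFRoST-class
bytes_per_px = 2            # 16-bit pixels
fps = 1000                  # high-speed acquisition rate

frame_rate_gbs = frame_px * bytes_per_px * fps / 1e9   # ~8.13 GB/s

# Event streaming: sparse events instead of dense frames.
events_per_s = 50e6         # assumed peak event rate (sensor- and scene-dependent)
bytes_per_event = 8         # assumed packed (t, x, y, polarity) encoding

event_rate_gbs = events_per_s * bytes_per_event / 1e9  # ~0.4 GB/s

print(f"frames: {frame_rate_gbs:.2f} GB/s, events: {event_rate_gbs:.2f} GB/s")
```

Even with a generous event rate, the event stream is more than an order of magnitude smaller, which is what makes replacing a fraction of the frame acquisition with event acquisition attractive.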
References
[1] S. M. Walker et al., “In Vivo Time-Resolved Microtomography Reveals the Mechanics of the Blowfly Flight Motor,” PLoS Biology, vol. 12, no. 3, p. e1001823, Mar. 2014, https://doi.org/10.1371/journal.pbio.1001823. (https://www.youtube.com/watch?v=yIKl_JMrYuE)
[2] G. Lovric et al., “Tomographic in vivo microscopy for the study of lung physiology at the alveolar level,” Scientific Reports, vol. 7, no. 1, p. 12545, Oct. 2017, https://doi.org/10.1038/s41598-017-12886-3
[3] F. García-Moreno et al., “Tomoscopy: Time-Resolved Tomography for Dynamic Processes in Materials,” Advanced Materials, Sep. 2021 (online), https://doi.org/10.1002/adma.202104659
[4] R. Mokso et al., “GigaFRoST: The Gigabit Fast Readout System for Tomography,” Journal of Synchrotron Radiation, vol. 24, no. 6, pp. 1250–1259, Nov. 2017, https://doi.org/10.1107/S1600577517013522
[5] G. Lovric et al., “Dose optimization approach to fast X-ray microtomography of the lung alveoli,” Journal of Applied Crystallography, vol. 46, no. 4, pp. 856–860, Aug. 2013, https://doi.org/10.1107/S0021889813005591
[6] iniVation DV-GUI manual, Accumulator module: https://inivation.gitlab.io/dv/dv-docs/docs/accumulator-module/#what-does-the-accumulator-do
[7] S. Tulyakov et al., “Time Lens: Event-based Video Frame Interpolation,” 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 16150–16159, 2021, https://doi.org/10.1109/CVPR46437.2021.01589