Building a Control System for the Flamingo Light Sheet Microscope

What is this all about?

Flamingo already has Control_System, a great C++ based user interface that runs on a Mac laptop, complete with full control over all of the details of the microscope and the ability to control the live view on the Linux system. However, it doesn't work well for collaborators who want to use the system remotely, due to design constraints (the system is kept isolated for remote deployment). What we are building mimics some aspects of the original GUI, but adds enhancements like a 3D map of where the current view and sample sit within the sample chamber, the ability to remember and jump back to coordinates of interest within a volume, and integration with all the potential goodness of using napari as a viewer. Additionally, provided the data is transferred and saved locally to the computer running this Python interface, it opens up the possibility of reactive, data-driven imaging, integration with deep learning analyses, and complex acquisitions beyond the limitations of the cuboid volumes and simple fixed-exposure time lapses of the original software.

Where it is at

The first milestone was just getting something working again. A minimal TCP interface that could talk to the microscope over its binary protocol, send workflow files, and issue basic commands. A mock server so we could actually develop and test without needing physical access to the hardware. It was bare bones, but it worked, and that felt like a big deal at the time.
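
For a sense of scale, the whole thing boiled down to something like the sketch below: open a socket, pack a small binary header, send it, read the reply. The command code, header layout, and port here are made up for illustration; the real Flamingo protocol defines its own codes and a larger fixed-size header.

    import socket
    import struct

    # Command code, header layout, and port are illustrative only -- the real
    # Flamingo protocol has its own codes and a larger fixed-size header.
    CMD_EXAMPLE = 0x1001

    def send_command(host: str, port: int, command: int, payload: bytes = b"") -> bytes:
        """Open a TCP connection, send one framed command, and return the raw reply."""
        with socket.create_connection((host, port), timeout=5) as sock:
            header = struct.pack("<II", command, len(payload))  # little-endian code + length
            sock.sendall(header + payload)
            return sock.recv(4096)

    # Against the mock server, this is all it takes to exercise the protocol:
    # reply = send_command("127.0.0.1", 5000, CMD_EXAMPLE)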

From there, things grew fast. We did a full MVC refactoring of the codebase, going from a tangled mess of imports and circular dependencies to a clean layered architecture with models, views, controllers, and services. Over 400 tests. Dependency injection throughout. The whole thing went from "I hope this doesn't crash" to something that actually feels maintainable.
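
The practical payoff of the dependency injection is that controllers receive their services instead of constructing them, so tests can hand in a mock connection and never touch the network. The class names below are invented rather than the actual Flamingo_Control classes, but the shape is the same:

    # Invented names for illustration; the real Flamingo_Control classes differ,
    # but the dependency-injection idea is the same.
    class ConnectionService:
        """Owns the TCP link to the microscope."""
        def send(self, command: int, payload: bytes = b"") -> bytes:
            raise NotImplementedError

    class MockConnection(ConnectionService):
        """Stand-in used by the test suite; no hardware required."""
        def send(self, command: int, payload: bytes = b"") -> bytes:
            return b"\x00" * 128  # pretend the scope replied with an empty header

    class StageController:
        """Receives its dependencies instead of constructing them, so tests can
        inject MockConnection and never open a real socket."""
        def __init__(self, connection: ConnectionService):
            self.connection = connection

        def move_to(self, x_mm: float, y_mm: float, z_mm: float) -> bytes:
            payload = f"{x_mm},{y_mm},{z_mm}".encode()
            return self.connection.send(0x1001, payload)  # command code is illustrative

    controller = StageController(MockConnection())
    controller.move_to(1.0, 2.0, 3.0)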

Feature-wise, here's what "we've" added over the last several months:

  • Stage movement controls with position verification and a chamber visualization window
  • Live camera feed streaming from the microscope over TCP
  • 3D sample visualization using napari for viewing accumulated fluorescence data
  • LED 2D overview scanning for quick sample orientation, with interactive tile selection
  • A full workflow system supporting Z-stacks, time-lapses, tile scans, and multi-angle acquisition
  • Configuration auto-discovery so the software finds your microscope on the network without manual setup
  • Window geometry persistence because nobody wants to re-arrange their windows every session
  • Multi-laser illumination controls with support for 7+ laser channels and LED sources

What Gave Us Trouble

The 3D visualization was probably the biggest headache. Mapping physical stage coordinates (millimeters, with offsets and non-zero origins) into napari's voxel space turned out to be way harder than expected. Napari's coordinate system doesn't line up intuitively with physical space, the camera orientation defaults left the chamber appearing upside down, and you can't even customize the axis labels beyond 0, 1, 2. We ended up writing a detailed lessons-learned document and ultimately had to approach it incrementally, one axis at a time, validating at every step.
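
The core of the problem is a deceptively simple transform: subtract the chamber origin, divide by the voxel size, and reorder the axes into napari's (z, y, x) convention. A minimal sketch, with made-up offsets and voxel sizes rather than the real configuration values:

    import numpy as np

    # Illustrative numbers: a chamber whose origin sits at (5.0, 3.0, 2.0) mm in
    # stage coordinates, displayed at 10 um per voxel.  The real offsets and
    # orientation come from the microscope configuration.
    CHAMBER_ORIGIN_MM = np.array([5.0, 3.0, 2.0])   # stage position of voxel (0, 0, 0)
    VOXEL_SIZE_MM = 0.010                            # 10 micrometers per voxel

    def stage_to_voxel(position_mm):
        """Convert an (x, y, z) stage position in mm to napari (z, y, x) voxel indices."""
        relative = (np.asarray(position_mm) - CHAMBER_ORIGIN_MM) / VOXEL_SIZE_MM
        x, y, z = relative
        # napari layers are indexed (z, y, x); getting an axis flipped here is
        # exactly where the "chamber appears upside down" problems tend to hide.
        return np.round([z, y, x]).astype(int)

    print(stage_to_voxel([5.5, 3.2, 2.1]))  # -> [10 20 50]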

We also ran into a fun race condition during hardware testing where the first click in the stage control window would always time out after 30 seconds. Turned out the motion tracker was being lazily initialized inside a background thread, after the move command had already been sent. The microscope would finish moving before we even started listening for the "motion complete" callback. Subsequent clicks worked fine because the tracker was already initialized by then.
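
The fix was an ordering change rather than anything clever: arm the listener first, then send the command. A rough sketch of the pattern, with invented method names standing in for the real connection API:

    import threading

    # Invented names for illustration; the point is the ordering, not the API.
    class MotionTracker:
        def __init__(self):
            self.done = threading.Event()

        def on_motion_complete(self, _message):
            self.done.set()

    def move_and_wait(connection, tracker: MotionTracker, x_mm: float, timeout_s: float = 30.0):
        # 1. Arm the listener *before* the command goes out.  Initializing the
        #    tracker lazily after the send is exactly the race we hit: the scope
        #    can finish moving before anyone is listening for the callback.
        tracker.done.clear()
        connection.subscribe("motion_complete", tracker.on_motion_complete)

        # 2. Only now send the move command.
        connection.send_move(x_mm)

        # 3. Wait for the callback (or time out).
        if not tracker.done.wait(timeout=timeout_s):
            raise TimeoutError("No motion-complete callback received")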

The binary protocol itself had its share of surprises too. At one point the storage drive refresh wasn't working because we were using the wrong command code (we had confused a callback flag value with the actual command), and the response data was arriving after the 128-byte header in a payload we weren't parsing at all. Lots of staring at hex dumps.
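
Once we knew the payload followed the fixed 128-byte header, the parsing itself was straightforward. A small sketch of the idea, with an illustrative field layout rather than the actual Flamingo header format:

    import struct

    HEADER_SIZE = 128  # fixed-size header; the field layout below is illustrative

    def read_exact(sock, n: int) -> bytes:
        """Read exactly n bytes from a socket, or raise if the connection drops."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("socket closed mid-message")
            buf += chunk
        return buf

    def read_response(sock):
        header = read_exact(sock, HEADER_SIZE)
        # Assume (for illustration) the command code and payload length sit in the
        # first 8 bytes; everything after the header is the actual response data.
        command, payload_length = struct.unpack_from("<II", header, 0)
        payload = read_exact(sock, payload_length) if payload_length else b""
        return command, payload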

Two-photon time-lapse of murine cranial blood flow, showing rolling white blood cells and green leukemic cells, with a dextran background. Placeholder for more Flamingo-specific images.

Courtesy of Dr. Nadia Carlesso's lab, preparation by Christina Abundis.

Short Term Goals

  • Finishing multi-position workflow support
  • Testing the tile Z-stack to Sample View integration with real hardware
  • Improving the LED 2D overview with better default behavior and validation
  • Getting the advanced camera settings (AOI, capture ranges) fully wired up

Long Term Goals

  • Full napari integration for image viewing and analysis within the control interface
  • Ellipse tracing for automated sample finding
  • More robust error handling at system boundaries
  • Potentially evaluating alternatives to napari for the 3D visualization (PyVista has been on the radar)
  • Making the whole setup easier for new users to get running on their own machines

Wrapping Up

It's been a lot of work going from a broken codebase to something that can actually run real acquisition workflows on the microscope. The approach of building a minimal working system first and then layering features on top turned out to be the right call. Having a mock server for testing without hardware was invaluable and saved countless hours of "well I'll test it when I'm in the lab next."

The codebase is open source and lives at github.com/uw-loci/Flamingo_Control. Developed at the University of Wisconsin Laboratory for Optical and Computational Instrumentation (LOCI).

QPSC: Controlling Microscopes from QuPath

I've been building QPSC (QuPath Scope Control), an open-source system that lets researchers control their microscope directly from QuPath. The core idea is simple: draw a box around a region of interest in QuPath, and the software handles the rest. It moves the stage, captures tiles, stitches everything into a pyramidal image, and drops the result right back into your QuPath project.
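
Under the hood, that bounding box has to become a grid of stage positions before anything can be captured. A rough sketch of the tiling step, using an assumed field of view and overlap rather than QPSC's actual defaults:

    import math

    def tile_positions(x_min_um, y_min_um, x_max_um, y_max_um,
                       fov_um=500.0, overlap=0.10):
        """Turn a bounding box (in stage micrometers) into serpentine tile centers.

        fov_um and overlap are illustrative defaults, not QPSC's actual values.
        """
        step = fov_um * (1.0 - overlap)            # distance between tile centers
        n_x = max(1, math.ceil((x_max_um - x_min_um) / step))
        n_y = max(1, math.ceil((y_max_um - y_min_um) / step))
        positions = []
        for row in range(n_y):
            cols = range(n_x) if row % 2 == 0 else reversed(range(n_x))  # serpentine path
            for col in cols:
                positions.append((x_min_um + fov_um / 2 + col * step,
                                  y_min_um + fov_um / 2 + row * step))
        return positions

    print(len(tile_positions(0, 0, 2000, 1000)))  # 5 x 3 = 15 tiles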

The motivation came from a real gap in the pathology research workflow. Researchers identify regions of interest in scanned whole-slide images, but then need restaining or alternative imaging modalities to clearly visualize collagen presence and structure. QPSC connects QuPath's annotation tools to live microscope hardware through Micro-Manager and Pycro-Manager, enabling high-throughput imaging and the collection of multiple modalities on a single microscope, with integrated project metadata that makes it easier to keep related images, samples, and positions organized.
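
The hardware side leans on Pycro-Manager's Python bridge to Micro-Manager. A minimal example of the kind of calls involved (this assumes a running Micro-Manager instance with the Pycro-Manager server enabled, and is not QPSC's actual acquisition code):

    from pycromanager import Core
    import numpy as np

    # Requires Micro-Manager running with the Pycro-Manager bridge enabled;
    # this is a minimal illustration, not QPSC's acquisition pipeline.
    core = Core()

    core.set_xy_position(1000.0, 2000.0)   # move the stage (micrometers)
    core.set_exposure(50.0)                # exposure in milliseconds
    core.snap_image()

    tagged = core.get_tagged_image()
    image = np.reshape(tagged.pix, (tagged.tags["Height"], tagged.tags["Width"]))
    print(image.shape)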

The PPM Focus

We are working in collaboration with Dr. Agnes Loeffler from Cleveland MetroHealth, who has been using PPM for several years now with an entirely manual microscope. That collaboration has already demonstrated real clinical value, including improved detection and differentiation of crystals in joint fluids and the ability to differentiate chronic pancreatitis from invasive pancreatic adenocarcinoma in human patient samples. Despite these successes and several published articles, the complete lack of automation makes it difficult to scale this work to the kinds of studies needed to have an impact on the larger pathology community.

QPSC automates the entire PPM acquisition sequence: rotating the polarizer, adjusting per-angle exposure times (which can vary by orders of magnitude between crossed and uncrossed positions), and collecting the multi-angle image stacks needed for quantitative analysis. For labs using prism-based cameras like JAI sensors, the system also handles per-channel white balance calibration so that hardware exposures are tuned independently for each color channel.
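
Conceptually the acquisition loop is small: rotate, set the exposure for that angle, snap, repeat. The sketch below uses placeholder angles and exposures and a hypothetical rotate_polarizer function; the real sequence and values come from the modality configuration:

    # Placeholder angles and exposures for illustration; near-crossed polarizer
    # positions are far dimmer than uncrossed ones, so each angle carries its
    # own hardware exposure time.
    PPM_SEQUENCE = [
        {"polarizer_deg": 0.0,  "exposure_ms": 5.0},
        {"polarizer_deg": 45.0, "exposure_ms": 80.0},
        {"polarizer_deg": 90.0, "exposure_ms": 400.0},  # near-crossed: much dimmer
    ]

    def acquire_ppm_stack(core, rotate_polarizer):
        """Collect one image per polarizer angle.

        `core` is a Pycro-Manager Core; `rotate_polarizer` stands in for
        whatever function drives the rotation hardware (hypothetical here).
        """
        stack = []
        for step in PPM_SEQUENCE:
            rotate_polarizer(step["polarizer_deg"])
            core.set_exposure(step["exposure_ms"])
            core.snap_image()
            stack.append(core.get_tagged_image())
        return stack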

A standalone Python library, ppm-library, handles the image processing and analysis side. It includes debayering, flat-field correction, and a calibration pipeline that uses a sunburst reference slide to map hue values to actual fiber orientation angles. The end result is quantitative angle maps of collagen organization, not just pretty pictures.
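
The calibration step amounts to building a lookup from measured hue to known fiber orientation and interpolating through it. A toy version with an invented calibration table (the real pipeline in ppm-library derives the mapping from the sunburst slide images):

    import numpy as np

    # Invented calibration table: hue (degrees) measured on the sunburst slide
    # versus the known fiber orientation at that spoke.
    calibration_hue_deg = np.array([0, 60, 120, 180, 240, 300, 360])
    calibration_angle_deg = np.array([0, 30, 60, 90, 120, 150, 180])

    def hue_to_orientation(hue_deg):
        """Interpolate image hue values into fiber orientation angles."""
        return np.interp(np.asarray(hue_deg) % 360, calibration_hue_deg, calibration_angle_deg)

    print(hue_to_orientation([90, 200]))  # -> [ 45. 100.]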

Clinical Applications

The bigger picture is computational PPM with improved color metrics and fiber sensitivity as a tool for clinical pathology. Being able to convert human tissue samples into digital PPM images and reproducibly compute collagen signatures across entire slides opens the door to prognostication in a wide variety of malignancies. We've identified three areas where we think this can make the most impact:

Tumor-associated collagen signatures (TACS) have proven biomedical significance in malignant tumors, but they haven't been adopted for clinical use because the histologic context of collagen alignment in normal tissue and in benign processes like scarring and inflammation is poorly understood. Automated whole-slide PPM imaging would let researchers systematically compare collagen alignment patterns across benign and malignant tissues at a scale that isn't feasible with manual microscopy.

With the current PPM tool, we already see a range of signal intensity across different types of collagen. Improving the system's sensitivity to early collagen associated with cancer invasion has direct clinical relevance, particularly for situations like intramucosal colorectal carcinoma where early cancer is difficult to detect on routinely stained (H&E) tissues alone.

Since PPM highlights collagen differences between chronic pancreatitis and invasive pancreatic adenocarcinoma, we're also working toward detecting collagen signatures characteristic of pancreatic adenocarcinoma in fine needle aspiration (FNA) samples. Preliminary data show that even in tiny FNA samples (around 5 mm²), there is enough stroma to detect collagen signatures with PPM. An automated system could improve the diagnostic sensitivity of FNA by accurately analyzing the collagen component of these small tissue samples during the workup of radiologically detected pancreatic masses.

What's Built

The system is modular, split across a QuPath extension (Java), a microscope command server (Python), a hardware abstraction layer (Python/Pycro-Manager), and the PPM library mentioned above. Here's what works today:

  • Bounding box and existing-image acquisition workflows with automatic tiling, stitching (to OME-TIFF or OME-ZARR), and QuPath project import
  • Multi-angle PPM acquisition with automatic polarizer rotation and per-angle exposure control
  • Calibration tools for white balance, polarizer alignment, autofocus tuning, and flat-field background collection
  • A pluggable modality system so new imaging modes (brightfield, SHG, fluorescence) can be added without rearchitecting (see the sketch after this list)
  • A stage map window with real-time position tracking, click-to-navigate, and macro image overlay
  • Multi-sample project support that tracks image collections, physical offsets, and coordinate system metadata across a whole project
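
To give a flavor of the pluggable modality idea, here is a rough sketch of what a registration-based plug-in interface could look like; the names are invented, and QPSC's real extension points differ:

    from dataclasses import dataclass
    from typing import Callable, Dict

    # Invented interface for illustration; QPSC's actual plug-in API differs.
    @dataclass
    class Modality:
        name: str
        acquire_tile: Callable[[float, float], object]   # (x_um, y_um) -> image

    MODALITIES: Dict[str, Modality] = {}

    def register_modality(modality: Modality) -> None:
        """New imaging modes register themselves instead of editing core code."""
        MODALITIES[modality.name] = modality

    register_modality(Modality("brightfield", acquire_tile=lambda x, y: f"BF tile at ({x}, {y})"))
    register_modality(Modality("ppm", acquire_tile=lambda x, y: f"PPM stack at ({x}, {y})"))

    print(sorted(MODALITIES))  # ['brightfield', 'ppm']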

What's Next

On the near-term roadmap:

  • Improved computational PPM with better color metrics and fiber sensitivity to drive the clinical applications described above
  • Deep learning pixel classification integrated into QuPath, with a Python backend for training and inference
  • Live preview scanning to rapidly populate the stage map with a low-res overview of the entire slide (for cases where no slide scanner overview image is available)
  • SHG and multiphoton modality support for labs doing second harmonic generation imaging
  • A publication describing the system and demonstrating it on real tissue samples

The project is pre-release and actively developed at UW-Madison LOCI. All code is on GitHub.

Using Micro-Manager and Pycro-Manager

Microscopy is a way of observing things we cannot see with our eyes. Sometimes the objects are amazingly small (MINFLUX has achieved roughly 2 nm resolution); other times we use microscopes to record time-lapse data at frame rates where, even if we watched the events live, we could not make sense of them without reviewing the recordings multiple times. Various modalities allow everything from 3D live imaging of whole cell volumes for indefinite periods via holography, to the use of small self-assembling hollow structures for ultrasound microscopy, to standard light-based brightfield and fluorescence microscopes. Even conventional microscopes can be used creatively: measuring the lifetime of fluorescent molecules rather than simply the location of their emission, or pulsing lasers so that multiple photons arriving simultaneously produce shorter-wavelength photons through second and third harmonic generation. All of these tools generate ever-growing amounts of high-resolution data, and they offer something many other methods lack: spatial and temporal relationships.

To learn more about Pycro-Manager, read this article in Nature Methods: