The eukaryotic cytoskeleton plays essential roles in cell signaling and trafficking, broadly associated with immunity and diseases in humans and plants. To date, most studies describing cytoskeleton dynamics and function rely on qualitative/quantitative analyses of cytoskeletal images. While state-of-the-art, these approaches face general challenges: the diversity among filaments causes considerable inaccuracy, and the widely adopted image projection leads to bias and information loss. To solve these issues, we developed the Implicit Laplacian of Enhanced Edge (ILEE), an unguided, high-performance approach for 2D/3D-based quantification of cytoskeletal status and organization. Using ILEE, we constructed a Python library to enable automated cytoskeletal image analysis, providing biologically interpretable indices measuring the density, bundling, segmentation, branching, and directionality of the cytoskeleton. Our data demonstrated that ILEE resolves the defects of traditional approaches, enables the detection of novel cytoskeletal features, and yields data with superior accuracy, stability, and robustness. The ILEE toolbox is available for public use through PyPI and Google Colab.

Higher eukaryotes have evolved a complex suite of cellular signaling mechanisms to regulate many biological processes, such as growth, development, movement, reproduction, and response to environmental stimuli. Not surprisingly, to maintain the integration and sustainability of a conceptual signaling network, another “physical network” is deployed as the framework to organize all subcellular structures and the conduit to transport molecules and information (Lian et al., 2021)—namely, the cytoskeleton. There are four basic types of eukaryotic cytoskeletal elements: actin, microtubules, intermediate filaments, and septins. Shared by plants and animals are actin and microtubules, which can be generalized as a cytoplasmic web-like matrix, physically integrating the plasma membrane, vesicles, and organelles, and functionally connecting intercellular signaling to the extracellular environment through a highly dynamic series of temporally and spatially regulated changes of architecture (Blanchoin et al., 2014; Brouhard, 2018). However, structurally complete and functionally identical intermediate filaments or septins have not yet been identified in plants (Utsunomiya et al., 2020; Onishi and Pringle, 2016). Altogether, the cytoskeleton controls numerous cellular processes such as movement, shaping, cellular trafficking, and intercellular communication (Nick, 2011). It also provides the mechanical force required for chromosome separation and plasma membrane division during mitosis and meiosis (Carlton et al., 2020). In addition to its role within the cytoplasm, the cytoskeleton also participates in various processes within the nucleus, including RNA polymerase recruitment, transcription initiation, and chromosome scaffolding (Kristó et al., 2016).

As the cytoskeleton is vital for cell activities and life, the dysfunction of cytoskeletal dynamics generally leads to severe diseases. For example, in the development of Alzheimer’s disease, amyloid development triggers the dysregulation of actin dynamics within dendritic spines, leading to synaptotoxicity (Rush et al., 2018). Similarly, the dysfunction of microtubule dynamics can trigger neuropsychiatric disorders (Marchisella et al., 2016). In breast cancer, the cancer cell can deploy aberrant actin aggregation to resist cytotoxic natural killer (NK) cells from the immune system (Al Absi et al., 2018). In the case of plant diseases, the function of the cytoskeleton is similarly required. Recent work has demonstrated that the actin and microtubule dynamics of the host are specifically manipulated by pathogens to paralyze plant immunity during an infection (Li and Day, 2019). Indeed, the eukaryotic cytoskeleton not only serves as an integrated cell signaling and transportation platform but is also “a focus of disease development” at the molecular level for humans and plants. Hence, understanding the architecture and dynamics of the cytoskeleton is broadly associated with life, health, and food security, attracting significant interest across different fields of biology and beyond.

Over the past several decades, confocal microscopy-based methods using fluorescent markers have been developed to monitor changes in the cytoskeletal organization (Melak et al., 2017). While showing advantages in real-time observation and offering an intuitive visual presentation, these approaches possess critical limitations—namely, they are subject to human interpretation and, as a result, often suffer from bias. As a step toward reducing these limitations, the emergence of computational, algorithm-based analyses offers a solution to describe the quantitative features of cytoskeletal architecture. However, while previous studies introduced the concept of using generalizable image processing pipelines (Lichtenstein et al., 2003; Shah et al., 2005) to transfer the task of evaluation away from humans to computer-based quantitative indices, several key bottlenecks were still prevalent. First, most previous quantitative algorithms are limited to 2D images. As a result, these approaches require users to manually generate z-axis projections from raw data, a process that results in considerable information loss, especially within the perpendicular portion of the cytoskeleton. Second, many current approaches require users to set thresholds manually to segment cytoskeletal components from images, a task that results in a sampling bias. Lastly, the performance of existing algorithms varies significantly depending on the sample source. This hurdle imposes a considerable disparity in algorithm performance for plants—which are dominated by curved and spherically distributed filaments—compared to the animal cytoskeletal organization, which is generally straight and complanate (Liu et al., 2020, 2018; Alioscha-Perez et al., 2016). In fact, while the sample source dramatically impacts the ability to evaluate features of cytoskeletal function across all eukaryotes, most current approaches are developed based on cytoskeletal images from animal cells, indicating potential systemic bias when applied to other types of image samples, such as plants.

Previous work has described the development of a global-thresholding-based pipeline to define and evaluate two key features of cytoskeleton filament organization in living plant cells: cytoskeletal density, defined by occupancy, and bundling, defined by statistical skewness of fluorescence (Higaki et al., 2010). While this approach utilizes manual global thresholding (MGT), which can potentially introduce a certain level of user bias, it still outperforms most standardized adaptive/automatic global or local thresholding techniques, such as the Otsu (Otsu, 1979) and Niblack (Niblack, 1985) methods. More recent advances in MGT-based approaches include the introduction of the coefficient of variation (CV) of fluorescence as an index to measure the degree of filament bundling, a substitute for skewness that improves the overall robustness and utility of the original algorithm (Higaki et al., 2020). However, this analysis pipeline still consumes a considerable amount of time and effort from users for large-scale sample processing and leaves unaddressed two critical issues of rigor in image processing and analysis: information loss and human bias.

If the biological question of a study does not require addressing the thickness of the cytoskeleton, another strategy is to directly obtain the skeletonized image (thinned filament, with 1-pixel width), while bypassing the segmentation of entire filaments. Previous successful approaches utilized a linear feature detector (Gan et al., 2016) or a steerable filter for ridge recognition (Kittisopikul et al., 2020) to extract the skeleton information from 2D datasets, which enabled the automated computation of cytoskeleton orientation/directionality and potentially some additional features (e.g., topology). However, such strategies face potential challenges if filament thickness or other features that depend on the binary image of the entire filament (e.g., occupancy) are required. While the linear feature detector, as aforementioned, may not be appropriate for plant samples with a curvy cytoskeleton, the use of steerable filters (Jacob and Unser, 2004) is nonetheless inspiring—instead of ridges, edges can potentially define the entire filament (the area within the edge) for a complete cytoskeleton analysis algorithm. However, if such filters were directly applied to edges, the obtained first-/second-order derivatives would still require subjective human input for thresholding. Additionally, it is also challenging to assign multiple pieces of edges to one filament, since edges may be erroneously separated or linked because of noise and the filament diversity of the samples. Therefore, how to properly utilize gradient information becomes a primary concern for developing a high-performance cytoskeleton analysis algorithm.

In this study, we developed the Implicit Laplacian of Enhanced Edge (ILEE), a 2D/3D compatible unguided local thresholding algorithm for cytoskeletal segmentation and analysis based on native brightness, first-order derivative (i.e., gradient), and second-order derivative (i.e., Laplacian) of the cytoskeleton image altogether (see Fig. 1). The study herein supports ILEE as a superior image quantitative analytic platform that overcomes current limitations such as information loss through dimensional reduction, human bias, and intersample instability; ILEE can accurately recognize the cytoskeleton in images with high variation in filament brightness and thickness, such as those of live plant samples.

As a key advancement in the development of ILEE, we constructed an ILEE-based Python library, namely ILEE_CSK, for the fully automated quantitative analysis of cytoskeleton images. To improve ILEE_CSK’s capability to explore the features of the cytoskeleton, we proposed several novel indices—linear density, diameter_TDT, diameter_SDT, segment density, and static branching activity—to enable/enhance the measurement of (de-)/polymerization, bundling, severing-and-nucleation, and branching dynamics of the cytoskeleton. Together with other classic indices transplanted from previous studies, ILEE_CSK supports 12 cytoskeletal indices within five primary classes: density, bundling, connectivity, branching, and directionality. Our data suggested that ILEE outperforms other classic algorithms by its superior accuracy, stability, and robustness for the computation of cytoskeleton indices. Furthermore, we provided evidence demonstrating higher fidelity and reliability of 3D-based cytoskeletal computational approaches over traditional 2D-based approaches. In addition, using a series of experiment-based images from pathogen-infected plant cells, we demonstrated that ILEE has an improved sensitivity to distinguish biological differences and the potential to reveal novel biological features previously omitted due to insufficient approaches. In sum, this platform not only enables the efficient acquisition and evaluation of key cytoskeleton filament parameters with high accuracy from both projected 2D and native 3D images but also enables accessibility to a broader range of cytoskeletal status for biological interpretation.

The library, ILEE_CSK, is publicly released on GitHub (https://phylars.github.io/ILEE_CSK/). We also developed the ILEE Google Colab pipelines for data processing, visualization, and statistical analysis, which provide a convenient, user-friendly, cross-platform interface requiring no programming experience.

The ILEE pipeline

Raw images from a confocal laser scanning microscope are obtained from the sensor detection of in-focus photons from a fixed focal plane. Since the cytoskeleton is a 3D structure that permeates throughout the cell, current approaches used to capture the filament architecture rely on scanning pixels of each plane along the z-axis at a given step depth; finally, these stacks are reconstructed to yield a 3D image. However, due to technical limitations (e.g., imaging conditions with low signal-to-noise ratio, lack of suitable algorithms and tools, and insufficient computational power), most studies have exclusively employed the z-axis-projected 2D image, which results in substantial information loss and systemic bias in downstream analyses.

In our newly developed algorithm, we integrated both 2D and 3D data structures into the same processing pipeline to ameliorate the aforementioned conflict (Fig. 1 a). In short, this pipeline enabled automatic processing of both traditional 2D and native 3D z-stack image analysis. As shown in Fig. 1 b, cytoskeleton segmentation using ILEE requires three inputs: an edge-enhanced image, a global gradient threshold that recognizes the edges of potential cytoskeletal components, and the Laplacian smoothing coefficient K (described below). With these inputs, a local adaptive threshold image is generated via ILEE, and the pixels/voxels with values above the threshold image are classified as cytoskeletal components, which generate a binary image (Fig. 1 c). The binary image is further skeletonized (Lee et al., 1994) to enable the downstream calculations of numerous cytoskeleton indices, the sum of which comprises the quantitative features of cytoskeletal dynamics (Fig. 1 c). Additionally, because the 2D and 3D modes share a common workflow, all of the calculated cytoskeleton indices also share the same definitions in both dimensions. This feature enables a horizontal comparison of both modes by the user, which we assert will significantly contribute to community involvement and the production of large image datasets for further examination and development. In general, the ultimate goal of this approach and the resultant algorithm is to construct a pipeline that enables the automated detection of the eukaryotic cytoskeleton from complex biological images in an accurate and unbiased manner.
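A minimal sketch of this thresholding-and-skeletonization step is shown below, assuming the local threshold image has already been produced by ILEE; the function and variable names are illustrative and are not the ILEE_CSK API. Skeletonization uses scikit-image, whose method='lee' option implements the thinning algorithm cited above (Lee et al., 1994).

```python
import numpy as np
from skimage.morphology import skeletonize

def binarize_and_skeletonize(img, thres_img):
    """Classify pixels above the ILEE local threshold image as cytoskeleton,
    then thin the binary mask to a 1-pixel-wide skeleton for index calculation
    (names and inputs here are illustrative)."""
    binary = img > thres_img                      # pixel-wise local thresholding
    skeleton = skeletonize(binary, method="lee")  # thinning after Lee et al. (1994)
    return binary, skeleton

# Toy usage: random data standing in for a confocal image and a flat threshold image.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 10.0, (128, 128))
binary, skeleton = binarize_and_skeletonize(img, np.full_like(img, 115.0))
```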

Image decomposition and processing strategy

One of the central problems of automated cytoskeletal image processing is how to accurately recognize cytoskeletal components—a task that is highly challenging because object pixels (i.e., cytoskeleton components) generally have a high dynamic range of signal intensity within and among individual samples due to varied bundle thickness, the concentration of the fluorescent dye, and its binding efficiency. As a framework to further understand this challenge, an image from confocal microscopy is conceptually comprised of three components: (1) true fluorescence, which is emitted by the dye molecules within the pixel, (2) the diffraction signal transmitted from neighboring space, and (3) the ground noise generated by the sensor of the imaging system (Fig. 2 a). During confocal imaging, the ground noise fluctuates around a constant due to the fixed setting of photon sensors, while the diffraction signal is positively correlated with the actual local fluorescence. Therefore, an ideal cytoskeleton segregation algorithm should be an adaptive local thresholding approach that refers to both ground noise and local signal intensity.

Identification of coarse background

To identify the approximate level of the ground noise for downstream fine thresholding, a general strategy is to define a highly representative “feature value” of ground noise, based on which a global threshold can be estimated to segment the “coarse background.” Guided by this strategy, we first tested a method to directly use the peak-of-frequency of the image brightness as the representative value. Unfortunately, this approach is not reliable because the brightness distribution near the theoretical peak is always turbulent and non-monotone (Fig. S1 a), which makes the peak inaccurate and unmeasurable. However, during our experiments, we noticed an interesting phenomenon whereby global thresholding always rendered the maximum number of segmentations of the background approximately at the peak-of-frequency of ground noise. This is because individual pixels in ground noise are subject to a normal-like distribution; therefore, two adjacent pixels would likely fall into different binary categories when the threshold equals the mean (peak) of the distribution. Inspired by this discovery, we designed a different algorithm that determines the coarse background based on the morphological feature of the ground noise—namely, non-connected negative element scanning (NNES; Fig. 2 b).

In brief, NNES scans the total number of non-connected negative elements (NNE) at different thresholds to identify the one that results in the maximum count of NNE (i.e., the peak; Fig. 2 b i) to be used as the representative value. Our analysis suggested that NNES generates a curve with a smooth, easily measurable maximum, which is ideal for representative value identification (Fig. S1, b and c). Next, the global threshold for the coarse background (Fig. 2 b iii) is determined using a linear model trained on the representative value rendered by NNES and manual global thresholding (MGT), a global threshold determined by researchers experienced in manual cytoskeleton image analysis (Figs. S1 and S2). Generally, NNES maintains stability and accuracy over different samples that vary in brightness distribution. Therefore, the global threshold identified through NNES and the estimation model is used in this study to define the coarse background area for downstream processing. Additionally, we further improved the 3D coarse background estimation model by introducing another published dataset (Cao et al., 2022) with image augmentation for diverse ground noise patterns (Fig. S3). In total, these improvements serve to further extend the applicability and reliability of the ILEE_CSK library beyond this study.
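The NNES scan itself amounts to counting connected components of below-threshold ("negative") pixels over a series of candidate thresholds; a minimal sketch is given below. The threshold grid and percentile bounds are illustrative choices rather than those of ILEE_CSK, and the downstream linear model that maps the representative value to the coarse-background threshold is not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi

def nnes_representative_value(img, n_steps=200):
    """Scan candidate global thresholds, count the non-connected negative
    elements (connected components of below-threshold pixels), and return
    the threshold at which the count peaks as the representative value."""
    lo, hi = np.percentile(img, [1, 99])
    thresholds = np.linspace(lo, hi, n_steps)
    counts = np.empty(n_steps, dtype=int)
    for i, t in enumerate(thresholds):
        negative = img < t                    # background-like ("negative") pixels
        _, n_elements = ndi.label(negative)   # count non-connected negative elements
        counts[i] = n_elements
    return thresholds[np.argmax(counts)]
```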

Cytoskeleton segmentation by ILEE

The core strategy of ILEE is to accurately identify the edges of all cytoskeletal filament components and apply an implicit Laplacian smoothing (Desbrun et al., 1999) on only the edges. This leads to the generation of a threshold image where high gradient areas (i.e., the edges of cytoskeleton filaments) smoothly transition into each other (Fig. 3 a). As illustrated in Fig. S4 a, the general local threshold trend changes as a function of the baseline values of the cytoskeleton edges. This is because ILEE selectively filters out high-frequency noise while preserving salient geometric features of individual actin filaments by leveraging the spectral characteristics of Laplacian operators (Fig. S4, b and c). Thus, ILEE can filter the background regardless of the general local brightness level, demonstrating that its function does not require an operating kernel, which would otherwise restrict the performance when evaluating filaments of varying thickness. Moreover, the edges of cytoskeletal components can be smoothed and elongated using a significant difference filter (SDF; Fig. S5) and a Gaussian filter, the sum of which serves to enhance the continuity of edges and contributes to the accuracy of edge detection (Fig. 1 b and Fig. 3 a). For this reason, we name our algorithm “Implicit Laplacian of Enhanced Edge.”

For computation, the ILEE algorithm builds a linear system based on Laplacian operators to achieve local adaptive thresholding for edges of cytoskeletal components (see Materials and methods for detail). In brief, we first used the information from the coarse background to estimate a global threshold of the gradient magnitude (gthres) and classified pixels/voxels above gthres as boundary elements (Iedge; Fig. 3 a, 2.1). Following the boundary identification, we constructed an n × n (where n is the total number of pixels/voxels in the image) selection matrix S, a sparse diagonal matrix whose diagonal corresponds to the flattened pixels/voxels in order; a diagonal element was 1 if the corresponding pixel/voxel possessed a gradient norm above gthres. As the pixels/voxels belonging to the boundary of the cytoskeleton filament were marked, Iedge was mathematically defined as shown in Fig. 3 a, equations 2.2.1 and 2.2.2, where L is the Laplacian matrix and K, or the implicit Laplacian smoothing coefficient, is a weight that adjusts the influence of the Laplacian operator. Therefore, the local threshold image can be rendered by solving the linear equation shown in Fig. 3 a, 2.2.3.
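The exact equations are those given in Fig. 3 a; the sketch below only illustrates the kind of sparse linear system involved, using an assumed formulation in which edge brightness is held as a data term through the selection matrix S and spread into the rest of the image by the graph Laplacian L weighted by K. It is not the ILEE_CSK implementation and would be slow for full-size images.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def grid_laplacian(h, w):
    """4-neighbour graph Laplacian (L = D - A) of an h x w pixel grid."""
    n = h * w
    idx = np.arange(n).reshape(h, w)
    pairs = [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]
    rows = np.concatenate([p.ravel() for a, b in pairs for p in (a, b)])
    cols = np.concatenate([p.ravel() for a, b in pairs for p in (b, a)])
    A = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n)).tocsr()
    return sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

def laplacian_threshold_image(img, edge_mask, K):
    """Assumed form of the implicit-Laplacian step: the brightness of the
    selected edge pixels (selection matrix S) is kept as a data term, and the
    Laplacian term weighted by K smooths those values across the image."""
    h, w = img.shape
    L = grid_laplacian(h, w)
    S = sp.diags(edge_mask.ravel().astype(float))  # 1 on pixels with gradient > g_thres
    b = S @ img.ravel()                            # edge brightness as the data term
    t = spsolve((S + K * L).tocsc(), b)            # solve the sparse linear system
    return t.reshape(h, w)
```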

For a given image input, the performance of ILEE depends on two parameters: gthres, which defines the edge, and K, which determines the weight of detail (i.e., high-frequency components) to be filtered. To calculate gthres for an input image, we used brightness values of the area identified as the coarse background by NNES. Since the ground noise is approximately subject to a normal distribution, we hypothesized a deducible statistical relationship between the image gradient, defined by the Scharr operator (Scharr, 2000), and the native brightness of pixels/voxels within the coarse background. Using a normal random array that simulates the noise with a 2D or 3D data structure, we demonstrate that the distribution of the background gradient magnitude is also normal-like, and both the mean (μG) and standard deviation (σG) of the gradients are directly proportional to the standard deviation of their native pixel values (σcbg). Hence, we calculated the proportionality coefficient (see Fig. S6). For the 3D mode, since the voxel size on the x- and y-axis is different from that of the z-axis, the proportionality coefficient of μG and σG over σcbg will vary for different ratios of the xy unit:z unit (see Fig. S7). To solve this problem, we simulated artificial 3D noise images and trained a multi-interval regression model that accurately (R2 > 0.999) calculates the proportionality coefficient of μG and σG over σcbg for different xy:z ratios of the voxel. Finally, using this approach and randomly selected actin image samples, we trained a model, gthres = μG + k(σcbg) · σG, to determine the gthres as an ILEE input for this study (Fig. S8). For the 3D mode, we additionally developed enhanced gthres estimation models to further extend the applicability and reliability of the ILEE_CSK library beyond the current study (Fig. S9).
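This proportionality can be checked numerically: for pure normal noise, the mean and standard deviation of the Scharr gradient magnitude scale linearly with the noise σ. The snippet below is only that sanity check; the fitted coefficient k(σcbg) and the 3D multi-interval regression are trained models (Figs. S6, S7, and S8) and are not reproduced here.

```python
import numpy as np
from skimage.filters import scharr

# Sanity check: for simulated normal noise, mu_G and sigma_G of the Scharr
# gradient magnitude are proportional to the noise sigma (sigma_cbg), so the
# ratios printed below stay approximately constant across sigma values.
rng = np.random.default_rng(1)
for sigma in (5.0, 10.0, 20.0):
    noise = rng.normal(0.0, sigma, (512, 512))
    g = scharr(noise)   # 2D Scharr gradient magnitude
    print(f"sigma={sigma:5.1f}  mu_G/sigma={g.mean() / sigma:.3f}  "
          f"sigma_G/sigma={g.std() / sigma:.3f}")

# With the fitted coefficients, g_thres = mu_G + k(sigma_cbg) * sigma_G can then
# be evaluated from the coarse-background statistics alone.
```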

To determine the appropriate setting of K, we first tested how different K values influenced the result of the local threshold image (Ithres of Fig. 3 a). As shown in Fig. S10 a, at the optimal gthres, a low value of K generated an Ithres that was highly consistent with the selected edge; as K increased, the threshold image shifted toward the average value of the selected edges, with an increasing loss of detail. As for the resultant binary image, a lower K enables the accurate recognition of thin and faint actin filament components, yet it cannot cover the entire width of thick filaments. Conversely, a high K value detects thick actin filaments with improved accuracy, resulting in a binary image that is less noisy; however, thin and/or faint filaments tend to be omitted as false-negative pixels (Fig. 3 b and Fig. S10 a). To overcome this dilemma, we applied a strategy using a lower K1 and a higher K2 to compute two different binary images that focus on thin/faint components and thick components, respectively. Then, we generated a full outer-joint image that contains all cytoskeleton components in these two binary images. This approach led to improved recognition of the cytoskeleton with varying morphologies (see Fig. 3 b).

As described above, K1 controls the performance on thin and faint filaments. Since the theoretical minimum thickness of a distinguishable cytoskeletal component for most scenarios is approximately equal to one pixel/voxel unit, K1 can be fixed as a constant to recognize the finest cytoskeletal components from a complex and heterogeneous set of input samples. Using this approach, we identified an empirical optimal K1 of 2.5. However, users may adjust K1 if they are processing exceptional images with an ultrahigh resolution, where the finest cytoskeletal component is larger than two pixels/voxels under extreme conditions. On the other hand, since different image samples have different distributions of cytoskeleton thickness, K2, which controls the performance on thick filaments, must be guided according to the maximum thickness among all samples in an experiment batch. To ensure that the algorithm is fully unguided, our strategy is to estimate an appropriate K2 from an estimated maximum thickness using all samples from a single batch of experiments, including multiple biological replicates (if applicable). To do this, we used Niblack thresholding (Niblack, 1985; API Ref. [1]) first to generate a coarse binary image (which is sufficiently accurate for the thickest portion of the filament); from this, we calculated the mean of the top 5% of the Euclidean distance transformation (DT) values of all positive pixels (see Materials and methods for additional information). Next, the top 5% means of all individual images were averaged to estimate K2 via a trained model using a sample set with manually portrayed binary ground truth (Fig. 3 c and Fig. S10, b–d). Therefore, individual images from all groups in the same batch of an experiment can be processed by K2 as estimated using the model above. In this way, the bias of human input is avoided. When processing 3D images, we additionally provided a parameter that allows using a single K, rather than K1 and K2, which balances the accuracy over thin/faint and thick/bright filaments while saving computation time.
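The per-image thickness statistic that guides K2 can be sketched as follows with scikit-image's Niblack threshold and a Euclidean distance transform; the window size is an illustrative choice, and the trained model that maps the batch-averaged statistic to K2 (Fig. 3 c and Fig. S10) is not reproduced here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_niblack

def top5_dt_mean(img, window_size=41):
    """Per-image thickness statistic guiding K2: a coarse Niblack binary image,
    then the mean of the top 5% Euclidean distance-transform values over all
    positive pixels."""
    coarse = img > threshold_niblack(img, window_size=window_size)
    dt = ndi.distance_transform_edt(coarse)
    positive = np.sort(dt[coarse])
    top5 = positive[-max(1, positive.size // 20):]   # top 5% of DT values
    return top5.mean()

# The statistic is averaged over every image in the experimental batch, and the
# batch mean is then mapped to K2 with the trained model (not shown here).
```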

Computational analysis of cytoskeleton indices

Through the image processing pipeline, cytoskeletal indices were automatically calculated from the binary image generated by ILEE. As a substantial expansion from the three previously defined cytoskeletal indices (i.e., occupancy, skewness, and CV; Higaki et al., 2010; Higaki et al., 2020), we introduced 12 indices in total (Fig. 1 a). In particular, we focused on 9 of the 12 indices in this study as they are normalized and ready for biological interpretation (see Fig. S11 for a visualized demonstration of each index). The other three indices (total connected element, total branch, and total node) serve as critical intermediate data that potentially contribute to the development of novel custom methods for advanced users and are callable through our library. All indices fall within five classes—density, bundling, connectivity, branching, and directionality—which describe the quantifiable features and functions of cytoskeletal morphology and dynamics. It is noteworthy that we interpolated the z-axis for 3D images to make a 1:1:1 length unit ratio for the three axes (i.e., cubic voxels) because the cuboid voxels generated by a confocal system are not compatible with most of the indices. Additionally, the indices require a certain level of image postprocessing (e.g., oversampling) to further enhance their accuracy. The detailed mathematical definition and processing of each index are described in Materials and methods.
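For instance, the z-axis interpolation to cubic voxels can be expressed as a standard resampling step; the snippet below is a sketch using the voxel spacing of the plant actin stacks in this study, and the interpolation order is an assumption rather than the ILEE_CSK default.

```python
import numpy as np
from scipy import ndimage as ndi

def to_cubic_voxels(stack, z_size, xy_size, order=1):
    """Resample the z-axis of a z-y-x stack so that the voxel edge ratio
    becomes 1:1:1, as required by most of the 3D indices."""
    return ndi.zoom(stack, (z_size / xy_size, 1.0, 1.0), order=order)

# e.g., the plant actin stacks in this study: 0.132 um in x/y and 0.5 um in z
toy_stack = np.zeros((25, 200, 200), dtype=np.float32)
cubic = to_cubic_voxels(toy_stack, z_size=0.5, xy_size=0.132)
print(cubic.shape)   # the z dimension is up-sampled ~3.8x to match the x/y sampling
```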

Here, we define a novel set of cytoskeletal indices to enable the measurement of certain cytoskeletal features. For the class “density,” we developed linear density, a feature that measures filament length per unit of 2D/3D space. For the class “bundling,” we developed two new highly robust indices, the diameter by total DT (diameter_TDT) and the diameter by skeleton DT (diameter_SDT), both of which compute the thickness from the filament DT data but with different sampling strategies (see Materials and methods). Both quantify the physical thickness of filament bundles, while another two indirect indices, skewness and CV, estimate the relative bundling level based on the statistical distribution of fluorescence intensity. For the class “connectivity,” we defined segment density, which estimates the sum of severing and nucleation activity within each unit of length of the cytoskeleton. A severing event separates one filament into two halves, while a nucleation event generates a new segment of the filament. As both generate new segments, it is impossible to distinguish them through static images. However, if a researcher can assume one of these activities does not change significantly among samples at specific experimental conditions, the change in segment density can be attributed to the other activities. This is an important consideration in terms of the biological behavior of the cytoskeleton as it enables the decoupling of actin depolymerization and severing, key activities facilitated by the actin depolymerizing factor (ADF) and cofilin family of proteins (Tanaka et al., 2018). For the class “branching,” our algorithm is based on Skan, a recently developed Python library for the graph-theoretical analysis of cytoskeletons (Nunez-Iglesias et al., 2018). To further explore the relationship between filament morphology and the biological activity of branching, we specifically designed a novel index, static branching activity, which we define as the total number of branches emerging from any non-end-point node per unit length of the filament. In total, this index measures the abundance/frequency of cytoskeletal branching. Finally, our library supports estimating the level of directional cytoskeletal growth by the local anisotropy index, which measures how local filaments tend to be directional or chaotic. This approach is reconstructed from an ImageJ plug-in FibrilTool (Boudaoud et al., 2014), but we extended this algorithm to 3D (Fig. 1 c).
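Greatly simplified, pixel-count versions of a few of these indices are sketched below from a 2D binary image and its skeleton; the library's exact definitions, including oversampling and unit handling, are those given in Materials and methods, not these approximations.

```python
import numpy as np
from scipy import ndimage as ndi

def sketch_indices(binary, skeleton, pixel_size=1.0):
    """Rough approximations of four of the indices named above, computed from
    a binary image and its 1-pixel-wide skeleton."""
    skeleton = skeleton.astype(bool)
    dt = ndi.distance_transform_edt(binary)
    occupancy = binary.mean()                                  # density: occupied fraction
    linear_density = skeleton.sum() * pixel_size / (binary.size * pixel_size ** 2)
    diameter_sdt = 2.0 * dt[skeleton].mean() * pixel_size      # bundling: skeleton-sampled DT
    n_segments = ndi.label(skeleton, structure=np.ones((3, 3)))[1]  # 8-connected pieces
    segment_density = n_segments / (skeleton.sum() * pixel_size)    # connectivity
    return {"occupancy": occupancy, "linear_density": linear_density,
            "diameter_SDT": diameter_sdt, "segment_density": segment_density}
```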

It is noteworthy that segment density and static branching activity estimate the intensity of dynamic processes through static images. By the most rigorous mathematical criteria, a dynamic status cannot be directly defined without introducing time. However, the introduction of the time dimension (using a video sample) generally sacrifices spatial resolution due to the technical limits of microscopy devices. While total internal reflection fluorescence microscopy (TIRF) or variable-angle epifluorescence microscopy (VAEM) are used to collect 2D videos (Henty-Ridilla et al., 2014), such methods may be limited by the insufficient quantity of filaments in the field of vision and manual counting of the dynamic events, both of which may introduce bias compared with a larger 3D data structure. Alternatively, we introduce segment density and static branching activity as indirect indices measuring these activities. Here, we assume that the occurrence of severing/nucleation/branching events and the net regeneration speed of the cytoskeleton have established (or are very near to) a dynamic equilibrium; at this point, the number of pieces or branches of the filament is positively correlated with the speed of severing/nucleation or branching. In practice, most experimental schemes set an interval between the treatment (if any) and imaging that is long enough to reach the dynamic equilibrium. Therefore, we expect these two novel indices to provide interpretable estimates of severing/nucleation and branching, especially for confocal images.

ILEE displays high accuracy and stability over actin image samples

To evaluate the performance of ILEE in terms of its accuracy and compatibility over diverse samples, we selected a dataset of actin images from Arabidopsis leaves with a diverse morphology and compared ILEE with numerous traditional global and local thresholding algorithms, including MGT. First, to evaluate the accuracy of each algorithm in terms of filament segregation, we manually prepared the ground truth binary image from each of the samples using a digital drawing monitor (Fig. 4 a, ground truth). Next, we used each ground truth binary image as a reference and compared the filament architecture obtained by ILEE, MGT, and six additional adaptive thresholding algorithms. These additional thresholding algorithms include Otsu (Otsu, 1979), Triangle (Zack et al., 1977), Li (Li and Tam, 1998), Yen (Yen et al., 1995), Niblack (Niblack, 1985), and Sauvola (Sauvola and Pietikäinen, 2000; Fig. 4). Out of additional concern for rigor, we anticipated that false-positive pixels might be obtained due to the operator’s bias during the generation of each of the ground truth images (even when the operator is experienced in the actin imaging field). Therefore, we further analyzed each segment of false-positive pixels by its shape and connectivity to the matched filaments and identified the actin-like false-positive pixels as possible real actin components (see Materials and methods).
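All of the reference methods are available in scikit-image, so the comparison binary images can be generated as sketched below; the window size for the two local methods is an illustrative choice and not necessarily the one used in this study.

```python
from skimage import filters

def benchmark_binaries(img, window_size=41):
    """Binary images from the classic global (Otsu, Triangle, Li, Yen) and
    local (Niblack, Sauvola) thresholding methods used as references."""
    return {
        "otsu":     img > filters.threshold_otsu(img),
        "triangle": img > filters.threshold_triangle(img),
        "li":       img > filters.threshold_li(img),
        "yen":      img > filters.threshold_yen(img),
        "niblack":  img > filters.threshold_niblack(img, window_size=window_size),
        "sauvola":  img > filters.threshold_sauvola(img, window_size=window_size),
    }
```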

As shown in Fig. 4 a (visualized demonstration), Fig. 4 b (quantitative analysis), and Fig. 4 c (bias analysis), ILEE offers superior performance with improved accuracy, reduced false-positive/negative occurrences, and the lowest bias over local filament thickness compared with current approaches. It is noteworthy, however, that the adaptive global thresholding approaches (from Otsu to Yen) were generally able to detect some thick and bright bundles of the cytoskeleton. However, these approaches are unable to capture faint filaments, and as a result, generate a high false-negative rate. Conversely, both adaptive local thresholding approaches, Niblack and Sauvola, generate numerous block-shaped false-positive elements and fail to capture the near-edge region of thick filaments. For the MGT and Li methods, although they showed satisfactory match rates and lower averaged false-positive/negative rates, their performance is far less stable than that of ILEE (Fig. 4 b).

Next, we evaluated the accuracy and stability of cytoskeletal indices using ILEE versus other commonly used image analysis algorithms. To do this, we first computed the ground truth indices from the manually generated binary images; then, quantitative measurements were collected from all methods and normalized by the relative fold to the results generated from the corresponding ground truth image. As shown in Fig. 4 d, ILEE showed the best overall accuracy and cross-sample stability compared with all other quantitative approaches, including the highest accuracy for occupancy, skewness, CV, and diameter_TDT. However, we also observed that in terms of the morphology-sensitive indices (i.e., linear density, segment density, and branching activity), ILEE did not fully conform to the data collected from ground truth binary images. Upon further inspection, we determined that this is because the manually portrayed ground truth images and ILEE results showed different tendencies in judging the pixels in the narrow areas between two bright filaments (see Discussion). While other approaches displayed obvious, somewhat predictable, inaccuracies, the MGT and Li methods still generated satisfactory results, which echoes their performance in actin segmentation. However, the performance of these two algorithms among diverse and complex biological samples seemed to be less stable than ILEE as visually suggested by their standard deviation.

To accurately evaluate the stability and robustness of ILEE performance, we continued to analyze the coefficient of variation of all groups (Fig. S12), uncovering that ILEE is the only approach that simultaneously maintained high accuracy and stability. Next, we tested the robustness of ILEE and other approaches against noise signal disturbance by adding different levels of Gaussian noise to the image dataset (Figs. S13, S14, and S15). Using this approach, we observed that ILEE is still the best-performing algorithm, maintaining stable and accurate image binarization and cytoskeleton indices against increasing noise. Taken together, these results demonstrate that ILEE has superior accuracy, stability, and robustness over MGT and other classic automated image thresholding approaches in terms of both cytoskeleton segmentation and index computation.
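The noise-robustness test amounts to corrupting each image with zero-mean Gaussian noise of increasing σ before re-running the pipeline; a minimal sketch is below, where clipping to a 12-bit range matches the acquisition settings in Materials and methods and is otherwise an assumption.

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng=None):
    """Add zero-mean Gaussian noise of the given sigma and clip to a 12-bit range."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 4095)
```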

3D ILEE preserves more information and gains higher fidelity over 2D ILEE

While the 2D ILEE generally outcompetes MGT and other commonly used automated algorithms (Fig. 4), the comparison of cytoskeleton segmentation approaches using the manually painted ground truth is currently limited to 2D images because it is extremely difficult and error-prone to select the “3D ground truth” voxel by voxel. However, as the 3D mode theoretically preserves information from the z-axis and renders higher accuracy, it is necessary to circumvent these challenges and verify its merits. Therefore, we turned to a different strategy using synthetic artificial 3D actin images with known ground truth to investigate the performance of the 3D mode over 2D (Fig. 5 a and Fig. S16).

First, we constructed an actin image simulation model, which mimics three different fractions of image brightness (real actin, ground noise, and diffraction noise; see Fig. 2 a) using three independent statistical solutions (Fig. S16 a; also see Materials and methods). In brief, we utilized a training dataset of 31 diverse 3D samples to generalize the principles that describe the position-based brightness distributions of voxels for the three fractions independently. Next, we adopted only the skeleton image generated by 3D ILEE from the training samples as the topological frame and refilled the whole image with new brightness values via the actin image simulation model. The advantage of this approach, rather than generating the frame de novo, was to ensure that the topological structure of the artificial filaments mimics the shape of genuine actin as closely as possible. Accordingly, we generated 31 3D artificial images and processed them using the ILEE 2D and 3D modes.
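A much-reduced sketch of the three-fraction idea is given below: filament fluorescence painted onto a known topological frame, a blurred copy standing in for diffraction, and normal ground noise. The distributions and parameter values are arbitrary placeholders, not the trained simulation model of Fig. S16.

```python
import numpy as np
from scipy import ndimage as ndi

def simulate_actin_image(frame, rng=None):
    """Toy three-fraction image: `frame` is a known binary filament frame
    (e.g., a skeleton image); brightness values, blur width, and noise levels
    are placeholders."""
    rng = np.random.default_rng(0) if rng is None else rng
    fluorescence = frame * rng.lognormal(mean=6.0, sigma=0.4, size=frame.shape)
    diffraction = ndi.gaussian_filter(fluorescence, sigma=1.5)  # spill-over from neighbours
    ground_noise = rng.normal(120.0, 12.0, frame.shape)         # sensor noise around a constant
    return np.clip(fluorescence + diffraction + ground_noise, 0, 4095)
```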

Our primary question was how much 3D ILEE reduces the information loss inherent to the 2D pipeline and whether it yields improved accuracy of the cytoskeletal indices. As shown in Fig. 5 b, we observed that the 2D pipeline resulted in considerable information loss with reduced accuracy. Indeed, while the 3D segmentation almost accurately covers the filament voxels, the 2D pipeline only captured 8% of the data points as pixels and 42% of the total length of filaments, suggesting a potentially biased sampling. Consequently, 3D ILEE generally produces index values closer to the ground truth than 2D ILEE, which indicates the 3D mode indeed possesses better absolute accuracy/fidelity.

In some scenarios, studies are interested in the relative difference between experimental groups for biological interpretations rather than the absolute accuracy of quantifiable features. Therefore, we also measured the “comparative sensitivity” of 3D and 2D ILEE, which reflects their capability to distinguish relatively high and low index values. To assess this, the index values of the ground truth images, as well as those computed through the 3D and 2D modes, were first normalized to the fold of the minimum value of the corresponding group, followed by linear correlation analysis between the ground truth and ILEE outputs of each sample. As shown in Fig. 5 c, the 3D mode has a higher correlation for seven of the nine indices, among which diameter (TDT), segmentation activity, and anisotropy were considerably improved. Interestingly, we observed that the 2D mode performed slightly better for indices of the density class (e.g., occupancy and linear density). We posit that this is because z-axis projection may increase the contrast between samples with low and high filament density. In conclusion, our data suggest that the 3D mode of ILEE indeed eliminates the issues related to image projection and provides reliable results with higher accuracy and comparative sensitivity.
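The comparative-sensitivity readout described above reduces to a correlation between min-normalized index vectors; a minimal sketch:

```python
import numpy as np
from scipy.stats import pearsonr

def comparative_sensitivity(ground_truth, computed):
    """Normalize each index vector to the fold of its group minimum, then
    correlate ground-truth and computed values across samples."""
    gt = np.asarray(ground_truth, dtype=float)
    cp = np.asarray(computed, dtype=float)
    return pearsonr(gt / gt.min(), cp / cp.min())   # returns (r, P value)
```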

ILEE leads to the discovery of new features of actin dynamics in response to bacteria

The primary impetus for creating the ILEE algorithm was to develop a method to define cytoskeleton organization from complex samples, including those during key transitions in cellular status. For example, previous research has demonstrated that the activation of immune signaling is associated with specific changes in the cytoskeletal organization (Henty-Ridilla et al., 2014; Henty-Ridilla et al., 2013; Li et al., 2017; Lu et al., 2020). Complementary to these studies, other research identified the temporal and spatial induction of changes in the cytoskeletal organization as a function of pathogen (e.g., Pseudomonas syringae) infection and disease development (Guo et al., 2016; Kang et al., 2014; Shimono et al., 2016). The sum of these studies, which broadly applied MGT-based quantitative analysis of cytoskeleton architecture, concluded that the virulent bacterial pathogen triggers elevated density (by occupancy) but did not induce changes in filament bundling (by skewness) in the early stages of infection. Since one of our major motivations herein was to develop an ILEE-based toolkit—supported by novel cytoskeletal indices—to investigate the process of pathogen infection and immune signaling activation, we collected raw data from a previous study (Lu et al., 2020) describing a bacterial infection experiment using Arabidopsis expressing an actin fluorescence marker (i.e., GFP-fABD2; Fig. 6 a), followed by confocal imaging analysis by ILEE as well as MGT conducted by three independent operators with rich experience in quantitative actin analysis (Fig. 6, c–n). Additionally, because researchers sometimes apply a universal global threshold to all images from a batch of biological experiments to avoid excessive labor, we included this approach to observe its performance as well. In this experiment, the only categorical variable is whether the sampled plants were treated with bacteria (EV) or not (mock). In total, nine indices that cover features of density, bundling, severing, branching, and directional order are measured and compared.

Our first question was whether the operator’s bias introduced by MGT influences the results and conclusions of the experiment. We thereby analyzed the correlation of the MGT results from the three individual operators and found only a weak correlation between different operators (Fig. 6 b), which indicates MGT indeed introduces bias and potentially impacts quantitative results. Interestingly, while minor statistical discrepancies between MGTs by different operators are found in some indices (i.e., skewness and segment density), most of the MGT results (either adaptive or fixed) showed the same trend as 2D ILEE, but with a far higher standard deviation, or lower stability (Fig. S17 a), over a given biological treatment. This indicates that the historical data based on MGT should be generally trustworthy despite the biased single data points, but an accurate conclusion must be based on a high sampling number that offsets the deviation of individual data points. Since ILEE has less systemic error over biological repeats, we also tested whether ILEE renders higher statistical power to identify potential significant differences. As suggested by Fig. S17 b, we found that ILEE has the lowest P values among t tests conducted for indices with a trend of difference. We believe these advantages demonstrate ILEE as a better choice for actin quantification.

Next, we attempted to understand whether different indices of the same class, particularly density and bundling, reflect the quantitative level of the class in accordance or instead show inconsistency. For density, we correlated the occupancy and linear density values of all methods over actin images of both mock and EV groups and found that occupancy and linear density measurements are in high conformity, with a Pearson coefficient of 0.98 (Fig. S18). For bundling indices, we were interested in their level of conformity because direct indices (based on the binary shape) and indirect indices (based on the brightness frequency distribution) are entirely different strategies to measure bundling. Using the same approach of correlation analysis, we found that diameter_TDT and diameter_SDT display a strong positive correlation (Fig. 6 j), while skewness and CV have merely a medium-low correlation (Fig. 6 i), which echoes the previous report demonstrating that skewness and CV perform differently in bundling evaluation (Higaki et al., 2020). Unexpectedly, we also found that CV (as a representative of indirect indices) and diameter_SDT (as a representative of direct indices) show a striking lack of correlation (Fig. 6 k). This is perplexing as it raises the question of whether skewness or CV should be regarded as an accurate measurement of bundling (see Discussion). This discrepancy is also observed with 3D ILEE, whose CV and diameter_SDT over mock versus EV revealed opposite results with significant differences. In general, we believe that the biological conclusion that the Pst DC3000 treatment renders increased actin bundling should be further studied.

Lastly, we asked if additional features of plant actin cytoskeletal dynamics in response to virulent bacterial infection can be identified by the newly introduced indices and the enhanced performance of ILEE. As shown in Fig. 6, we observed significantly regulated segment density, local anisotropy, and static branching activity triggered by the bacteria. At a minimum, these discoveries potentially lead to new biological interpretations and contribute to identifying other immune-regulated processes as a function of actin dynamics. However, while most 2D approaches were consistent and in agreement with the other indices, the segment density estimated by 3D ILEE indicates a significant but opposite conclusion. As suggested by the ground-truth-based comparison of accuracy between 3D and 2D ILEE (Fig. 5), we believe this discrepancy is due to the information loss and misinterpretation caused by the z-axis projection during the 2D pipeline. Hence, we have higher general confidence in the results and conclusions obtained through the 3D mode.

ILEE has broad compatibility with various sample types

Cytoskeleton imaging from live plant samples is arguably one of the most challenging types of images to evaluate due to the dynamic topology and uneven brightness of actin filaments. While we demonstrated that ILEE shows superior performance over plant actin samples, ILEE and the ILEE_CSK library are generally designed for non-specific confocal images of the cytoskeleton and are therefore applicable to other types of samples. To investigate the compatibility of ILEE with other types of image samples, we tested ILEE on both plant microtubules (Faulkner et al., 2017) and animal cell actin images (Fig. S19). Importantly, we found that ILEE and the ILEE_CSK library can process both plant and animal images with satisfying performance. This is encouraging, as ILEE can substitute for or improve upon the Hough transform, a straight-line detection algorithm commonly used for the animal cytoskeleton (generally straight and thick) but limited in that it neglects or miscalculates curvy cytoskeleton fractions (Liu et al., 2020; Liu et al., 2018). With the advancement of ILEE, Hough transform-based analysis may not be essential, and the potential cytoskeleton indices that rigorously require the Hough transform can still utilize ILEE as a provider of binary image input for more accurate results.

Herein, we describe the creation of ILEE, an accurate and robust filament segregation algorithm for the unguided quantitative analysis of the organization of the eukaryotic cytoskeleton. As presented, this approach supports the in vivo analysis of both 2D and native 3D data structures, enabling an unbiased evaluation of cytoskeletal organization and dynamics. In addition to the development of key methods, we also generated a publicly available Python library that supports the automated computation of 12 filament indices of five classes of morphological features of the cytoskeleton. As described above, the data indicate that ILEE shows superior accuracy, robustness, and stability over existing cytoskeleton image analysis algorithms, including the widely adopted MGT approaches (Higaki et al., 2010; Lu and Day, 2017). As a result of this new approach, we have developed an open-access library to conduct ILEE-based cytoskeleton analysis, which eliminates limitations imposed by the traditional 2D MGT approaches, including the introduction of user bias and different types of information loss. Here, we would like to further explain and discuss several interesting discoveries and issues involved in this research.

Robustness of the NNES-based prediction of global gradient thresholds

The gradient threshold (gthres) defines the selected edge of actin filaments for implicit Laplacian transformation, the appropriateness of which greatly determines the performance of ILEE. To calculate gthres without imposing a user-biased input, our strategy utilized specific feature values collected from the NNES curve and human-guided MGT to train a prediction model for the rapid rendering of a coarse area of the image background. Through this approach, we were able to deduce the corresponding gthres by the mathematical relationship between the statistical distribution of the native pixel values and the Scharr gradient magnitude of the coarse background (Figs. S6, S7, S8, and S9, and Materials and methods). This step might at first appear unnecessary since, alternatively, the most straightforward strategy would be to directly train a prediction model using the image gradient histogram and human-determined gthres. However, as the gradient operator (see Materials and methods) for any given object pixel is influenced by the values of the surrounding pixels, the calculated gradient on the edge of the background is highly impacted (overestimated) by the contrast of the foreground (cytoskeleton) versus the background. In other words, the brightness frequency distribution of the background gradient will change at elevated cytoskeleton fluorescence levels, even though the background per se remains the same. The outcome of this is a significant decrease in the accuracy of the resulting gthres. For this reason, we assert that gthres should be mathematically deduced from a predetermined background region rather than directly predicted via human-trained models or calculated from the frequency distribution of gradients, even using the predetermined background.

ILEE and visual inspection have different tendencies on the topology between two bright filaments

Our data demonstrate that ILEE generally shows dominant robustness and accuracy for most indices compared with manually portrayed ground truth binary images. However, there are three indices (linear density, segment density, and static branching activity) for which ILEE renders stable yet dramatically lower output than those derived from ground truth images (Fig. 4 d). Since the results are stable, we anticipate there may be systemic inclinations in either the human-portrayed ground truth or the ILEE algorithm—or both. After inspecting the binary images generated by ILEE compared with the ground truth, we identified a potentially critical reason: ILEE is less likely to judge the faint, ambiguous thin “connections” inside the narrow space between two bright filaments as positive filaments. As demonstrated in Fig. S20, while the ground truth and ILEE binary image look very similar, their skeleton images—which represent their topological structure—show a discrepancy between two bundles of bright filaments. Considering their procedure of generation, we speculate this is because (1) ILEE fully outer-joins the binary results by a lower K1 and a higher K2 (Fig. 3 c), among which K2 sacrifices the sensitivity to local filaments at high signal regions to improve the sensitivity at low signal regions, including the edges of thick filaments, and (2) human eyes may recognize “imaginary filaments” that may not exist in such complex areas of images. To our knowledge, there is no overwhelming evidence suggesting either 2D ILEE or the human eye is more accurate, but 2D ILEE is indeed more stable and conservative. However, 3D ILEE may solve this paradox because many “adjacent” bright bundles are artifacts of the z-axis projection and are distant enough in 3D space to offer ILEE a satisfactory resolution to split the filaments.

The 2D and 3D modes of ILEE may lead to different conclusions

For the analysis of leaf samples upon bacterial pathogen treatment (Fig. 6), 2D ILEE generally agreed with MGT while offering higher robustness and stability. Interestingly, this is not always the case for 3D ILEE, since it led to statistical conclusions different from the 2D approaches in linear density, skewness, diameter (SDT), segment density, and static branching activity. However, such differences are understandable because the 3D mode indeed computes indices with higher accuracy and overcomes many limitations of the 2D data structure. As demonstrated in Fig. 5, b and c, 3D ILEE provides improved absolute fidelity and comparative sensitivity, to varying degrees, for skewness, diameter (SDT), segment density, and static branching activity; therefore, our current data support that 3D ILEE is more trustworthy for these indices. Similarly, the discrepancy of linear density in the 2D mode versus 3D in Fig. 6 d also agrees with our observation that 2D ILEE has higher comparative sensitivity for indices reflecting density. Hence, in terms of any potential discrepancy between the 2D and 3D modes, we offer this general recommendation: for occupancy only, 2D ILEE has a better capability to distinguish biological differences (but not necessarily the absolute quantification), and for all other cytoskeleton features, 3D ILEE is more trustworthy and dependable. Although it is currently difficult to make a definite conclusion about whether the current 2D or 3D mode is more accurate across all scenarios, it is predictable that 3D computation will gradually substitute for the classic 2D data structure as mainstream personal computers gain increased computational power in the future.

Potential development and prospect

While ILEE has already remedied many disadvantages of traditional methods such as MGT, we are still working to further advance the ILEE approaches presented herein. Our goal is to ultimately establish algorithms and a toolbox that provide cytoskeletal indices at perfect accuracy and effectively answer the specific demands in this field of study. As such, we offer the following as a list of potential upgrades and applications to be integrated into the library:

Deep learning–based cytoskeleton segmentation algorithm with a “foreign object” removal function. As presented herein, ILEE enables the generation of trustworthy binary images on a large scale, which enables the construction of deep learning models to identify cytoskeleton components from confocal images with potentially better performance. Deep learning is also the key to solving the ultimate problem of all current cytoskeleton segmentation algorithms (including ILEE)—the inability to detect and erase non-cytoskeleton objects with high fluorescence, such as the nucleus and cytoplasm. As one approach to circumvent this limitation, free fluorescence proteins can be used as a cell-permeable false signal. This will enable training a model to recognize and exclude the non-cytoskeleton-like foreign objects, ideally rendering pure cytoskeletal datasets.

Vectorized intermediate image. After generating the difference image (i.e., Idif; Fig. 3 a) using ILEE, one computational limitation of our Python-based algorithm is the tradeoff between the demand for ever-higher-resolution imaging and limited computational power. Accordingly, an ideal strategy is to transfer the pixel/voxel image to a 2D vector image or 3D polygon mesh for index calculation. This will further enhance the accuracy of ILEE given an acceptable requirement of computational power.

Regions of interest and organelle segmentation. There is currently a high demand in plant cell biology to quantify cytoskeletal parameters in stomatal guard cells, as well as additional cellular and subcellular architectures. In future releases of ILEE, we aim to develop approaches to enable automated recognition and selection of regions of interest, such as stomata, for various demands by the community.

Compatibility with x-y-t and x-y-z-t data, where t represents time. We are in the process of developing a 4D-compatible analysis of cytoskeletal dynamics that tracks filament organization over time. This approach will provide a temporal evaluation of supported indices with high accuracy and robustness.

Plant genotypes and growth

Arabidopsis thaliana ecotype Col-0 (wild type) expressing the actin cytoskeleton fluorescence marker GFP-fABD2 (Lu et al., 2020) was used in this study. Arabidopsis seeds were stratified for 2 days in the dark at 4°C and then sown into soil. All plants were grown in a BioChambers model FLX-37 walk-in growth chamber (BioChambers) at 20°C under a 12-h photoperiod (12 h of light/12 h of dark) with 60% relative humidity and a light intensity of ∼120 μmol photons m−2s−1.

Bacteria growth and plant inoculation

The plant pathogenic bacterium Pseudomonas syringae pv. tomato DC3000 carrying an empty broad-host-range vector pVSP61 (Pst DC3000/EV; Loper and Lindow, 1994) was grown on NYGA plates (5 g/l peptone, 3 g/l yeast extract, 2% glycerol, and 15 g/l agar) plus 50 μg/ml rifampicin and 50 μg/ml kanamycin for cultivation and inoculation. Bacterial treatments for actin dynamics analysis were conducted following previously described methods (Lu et al., 2020). Briefly, 2-wk-old Arabidopsis Col-0/GFP-fABD2 plants were dipped for 30 s in dip-inoculation solution (10 mM MgCl2 + 0.02% Silwet L-77) containing DC3000/EV at a concentration of OD600 = 0.2 (ca. 2 × 10⁷ colony forming units/ml). Confocal images were collected 24 h after inoculation.

Mouse cancer cell samples

Yale University Mouse Melanoma line YUMMER1.7D4 cells (#SCC243; EMD Millipore) were cultured in DMEM (#30-2006; ATCC) supplemented with 10% FBS (#10437-028; Gibco), 1% Pen-Strep (#15140122; Thermo Fisher Scientific), and 1% NEAA (#11140035; Gibco). For staining of actin stress fibers, ∼10,000 cells were seeded onto a glass coverslip maintained in a six-well plate overnight. First, semiconfluent cells were fixed with 3.7% formaldehyde for 15 min at RT, washed three times with PBS, and then blocked in PBS supplemented with 2% BSA and 0.1% Triton for 1 h at RT. Next, cells were incubated with 100 nM rhodamine–phalloidin (#PHDR1; Cytoskeleton, Inc.) in blocking buffer for 30 min at RT in the dark and then washed three times with PBS, 5 min each, with gentle shaking at RT. Stained cells on the coverslip were mounted in ProLong Glass Antifade Mountant with DAPI (#P36982; Thermo Fisher Scientific). Slides were cured overnight at RT and then imaged.

Confocal microscopy

For plant leaf actin imaging, 2-wk-old Col-0/GFP-fABD2 plants were used for data collection and analysis. Images of epidermal pavement cells and guard cells were collected using a laser scanning confocal microscope (Olympus FV1000D) at RT. Optical settings: 65×/1.42 PlanApo N objective with a 488 nm excitation laser and 510–575 nm emission filter (for GFP fluorescence); the imaging medium was Olympus Low Auto-fluorescence Immersion Oil (Olympus #IMMOIL-F30CC). Z-stack images were collected at a resolution of 800 × 800 × 25 (xyz) with a 12-bit dynamic range. The voxel size was 0.132 μm on the x- and y-axes and 0.5 μm on the z-axis. For animal cell actin images, YUMMER1.7D4 cells stained with rhodamine–phalloidin were imaged using the same confocal system. Optical settings: 100×/1.40 UPLSAPO objective with a 559 nm excitation laser and 570–670 nm emission filter (for rhodamine). The same immersion oil was used. Z-stack images were collected at a resolution of 800 × 800 × 10 (xyz) with a voxel size of 0.039 μm on the x- and y-axes and 0.16 μm on the z-axis. The microscope operation software was FV10-ASW, ver. 04.02.02.09 (Olympus Corporation).

Manually portrayed ground truth binary image

Seven raw projected images of 400 × 400 pixel size with diverse visual appearances (e.g., actin density, shape, thickness, and fluorescence brightness) were selected from our actin image database. Using a pen-drawing display (HUION KAMVAS 22 Plus, GS2202) and GIMP software, we slightly enhanced the brightness of low-value pixels to clarify the actin structure and carefully portrayed the actin component of each selected image at the single-pixel level. The portrayed layer was extracted and converted to binary format for further evaluation.

Double-blind MGT analysis

For the MGT process of the mock versus DC3000/EV-inoculated sample pool in Fig. 6, we removed the file names, randomized the order of the image files, and distributed them to three independent cell biologists with rich experience in cytoskeleton analysis (referred to as OA, OB, and OC), who manually determined the global threshold value of each sample using the approach described previously (Lu et al., 2020). Briefly, ImageJ or an equivalent GUI was used to mark, in real time, the pixels above a manually tunable threshold, allowing each operator to visually determine the value that, in their subjective judgment, best segmented the cytoskeleton. Once completed, we restored the grouping of the samples for batch analysis and used Python to mimic the MGT pipeline generally conducted in ImageJ, applying the determined threshold values. Operator OA also provided a universal threshold value (referred to as OA_fix; Fig. 6) applied to all samples, as a commonly used fast MGT approach.
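For reference, the batch portion of this Python stand-in for the ImageJ-style MGT step amounts to a simple global thresholding loop; the sketch below is a minimal illustration in which the file names and threshold values are hypothetical placeholders, not the values used in our analysis.

```python
# Minimal sketch of a Python stand-in for the ImageJ-style MGT step.
# File names and threshold values below are hypothetical examples.
import numpy as np
from skimage import io

operator_thresholds = {"sample_01.tif": 410, "sample_02.tif": 385}  # per-operator MGT values

for fname, t in operator_thresholds.items():
    img = io.imread(fname)        # 12-bit projected image
    binary = img > t              # global threshold chosen by the operator
    occupancy = binary.mean()     # fraction of pixels above the threshold
    print(f"{fname}: occupancy = {occupancy:.4f}")
```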

Statistical analysis and data visualization

All data analysis was conducted in the “Spyder” IDE (integrated development environment) with a Python 3.8/3.10 environment. Image and data visualization, including built-in calculation of means and standard deviations, was conducted with the “matplotlib” and “seaborn” libraries (API Ref. [10, 11]). t tests, including t-test-based multiple comparisons, were conducted using the “scikit-posthocs” library (API Ref. [12]). Linear correlation analysis was conducted with the “SciPy” library (API Ref. [3]). Linear and non-linear modeling was conducted using the “scikit-learn” library (API Ref. [13]). Fitting and simulation of specific statistical distributions were conducted with the “SciPy” library (API Ref. [3]). The rationale of each statistical analysis is described in the corresponding figure legend.
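As an illustration of the multiple-comparison call listed above, the snippet below runs a t-test-based pairwise comparison with scikit-posthocs on a hypothetical long-format table; the column names and values are placeholders, not our data.

```python
# Hedged example of a t-test-based multiple comparison (column names are placeholders).
import pandas as pd
import scikit_posthocs as sp

df = pd.DataFrame({
    "occupancy": [0.12, 0.15, 0.11, 0.22, 0.25, 0.24],
    "group": ["mock", "mock", "mock", "DC3000", "DC3000", "DC3000"],
})
# Pairwise t tests with Holm correction; returns a matrix of adjusted P values.
pvals = sp.posthoc_ttest(df, val_col="occupancy", group_col="group", p_adjust="holm")
print(pvals)
```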

Determination of K2 for sample batches

For both 2D and 3D modes of ILEE, each 12-bit single-channel 3D image $I(x,y,z)$ in a batch of samples was transferred into a 2D image $I_{proj}(x,y)$ by z-axis maximum projection, where each pixel is $I_{proj}(x,y) = \max\{I(x,y,z) \mid z \in \mathbb{N}^+,\ z < z_{max}\}$. $I_{proj}$ was processed by Niblack thresholding (API Ref. [1]) to render $I_{nibthres}(x,y)$, with the parameters optimized to $k = -0.36$ and window\_size $= 2\,\mathrm{int}(l/25)+1$ for best performance, where $l$ is the mean of the x and y resolutions of $I_{proj}$. A binary image defined as $I_{binary}(x,y) = 1$ if $I_{proj}(x,y) > I_{nibthres}(x,y)$, and $0$ otherwise, was generated. The binary image was processed through Euclidean distance transformation (API Ref. [2]), and the mean of the highest 5% of values was used as the input of the K2 estimation model (see Fig. 3 c), which outputs an individual recommended K2. Finally, the mean of all individual K2 values is output as the recommended K2 for the entire group.
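The per-sample portion of this pipeline can be sketched as follows; the K2 regression itself is the fitted model from Fig. 3 c, so it is represented here only by a hypothetical placeholder (`k2_estimation_model`), and the z-axis is assumed to be the first array axis.

```python
# Sketch of the per-sample K2 input calculation (the regression model is a placeholder).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_niblack

def k2_model_input(img_3d):
    """Return the distance-transform statistic fed to the K2 estimation model."""
    proj = img_3d.max(axis=0)                      # z-axis maximum projection (z = axis 0 assumed)
    l = int(np.mean(proj.shape))                   # mean of x and y resolutions
    window = 2 * (l // 25) + 1                     # odd Niblack window (assumed l/25 scaling)
    nib = threshold_niblack(proj, window_size=window, k=-0.36)
    binary = proj > nib                            # Niblack binary image
    dist = ndi.distance_transform_edt(binary)      # Euclidean distance transform
    top5 = np.sort(dist[binary])[-max(1, int(0.05 * binary.sum())):]
    return top5.mean()                             # mean of the highest 5% of values

# recommended_K2 = np.mean([k2_estimation_model(k2_model_input(im)) for im in batch])
# where k2_estimation_model is the fitted model from Fig. 3 c (not shown here).
```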

ILEE

An abbreviated workflow of ILEE is illustrated in Fig. 3 b. Here, we describe the overall process in further detail. For the 2D mode, the input image structure is $I(pix) = I_{proj}(x,y) = \max\{I(x,y,z) \mid z \in \mathbb{N}^+,\ z < z_{max}\}$. For the 3D mode, $I(pix) = I(x,y,z)$. First, $I$ is treated by a significant difference filter (SDF; Fig. S5) to render $I_{SDF}$, where:
It is noteworthy that the coefficients 0.293 and 0.707 represent a linear interpolation to estimate the value on the diagonal that is 1 pixel from the center (Fig. S5). In total, the SDF substitutes a pixel with the mean of its eight adjacent pixels if the absolute difference between the pixel and that mean exceeds twice the standard deviation of the surrounding pixels. Then, $I_{SDF}$ is input to a discrete Gaussian filter with a 3 × 3(×3) weighting kernel at σ = 0.5 to render $I_{pre} = \mathrm{Gauss}(I_{SDF})$, the smoothed, preprocessed image. Since confocal microscopy has different resolutions on the x/y and z axes (hereafter named $U_{xy}$ and $U_z$), in the 3D mode we adjusted the weighting kernel from $O_{Gauss}$ to $O'_{Gauss}$ by a scaling operator, as shown below:
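A simplified version of the SDF step can be written with plain convolutions; the sketch below uses an unweighted 8-neighbor mean and standard deviation in place of the interpolated diagonal weighting (0.293/0.707) described above, so it is an approximation of the idea rather than the exact ILEE filter.

```python
# Simplified 2D significant difference filter (unweighted 8-neighborhood version).
import numpy as np
from scipy import ndimage as ndi

def sdf_2d(img):
    img = img.astype(float)
    kernel = np.ones((3, 3)); kernel[1, 1] = 0                       # the 8 surrounding pixels
    n = kernel.sum()
    mean = ndi.convolve(img, kernel / n, mode="reflect")             # neighbor mean
    mean_sq = ndi.convolve(img ** 2, kernel / n, mode="reflect")     # neighbor mean of squares
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0, None))             # neighbor standard deviation
    out = img.copy()
    outliers = np.abs(img - mean) > 2 * std                          # deviates > 2 SD from neighbors
    out[outliers] = mean[outliers]                                   # replace outlier with neighbor mean
    return out
```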
From Ipre, the gradient magnitude image G is rendered through the Scharr operator (Scharr, 2000) as:
for the 2D mode, or:
for the 3D mode. Next, to calculate the gradient threshold ($g_{thres}$) as the input for ILEE, a global threshold $t_{cbg}$ to determine the coarse background ($I_{cbg}$, as a flattened image) was calculated by the non-connected negative element scanning (NNES) function, which satisfies the following:
If $I$ is processed using the 2D mode, as demonstrated previously (see Fig. S6), the statistical mean and STD of the gradient magnitude of $I_{cbg}$ ($\mu_{G.cbg}$ and $\sigma_{G.cbg}$, respectively) are univariate proportional functions of the STD of $I_{cbg}$, $\sigma_{cbg}$, and are not influenced by the mean of $I_{cbg}$. Particularly, $\mu_G$ and $\sigma_G$ of the coarse background area of the gradient image are:
With these, we established a $g_{thres}$ estimation model, where $g_{thres} = \mu_{G.cbg} + k(\sigma_{cbg})\,\sigma_G$. Using the optimized parameters described in Fig. S8, $g_{thres}$ is deduced as:
If $I$ is processed by the 3D mode, the mismatch between $U_{xy}$ and $U_z$ will not only impact the weighting of the Scharr operator but also influence the proportional coefficients relating $\sigma_{cbg}$ to $\mu_G$ and $\sigma_G$. We simulated the exact mathematical relationship between the proportional coefficient $k_s$ and $U_z/U_{xy}$ (see Fig. S7), so $g_{thres}$ for the 3D mode can be calculated as:

The total process above to compute gthres is referred to as the function gthres.estimate(Ipre,Icbg) in Fig. 3 a (1.3).
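For intuition, an empirical counterpart of this estimate, which measures the gradient statistics of the coarse background directly rather than through the fitted $k(\sigma_{cbg})$ model, can be sketched as follows; the constant `k` below is a hypothetical placeholder and not the optimized parameter of Fig. S8.

```python
# Empirical stand-in for gthres: gradient statistics of the coarse background (2D case).
import numpy as np
from skimage.filters import scharr

def gthres_empirical(i_pre, t_cbg, k=3.0):   # k is a placeholder, not the fitted parameter
    grad = scharr(i_pre)                     # 2D Scharr gradient magnitude
    cbg = i_pre <= t_cbg                     # coarse background mask from the NNES threshold
    mu_g, sigma_g = grad[cbg].mean(), grad[cbg].std()
    return mu_g + k * sigma_g                # threshold above the background gradient noise
```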

The critical step to generate the threshold image of the input sample $I$ is implicit Laplacian smoothing. This algorithm builds a linear system using the Laplacian operator to achieve local adaptive thresholding based on the edges of cytoskeletal components. Leveraging the spectral characteristics of discrete Laplacian operators, we can filter out high-frequency components (i.e., high-fluorescence fractions of the cytoskeleton) while preserving the low-frequency salient geometric features of background fluorescence. First, an edge image is defined as $I_{edge}(pix) = I_{pre}(pix)$ if $G(pix) > g_{thres}$, and $0$ otherwise; we then transform $I_{edge}$ into a flattened image vector $\mathbf{i}_{edge}$, which can be represented as:
where $S$ is a (sparse) selection matrix, i.e., a diagonal matrix whose i-th diagonal entry is 1 if the i-th pixel has a gradient magnitude above $g_{thres}$ and 0 otherwise.
Next, according to the concept of the Laplacian operator, we constructed the (sparse) Laplacian matrix L that satisfies:
where the Laplacian operators in different modes are defined as:
Therefore, we can establish an implicit Laplacian linear system that targets the edge:
where $\mathbf{i}_{thres}$ is the unknown to be solved by the conjugate gradient method and restored to the 2D/3D array $I_{thres}$, and $K$ is a weight that adjusts the influence of the Laplacian operator on the result (see Fig. 3 b).
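One common way to realize such a Laplacian-constrained system in Python is sketched below: edge pixels anchor the solution while the remaining pixels are filled by minimizing Laplacian energy, and the sparse system is solved with the conjugate gradient method. The system assembled here, $(S + K\,L^{\mathsf T}L)\,\mathbf{i}_{thres} = S\,\mathbf{i}_{edge}$, is an illustrative generic formulation under stated assumptions, not necessarily the exact system used by ILEE.

```python
# Illustrative Laplacian-constrained interpolation solved by conjugate gradients.
# Generic formulation; the exact system in ILEE may differ.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def laplacian_2d(shape):
    """5-point discrete Laplacian for an image of the given shape."""
    ny, nx = shape
    dy = sp.diags([1, -2, 1], [-1, 0, 1], shape=(ny, ny))
    dx = sp.diags([1, -2, 1], [-1, 0, 1], shape=(nx, nx))
    return sp.kron(sp.identity(ny), dx) + sp.kron(dy, sp.identity(nx))

def solve_threshold_surface(i_edge, edge_mask, K=2.5):
    """Interpolate a smooth threshold surface anchored at the edge pixels."""
    shape = i_edge.shape
    L = laplacian_2d(shape)
    S = sp.diags(edge_mask.ravel().astype(float))   # selection matrix: 1 on edge pixels
    A = S + K * (L.T @ L)                           # edge fidelity + Laplacian smoothness
    b = S @ i_edge.ravel()
    x, _ = cg(A, b, atol=1e-6)                      # conjugate gradient solve
    return x.reshape(shape)                         # restored 2D threshold image
```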
Finally, a difference image $I_{dif}$ is calculated as $I_{dif} = I_{pre} - I_{thres}$. To further remove noise, a temporary binary image $I_{binary.temp}(pix) = 1$ if $I_{dif}(pix) > 0$, and $0$ otherwise, is generated, and the sizes of all positive connected components (elements) are counted. An adjusted difference image $I_{dif\_adj}$ is then generated by lowering the values of potential noise to the mean of the negative pixels:
Using $I_{dif\_adj}$, the various versions of the binary image for the computation of the different indices are calculated as:
for the 2D mode, where $\mathrm{Interp}_{3,3}(I_{dif})$ is an oversampled image generated by bicubic interpolation, whose resolution on both the x- and y-axes is upscaled threefold, or
for the 3D mode, where $\mathrm{Interp}_{1,1,1/f}(I_{dif})$ restores the voxels to a cubic shape by interpolating only the z-axis to $1/f$ times its original resolution, and $\mathrm{Interp}_{3,3,3/f}(I_{dif})$ additionally enhances the resolution of all three axes threefold as an optional process.
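These interpolation steps can be reproduced with scipy.ndimage.zoom; the sketch below assumes cubic (order-3) spline interpolation, a z-first axis order, and a z-axis zoom factor of $U_z/U_{xy}$ to make the voxels cubic.

```python
# Sketch of the oversampling/voxel-restoration steps using cubic spline interpolation.
from scipy import ndimage as ndi

def interp_2d_oversample(i_dif):
    # Interp_{3,3}: threefold bicubic oversampling on x and y.
    return ndi.zoom(i_dif, (3, 3), order=3)

def interp_3d_restore(i_dif, z_factor):
    # Interp_{1,1,1/f}: interpolate only the z-axis so voxels become (approximately) cubic;
    # z_factor is assumed to be U_z / U_xy, with z as the first array axis.
    return ndi.zoom(i_dif, (z_factor, 1, 1), order=3)
```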

Computation of cytoskeletal indices

Occupancy: the frequency of the positive pixels in the computed binary image, or $\sum I_{binary}(pix)/N$ for the 2D mode and $\sum I_{binary.ori}(pix)/N$ for the 3D mode, where N is the total number of pixels.

Linear density: the length of the skeletonized filament per unit of 2D or 3D space. For the 2D mode, $I_{binary.ovsp}$ is skeletonized using Lee’s approach (Lee et al., 1994; API Ref. [9]) to render the skeletonized image $I_{sk}$. Then, linear density is calculated as:

For the 3D mode, $I_{sk}$ is rendered from $I_{binary}$, and we use the sum of the Euclidean lengths of all (graph theory-defined) branches obtained by the Skan library (Nunez-Iglesias et al., 2018) as the total length of the skeletonized filament and divide it by N. This is because the sphere-cavity structures present in 3D skeletonized images are not compatible with the concept of length.
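Both length measurements can be reproduced with skimage and Skan; the sketch below computes the total skeleton length from the Skan branch table and divides by the pixel/voxel count, which is a simplified single-route version of the 2D/3D handling described above.

```python
# Sketch of linear density from a binary image using Lee skeletonization and Skan.
from skimage.morphology import skeletonize
from skan import Skeleton, summarize

def linear_density(binary, spacing=1.0):
    skel = skeletonize(binary, method="lee") > 0             # Lee's thinning (2D or 3D)
    branch_table = summarize(Skeleton(skel, spacing=spacing))
    total_length = branch_table["branch-distance"].sum()     # sum of Euclidean branch lengths
    return total_length / binary.size                        # length per pixel/voxel
```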

Skewness: the (statistical) skewness of the fluorescence values of positive pixels, or mathematically:
$\mathrm{skewness} = \frac{1}{N_{pos}} \sum_{pix \in pos} \left( \frac{I(pix) - \mu_{pos}}{\sigma_{pos}} \right)^{3}$
where $N_{pos}$, $\mu_{pos}$, and $\sigma_{pos}$ represent the count, mean, and standard deviation of positive pixels in the raw image $I$.
CV: the (statistical) coefficient of variation of the fluorescence values of positive pixels, or mathematically:
$\mathrm{CV} = \sigma_{pos}/\mu_{pos}$
Diameter_TDT: average filament diameter estimated by the Euclidean distance transformation of the total binary image. The Euclidean distance transformation map $I_{dis}$ is calculated as an image with the same shape as $I_{binary.ovsp}$, but the value of each pixel is:
$I_{dis}(x,y,z) = \lVert (x,y,z) - i_{nnp} \rVert_2$
where $i_{nnp}$ is the coordinate of the nearest negative pixel to $(x,y,z)$. Therefore, Diameter_TDT is mathematically defined as:
Diameter_SDT: average filament diameter estimated from the Euclidean distance transformation values of $I_{binary.ovsp}$, but sampled using only positive pixels on $I_{sk}$, or mathematically:
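A hedged sketch of this SDT-style estimate is shown below: distance-transform values of the oversampled binary image are sampled at skeleton pixels and doubled. The radius-to-diameter factor of 2 and the omission of the oversampling correction are assumptions of this sketch, not necessarily the exact ILEE formula.

```python
# Sketch of a skeleton-sampled diameter estimate (factor of 2 and missing
# oversampling correction are simplifying assumptions of this sketch).
import numpy as np
from scipy import ndimage as ndi

def diameter_sdt(binary_ovsp, skeleton):
    dist = ndi.distance_transform_edt(binary_ovsp)   # distance to the nearest negative pixel
    radii = dist[skeleton > 0]                       # sampled only on skeleton pixels
    return 2.0 * radii.mean()
```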

Total connected element: the number of connected components in the binary image. For images captured in the 2D mode, it is $\mathrm{Count.of.NNE}(I_{binary.ovsp})$. In the 3D mode, the count of non-connected elements is given by the Skan library (Nunez-Iglesias et al., 2018) using $I_{sk}$ as the input. Total connected element is a non-standardized intermediate index for the developer’s use.

Segment density: the count of connected components in the binary image per unit length of the skeletonized filament. For images captured in the 2D mode, it is $\mathrm{Count.of.NNE}(I_{binary.ovsp})/\mathrm{length}(x,y)$. In the 3D mode, both the count of non-connected elements and the length of the skeletonized filament are obtained from the Skan library using $I_{sk}$ computed from $I_{binary}$ as the input.

Total branch: the number of all graph theory-defined branches. $I_{sk}$ is obtained from $I_{binary.ovsp}$ for the 2D mode or $I_{binary}$ for the 3D mode and is then processed by the Skan library. The total number of recognized branches is collected. Total branch is a non-standardized intermediate index for the developer’s use.

Total node: the number of all graph theory-defined nodes. $I_{sk}$ is processed by the Skan library, and the total number of recognized nodes is collected. Total node is a non-standardized intermediate index for the developer’s use.

Static branching activity: the branching point count per unit length of skeletonized filament. $I_{sk}$ is obtained from $I_{binary.ovsp}$ for the 2D mode or $I_{binary}$ for the 3D mode and is then input into the Skan library. The total number of type-3 and type-4 branches (i.e., graph theory-defined branches meeting at a node that joins at least three branches, or “biologically defined branches”; Nunez-Iglesias et al., 2018) is collected and then divided by the length of the skeletonized filament.
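A simple proxy for counting branching points directly on the skeleton image is shown below: a skeleton pixel is treated as a branching point when at least three other skeleton pixels lie in its 8- (2D) or 26- (3D) neighborhood. This neighbor-counting shortcut approximates, but is not identical to, the Skan branch-type bookkeeping described above.

```python
# Neighbor-counting proxy for static branching activity (approximation, not the Skan route).
import numpy as np
from scipy import ndimage as ndi

def static_branching_activity(skeleton, total_length):
    skel = (skeleton > 0).astype(int)
    kernel = np.ones((3,) * skel.ndim)
    kernel[(1,) * skel.ndim] = 0                        # 8- or 26-neighborhood, center excluded
    neighbors = ndi.convolve(skel, kernel, mode="constant")
    branch_points = (skel == 1) & (neighbors >= 3)      # nodes joining >= 3 branches
    return branch_points.sum() / total_length
```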

Local anisotropy: We performed a local averaging of the filament alignment tensor, which is constructed as follows. First, we calculate the unit direction vector $\mathbf{g}_i$ for each straight filament segment. Then, the covariance matrix for each segment is obtained from the following equation:

This rank-2 tensor is independent of the orientation of the line segment and can thus be averaged over a region containing a collection of unoriented line segments. We weighted each filament tensor in a circular/spherical neighborhood by the length of each filament to produce a smoothed tensor field. The eigenvector corresponding to the largest eigenvalue indicates the primary orientation of filaments in this region. The difference between the maximum and the minimum eigenvalues is an indicator of the anisotropy in this region. If all the eigenvalues are the same, the indicator is 0, which implies an isotropic region. If the eigenvalues other than the maximum are all nearly 0, all the filaments in this region are parallel to each other; in this case, they are all aligned with the maximum eigenvector, the dominant filament direction of this region.
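A numpy sketch of this tensor construction for a single neighborhood is given below; `segments` is a hypothetical array of segment endpoint pairs already restricted to the region of interest.

```python
# Sketch of the length-weighted filament alignment tensor for one local neighborhood.
import numpy as np

def local_anisotropy(segments):
    """segments: array of shape (n, 2, d) holding the two endpoints of each straight segment."""
    vectors = segments[:, 1] - segments[:, 0]
    lengths = np.linalg.norm(vectors, axis=1)
    units = vectors / lengths[:, None]                        # unit direction g_i per segment
    tensors = units[:, :, None] * units[:, None, :]           # rank-2 tensor g_i g_i^T
    T = np.einsum("i,ijk->jk", lengths, tensors) / lengths.sum()  # length-weighted average
    eigvals = np.linalg.eigvalsh(T)
    return eigvals.max() - eigvals.min()                      # 0 = isotropic; larger = more aligned
```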

Judging actin-like false-positive filaments

For visual comparison of algorithms, we judge whether a false-positive filament is actin-like by its shape and its connectivity to matched filaments. First, we define a slim index (SI) as the ratio of a segment’s skeleton length to its perimeter. If a false-positive segment does not connect to any known matched pixel, then those with SI > 0.3 are judged as actin-like. Otherwise (if a false-positive segment connects to a known matched pixel), we additionally require that the segment containing the actin-like false-positive pixels have a higher SI and skeleton length than the same segment without these pixels, in addition to the 0.3 SI threshold.
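A sketch of the slim-index computation for a single 2D segment is given below; skeleton length is measured here as the skeleton pixel count, a simplification of the Euclidean branch length used elsewhere in this paper.

```python
# Sketch of the slim index (SI) for one 2D segment mask.
from skimage.measure import perimeter
from skimage.morphology import skeletonize

def slim_index(segment_mask):
    skel_length = skeletonize(segment_mask).sum()   # skeleton pixel count (simplified length)
    perim = perimeter(segment_mask)                 # segment perimeter
    return skel_length / perim if perim > 0 else 0.0

# A disconnected false-positive segment is judged actin-like when slim_index(mask) > 0.3.
```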

Simulation model of 3D artificial actin image

For each of the 31 training samples $I(x,y,z)$, the binary image $I_{binary}$ and the skeleton image $I_{sk}$ were obtained via ILEE as described above. Three fractions reflecting artificial actin ($I_{atf.a}$), artificial ground noise ($I_{atf.gn}$), and artificial diffraction noise ($I_{atf.d}$) are simulated independently (Fig. S16).

To calculate $I_{atf.a}$, the brightness of the real actin ($I_a$) of all training samples is collected as:
Here, we define the “reference voxel” of a non-zero voxel in $I_a$ as its nearest voxel whose position has a positive value in $I_{sk}$. In other words, for each non-zero voxel of $I_a(x,y,z)$, the distance from $(x,y,z)$ to its nearest reference voxel ($P_R = (x_R, y_R, z_R)$) on the skeleton image is defined as $D_R(x,y,z)$, where
Meanwhile, the distance from the reference voxel to the nearest non-actin voxel, $D_{R2N}(x,y,z)$, is calculated as:
Next, the “relative eccentric distance,” $D_{rlt}(x,y,z)$, and the relative brightness compared to the reference voxel, $I_{rlt}(x,y,z)$, are defined as:
Note that $D_{rlt}$ and $I_{rlt}$ default to 0 whenever the denominator is 0. As $D_{rlt}$ is discrete, the occurrences of each possible value of $D_{rlt}$ can be counted as $n(D_{rlt})$. Using the $I_{rlt}$ and $D_{rlt}$ of all voxels with $D_{rlt} \le 1$ across the entire set of 31 images, a polynomial regression model between $D_{rlt}$ and $I_{rlt}$ is fitted using $\log_{10} n(D_{rlt})$ as weights and adjusted into a piecewise polynomial function:
Similarly, the STD of $I_{rlt}$ is also fitted as below:
Finally, the artificial actin is generated using random values subject to:
where $\mathcal{N}$ denotes the normal distribution. The random values are clamped to [0, 4095].
To calculate $I_{atf.gn}$, two different strategies are used to generate two groups of voxels. For the voxels positioned in the coarse background of $I$, we apply to each $I(x,y,z)$ a fold change drawn from a normal distribution as the new ground noise, followed by a Gaussian filter ($O'_{Gauss}$; see the ILEE section above) to neutralize the magnified gradient; for the remaining voxels beyond the coarse background, we assign random values drawn from a Burr-12 distribution (API Ref. [3]) fitted to the voxels in the coarse background of $I(x,y,z)$, that is:
where $c$, $d$, $\mu_b$, and $\sigma_b$ are the fitted parameters of the Burr-12 distribution using the training voxels $\{(x,y,z) \mid I(x,y,z) \le t_{cbg}\}$. Similarly, the random values are clamped to [0, 4095].
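A hedged sketch of this Burr-12 step with scipy.stats is shown below: the distribution is fitted to the coarse-background voxels and clamped random values are drawn for a target mask of positions. Mapping scipy's `(c, d, loc, scale)` onto the $c$, $d$, $\mu_b$, $\sigma_b$ of the text is an assumption of this sketch.

```python
# Sketch of the Burr-12 ground-noise simulation step
# (scipy parameters c, d, loc, scale assumed to correspond to c, d, mu_b, sigma_b).
import numpy as np
from scipy.stats import burr12

def simulate_ground_noise(img, t_cbg, target_mask, rng=np.random.default_rng()):
    train = img[img <= t_cbg].ravel()                 # coarse-background training voxels
    c, d, loc, scale = burr12.fit(train)              # fit the Burr-12 distribution
    samples = burr12.rvs(c, d, loc=loc, scale=scale,
                         size=int(target_mask.sum()), random_state=rng)
    noise = np.zeros_like(img, dtype=float)
    noise[target_mask] = np.clip(samples, 0, 4095)    # clamp to the 12-bit range
    return noise
```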
Third, for $I_{atf.d}$, the voxels of $I$ at positions that are neither coarse background nor real actin are used to fit an exponential distribution:
where $\mu_e$ and $\sigma_e$ are the fitted parameters of the exponential distribution using the training voxels $\{(x,y,z) \mid I(x,y,z) \ge t_{cbg}\}$. Similarly, the random values are clamped to [0, 4095].
Finally, these fractions of brightness are summed to generate the complete artificial image $I_{atf}$, as:

Other image thresholding approaches

Otsu (API Ref. [4]), triangle (API Ref. [5]), Li (API Ref. [6]), Yen (API Ref. [7]), Niblack (API Ref. [1]), and Sauvola (API Ref. [8]) thresholding algorithms are obtained from the “scikit-image” library (van der Walt et al., 2014).

Library API

Python library API references [1]–[13], used for additional image thresholding/processing, statistical analysis, modeling, and simulation, are as follows:

API Ref. [1] Niblack thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.threshold_niblack
API Ref. [2] Distance transformation (DT): https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.distance_transform_edt.html
API Ref. [3] Statistical analysis; data distribution fitting and simulation: https://docs.scipy.org/doc/scipy/reference/stats.html
API Ref. [4] Otsu thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.threshold_otsu
API Ref. [5] Triangle thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#threshold-triangle
API Ref. [6] Li thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.threshold_li
API Ref. [7] Yen thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.threshold_yen
API Ref. [8] Sauvola thresholding: https://scikit-image.org/docs/stable/api/skimage.filters.html#skimage.filters.threshold_sauvola
API Ref. [9] Skeletonization: https://scikit-image.org/docs/stable/api/skimage.morphology.html#skimage.morphology.skeletonize
API Ref. [10] Visualization (“matplotlib”): https://matplotlib.org/stable/index.html
API Ref. [11] Visualization (“seaborn”): https://seaborn.pydata.org
API Ref. [12] t-test (multiple comparison): https://scikit-posthocs.readthedocs.io/en/latest/generated/scikit_posthocs.posthoc_ttest
API Ref. [13] Modeling: https://scikit-learn.org/stable/modules/classes.html?highlight=linear+fit

Online supplemental material

Fig. S1 shows that NNES (non-connected negative element scanning) can identify the coarse background. Fig. S2 shows the performance of NNES-based adaptive global thresholding and its prediction model (3D). Fig. S3 shows an enhanced 3D NNES-based adaptive coarse thresholding model for samples with greater diversity. Fig. S4 shows a visualized explanation of the core ILEE algorithm. Fig. S5 shows the significant difference filter. Fig. S6 shows that the mean and STD of the gradient magnitude of ground noise are directly proportional to the STD of the noise. Fig. S7 shows that the ratio of the x-y unit to the z unit influences the proportional coefficients of the $\sigma_{Noise}$–$\mu_G$ and $\sigma_{Noise}$–$\sigma_G$ relationships. Fig. S8 shows the determination of the global gradient threshold. Fig. S9 shows the enhanced global gradient threshold estimation model for the 3D mode. Fig. S10 shows the impact of K and the training for K2 estimation. Fig. S11 shows a visualized demonstration of the concepts of the cytoskeletal indices. Fig. S12 shows the stability of ILEE and other classic image thresholding approaches for cytoskeleton segmentation in confocal images. Fig. S13 shows a visual comparison of the robustness of ILEE and other algorithms by segmentation accuracy. Fig. S14 shows a quantitative comparison of the robustness of ILEE and other algorithms by segmentation accuracy. Fig. S15 shows a quantitative comparison of the robustness of ILEE and other algorithms by index rendering stability. Fig. S16 shows an illustrated introduction of the actin image simulation model used to generate 3D cytoskeleton images with ground truth. Fig. S17 shows a comparison of ILEE vs. MGT with regard to stability and the capability to distinguish biological differences among real biological samples. Fig. S18 shows the correlation of occupancy and linear density. Fig. S19 shows the performance of ILEE on other types of biological samples. Fig. S20 shows that ILEE and the human eye have different tendencies to judge the topological structure of the cytoskeleton, especially between two bright bundles.

We would like to thank Dr. Yi-Ju Lu (National Taiwan University), Dr. Silke Robatzek (Ludwig-Maximilians University—Munich), and Dr. Weiwei Zhang (Purdue University) for generously providing raw images of actin and tubulin for algorithm development and cytoskeleton analysis. We would like to thank Dr. Yi-Ju Lu and Dr. Masaki Shimono (University of Nevada-Reno) for providing individual MGT evaluations for our experiments. We would like to thank Rongzi Liu (University of Florida) for providing critical advice on statistical analysis and modeling. We would like to thank Dr. Richard Neubig (Michigan State University), Dr. Noel Day (Zoetis), Dr. Brittni Kelley (Michigan State University), and Emily Ribando-Gros (Michigan State University) for critical advice on manuscript preparation. We would like to thank Sarah Khoja (Michigan State University) for improving the reading experience of the ILEE_CSK documentation website.

Research in the laboratory of B. Day was supported by grants from the National Science Foundation (MCB-1953014) and the National Institute of General Medical Sciences (1R01GM125743). Research in the laboratory of Y. Tong was supported by a grant from the National Science Foundation (III-1900473).

The authors declare no competing financial interests.

Author contributions: P. Li, Z. Zhang, B. Day, and Y. Tong participated in the design and conception of ILEE pipeline and experiments. P. Li and Z. Zhang developed the ILEE algorithm and ILEE_CSK library. P. Li and B.M. Foda performed the experiments. P. Li, Z. Zhang, B. Day, and Y. Tong analyzed and interpreted the data. P. Li, Z. Zhang, and B. Day wrote the manuscript. All authors actively edited and agreed on the manuscript before submission.

References

Al Absi, A., H. Wurzer, C. Guerin, C. Hoffmann, F. Moreau, X. Mao, J. Brown-Clay, R. Petrolli, C.P. Casellas, M. Dieterle, et al. 2018. Actin cytoskeleton remodeling drives breast cancer cell escape from natural killer-mediated cytotoxicity. Cancer Res. 78:5631–5643.
Alioscha-Perez, M., C. Benadiba, K. Goossens, S. Kasas, G. Dietler, R. Willaert, and H. Sahli. 2016. A robust actin filaments image analysis framework. PLOS Comput. Biol. 12:e1005063.
Blanchoin, L., R. Boujemaa-Paterski, C. Sykes, and J. Plastino. 2014. Actin dynamics, architecture, and mechanics in cell motility. Physiol. Rev. 94:235–263.
Boudaoud, A., A. Burian, D. Borowska-Wykręt, M. Uyttewaal, R. Wrzalik, D. Kwiatkowska, and O. Hamant. 2014. FibrilTool, an ImageJ plug-in to quantify fibrillar structures in raw microscopy images. Nat. Protoc. 9:457–463.
Brouhard, G.J. 2018. Microtubule dynamics: An interplay of biochemistry and mechanics. Nat. Rev. Mol. Cell Biol. 19:451–463.
Cao, L., W. Wang, W. Zhang, and C.J. Staiger. 2022. Lipid signaling requires ROS production to elicit actin cytoskeleton remodeling during plant innate immunity. Int. J. Mol. Sci. 23:2447.
Carlton, J.G., H. Jones, and U.S. Eggert. 2020. Membrane and organelle dynamics during cell division. Nat. Rev. Mol. Cell Biol. 21:151–166.
Desbrun, M., M. Meyer, P. Schröder, and A.H. Barr. 1999. Implicit fairing of irregular meshes using diffusion and curvature flow. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’99). ACM Press, New York. 317–324.
Faulkner, C., J. Zhou, A. Evrard, G. Bourdais, D. MacLean, H. Häweker, P. Eckes, and S. Robatzek. 2017. An automated quantitative image analysis tool for the identification of microtubule patterns in plants. Traffic. 18:683–693.
Gan, Z., L. Ding, C.J. Burckhardt, J. Lowery, A. Zaritsky, K. Sitterley, A. Mota, N. Costigliola, C.G. Starker, D.F. Voytas, et al. 2016. Vimentin intermediate filaments template microtubule networks to enhance persistence in cell polarity and directed migration. Cell Syst. 3:252–263.e8.
Guo, M., P. Kim, G. Li, C.G. Elowsky, and J.R. Alfano. 2016. A bacterial effector co-opts calmodulin to target the plant microtubule network. Cell Host Microbe. 19:67–78.
Henty-Ridilla, J.L., M. Shimono, J. Li, J.H. Chang, B. Day, and C.J. Staiger. 2013. The plant actin cytoskeleton responds to signals from microbe-associated molecular patterns. PLoS Pathog. 9:e1003290.
Henty-Ridilla, J.L., J. Li, B. Day, and C.J. Staiger. 2014. ACTIN DEPOLYMERIZING FACTOR4 regulates actin dynamics during innate immune signaling in Arabidopsis. Plant Cell. 26:340–352.
Higaki, T., N. Kutsuna, T. Sano, N. Kondo, and S. Hasezawa. 2010. Quantification and cluster analysis of actin cytoskeletal structures in plant cells: Role of actin bundling in stomatal movement during diurnal cycles in Arabidopsis guard cells. Plant J. 61:156–165.
Higaki, T., K. Akita, and K. Katoh. 2020. Coefficient of variation as an image-intensity metric for cytoskeleton bundling. Sci. Rep. 10:22187.
Jacob, M., and M. Unser. 2004. Design of steerable filters for feature detection using canny-like criteria. IEEE Trans. Pattern Anal. Mach. Intell. 26:1007–1019.
Kang, Y., J. Jelenska, N.M. Cecchini, Y. Li, M.W. Lee, D.R. Kovar, and J.T. Greenberg. 2014. HopW1 from Pseudomonas syringae disrupts the actin cytoskeleton to promote virulence in Arabidopsis. PLoS Pathog. 10:e1004232.
Kittisopikul, M., A. Vahabikashi, T. Shimi, R.D. Goldman, and K. Jaqaman. 2020. Adaptive multiorientation resolution analysis of complex filamentous network images. Bioinformatics. 36:5093–5103.
Kristó, I., I. Bajusz, C. Bajusz, P. Borkúti, and P. Vilmos. 2016. Actin, actin-binding proteins, and actin-related proteins in the nucleus. Histochem. Cell Biol. 145:373–388.
Lee, T.C., R.L. Kashyap, and C.N. Chu. 1994. Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP Graph. Models Image Process. 56:462–478.
Li, P., and B. Day. 2019. Battlefield cytoskeleton: Turning the tide on plant immunity. Mol. Plant Microbe Interact. 32:25–34.
Li, C.H., and P.K.S. Tam. 1998. An iterative algorithm for minimum cross entropy thresholding. Pattern Recognit. Lett. 19:771–776.
Li, J., L. Cao, and C.J. Staiger. 2017. Capping protein modulates actin remodeling in response to reactive oxygen species during plant innate immunity. Plant Physiol. 173:1125–1136.
Lian, N., X. Wang, Y. Jing, and J. Lin. 2021. Regulation of cytoskeleton-associated protein activities: Linking cellular signals to plant cytoskeletal function. J. Integr. Plant Biol. 63:241–250.
Lichtenstein, N., B. Geiger, and Z. Kam. 2003. Quantitative analysis of cytoskeletal organization by digital fluorescent microscopy. Cytometry A. 54:8–18.
Liu, Y., K. Mollaeian, and J. Ren. 2018. An image recognition-based approach to actin cytoskeleton quantification. Electronics. 7:443.
Liu, Y., A. Nedo, K. Seward, J. Caplan, and C. Kambhamettu. 2020. Quantifying actin filaments in microscopic images using keypoint detection techniques and a fast marching algorithm. Proc. Int. Conf. Image Proc. 2020:2506–2510.
Loper, J.E., and S.E. Lindow. 1994. A biological sensor for iron available to bacteria in their habitats on plant surfaces. Appl. Environ. Microbiol. 60:1934–1941.
Lu, Y.-J., and B. Day. 2017. Quantitative evaluation of plant actin cytoskeletal organization during immune signaling. In Plant Pattern Recognition Receptors: Methods and Protocols. L. Shan and P. He, editors. Springer, New York, NY. 207–221.
Lu, Y.-J., P. Li, M. Shimono, A. Corrion, T. Higaki, S.Y. He, and B. Day. 2020. Arabidopsis calcium-dependent protein kinase 3 regulates actin cytoskeleton organization and immunity. Nat. Commun. 11:6234.
Marchisella, F., E.T. Coffey, and P. Hollos. 2016. Microtubule and microtubule associated protein anomalies in psychiatric disease. Cytoskeleton. 73:596–611.
Melak, M., M. Plessner, and R. Grosse. 2017. Correction: Actin visualization at a glance. J. Cell Sci. 130:1688.
Niblack, W. 1985. An Introduction to Digital Image Processing. Prentice Hall, Hoboken.
Nick, P. 2011. Mechanics of the cytoskeleton. In Mechanical Integration of Plant Cells and Plants. P. Wojtaszek, editor. Springer, Berlin, Heidelberg. 53–90.
Nunez-Iglesias, J., A.J. Blanch, O. Looker, M.W. Dixon, and L. Tilley. 2018. A new Python library to analyse skeleton images confirms malaria parasite remodelling of the red blood cell membrane skeleton. PeerJ. 6:e4312.
Onishi, M., and J.R. Pringle. 2016. The nonopisthokont septins. In Methods in Cell Biology. Elsevier, Netherlands. 1–19.
Otsu, N. 1979. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9:62–66.
Rush, T., J. Martinez-Hernandez, M. Dollmeyer, M.L. Frandemiche, E. Borel, S. Boisseau, M. Jacquier-Sarlin, and A. Buisson. 2018. Synaptotoxicity in Alzheimer’s disease involved a dysregulation of actin cytoskeleton dynamics through cofilin 1 phosphorylation. J. Neurosci. 38:10349–10361.
Sauvola, J., and M. Pietikäinen. 2000. Adaptive document image binarization. Pattern Recognit. 33:225–236.
Scharr, H. 2000. Optimal operators in digital image processing. PhD thesis. 178 pp.
Shah, S.A., P. Santago, and B.K. Rubin. 2005. Quantification of biopolymer filament structure. Ultramicroscopy. 104:244–254.
Shimono, M., Y.-J. Lu, K. Porter, B.H. Kvitko, J. Henty-Ridilla, A. Creason, S.Y. He, J.H. Chang, C.J. Staiger, and B. Day. 2016. The Pseudomonas syringae type III effector HopG1 induces actin remodeling to promote symptom development and susceptibility during infection. Plant Physiol. 171:2239–2255.
Tanaka, K., S. Takeda, K. Mitsuoka, T. Oda, C. Kimura-Sakiyama, Y. Maéda, and A. Narita. 2018. Structural basis for cofilin binding and actin filament disassembly. Nat. Commun. 9:1860.
Utsunomiya, H., M. Fujita, F. Naito, and T. Kaneta. 2020. Cell cycle-dependent dynamics of a plant intermediate filament motif protein with intracellular localization related to microtubules. Protoplasma. 257:1387–1400.
van der Walt, S., J.L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J.D. Warner, N. Yager, E. Gouillart, and T. Yu. 2014. scikit-image: Image processing in Python. PeerJ. 2:e453.
Yen, J.-C., F.-J. Chang, and S. Chang. 1995. A new criterion for automatic multilevel thresholding. IEEE Trans. Image Process. 4:370–378.
Zack, G.W., W.E. Rogers, and S.A. Latt. 1977. Automatic measurement of sister chromatid exchange frequency. J. Histochem. Cytochem. 25:741–753.

A preprint of this paper was posted on bioRxiv on February 25, 2022.

Author notes

* P. Li and Z. Zhang contributed equally to this paper.

This article is available under a Creative Commons License (Attribution 4.0 International, as described at https://creativecommons.org/licenses/by/4.0/).