April 29, 2014
New Release Highlights!
- Cell image synthesis
- Improved vesicular organelles model
- Elimination of object/object and object/boundary overlap during generation
- Ability to combine models learned from images of different resolution, and synthesize images at desired resolution
- Ability to synthesize random walks in shape space from diffeomorphic models of cell and nuclear shape
- Including directed random walks using Willmore energy and shape space density
- New capabilities for cell and nuclear shape model learning
- Build nuclear models from images without a nuclear marker
- Build joint diffeomorphic models of cell and nuclear shape
- Per-cell representations for easy model building and comparison
- Export to SBML-spatial and mesh formats for interfacing with tools such as CellBlender and VCell.
- Parallelization of model learning pipeline
- Reporter tools
- assess learned models
- compare models
- compare per-cell parameters within or across models
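The directed random walks in shape space mentioned above can be illustrated with a generic sketch. This is not CellOrganizer code: the energy function, step size, and parameter space are all hypothetical stand-ins. A Metropolis-style walk biases its steps toward low-energy regions, analogous to directing a walk by Willmore energy or shape-space density.

```python
import numpy as np

def directed_random_walk(energy, x0, n_steps=100, step_size=0.1, rng=None):
    """Metropolis-style random walk biased toward low values of `energy`.

    `energy` maps a point in a (toy) shape-parameter space to a scalar;
    lower-energy proposals are always accepted, higher-energy proposals
    are accepted with probability exp(-delta).
    """
    rng = np.random.default_rng() if rng is None else rng
    path = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        x = path[-1]
        proposal = x + step_size * rng.standard_normal(x.shape)
        delta = energy(proposal) - energy(x)
        if delta <= 0 or rng.random() < np.exp(-delta):
            path.append(proposal)  # accept the proposed step
        else:
            path.append(x)         # reject: stay in place this step
    return np.stack(path)

# Toy quadratic "energy": the walk drifts toward the origin.
walk = directed_random_walk(lambda x: np.sum(x ** 2), x0=[2.0, 2.0],
                            rng=np.random.default_rng(0))
```

The returned array holds the full trajectory (one row per step), which is the analogue of a synthesized sequence of shapes along the walk.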
The CellOrganizer project provides tools for
- learning generative models of cell organization directly from images
- storing and retrieving those models in XML files
- synthesizing cell images (or other representations) from one or more models

Model learning captures variation among cells in a collection of images. Images used for model learning and instances synthesized from models can be two- or three-dimensional static images or movies.

CellOrganizer can learn models of
- cell shape
- nuclear shape
- chromatin texture
- vesicular organelle size, shape and position
- microtubule distribution

These models can be conditional upon each other. For example, for a given synthesized cell instance, organelle position is dependent upon the cell and nuclear shape of that instance.
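The conditional structure described above can be sketched with a toy hierarchical sampler. The distributions, parameter names, and geometry here are hypothetical, not CellOrganizer's actual models; the point is only the dependency chain: nuclear shape is drawn first, cell shape is drawn conditioned on the nucleus, and organelle positions are drawn conditioned on both.

```python
import numpy as np

def sample_cell_instance(rng):
    """Sample one instance from a toy conditional (hierarchical) cell model."""
    # 1. Nuclear "radius" drawn from its marginal distribution.
    nuclear_radius = rng.normal(loc=5.0, scale=0.5)
    # 2. Cell radius conditioned on the nucleus: always strictly larger.
    cell_radius = nuclear_radius + abs(rng.normal(loc=4.0, scale=1.0))
    # 3. Organelle positions conditioned on both shapes: uniform in the
    #    cytoplasmic shell between the nuclear and cell boundaries.
    angles = rng.uniform(0.0, 2.0 * np.pi, size=20)
    radii = rng.uniform(nuclear_radius, cell_radius, size=20)
    organelles = np.column_stack([radii * np.cos(angles),
                                  radii * np.sin(angles)])
    return nuclear_radius, cell_radius, organelles

rng = np.random.default_rng(1)
nuc_r, cell_r, pts = sample_cell_instance(rng)
```

Because each stage samples from a distribution whose parameters depend on the stages before it, every synthesized instance is internally consistent: organelles land in the cytoplasm of that particular cell, not of an average cell.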
Cell types for which generative models for at least some organelles have been built include human HeLa cells, mouse NIH 3T3 cells, and Arabidopsis protoplasts. Planned projects include mouse T lymphocytes and rat PC12 cells.
Synthesized Cell Images