NEW YORK – After closing a $20 million Series A financing round last December, Stanford University spinout Deepcell is transitioning this year from quiet technology development to building commercial inroads for its artificial intelligence-driven cell isolation technology, which the company believes can support a new generation of molecular, phenotypic, and translational research.
Founded in 2017 by Stanford professor Euan Ashley, his postdoc Maddison Masaeli, and their collaborator, computer scientist Mahyar Salek, the company has developed a platform to isolate, analyze, and classify individual cells from tissue or blood samples using a combination of image-based machine learning and microfluidics. The method delivers intact, viable single cells and can select for and separate morphological cell subpopulations without the bias of predetermined parameters.
According to the company, the technology can isolate cells occurring at frequencies as low as one in a billion, with potential applications across areas including single-cell genomics, liquid biopsy, prenatal diagnosis, characterization of cellular and molecular interactions in specific disease states, and drug development.
Although several microfluidic cell-sorting technologies have now been developed that can isolate cells free of the bias of molecular surface markers, such as EpCAM, these approaches still rely on predetermined morphological characteristics like cell size, deformability, or other observable phenotypic features.
According to Deepcell, corralling cells based on predefined features precludes the type of open-ended, hypothesis-generating analyses that have been transformative for the field of genomics.
The company has set out to try to make cell-based analyses more like molecular research, where novel signatures and other discoveries can be gleaned by mining unbiased datasets. To do so, it has harnessed machine learning as a way to map morphological differences that wouldn't otherwise be distinguishable and to isolate cells accordingly. These cells can then be studied genomically, cultured and expanded, or evaluated as disease biomarkers, among other uses.
"People have approached [cell sorting] using mostly traditional approaches, [mostly] one parameter at a time. What we're trying to do here for the first time is to define morphology broadly as a quantitative analyte," Deepcell CEO Maddison Masaeli said in an interview.
"The technology relies heavily on deep learning, trying to extract information that is morphologically relevant, but perhaps not communicable with human language, [creating] this massive database, which is very high dimensional, similar to a genomic or proteomics data set… but the dimensions of that space are completely machine driven," Masaeli added.
"It provides a platform where you can visualize … how different cell types or cell states cluster with each other, and then you can take action on that. We can go and physically sort out cells that have unique morphological features even if you might not even know what those features are."
At the core of the system is a proprietary benchtop microfluidics instrument that performs in-line cell imaging to inform supervised and unsupervised classification and sorting, drawing on what has grown into a billion-image database.
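In broad strokes, the kind of unsupervised, embedding-based workflow the company describes can be sketched as follows. This is an illustrative outline only: the Gaussian "embeddings," feature dimensionality, and two-cluster k-means step are stand-in assumptions, not details of Deepcell's proprietary models or instrument.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Simulate high-dimensional "morphology embeddings" for two hypothetical
# cell populations. In the system described in the article, such vectors
# would come from a deep neural network applied to label-free cell
# images; here they are Gaussian stand-ins.
population_a = rng.normal(loc=0.0, scale=1.0, size=(500, 128))
population_b = rng.normal(loc=1.5, scale=1.0, size=(500, 128))
embeddings = np.vstack([population_a, population_b])

# Unsupervised clustering in the learned feature space: cells group by
# morphology without any predefined gates or surface markers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# A sorting decision could then target one cluster for physical isolation.
target_cluster = labels[-1]
keep = np.flatnonzero(labels == target_cluster)
print(f"cells routed to collection: {keep.size} of {embeddings.shape[0]}")
```

The point of the sketch is the ordering of steps: image-derived features first, clustering second, and only then a physical sort, so no human-chosen parameter constrains which subpopulations can emerge.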
According to Deepcell CTO Mahyar Salek, the resulting output can be fed into any downstream molecular analysis, and the cells remain viable. "They're unperturbed, so you could even culture them," he said.
Peter van der Spek, a researcher at Erasmus University in The Netherlands who is familiar with Deepcell but unaffiliated with the company, said that he views the platform as exciting for a variety of applications.
"We are very much fascinated by the AI platform in order to basically see functional changes," he said.
For example, "if you're looking at blood, you have T cells, which are very important in immune responses, and [perhaps] you want to know whether the T cells are activated or not activated. The [Deepcell] platform, without staining, without any dye, can just based on the morphology determine whether the T cell is activated or not."
"If your pathologist looks at the microscope at these cells, he doesn't see it. But there are subtle changes which are recognized by the image recognition software using the AI platform where they can basically say, well, this pool of cells is activated versus a pool of cells which is not activated."
This ability can be easily benchmarked, he added, using cells with known genomic mutations or phenotypic signatures and testing the platform's ability to distinguish them.
"The technology sounds very simple, but there are fundamentals that need a lot of infrastructure build out that we have spent a lot of time on" over the past three years, Masaeli said.
"This has required a lot of work in terms of making sure we are feeding quality data into these artificial intelligence architectures … and people who are familiar with deep-learning algorithms know that before you hit a critical mass, the outcomes are not super impressive. You have to hit a certain amount of data for it to all makes sense.
"Right now, we're at around a billion images of annotated single cells, which is a massive data set that we can now play around with in a lot of different applications or shapes and forms and makes sense out of that data," Masaeli said.
Although Deepcell is a pioneer in applying this type of deep-learning approach to microfluidic cell separation, similar image-trained artificial intelligence methods are increasingly being explored for tissue slide analysis.
For example, healthtech firm Owkin recently debuted a machine-learning technology for analyzing histological slides, showing that it could accurately predict RNA-seq expression of tumors based solely on digitized histopathology images.
Coming out of its early focus on internal development, the company is now beginning to publish data on its technology and is taking its first steps toward offering its platform commercially with early-access users.
The company is presenting new data at the Advances in Genome Biology and Technology conference, which is being held virtually this week, on the performance of its platform in preserving cell viability. It also shared a poster at the Cell Bio virtual meeting in December demonstrating that the technology could distinguish specific cell lines and fetal cells from a background of normal blood cells with high accuracy.
With spike-in experiments, the company also showed it could detect target cells down to a concentration of 1 in 100,000.
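As a rough sanity check on what a 1-in-100,000 detection limit implies for sample size (our own back-of-envelope arithmetic, not company data): under a simple Poisson sampling assumption, the number of cells that must be processed to have good odds of seeing at least one target can be computed directly.

```python
import math

# Target frequency reported in the spike-in experiments.
freq = 1 / 100_000

# Probability of capturing at least one target cell when processing
# n cells, assuming targets are Poisson-distributed through the sample.
def p_at_least_one(n_cells: int, frequency: float) -> float:
    return 1.0 - math.exp(-n_cells * frequency)

# Cells needed for 95% odds of seeing at least one target.
n_for_95 = math.ceil(-math.log(0.05) / freq)

print(f"P(>=1 target | 100k cells) = {p_at_least_one(100_000, freq):.2f}")
print(f"cells needed for 95% capture odds: {n_for_95:,}")
```

Processing exactly 100,000 cells gives only about a 63 percent chance of catching a single 1-in-100,000 target, which is why rare-cell workflows typically process several-fold more cells than the nominal frequency suggests.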
"Our plan in terms of commercialization is that there are certain areas that we are internally very excited about, and we are working with collaborators to develop those applications," Masaeli said. "At the same time, we are slowly but surely transitioning into providing the pipelines and tools so that we can enable … customers to develop their own applications, [with us] taking more of an advisory or a support role in the future."
The firm's current pool of collaborators includes clinical researchers in areas like melanoma, where tumor cells don't express the right surface markers for traditional cell-isolation tools to work effectively.
According to van der Spek, other promising applications in genomics could include cancer diagnostic challenges like classifying cancers of unknown primary, or analysis of mutations in areas like autoimmune disease.
The platform could potentially distinguish morphological patterns specific to certain tumor types, tracing mysterious cancers to their origin in the body, he said. There's also evidence that the technology can give a functional readout, marking cells as harboring gain-of-function or loss-of-function mutations based on their appearance.
As the company moves forward, Masaeli said Deepcell will likely have to raise additional money. "Right now, one of the challenges is that we have more requests than we can handle as a 20-something-person company, so expanding is definitely something that we are interested in," she said.