At A Glance
- Chief Scientific Officer & VP, Product Development, ViaLogy
- CTO and VP for eServices and Platform Development at ViaChange, a software company for securities trading (March 2000-Nov. 2001).
- Head of the Ultracomputing Technologies Group in the Information and Exploration Systems Division at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. (1989-2000).
- MBA in Strategic Management, Pepperdine University, Malibu, Calif., 1993.
- PhD, Computer Science, Louisiana State University, Baton Rouge, La., 1989.
- BS, Computer Science, Indian Institute of Technology, New Delhi, India, 1986.
Sandeep Gulati spent nearly a dozen years doing ultracomputing at NASA’s Jet Propulsion Laboratory. But over the past two years, he has taken some of the core technologies behind space exploration and applied them to the inner workings of microarray data analysis. The result, called quantum resonance interferometry, could provide microarrays with the lift they need to blast out of the research niche and into wider use by biopharma and diagnostic companies (see story p. 1). BioArray News spoke to Gulati this week to learn the ins and outs of this technology.
Can you tell me a little bit about quantum resonance interferometry, and how it can be applied to microarrays?
QRI is a very general-purpose method for signal amplification. The core technology is really geared towards detecting very weak signals — detecting, discriminating, and quantitating. What does that mean in microarrays? Essentially, in microarray analysis for most applications, be it gene discovery or be it diagnostics, it comes down to, first, clearly recognizing specific hybridization events; secondly, the ability to discriminate between specific and non-specific hybridization. The ability to detect is what brings down false negatives; the ability to discriminate reduces false positives. And then the third element is quantitation — quantitation of expression — to be able to do that very robustly over an extremely large dynamic range.
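The detect-versus-discriminate distinction maps directly onto the standard sensitivity and specificity metrics; a minimal sketch with made-up truth labels and calls (not ViaLogy data):

```python
import numpy as np

# Hypothetical ground truth (1 = specific hybridization present) and detector calls
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0])
calls = np.array([1, 1, 1, 0, 1, 0, 0, 0])

false_negatives = int(np.sum((truth == 1) & (calls == 0)))  # missed detections
false_positives = int(np.sum((truth == 0) & (calls == 1)))  # non-specific called specific

# Better detection raises sensitivity; better discrimination raises specificity
sensitivity = np.sum((truth == 1) & (calls == 1)) / np.sum(truth == 1)
specificity = np.sum((truth == 0) & (calls == 0)) / np.sum(truth == 0)
```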
So how does it work?
There’s a class of algorithms that rely on injecting noise to do useful things. So what we’ve basically figured out is a method of improving upon the general noise injection technique, stochastic resonance. [This] is a classical method that was invented about 25 years ago. And there is another advancement called quantum stochastic resonance. Stochastic resonance relies upon using classical Gaussian noise and injecting it: Quantum stochastic resonance relies upon using quantum noise, and quantum resonance interferometry is a step beyond quantum stochastic resonance. …. In a simple radar, based on the Doppler principle, you design a pulse, you send it at a body which is moving, and you look at the return. Those are dumb radars. But [for] aircraft, and especially when you are detecting incoming missiles and things, you use more complex radars: These radars basically send out spectral pulses. So they use lots of hardware, and [you] design high-energy pulses which you shoot out. Those pulses interact with the material … and that interaction produces some new information or backscatter or a reflection. By looking at the backscatter or reflection and changes in its properties, one can tell what’s the material of the plane and be able to discriminate between a plane and a boat. … So what we do, what QRI does, is the same principle done algorithmically in software.
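The noise-injection idea behind classical stochastic resonance can be shown with a toy example (a generic illustration, not ViaLogy's algorithm): a sine wave too weak to cross a detection threshold becomes detectable once a tuned amount of noise is added, because threshold crossings then cluster around the signal's peaks.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)  # weak periodic signal, well below threshold
threshold = 1.0                            # the detector only fires above this level

def detection_correlation(noise_std):
    """Correlate threshold crossings with the hidden signal."""
    noise = rng.normal(0.0, noise_std, t.size)
    detected = ((signal + noise) > threshold).astype(float)
    if detected.std() == 0:  # nothing ever crossed the threshold
        return 0.0
    return float(np.corrcoef(detected, signal)[0, 1])

quiet = detection_correlation(0.01)  # too little noise: the signal is never detected
tuned = detection_correlation(0.5)   # moderate noise lifts the signal over threshold
```

With almost no noise the crossings carry no information at all; with a moderate amount, the crossing pattern correlates visibly with the hidden sine.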
… We don’t have aircraft to analyze but we have, for example, microarrays. Or protein arrays, which have many features. For us the ability to detect each feature — think of an Affymetrix microarray with 20,000 genes and 11 features [per gene] basically as literally a million-feature system … and think of our radar pulses as like a torch light, so we have a torch light we shine on a million elements of [the] microarray, and as a result are able to individually resolve each feature. … You design the spectral pulse. You essentially target or shoot it at a material. You look at the interaction, the destructive or constructive interference, and you look at the result.
Is it actual little packets of noise that you’re injecting?
It is really an information packet. In mathematical terms, the operation, which is an interference operation, is a convolution of two spectral elements: an image which we get, the microarray image, [from which we] extract out the features, and we do transformations to reduce it to some spectral data; and then the noise we are injecting is other spectral data — mathematically — and we do convolution operations. We implement them digitally. So conventional computers and desktop PCs and servers can handle the calculations — and not only handle them, but do them extremely fast … So we are able to work with one feature on a chip basically on a millisecond time-scale, which means a chip like an Affymetrix human chip, we analyze in about 10 to 12 seconds.
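The mechanics he describes — reduce a feature to a spectral representation, then convolve it with a designed spectral packet — can be mocked up generically. The feature values and the probe packet below are placeholders, not QRI's actual spectral designs:

```python
import numpy as np

# Hypothetical feature: pixel intensities across one microarray spot
feature = np.array([0.1, 0.3, 0.9, 1.0, 0.8, 0.3, 0.1])

# Reduce to the spectral regime (here: magnitude of the discrete Fourier transform)
feature_spectrum = np.abs(np.fft.rfft(feature))

# A designed "noise" packet, also represented spectrally (placeholder values)
probe_spectrum = np.array([1.0, 0.5, 0.25, 0.125])

# The interference operation as a digital convolution of the two spectra
interference = np.convolve(feature_spectrum, probe_spectrum)
```

Per-feature operations like this are a handful of array multiplies and adds, which is why a desktop PC can process one feature in milliseconds.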
Now, one thing I am not quite getting is that you start with the image on the chip, and that image is only as good as the imaging software. That would tend to limit the amount of sensitivity that you could obtain from that image …
The difference is, passive methods rely on taking away the noise, removing noise. They see the noise as interfering. Our mathematics sees it the other way around. We treat life to be all noise … and we say the signal interferes. So we start with the same image which everybody else starts with, then we have transformations where we reduce it to the spectral regime. … [I]n our representation or in our spectral transformations, if signal is present, then you see a spectrum which is different from what the noise spectrum would be.
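The point that a signal, when present, shows up as a deviation from the noise spectrum can be seen with an ordinary Fourier transform (a generic illustration, not the proprietary transformation): a weak tone that is invisible in the raw trace concentrates its energy in one spectral bin, standing out against the flat noise floor.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
noise = rng.normal(0.0, 1.0, n)
tone = 0.3 * np.sin(2 * np.pi * 200 * np.arange(n) / n)  # weak tone at frequency bin 200

noise_power = np.abs(np.fft.rfft(noise)) ** 2         # noise-only spectrum
mixed_power = np.abs(np.fft.rfft(noise + tone)) ** 2  # signal present: one bin stands out

peak_bin = int(np.argmax(mixed_power[1:])) + 1  # skip the DC bin
```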
I think I get what you are saying …
The key message is that, over the last 50, 60, or 70 years, folks in the optics or the photonics community, the astrophysics community and the physics [community] have really come up with both models of some complex devices and have tremendously evolved hardware devices. We’ve been able to emulate the same principle and do those things algorithmically.
And you have patented this invention, right?
Most [of our] patents [include] the steps to do that, and also the notion of active signal processing. But I think what’s more important is, that over the last two or three years, based on a number of studies we have done, we have been able to show some significant improvements in sensitivity, specificity, quantitation.
With Affymetrix data analysis, there’s often a problem when the background is greater than the signal. You get negative numbers, and they say just make it zero. But it sounds like that problem is completely removed in your analysis.
We’ve found a way to bypass it. Affymetrix’s device for us is a physical hardware system. And because it’s a good physical hardware system, which works through good physics, and it’s a very consistent or a coherent system, our methodology works with the platform very well. If you are dealing with stable platforms — for example, aircraft are fairly stable systems — and so with platforms which are stable with good coherence properties, we get much, much better performance. So we’ve been able to bypass that [problem].
Can you apply QRI to a spotted array platform?
The method is applicable to a photolithographic array or a spotted array or a simple oligo array. The question [is] where do we provide [the] most enablement. Also, realize that with those arrays there’s a huge cost difference as well. With the arrays that are expensive, they put a lot of work in designing a good platform, a good assay. With some of the spotted arrays, you have a cost difference of a hundred, a thousand times. So what it really comes down to is, yes, our method is applicable, [but] what is the precision, sensitivity, reproducibility we will achieve with [these] platforms? … On the regular spotted arrays, the improvement we provide could be a factor of ten less than in well-designed, very standardized arrays, or the high-quality spotted arrays.
So what other improvements are you developing?
The real interesting thing we are right now doing with microarrays is turning microarrays into a quantitative platform. People have been using quantitative platforms to really validate microarrays. So what we are now looking at [are] some pretty extensive studies where we can directly go from microarray expression and fold changes done by [this] method and be able to relate them to a number of TaqMan cycles once you see them … so that if you were doing 100 microarrays, and you only end up doing TaqMan for 5 genes, and you compare with a calibration curve, you would actually get quantitative response across the entire range of the chip itself.
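A calibration of the kind he describes could be sketched as a least-squares fit between array log2 fold changes and TaqMan ΔΔCt for a handful of validated genes, then applied chip-wide (all numbers below are invented for illustration; TaqMan fold change is conventionally 2^-ΔΔCt):

```python
import numpy as np

# Hypothetical validation set: array log2 fold changes vs. TaqMan ddCt for 5 genes
array_log2fc = np.array([0.8, 1.9, 3.1, -1.1, -2.0])
taqman_ddct = np.array([-0.9, -2.1, -3.0, 1.0, 2.2])

# Least-squares calibration line: ddCt ~ a * log2fc + b
a, b = np.polyfit(array_log2fc, taqman_ddct, 1)

def calibrated_fold_change(log2fc):
    """Predict a TaqMan-equivalent fold change for any probe on the chip."""
    return 2.0 ** -(a * log2fc + b)
```

Once the curve is fitted on a few genes, every other fold change on the chip can be read off in TaqMan-equivalent units, which is the claimed time and cost saving.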
So people won’t have to do TaqMan validations on every result …
We think that will lead to a lot of time and money savings. So most of my current studies are focused on developing high-precision curves which will essentially produce information that people can directly compare with what they’re getting off TaqMan. That’s one area. The second is looking at a better technique for essentially generating biomarkers. … [This] is a methodology which basically is able to generate one or more biomarkers in samples. So you can kind of segment it out and say, you have these seven biomarkers which, each of them are five, ten, 20, 30 genes, and these are quantitative expression levels. So you [are] able to auto-generate robust biomarkers based on empirical samples. I think these two things will help in the adoption of microarrays.