A team led by Arizona State University researcher Stuart Lindsay has applied recognition tunneling technology to the identification of amino acids and peptides.
The approach, which Lindsay and his colleagues have previously applied to DNA sequencing, could enable single-molecule detection of proteins at picomolar and possibly femtomolar concentrations, he told ProteoMonitor.
Recognition tunneling uses pairs of electrodes coated in layers of recognition molecules, which bind weakly to target analytes as the analytes pass through the gap between the electrodes. The signal generated by an analyte's passage through this gap can serve as a fingerprint to identify a particular molecule. These fingerprints are established via machine-learning algorithms that are trained to recognize molecules based on various features of their signals.
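As a rough illustration of the fingerprinting idea, and not a description of the group's actual software, the Python sketch below reduces a synthetic tunneling-current spike to a handful of signal features and trains a generic classifier on spikes labeled by analyte. The feature choices, the random-forest model, and all numbers are assumptions made for illustration.

```python
# Illustrative sketch only: reduce a tunneling-current spike to a feature
# vector ("fingerprint") and train a classifier on spikes from pure samples.
# Features, model, and numbers are assumptions, not the published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(trace, sample_rate_hz=50_000.0):
    """Summarize one current spike as a small fixed-length feature vector."""
    peak = trace.max()                                              # peak amplitude
    area = np.trapz(trace) / sample_rate_hz                         # integrated signal
    dwell = np.count_nonzero(trace > 0.5 * peak) / sample_rate_hz   # rough dwell time
    spectrum = np.abs(np.fft.rfft(trace))[1:6]                      # low-frequency "texture"
    return np.concatenate(([peak, area, dwell], spectrum))

# Synthetic stand-in for spikes recorded from two pure reference analytes.
rng = np.random.default_rng(0)
def fake_spike(scale):
    t = np.linspace(-1, 1, 200)
    return scale * np.exp(-t**2 / 0.05) + 0.05 * rng.standard_normal(t.size)

X = np.array([extract_features(fake_spike(s)) for s in [1.0] * 100 + [1.3] * 100])
y = np.array(["leucine"] * 100 + ["isoleucine"] * 100)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict(extract_features(fake_spike(1.3)).reshape(1, -1)))
```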
"What happens is you take a pure sample, and you repeat the measurement many times," Lindsay said. "The point being that these are very complicated signals and some of the information is useful, some is noise, and some is very dependent on the particular sample preparation or measurement you are making."
By repeating the measurement many times, the researchers can determine which signal features do and don't change from measurement to measurement, as well as which vary or hold steady from sample to sample.
"This gives rise to a pruning system that [identifies] what the useful parameters are," Lindsay said. "Then the computer takes the data and tests each parameter in turn to see how much it contributes to the assignment."
Ultimately, he said, this process leads to a pared-down set of signal features that can be used to reproducibly identify target analytes in different samples.
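A minimal sketch of what such a pruning step might look like, under the assumption of a feature matrix built from repeated measurements of pure reference samples: first discard features that fluctuate strongly between repeats of the same analyte, then test each surviving feature in turn by measuring how much cross-validated accuracy falls when it is removed. The synthetic data, thresholds, and scoring are illustrative assumptions, not the group's published method.

```python
# Illustrative feature pruning: keep features that are reproducible across
# repeated measurements, then score each one by the accuracy lost when it is
# left out. Data, thresholds, and model are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_repeats, n_features = 200, 12

# Synthetic feature matrix: two pure analytes measured n_repeats times each.
# Only the first three features actually differ between the analytes.
X = rng.normal(0.0, 1.0, size=(2 * n_repeats, n_features))
X[n_repeats:, :3] += 1.5
y = np.array(["A"] * n_repeats + ["B"] * n_repeats)

# Step 1: discard features with high scatter across repeated measurements of
# the same analyte relative to the separation between the two analytes.
within = 0.5 * (X[:n_repeats].std(axis=0) + X[n_repeats:].std(axis=0))
between = np.abs(X[:n_repeats].mean(axis=0) - X[n_repeats:].mean(axis=0))
keep = np.flatnonzero(between / within > 0.5)

# Step 2: test each surviving feature in turn, asking how much cross-validated
# accuracy drops when that feature is removed.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
baseline = cross_val_score(clf, X[:, keep], y, cv=5).mean()
for f in keep:
    rest = [k for k in keep if k != f]
    acc = cross_val_score(clf, X[:, rest], y, cv=5).mean()
    print(f"feature {f}: accuracy drop without it = {baseline - acc:+.3f}")
```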
The ASU researchers demonstrated the approach in a paper published this week in Nature Nanotechnology, using it to identify methylated glycine molecules and to distinguish between the enantiomers L- and D-asparagine and the isobaric amino acids leucine and isoleucine. They also distinguished between three short peptide chains: GGG, GGGG, and GGLL.
This published work was done using a scanning tunneling microscope in a buffered solution to create the tunnel gap. Since then, the researchers have moved to a chip-based device that offers significantly improved performance, Lindsay said, noting that "in the several generations of implementation since the [Nature Nanotechnology] paper was written, the technology has improved exponentially."
The ultimate goal, he said, is to couple the recognition tunneling approach to nanopore technology, essentially functionalizing nanopores with recognition molecules, so that peptides or proteins would generate signal as they passed through and could potentially be sequenced de novo. Researchers including Oxford Nanopore Co-founder Hagan Bayley and the University of California, Santa Cruz's Mark Akeson are exploring the use of nanopores themselves for protein identification, but, Lindsay said, recognition tunneling could offer an advantage over these approaches in that the technique generates much more information-rich signals.
"And if you have signals that are more feature rich, then you can analyze more chemical species in a given run," he said.
The system is still far from being "a robust commercial technology," Lindsay said, but, he added, "in principle, all the parts are there."
In particular, he noted that in recent work Bayley and Akeson have demonstrated the feasibility of pulling proteins and peptides through nanopores in a controlled manner – a prerequisite for the system he envisions.
Lindsay and ASU have licensed the technology to Roche for DNA sequencing purposes, and the company is currently funding efforts in his lab to develop a solid-state recognition tunneling device for DNA.
Lindsay suggested, however, that the technology could prove most useful for protein analysis. While PCR enables analysis of even extremely low-abundance genomic features, proteomics, he noted, "is handicapped by the need to have [in many cases] at least nanomolar concentrations of a target. So the goal here is to do single-molecule detection, which, depending on what the embodiment is in a final device, ought to readily go to picomolar and possibly femtomolar levels."
He added that while genomics has received considerably more attention and funding to date, he believes that, ultimately, proteomics could prove more medically valuable.
"I am shocked at how much the investment community still wants ever cheaper genomes when it seems to me that the real medical importance lies in proteomics," he said. "People's heads are stuck in the genomics sand. Many of the next-next-generation technologies [like nanopores and recognition tunneling] used for genomics are directly applicable [to proteomics], and it seems to me incredible that people are not clamoring for this at the moment."
Lindsay noted that one potential near-term application of the recognition tunneling approach could be investigating isoforms in enriched protein populations – quantifying, for instance, the ratios of different phosphoforms of a particular analyte for purposes such as monitoring the activity of kinase inhibitors.
The larger goal of de novo protein sequencing in real biological samples, on the other hand, remains a ways off, he said.
"I am prepared to bet that once we start passing peptides through nanopores with recognition tunneling electrodes in them, we will be horrified by how complicated the signal is," he said, adding that it was impossible to say how feasible such an approach might be without further research into the question.
In the meantime, Lindsay said, the two most immediate obstacles facing implementation of the technology are manufacturing robust, mass-produced chips and developing sample prep protocols.
In terms of chip production, Lindsay said that he was "pretty confident" it would happen relatively soon. On the sample prep front, he and Stanford University researcher Ron Davis are currently putting together a proposal to develop a chip-based sample prep workflow for the approach. Broadly speaking, Lindsay said, the sample prep process would be similar to that of mass spec-based approaches, requiring, for instance, depletion of high-abundance proteins to optimize sensitivity.