First-generation protein nanoarrays hold great promise, but it will take years before they reach the point where they can allow researchers to characterize complex samples, according to two scientists at Sweden’s Lund University.
In a paper to be published in the October issue of Drug Discovery Today, Christer Wingren, an associate professor of immunotechnology at Lund, and Carl Borrebaeck, a professor of immunotechnology at the university, assess a broad spectrum of nanoarray designs and provide an overview of the state of development of the tools.
Not surprisingly for a field that just five years ago seemed to be pushing the boundaries of scientific possibilities [See PM 02/11/02], protein nanoarrays, according to Wingren and Borrebaeck, need some growing up before they will be of any use to the research community.
“I think I would say that we’re still in the early days of generating nanoarrays,” Wingren told ProteoMonitor this week. “The proof of concept is there and it has been shown in some really nice papers that you can do it, but I think the field has started to realize and identify some of the major challenges in going from small proof-of-concept work to a more large-scale effort.”
In the upcoming paper, which Wingren provided to ProteoMonitor pre-publication, he and Borrebaeck looked at nine nanoarray designs: planar arrays; well-based arrays; nanovial arrays; attovial arrays; nanowire arrays; random arrays; bead arrays; nanoparticle arrays; and cantilever arrays.
The development of protein nanoarrays, they say in the paper, is driven by the complexity of the proteome. While microarrays have served DNA researchers well for more than a decade, the density requirements for proteomics research are much higher, they note. A cell or tissue lysate could contain 100,000 proteins, for example, as opposed to fewer than 25,000 genes, “highlighting the need for megadense arrays.” Creating such arrays requires nanotechnology-based approaches.
By scaling down, Wingren said, the probe density on the substrate can be increased and more antibodies can be put on the assay, leading to a “significantly larger read-out in the end.”
But to date, the development of nanoarrays has been held back by “the lack of large enough numbers of high-performance probes, lack of substrates enabling non-purified probes to be directly applied, speed of the dispenser, spot size, as well as lack of robust nanotechnology approaches enabling us to extend the current microarray format into nanoarrays.”
As a result, “it is also evident that several technological features must be evolved further, in a process similar to what conventional protein microarrays have already passed through, before the potential of the technology will become transparent,” Wingren and Borrebaeck say in their paper.
Those include issues such as scaling up, reproducibility, sensitivity, applicability, sample loading, and compatibility with complex samples and reagents and buffers, the two Lund scientists write.
According to Wingren, while each nanoarray design they examined had its individual strengths and drawbacks, the one area where every design falls short is the ability “to functionalize each individual spot, or nanospot, with an antibody.”
In the paper, Wingren and Borrebaeck said that the process of fabricating and functionalizing high-density nanoarrays remains a key challenge. Techniques such as soft lithography, microcontact printing, and dip-pen nanolithography have been used to generate “nanopatterns with biological functions” and to create patterns of nanosized features.
But no one has yet devised a way to functionalize each individual feature with a different protein.
At this point, Wingren said, most of the designs have the capacity to handle about five to 10 different proteins, but anything above that is “really challenging.” By comparison, a conventional protein microarray can handle up to 10,000 different proteins.
And in order for nanoarrays to have any use, they would need even more functionality than that, Wingren said; there would be no point in developing a nanoarray that offers no greater functionality than a microarray.
In Borrebaeck’s own lab, of which Wingren is part, researchers have developed the first generation of attovial-based arrays. They have been able to put down about four different antibodies on the nanoarray, demonstrating proof of concept. “But going from that to scaling it up — that’s a major bottleneck,” Wingren said. “We are trying to find ways to deposit a unique antibody into each vial, and that’s where we’re putting a lot of effort at the moment.”
A keyword search of “nanoarray” on PubMed produced only 15 results, but Wingren said that in addition to Borrebaeck’s lab, many other researchers are working on developing nanoarrays.
“A lot of it addresses these basic parameters: If you want to fabricate a high-density nanoarray, how will you functionalize each individual spot? Will it be a nanodispenser, or should it be through some self-addressing efforts, or how should you actually do that?” he said. “Also in the area of reading out the chip, how should you actually do the detection step? Should it be fluorescence or should it be some kind of label-free effort?”
During the next few years, he and Borrebaeck write, the effort to develop protein nanoarrays will be crucial “for setting the stage for miniaturized nanoarrays within high-throughput proteomic applications.”
For nanoarrays to find a place in proteomics research within the next five years, Wingren said, they will have to be technologically developed enough that their developers can say “’OK, we can start scaling it up, and you can see that they can actually be part of your workflow.’”