By Matthew Dublin
Henry Newman of enterprisestorageforum.com and Jeff Layton, enterprise technologist for HPC at Dell, have devised a plan to test the limits of Linux file systems and pin down the problems behind their scalability issues. Newman and Layton agree that one of the big problems with Linux file systems is the metadata scan rate:
Let's say you have 100 million files in your file system and the scan rate of the file system is 5,000 inodes per second. If you had a crash, the fsck could take 20,000 seconds, or about 5.5 hours…THIS IS NOT ACCEPTABLE. Today, a file system with 100 million files should not take that much time, given the speed of networks and the processing power in systems. Add to this the fact that a single file server could support 100 users, and 1 million files per user is a lot, but not a crazy number. The other issue is we do not know what the scan rate is for large file systems with large file counts. What if the number is not 5,000 but 2,000? Yikes for that business. With enterprise 3.5-inch disk drives capable of between 75 and 150 IOPS per drive, 20 drives should be able to achieve at least 1,500 IOPS. The question is what percentage of hardware bandwidth can be achieved with fsck for the two file systems?
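To make the arithmetic in that quote concrete, here is a minimal back-of-the-envelope sketch. The 100-million-file count, the 5,000 and 2,000 inode-per-second scan rates, and the 20-drive/75–150 IOPS figures come from the quote above; the helper function and its name are purely illustrative.

```python
# Back-of-the-envelope fsck time estimate: scan time = file count / scan rate.
# Figures are from the quoted article; the helper name is illustrative.

def fsck_time_hours(num_files: int, inodes_per_second: float) -> float:
    """Estimated metadata scan (fsck) time in hours."""
    return num_files / inodes_per_second / 3600.0

num_files = 100_000_000  # e.g. 100 users x 1 million files each

for rate in (5_000, 2_000):  # inodes scanned per second
    print(f"{rate:>5} inodes/s -> {fsck_time_hours(num_files, rate):.1f} hours")

# Hardware-side floor mentioned in the quote:
# 20 enterprise 3.5-inch drives at 75-150 IOPS each.
drives, iops_low, iops_high = 20, 75, 150
print(f"Aggregate capability: {drives * iops_low}-{drives * iops_high} IOPS")

# Expected output:
#  5000 inodes/s -> 5.6 hours
#  2000 inodes/s -> 13.9 hours
# Aggregate capability: 1500-3000 IOPS
```

The point of the comparison is that even the low end of the hardware's aggregate IOPS is within reach of the 2,000–5,000 inode-per-second scan rates discussed, so the open question is how much of that hardware bandwidth fsck actually uses.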
According to their article, the file system community has not taken these concerns seriously, which is why a 500 TB single-namespace Linux file system is, surprisingly, still years away.
This is just the beginning of the series these guys are writing on the issues with Linux file systems, which you can follow here. They plan to publish a description of their test next, followed by the testing reports, and finally an analysis of the results.