Over the last three months I had the pleasure of running Fedora 20 Linux on the laptop I use for work. Last week, I was forced to downgrade to Windows 7. (Mainly because my employer's system administrators don't support anything else. I am quite ready to have the occasional fight with the admins for my freedom, but I won't accept the constant struggle. To name just the most important problem: accessing an MS Exchange server without IMAP enabled is, at best, exhausting.)
Why the word "downgrade"? Because my machine is so much slower now. I am a developer. My Eclipse is open for 10 hours a day and I can't count the number of invocations of Ant, Maven, Make, and other build systems. (Ant and Maven being my personal favourites.) Of course, the machine isn't actually slower. It is the same hardware, after all. Same amount of RAM, still without an SSD. However, and that's a fact:
Running one and the same build system against the same project on Windows 7 takes more time than doing just that on Linux.
If you don't believe me, try the following: Install a Linux VM on your Windows PC. Then run the following command, first on the VM, then on the Windows host:
git clone https://github.com/torvalds/linux.git
What are the odds that this command will run faster on the Linux VM than on the Windows host? I'd bet on it. And I'd win. (It's true: Linux Git on the emulated hardware beats Windows Git on the raw iron. By the way, for an even more convincing example, try "git svn clone".)
This week, I decided to waste some time thinking about the issue: how do I get my build system on Windows to run as fast as it does on Linux? First, let's identify the guilty party. It's none other than... (drum roll) NTFS!
I'm not making this up: others are quite aware of the problem. See, for example, this page. A Google search for "ntfs performance many small files" returns about 168000 hits. So, let's state this as a fact:
NTFS performs extremely poorly when dealing with lots of small files.
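(If you want to verify this on your own machine, a micro-benchmark along the following lines should show the effect. The file count and the file content are arbitrary values I picked for illustration; they are not part of the measurements below.)

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SmallFileBenchmark {
    public static void main(String[] pArgs) throws IOException {
        // Create a temporary directory and fill it with lots of small files,
        // roughly what a compiler does with its output directory.
        final Path dir = Files.createTempDirectory("smallfiles");
        final byte[] content =
            "A few bytes, standing in for a .class or .o file".getBytes(StandardCharsets.UTF_8);
        final long startTime = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            Files.write(dir.resolve("file" + i + ".txt"), content);
        }
        System.out.println("Created 10000 small files in "
            + (System.currentTimeMillis() - startTime) + " ms");
    }
}

Run the same class once on NTFS and once on a Linux file system like ext4, and compare the numbers.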
But that's exactly what a build system is all about. Let's take a typical example:
- The first typical step is to remove the build directory (like "target", or "bin", or whatever you name it).
- The compiler reads a lot of small source files (named *.java, *.c, or whatever) from the "src" directory.
- For each such file, the compiler creates a corresponding, translated file (named *.class, *.o, or whatever) in the build directory.
- A packager or linker, like "jar" or "ld", combines all the files we have just created into a single target file.
Notice something? This is the same for all build systems. It really doesn't matter whether your build script uses XML, a DSL, JSON, or a binary format. (No, this holy war won't have my participation.) What matters is this: all current build systems are based on the mantra of an output directory where lots of small files are created. But that's not a necessity. So, here's the challenge:
Let's modify our build systems in a manner that replaces the output directory with a "virtual file system". If we do it right, we can be much, much faster.
As a proof of concept, I wrote a small Java program that extracts the Linux Kernel sources (aka the file "linux-3.14.2.tar.gz") and writes them into implementations of the following interface:
public interface IVFS {
    OutputStream createFile(String pPath) throws IOException;
    void close() throws IOException;
}
For each source file (45941 files), the method createFile is invoked, the file's contents are copied into the OutputStream, and the stream is closed. Finally, the method IVFS.close() is invoked.
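A sketch of what such an extraction loop can look like (this is illustrative, not my exact program; I'm using Apache Commons Compress for the tar.gz handling, and error handling is kept to a minimum):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

public class KernelSourceExtractor {
    /** Reads the given tar.gz file and writes every regular file entry into the VFS. */
    public static void extract(String pTarGzFile, IVFS pVfs) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(pTarGzFile));
             TarArchiveInputStream tar =
                 new TarArchiveInputStream(new GzipCompressorInputStream(in))) {
            final byte[] buffer = new byte[8192];
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                if (!entry.isFile()) {
                    continue; // Skip directories, symlinks, and the like.
                }
                try (OutputStream out = pVfs.createFile(entry.getName())) {
                    int len;
                    while ((len = tar.read(buffer)) != -1) {
                        out.write(buffer, 0, len);
                    }
                }
            }
        }
        pVfs.close();
    }
}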
Here's my program's output (times in milliseconds):

Linux Kernel Extraction, NullVFS: 4159
Linux Kernel Extraction, SimpleVFS: 1740044
Linux Kernel Extraction, MapVFS: 78134
The three implementations are:
- The NullVFS borrows the idea of /dev/null: it is basically a write-only sink that discards everything. Of course, this isn't really useful. On the other hand, it shows how fast we could be, in theory, if our target were arbitrarily fast: in this case, 4159 milliseconds. (This is mainly the time for reading the Linux Kernel sources.)
- The SimpleVFS is basically what we have now: files are actually created on disk, one by one. As expected, this is really slow; it takes about 1740 seconds, or 29 minutes.
- Finally, the MapVFS is basically an in-memory store. However, it might be really useful, because its close method writes a single big file with the actual contents to disk. At 78 seconds, this implementation is much closer to the NullVFS than to the SimpleVFS. It demonstrates what might really be possible. (A sketch of such an implementation follows below.)
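To make the MapVFS idea a little more tangible, here is a minimal sketch of how such an implementation might look. The "one big file" format below (path, length, and content for every entry) is simply the first thing that came to mind; a zip or tar container would work just as well and would keep the result readable with standard tools.

import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

/** Collects all created files in memory and writes one single big file on close. */
public class MapVFS implements IVFS {
    private final Map<String, ByteArrayOutputStream> files = new LinkedHashMap<>();
    private final String targetFile;

    public MapVFS(String pTargetFile) {
        targetFile = pTargetFile;
    }

    @Override
    public OutputStream createFile(String pPath) throws IOException {
        final ByteArrayOutputStream baos = new ByteArrayOutputStream();
        files.put(pPath, baos);
        return baos;
    }

    @Override
    public void close() throws IOException {
        // Write all collected entries sequentially into a single file:
        // for every entry the path, the content length, and the content bytes.
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(targetFile)))) {
            for (Map.Entry<String, ByteArrayOutputStream> entry : files.entrySet()) {
                final byte[] content = entry.getValue().toByteArray();
                out.writeUTF(entry.getKey());
                out.writeInt(content.length);
                out.write(content);
            }
        }
    }
}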
Conclusion: when we create one single file with the actual contents, we need 78 seconds, as opposed to 1740 seconds. Of course, the IVFS interface is an oversimplification. The implementations certainly aren't thread-safe, and we have omitted the possibility of modifying files that have previously been created. But the numbers are so impressive that I am personally convinced: if we a) modify our build systems to use a virtual file system as the output and b) provide fast implementations, then we have much to gain, fellow developers!
In practice, this won't be so easy. The biggest hurdle I am anticipating is the Java compiler. Even the Java Compiler API (aka the interface javax.tools.JavaCompiler) works with real files by default: we won't be able to use the Java compiler as it is now. Instead, we have to persuade it to use the VFS.
ECJ, the Eclipse Java Compiler, might be our best option for that.
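For illustration, here is a rough, untested sketch of the kind of modification I have in mind, using the hooks that javax.tools offers: a JavaFileManager that redirects the compiler's class-file output into the IVFS. Whether this is enough for a real-world build, or whether ECJ offers the better entry points, is exactly the open question.

import java.io.IOException;
import java.io.OutputStream;
import java.net.URI;

import javax.tools.FileObject;
import javax.tools.ForwardingJavaFileManager;
import javax.tools.JavaFileObject;
import javax.tools.SimpleJavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.StandardLocation;

/** Redirects the compiler's generated .class files into an IVFS instead of a real directory. */
public class VFSFileManager extends ForwardingJavaFileManager<StandardJavaFileManager> {
    private final IVFS vfs;

    public VFSFileManager(StandardJavaFileManager pDelegate, IVFS pVfs) {
        super(pDelegate);
        vfs = pVfs;
    }

    @Override
    public JavaFileObject getJavaFileForOutput(Location pLocation, String pClassName,
            JavaFileObject.Kind pKind, FileObject pSibling) throws IOException {
        if (pLocation != StandardLocation.CLASS_OUTPUT) {
            // Anything that isn't class-file output (for example, sources generated by
            // annotation processors) still goes to the standard file manager.
            return super.getJavaFileForOutput(pLocation, pClassName, pKind, pSibling);
        }
        return new VFSOutputFile(pClassName.replace('.', '/') + pKind.extension, pKind);
    }

    /** A JavaFileObject whose content ends up in the VFS rather than on disk. */
    private class VFSOutputFile extends SimpleJavaFileObject {
        private final String path;

        VFSOutputFile(String pPath, Kind pKind) {
            super(URI.create("vfs:///" + pPath), pKind);
            path = pPath;
        }

        @Override
        public OutputStream openOutputStream() throws IOException {
            return vfs.createFile(path);
        }
    }
}

One would then obtain the usual StandardJavaFileManager from ToolProvider.getSystemJavaCompiler(), wrap it in a VFSFileManager, and pass that to JavaCompiler.getTask(...).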
Who'll take the first step? Well, Gradlers, Buildrs, SConsers of the world: here's something where you could make a real difference for your users!
4 comments:
This is interesting. Two things I want to mention from my experience. First, the antivirus is usually the main performance issue on a Windows machine packaged for an enterprise. The other is the read-access flag: this flag causes NTFS to update the system log each time a file is accessed. Deactivating it can speed up a Java build greatly.
Henri: Any pointers on how to deactivate this log?
Thanks,
Jochen
It would be interesting to see how a file-system wrapper for ext2/3, e.g., ext2fsd or ext2 IFS, behaved in read/write performance of numerous small files...
@Jean Pierre: Any ideas on how to try that without repartitioning my hard drive? Like using a fixed size file to emulate a partition.