One of the key goals for an in-memory database for unit testing is that it should be fast. And Mayfly is faster than the other databases I've tried for some things, like creating or altering tables. However, it is roughly 3x slower than Hypersonic for inserting rows ("insert into foo(a,b) values(3,4)" kinds of commands). I'd been meaning to profile it for a long time, and I finally got around to it.
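(For what it's worth, the comparison is just a loop of identical single-row inserts run against each database through JDBC. This is a rough sketch of the shape of that measurement, not the actual harness; the class name and the idea of passing in the Connection are mine.)

```java
import java.sql.Connection;
import java.sql.Statement;

// Sketch of an insert timing loop: the caller supplies a Connection to
// whichever database is being measured (Mayfly, Hypersonic, etc).
public class InsertTiming {
    static long timeInserts(Connection connection, int rows) throws Exception {
        Statement statement = connection.createStatement();
        statement.execute("create table foo(a integer, b integer)");
        long start = System.currentTimeMillis();
        for (int i = 0; i < rows; i++) {
            statement.execute("insert into foo(a, b) values(3, 4)");
        }
        statement.close();
        return System.currentTimeMillis() - start;
    }
}
```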
First I added the JIP jar to the project and a profile-test target to build.xml. Because JIP doesn't work with gcj, I ran it under Sun's Java. JIP writes its report as a text file, and I worked off that.
The results surprised me. They showed that 70% of the time was being consumed in the lexer and parser. Conventional wisdom is that lexing and parsing just aren't your big bottleneck in a compiler these days, but perhaps that applies more to an optimizing compiler, which spends most of its time in code generation and optimization passes. I inlined a few short methods which were bottlenecks (at the cost of a small amount of code duplication, but not so much as to be really shocking; and remember, I was careful to do this only for the bottlenecks). This was also a surprise: that method invocation seemed to be such a cost. Given my limited knowledge of Java internals that sort of makes sense, but it kind of seems like a step backwards, in the sense that in the C/Pascal/etc days we made so much effort to make procedure calls fast and to tell people they didn't need to make their code ugly to avoid calls. (At the risk of belaboring the obvious, even if method calls are expensive you still don't need to make your code very ugly to avoid them: only a handful of invocations are actually going to make a difference in your run time, and the profile shows you which ones.)
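To make the inlining concrete, here is a hypothetical sketch; the class and method names are mine, not Mayfly's. The idea is just that a tiny helper the profile shows being called once per character of input can be folded into its caller, at the cost of writing the test out a second time.

```java
// Hypothetical lexer fragment illustrating the kind of inlining described above.
class LexerSketch {
    // Before: a tiny helper called once per character of input.
    boolean isIdentifierCharacter(char c) {
        return Character.isLetterOrDigit(c) || c == '_';
    }

    int scanIdentifier(String sql, int position) {
        while (position < sql.length() && isIdentifierCharacter(sql.charAt(position))) {
            position++;
        }
        return position;
    }

    // After: the same loop with the test written out, trading a line of
    // duplication for one fewer method call per character on the hot path.
    int scanIdentifierInlined(String sql, int position) {
        while (position < sql.length()) {
            char c = sql.charAt(position);
            if (!(Character.isLetterOrDigit(c) || c == '_')) {
                break;
            }
            position++;
        }
        return position;
    }
}
```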
I was also able to streamline the non-parsing part of the code, mostly by taking out some extra steps (for example, transforming a column name to a Column and back to a name more times than needed). Some of that had built up through a series of changes that left behind vestiges of previous ways of doing things. Cleaning this up left the code simpler and clearer, as well as faster.
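As a made-up illustration of that kind of round trip (none of these classes or method names are Mayfly's): the old shape resolves a column name to a Column, reduces it back to a name, and then resolves it again, while the streamlined shape resolves it once and passes the Column along.

```java
import java.util.List;

// Hypothetical sketch of the extra step that was removed.
class Column {
    final String name;
    Column(String name) { this.name = name; }
}

class Inserter {
    // Before: resolve the name to a Column, reduce it back to a name, and then
    // do the same lookup all over again.
    void insertValueOld(List<Column> columns, String columnName, Object value) {
        String canonical = find(columns, columnName).name;
        store(find(columns, canonical), value);
    }

    // After: resolve the name once and keep the Column.
    void insertValue(List<Column> columns, String columnName, Object value) {
        store(find(columns, columnName), value);
    }

    Column find(List<Column> columns, String name) {
        for (Column column : columns) {
            if (column.name.equalsIgnoreCase(name)) {
                return column;
            }
        }
        throw new IllegalArgumentException("no column " + name);
    }

    void store(Column column, Object value) {
        // Stand-in for actually putting the value into the row being built.
    }
}
```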
Other changes, like changing Row to be a HashMap rather than a List, didn't seem to help at all (and may even have hurt slightly). Row has been conceptually a map for some time now, but apparently those linear searches were not noticeably more expensive than the many calls to hashCode you get with the map. I guess the fact that we don't expect more than a few dozen columns in a table is responsible.
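Roughly speaking, the two representations look like this (hypothetical code, not Mayfly's actual Row class): the linear scan costs a few string comparisons when there are only a handful of columns, while the map pays for hashCode and a bucket probe on every access.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// List-backed row: lookup is a linear search over the column names.
class ListRow {
    private final List<String> names = new ArrayList<String>();
    private final List<Object> values = new ArrayList<Object>();

    void add(String name, Object value) {
        names.add(name);
        values.add(value);
    }

    Object cell(String name) {
        for (int i = 0; i < names.size(); i++) {
            if (names.get(i).equalsIgnoreCase(name)) {
                return values.get(i);
            }
        }
        return null;
    }
}

// Map-backed row: constant time in principle, but every call pays for
// hashCode and a bucket probe, which is not obviously cheaper here.
class MapRow {
    private final Map<String, Object> cells = new HashMap<String, Object>();

    void add(String name, Object value) {
        cells.put(name.toLowerCase(), value);
    }

    Object cell(String name) {
        return cells.get(name.toLowerCase());
    }
}
```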
So what is next when I look at this again? For the lexer, I may have run out of obvious ideas (given that it has duties like tracking the line and column numbers of every token, and I don't see giving up that feature, which provides good error messages). For the parser, there is a lot of expression-handling machinery involved in parsing the "3" in "insert into foo(a) values(3)". Unless I think of a better way, having the top-level expression parser look for a literal followed by something like "," or ")" and go into a fast-path special case might be worth it. I know that looks like a kluged-up wannabe bottom-up parser, but I've been happy enough with recursive descent in other ways that I have trouble seeing myself switch back to a parser generator. As for the execution (building up rows, modifying the tables, etc.), I'd have to look at the profile more. Although I've seen some hot spots and fixed them (and perhaps created others), I don't have as much of an intuitive feel for what is slow there as I do for the lexing and parsing.
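That fast path might look something like this sketch (hypothetical names again, with a plain String token list standing in for the real lexer): peek at the current and next token, and only fall back to the full expression grammar when the value isn't a bare literal.

```java
import java.util.List;

// Hypothetical sketch of the fast path: a bare literal followed by "," or ")"
// skips the expression machinery entirely.
class ValueParser {
    private final List<String> tokens;
    private int position;

    ValueParser(List<String> tokens) {
        this.tokens = tokens;
    }

    Object parseValue() {
        String token = tokens.get(position);
        String next = position + 1 < tokens.size() ? tokens.get(position + 1) : "";
        if ((next.equals(",") || next.equals(")")) && token.matches("-?[0-9]+")) {
            position++;                   // consume the literal
            return Long.valueOf(token);   // fast path: no expression parsing
        }
        return parseExpression();         // slow path: the full expression grammar
    }

    Object parseExpression() {
        // Stand-in for the general recursive-descent expression parser.
        throw new UnsupportedOperationException("full expression parsing not shown");
    }
}
```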
Monday, July 30, 2007