Start-up delays are a drag, so I’ve been excited about Graal’s ability to compile JVM bytecode to native executables since I heard about it eightish months ago. It hasn’t helped that my language wanderlust has been pulling me toward Common Lisp and Rust, two very different languages that both compile to native executables.
Recently, I created a new project primarily for my own use, so it was a good opportunity to try Graal again. (Trying to use Graal to compile a few medium-size projects hadn’t worked.) I wanted to stop forgetting to track files in version control and stop skipping repetitive initial tasks like including the license. Before you get your hopes up, I didn’t create a super-clever über-customizable tool that will definitely work for your projects. However, some of the checks (such as including a LICENSE) are universal, and at less than 200 lines of code, it wouldn’t be hard to fork and adapt to your own workflows.
What I ended up with is a tool that prints a checklist of project organization issues for you to fix. You can view the source code, read the documentation, and even download the binaries (macOS-only, sorry) I produced with Graal from the repository.
I also used this as an opportunity to pull together some of the other useful Clojure and JVM tools I’ve found:
- Test.check, a property-based testing library.
- VisualVM, Oracle’s Java profiler and performance monitor that comes with the JDK.
There were actually very few hiccups; Graal itself worked with minimal fuss.
One of the most confounding issues was actually part of Clojure: the `clojure.java.shell` library. After the program finished printing the checklist, it would wait around for about a minute. It turns out the library function I was using to call external programs (`hg` and the like) takes a long time to shut down if you don’t do it properly.

FYI, if you use the `clojure.java.shell` module, call `(shutdown-agents)` at the end of your application to close the thread pools that Clojure uses behind the scenes to start shell processes. (Agents are actually a concurrency abstraction available to your own programs, one that I, and probably plenty of others, had never heard of.)
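A minimal sketch of the fix, using `ls` as a stand-in for `hg` so it runs anywhere:

```clojure
(require '[clojure.java.shell :refer [sh]])

;; sh runs the external command on agent thread pools behind the scenes.
(def result (sh "ls"))
(println (:out result))

;; Without this, the process lingers for up to a minute after the work is done,
;; because the non-daemon agent threads keep the JVM alive.
(shutdown-agents)
```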
If you’re scrolling down to the benchmarks, you can stop now
| Command | Recent Hackintosh Mean [ms] | 2010 MacBook Mean [ms] |
| --- | --- | --- |
| Graal | 181.3 ± 86.4 | 508.7 ± 299.9 |
| Jar | 1335.2 ± 364.7 | 4341.2 ± 3556.0 |
Performance on Mercurial’s repository (5227 files)
| Command | Recent Hackintosh Mean [ms] | 2010 MacBook Mean [ms] |
| --- | --- | --- |
| Graal | 62.3 ± 7.3 | 134.3 ± 34.9 |
| Jar | 958.9 ± 44.7 | 3385 ± 2722 |
Performance on project-checkup’s own repository (987 files)
As promised, Graal is an order of magnitude faster. More importantly, at less than 100 ms on my main computer for a medium-large repository (large due to the build artifacts), the resulting binary is fast enough not to make me wait.
As a reminder, the speed you experience with `lein run` is not real-world performance. Creating an uberjar of your project already removes the Leiningen overhead.
I used Hyperfine to generate the benchmarks on my recent four-core desktop Hackintosh and my definitely-not-recent 2010 MacBook Pro.
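For reference, the benchmarks boil down to an invocation along these lines; the binary and jar names here are placeholders, not the project’s actual artifact names. Hyperfine’s `--warmup` flag is one way to keep a cold file system cache from skewing the first run.

```shell
# Compare the native binary against the uberjar (names are hypothetical).
# --warmup runs the command a few times first so caches are warm before timing.
hyperfine --warmup 3 './project-checkup' 'java -jar project-checkup.jar'
```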
I’m satisfied with the internal validity of the benchmarks—that they’re all measured the same way to allow for comparisons. I’m less satisfied with the external validity of the benchmarks—that they measure what they’re supposed to.
My concern comes from a persistent outlier in the tests. The first run tends to be a few times slower, probably because of the file system cache. If I ran the project checkup on a cold cache, these numbers would look unrealistically fast. However, if I’ve already been using Mercurial or Git on the project, the caches are probably somewhat warm, and real performance might be close to the results of running these commands between five and ten times, as Hyperfine did in the tests above.
JVM tools can be good, actually (but overkill for this project)
Before I reached the final version benchmarked above, the Graalified version was taking about a quarter second on average and about half a second in the worst case. That seemed too long for a small amount of disk I/O and a few calls to `git`. Plus, I wanted it to run instantly so I’d have no excuse not to use it. To find parts I could optimize, I used the VisualVM profiler, but the resulting tweaks saved maybe 10 to 20 percent.
It turns out I was overthinking it, and I eventually just spotted it when making other changes to the code. The issue was in the code below:
```clojure
(doseq [check checks]
  (println (:output (perform-check check (gather-project-info)))))
```
The key part is `(gather-project-info)`, the function that actually walks the directory tree to create a map of project information for the checks to run against.
See the bug? Every time it ran a check, it called `gather-project-info`, hitting the disk again to collect the information it had just collected. Simply adding a `let` around the `doseq` to fetch the data once before iterating through the sequence solved that.
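Here’s a stubbed-out sketch of the bug and the fix; the stub functions and check names are mine, not the project’s actual code.

```clojure
;; Counts how many times we "hit the disk" so the difference is visible.
(def io-calls (atom 0))

(defn gather-project-info []
  (swap! io-calls inc)                     ; stands in for walking the directory tree
  {:files ["LICENSE" "README.md" "src/core.clj"]})

(defn perform-check [check info]
  {:output (str (name check) ": saw " (count (:files info)) " files")})

(def checks [:license-present :readme-present :files-tracked])

;; Buggy version: one directory walk per check.
(doseq [check checks]
  (println (:output (perform-check check (gather-project-info)))))
(println "walks (buggy):" @io-calls)

;; Fixed version: gather once, then run every check against the same map.
(reset! io-calls 0)
(let [info (gather-project-info)]
  (doseq [check checks]
    (println (:output (perform-check check info)))))
(println "walks (fixed):" @io-calls)
```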
If you’re wondering, the second-largest performance enhancement came from replacing the call to `hg` with one to `chg`, a version of Mercurial that uses a command server to avoid Python’s start-up penalty.
Despite not being what tipped me off to the embarrassing performance issue or the delay caused by `hg`, using VisualVM was a pleasant experience. I tend to think of Java and the JVM in negative terms, partly because of Clojure’s unwieldy stack traces and partly because of lingering resentment at being forced to learn Java and its overly verbose style of OOP in college CS courses. I doubt I’m the only one. I think we need to remember that the JVM itself is an impressive piece of software engineering and that being part of the Java ecosystem brings benefits. Rich Hickey didn’t pick it on a whim.
One tip: if you’re profiling a command-line tool and want to measure the entire lifetime of the process, use the Startup Profiler plugin. VisualVM doesn’t have the feature out of the box.
Testing caught a few bugs in the regexes I was using to separate the extension from the file name. I’ve written quite a few unit tests before, so that part was straightforward, and the property-based tests weren’t bad once I refreshed my memory of the generator syntax.
The two kinds of test complemented each other: the property-based tests found a bug I probably wouldn’t have thought to write a unit test for, and the unit tests found a bug I wouldn’t have bothered to write a property-based test for.
I don’t think property-based testing was overkill for this project, actually. I’m finishing this writeup a while after I did most of the work, so the time spent on testing might have faded in my memory, but I don’t remember them being particularly time-consuming to set up, and I did find a few bugs.
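As an illustration, here is a property in the spirit of those tests; `split-extension` is a hypothetical stand-in for the project’s regex-based splitting, not its actual code.

```clojure
(require '[clojure.test.check :as tc]
         '[clojure.test.check.generators :as gen]
         '[clojure.test.check.properties :as prop])

;; Hypothetical helper: split "name.ext" at the last dot.
(defn split-extension [filename]
  (if-let [[_ base ext] (re-matches #"(.*)\.([^.]+)" filename)]
    [base ext]
    [filename nil]))

;; Property: splitting a name built from a known base and extension
;; recovers exactly those two pieces.
(def round-trip
  (prop/for-all [base (gen/not-empty gen/string-alphanumeric)
                 ext  (gen/not-empty gen/string-alphanumeric)]
    (= [base ext] (split-extension (str base "." ext)))))

(tc/quick-check 100 round-trip)
```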
The part where I extrapolate wildly from one project
Here are the obligatory lessons learned:
- Graal is great for new projects! It’s harder to recommend for existing projects because it might be tripped up by perfectly okay code it doesn’t yet support, but hopefully that will change as Graal’s native binary generation feature supports more and more of the JVM’s capabilities.
- Separate functional and non-functional bits. Adding tests was really straightforward as a result of having the checks themselves be referentially transparent (i.e. pure) functions. The code that reads the file system doesn’t have tests currently, but it’s simple code that I’ve written in Clojure multiple times before.
- Don’t be afraid of lower levels… Using the JVM tools was surprisingly easy, and it makes me want to use lower-level tools.
- …But don’t overthink things either. On a small project, don’t break out tools like a profiler until you’ve checked your program’s logic for stupid mistakes.
I didn’t come up with any of these ideas. Pretty much everyone around Clojure or other JVM languages is excited about Graal, Gary Bernhardt did a great talk about having functional internals with a non-functional external interface, and exploring the lower layers is a topic Julia Evans explores all the time in blog posts and zines. It’s neat to see those all come together in a real-world project, though.