Typometer README

Typometer is a tool to measure and analyze visual latency of text / code editors.

Editor latency is the delay between an input event and a corresponding screen update, in this particular case, the delay between a keystroke and the appearance of a character. While there are many kinds of delays (caret movement, line editing, etc.), typing latency is a major predictor of editor usability.

Check my article Typing with pleasure to learn more about editor latency and its effects on typing performance.

Download: typometer-1.0.1-bin.zip (0.5 MB)

Java 8 or later is required to run the program. You can download Java from the official site.

Features

Screenshots

Main window:

Typometer, main window

Frequency distribution chart:

Typometer, frequency distribution chart

Principle

The program generates OS input events (key presses) and uses screen capture to fully automate the test process.

First, a predefined pattern (".....") is inserted into the editor window in order to detect screen metrics (start position, step, background, etc.).
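
To illustrate the idea behind this detection step, here is a minimal sketch (not Typometer's actual algorithm) that scans one row of a screen capture for pixels that differ from the background and derives the start position and step. The row coordinate and the "leftmost pixel is background" assumption are placeholders; the real recognition has to cope with anti-aliasing, carets, and varied color schemes.

    import java.awt.AWTException;
    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.Toolkit;
    import java.awt.image.BufferedImage;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of metrics detection: capture the screen, scan one
    // row for pixels that differ from the background, and derive the start
    // position and step from the left edges of the detected dots.
    public class MetricsSketch {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
            BufferedImage image = robot.createScreenCapture(screen);

            int y = 400;                         // assumed row containing the "....." pattern
            int background = image.getRGB(0, y); // assume the leftmost pixel is background

            List<Integer> dots = new ArrayList<>();
            for (int x = 1; x < image.getWidth(); x++) {
                boolean foreground = image.getRGB(x, y) != background;
                boolean previousForeground = image.getRGB(x - 1, y) != background;
                if (foreground && !previousForeground) {
                    dots.add(x); // left edge of a dot
                }
            }

            if (dots.size() >= 2) {
                int start = dots.get(0);
                int step = dots.get(1) - dots.get(0);
                System.out.println("start = " + start + ", step = " + step);
            }
        }
    }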

After that, the program types a predefined number of "." characters into the editor (at the configured interval), measuring the delay between each key press and the corresponding character drawing.

To achieve high measurement accuracy, only a single pixel is queried for each symbol. Moreover, the program can use fast native API calls (WinAPI, XLib) on supported platforms, with the AWT Robot as a fallback option.
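
For illustration, here is a rough sketch of a single measurement using only the AWT Robot fallback. The pixel coordinates, timeout, and (busy-wait) polling are simplified placeholders; the real program paces its queries, handles carets and anti-aliasing, and can substitute native WinAPI / XLib calls for the pixel reads.

    import java.awt.AWTException;
    import java.awt.Color;
    import java.awt.Robot;
    import java.awt.event.KeyEvent;

    // Hypothetical single-sample measurement: emit a "." keystroke, then poll
    // one pixel at the expected character position until it changes.
    public class LatencySketch {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            int x = 100, y = 400;                 // assumed position of the next character
            Color background = robot.getPixelColor(x, y);

            long pressed = System.nanoTime();
            robot.keyPress(KeyEvent.VK_PERIOD);   // generate the OS input event
            robot.keyRelease(KeyEvent.VK_PERIOD);

            // Poll a single pixel until the character is drawn (or we time out after 1 s).
            while (robot.getPixelColor(x, y).equals(background)
                    && System.nanoTime() - pressed < 1_000_000_000L) {
                // busy-wait for the screen update
            }

            long latencyMs = (System.nanoTime() - pressed) / 1_000_000;
            System.out.println("Latency: " + latencyMs + " ms");
        }
    }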

There are two modes of testing available:

Usage

To register only the essential editor latency, text must be rendered directly to the framebuffer, without intermediate image processing that might introduce additional delay. Prefer stacking window managers to compositing window managers for testing purposes, particularly:

Close all programs that add system-wide keyboard hooks, as they might process keyboard events synchronously and affect the results (for example, Workrave is known to noticeably increase typing latency).

You may consider switching your machine to a particular hardware mode (power scheme, integrated / discrete graphics, etc.). In power-save mode (and on battery), for example, editor responsiveness is usually much lower, so it's possible to detect significant performance glitches that are rarely observable otherwise.

Before you start benchmarking, make sure that other applications are not placing a noticeable load on your system. It's up to you whether to "warm up" VM-based editors, so that they can pre-compile performance-critical parts of their code, before proceeding.

If possible, enable a non-block caret (i.e. underline / vertical bar instead of a rectangle) in the editor. This might increase measurement accuracy.

The typical action sequence is the following:

  1. Specify a measurement title, like "HTML in Vim" (optional, can be set later).
  2. Configure test parameters (optional).
  3. Launch an editor and maximize its window.
  4. Open some data in the editor, for instance, a large HTML file (optional).
  5. Place the editor caret in the desired context (e.g. a comment), at the end of a short or empty line.
  6. Start the benchmarking process in the program.
  7. After a corresponding prompt, transfer focus to the editor window.
  8. Wait for the test to complete; don't interfere with the process.

You can always interrupt the testing process simply by transferring focus back to the program window.

After a test result is acquired, you may either analyze that data set by itself or perform additional tests (different editors / conditions) for comparative analysis.

Both source and aggregate data are easily accessible; you can:

It's possible to merge results either by inserting data from an existing CSV file, or by appending data to a CSV file on saving.
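
If you prefer to post-process exported results outside the program, a short script is enough. The sketch below assumes a simplified CSV layout with one latency value (in milliseconds) per line, which may differ from the actual export format, and prints a few summary statistics.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical post-processing of an exported results file ("results.csv"
    // is a placeholder name); assumes one latency value per line, in ms.
    public class CsvStats {
        public static void main(String[] args) throws IOException {
            List<Double> latencies = Files.readAllLines(Paths.get("results.csv")).stream()
                    .filter(line -> !line.isEmpty())
                    .map(Double::parseDouble)
                    .sorted()
                    .collect(Collectors.toList());

            if (latencies.isEmpty()) return;

            double mean = latencies.stream().mapToDouble(Double::doubleValue).average().orElse(0);
            double p90 = latencies.get((int) (latencies.size() * 0.9));
            System.out.printf("count=%d, mean=%.1f ms, p90=%.1f ms%n", latencies.size(), mean, p90);
        }
    }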

Recipes

Here are a few tips on how you can use the tool to detect performance bottlenecks in text / code editors:

If you're implementing a text / code editor, take a look at programming techniques that can significantly reduce drawing latency.

Troubleshooting

To make benchmarking possible, correct screen metrics must be detected in the initial step. The program attempts to recognize a custom pattern (5 newly typed dots) in order to determine the following parameters:

Because there are many editors (and multiple versions of each editor), which look different on different platforms, and there are many possible color schemes and fonts, the metrics recognition algorithm has to be very flexible. While the program sources contain a great number of test cases, some glitches are still possible.

Here's a list of typical problems and corresponding solutions:

Feel free to contribute by creating additional test case images (check the /src/test/resources directory for examples).

Pavel Fatin, https://pavelfatin.com