How ide-probe can automate interactions with IntelliJ
See how your company might benefit from automating interactions with IntelliJ by preparing high-level integration tests and benchmarks.
A well-known client of ours was the original inspiration for ide-probe. The client provides a short-message social networking service.
Delivering a working IDE is a challenge in large organisations handling big, complex projects with multiple plugins and tools. Large codebases typically use specific tooling and may require custom IntelliJ plugins. Integrating IntelliJ, the plugins and the other tools is complex. IntelliJ and the plugins are frequently updated, and these updates often introduce performance regressions, bugs or compatibility problems. Users want the latest, greatest JetBrains features as soon as possible, without encountering critical bugs or performance regressions.
Achieving these goals requires testing everything together against the actual workspace and reporting or fixing all critical issues before shipping the updated tooling to users. Manual testing is time-consuming: preparing the environment, importing the project, indexing and executing multiple scenarios usually takes around 3-4 hours each time, excluding any benchmarking. Manual performance testing is also unreliable, as it gives inconsistent results. In such an environment, unit tests of the plugin code or other tools cannot cover full user scenarios or give accurate results.
As a solution, we developed ide-probe, a library for automated high-level tests and IntelliJ benchmarks that closely replicate user scenarios.
We prepared a set of integration tests using ide-probe to cover common user scenarios and past regressions. The tests run before each release; if they pass, we have a high level of confidence that the IDE will work properly. Additionally, we created a set of benchmarks that continuously monitor core metrics, such as project import time, indexing time and incremental compilation time, across multiple subprojects. When the results change, this helps pinpoint the change in the project or build tool that caused the issue. Lastly, ide-probe aids build tool migration by continuously monitoring the IDE’s compatibility with the new build tool across 50+ projects.
Thanks to the automated tests, multiple bugs and performance issues were detected and fixed before reaching our users.
This resulted in stable IntelliJ releases, and users stopped getting IDE and plugin updates containing significant performance or functionality issues (in each major release, we found at least one blocking issue and a total of 2-5 problems).
Many hours of manual testing were saved: about 100 runs so far at 3.5 hours each, which amounts to roughly two months of work spent executing tests.
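As a sanity check, the time saved on manual runs can be converted into working months. The run count and per-run duration come from the figures above; the 8-hour working day and ~22 working days per month are assumptions used only for the unit conversion:

```python
# Convert manual-test time saved into working days and months.
# Run count and hours per run come from the article; workday length
# and workdays per month are assumed conversion factors.
RUNS_SO_FAR = 100
HOURS_PER_RUN = 3.5
HOURS_PER_WORKDAY = 8
WORKDAYS_PER_MONTH = 22

hours_saved = RUNS_SO_FAR * HOURS_PER_RUN           # 350 hours
workdays_saved = hours_saved / HOURS_PER_WORKDAY    # 43.75 working days
months_saved = workdays_saved / WORKDAYS_PER_MONTH  # about 2 months

print(hours_saved, workdays_saved, round(months_saved, 1))
```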
The benchmarks helped us find one performance issue that originated from the build tool rather than the IDE. We also know how performance changes over time, so we can make educated decisions about whether to release something or put in more optimisation work first.
Additionally, users sometimes feel that the IDE experience has somehow changed; benchmark results can show whether that impression reflects a real regression or is merely subjective. A complete set of benchmarks runs every day and takes about 6 hours.
Manually monitoring IDE compatibility with the new build tool across 50 projects every day would take about 12 hours a day. All in all, the equivalent of 568 hours, or 71 working days, is saved each month because we now use ide-probe!
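One plausible way to arrive at the monthly total, under assumptions the text does not state explicitly: the compatibility check and the benchmark suite run every calendar day, and roughly eight manual-equivalent integration-test runs happen per month. The run frequencies here are illustrative assumptions, not figures from the article:

```python
# Plausible reconstruction of the ~568 hours/month figure. Daily hours and
# per-run duration come from the article; DAYS_PER_MONTH and
# TEST_RUNS_PER_MONTH are assumptions chosen to make the totals line up.
COMPAT_HOURS_PER_DAY = 12    # manual compatibility check across 50 projects
BENCHMARK_HOURS_PER_DAY = 6  # full benchmark suite
DAYS_PER_MONTH = 30          # assumed: daily jobs run every calendar day
TEST_RUNS_PER_MONTH = 8      # assumed share of the ~100 runs so far
HOURS_PER_TEST_RUN = 3.5
HOURS_PER_WORKDAY = 8

monthly_hours = ((COMPAT_HOURS_PER_DAY + BENCHMARK_HOURS_PER_DAY) * DAYS_PER_MONTH
                 + TEST_RUNS_PER_MONTH * HOURS_PER_TEST_RUN)
working_days = monthly_hours / HOURS_PER_WORKDAY

print(monthly_hours, working_days)  # 568.0 hours, 71.0 working days
```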
Visit github.com/VirtusLab/ide-probe and find out more about our solution. Add ide-probe to your project, for free!