Here's a performance comparison of LuaJIT against other VMs on different architectures:
- x86/x64: Intel Core2 E8400 3.0 GHz
- ARM: Texas Instruments OMAP4460 1.2 GHz Cortex-A9
- PPC: Freescale MPC8377E 800 MHz PPC/e300c4
- PPC/e500: Freescale MPC8572E 1.5 GHz PPC/e500v2
- MIPS: MIPS 74Kc+FPU 78 MHz MIPS32R2
Measurement Methods
All measurements have been taken under Linux 2.6. All shown Lua benchmarks are single-threaded, so only a single CPU core was used. The system was freshly booted and otherwise idle. All power-management features had been turned off, and no hypervisor module was loaded. It was ensured that all executable code and data files were cached in memory prior to each measurement.
The C code of all VMs was compiled with GCC 4.4.3 with the default compiler flags given in the Makefiles (except for Lua on x86, where -O2 -fomit-frame-pointer was used).
The basis for the comparisons is the user CPU time as reported by the shell built-in time command (i.e. TIMEFORMAT='%U'). The accuracy of the timings is limited by the 250 Hz system timer frequency, which may result in a divergence of up to ±4 ms. All benchmarks are run three times for each VM; only the best result is reported here.
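To make the best-of-three policy concrete, here is a minimal sketch of the same idea expressed in Lua itself. It is only illustrative: it uses os.clock() (process CPU time) as a stand-in for the shell's user-time measurement described above, and the names best_of_three and spin are invented for this example.

    -- Illustrative only: approximate the "best of three runs" policy
    -- with os.clock(), which returns the CPU time used by the process.
    local function best_of_three(bench, ...)
      local best = math.huge
      for run = 1, 3 do
        local t0 = os.clock()
        bench(...)
        local elapsed = os.clock() - t0
        if elapsed < best then best = elapsed end
      end
      return best  -- seconds of CPU time, best of three runs
    end

    -- Example usage with a trivial workload:
    local function spin(n)
      local x = 0
      for i = 1, n do x = x + i end
      return x
    end
    print(string.format("best of three: %.3f s", best_of_three(spin, 1e8)))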
Where possible, all benchmark runs have been scaled for runtimes in the multi-second to minutes range to improve the overall measurement accuracy. The only exceptions are non-scalable benchmarks and cases where out-of-cache effects would dominate the execution time (e.g. array3d). The variance between identical runs is generally very low (< 0.5%) and is not shown (the whiskers would clump together in the bar graph).
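A sketch of what such runtime scaling could look like, assuming a simple doubling strategy and a two-second floor; neither the threshold constant nor the function name comes from the page:

    -- Hypothetical auto-scaling: double the iteration count until one
    -- run takes at least MIN_SECONDS, so the ±4 ms timer granularity
    -- becomes negligible relative to the total runtime.
    local MIN_SECONDS = 2.0  -- assumed threshold, not from the page
    local function scale_iterations(bench)
      local n = 1
      while true do
        local t0 = os.clock()
        bench(n)
        local elapsed = os.clock() - t0
        if elapsed >= MIN_SECONDS then return n, elapsed end
        n = n * 2
      end
    end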
Startup time for running the executable of the VM itself is included in all measurements, but is negligible (< 100 µs). Likewise, warm-up and compile time for the JIT compilers is included. But, again, this has no measurable effect, since LuaJIT's compiler warms up very quickly (LJ1: on the 1st call of a method; LJ2: on the 57th loop iteration) and is exceptionally fast (compile times in the microsecond to millisecond range).
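The following toy example illustrates why including warm-up barely matters: it times the same hot loop twice from within Lua. The first call includes interpreter execution of the early iterations plus any trace compilation; the second call reuses the compiled code. On LuaJIT the two timings should come out nearly identical. The loop body and iteration count are invented for illustration.

    -- Illustrative warm-up check: the first call includes JIT warm-up
    -- and trace compilation; the second call runs fully compiled code.
    local function timed_loop(iters)
      local t0 = os.clock()
      local acc = 0
      for i = 1, iters do acc = acc + i * i end
      return os.clock() - t0
    end
    print(string.format("cold run: %.4f s", timed_loop(1e7)))
    print(string.format("warm run: %.4f s", timed_loop(1e7)))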
About the Benchmarks
Most of the benchmarks have their origins in the Computer Language Benchmarks Game (CLBG), which presents a comparison of the performance of different languages and implementations for a small set of benchmarks. Many of these benchmarks have changed over time (both spec and code), and the selection of benchmarks has varied a lot, too. Benchmark results shown in previous versions of this comparison or of the CLBG are not directly comparable. Note that the CLBG currently only shows Lua, not LuaJIT.
Most of the other benchmarks shown are Lua ports of standard benchmarks. E.g. SciMark for Lua has been split up into individual benchmarks, which are run with a fixed iteration count (to get a runtime rather than an auto-scaled score), as sketched below.
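As a sketch of the difference, assuming a SciMark-style kernel (all names here are hypothetical): the original harness auto-scales the iteration count and reports a score, whereas the fixed-count variant simply runs a constant number of iterations and reports the elapsed time.

    -- Fixed-iteration harness (hypothetical names): run a kernel a
    -- constant number of times and report the runtime, instead of
    -- auto-scaling the count and reporting an ops/second score.
    local FIXED_ITERS = 1000  -- assumed constant, not from the page
    local function run_fixed(kernel)
      local t0 = os.clock()
      for i = 1, FIXED_ITERS do kernel() end
      return os.clock() - t0  -- runtime in seconds
    end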
The presented benchmark results are only indicative of the overall performance of each VM. They should not be construed as an exact prediction of the possible speedup for any specific application. It's advisable to benchmark your own application code before drawing any conclusions.