by Emery Berger
Scalene is a high-performance CPU and memory profiler for Python that does a number of things that other Python profilers do not and cannot do. It runs orders of magnitude faster than other profilers while delivering far more detailed information.
Among other things, it measures copying volume, making it easy to spot inadvertent copying across Python/library boundaries (e.g., accidentally converting numpy arrays into Python arrays, and vice versa).
You can use Homebrew to install the full version of Scalene (with memory profiling). Instead of using pip
as described below, just do this:
% brew tap emeryberger/scalene
% brew install --head libscalene
This will install a scalene script you can use (see below).
Scalene is also distributed as a pip package and works on Mac OS X and Linux platforms (including Ubuntu in Windows WSL2).
You can install it as follows:
% pip install scalene
or
% python -m pip install scalene
NEW: You can now install the full Scalene library and script on Arch Linux via the AUR package. Use your favorite AUR helper, or manually download the PKGBUILD and run makepkg -cirs to build. Note that this will place libscalene.so in /usr/lib; modify the below usage instructions accordingly.
The following command will run Scalene on a provided example program.
% scalene test/testme.py
To see all the options, run with --help.
% scalene --help
usage: scalene [-h] [-o OUTFILE] [--profile-interval PROFILE_INTERVAL]
[--wallclock]
prog
Scalene: a high-precision CPU and memory profiler.
https://github.com/emeryberger/Scalene
positional arguments:
prog program to be profiled
optional arguments:
-h, --help show this help message and exit
-o OUTFILE, --outfile OUTFILE
file to hold profiler output (default: stdout)
--profile-interval PROFILE_INTERVAL
output profiles every so many seconds.
--wallclock use wall clock time (default: virtual time)
--cpu-only only profile CPU time (default: profile CPU, memory, and copying)
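For example, combining the options listed above (the script name here is just a placeholder), the following should write profiles to profile.txt every five seconds while the program runs:
% scalene --outfile profile.txt --profile-interval 5.0 yourprogram.py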
Below is a table comparing the performance of various profilers to Scalene, running on an example Python program (benchmarks/julia1_nopil.py) from the book High Performance Python, by Gorelick and Ozsvald. All of these were run on a 2016 MacBook Pro.
| Profiler                 | Time      | Slowdown |
|--------------------------|-----------|----------|
| original program         | 6.71s     | 1.0x     |
| cProfile                 | 11.04s    | 1.65x    |
| Profile                  | 202.26s   | 30.14x   |
| pyinstrument             | 9.83s     | 1.46x    |
| line_profiler            | 78.0s     | 11.62x   |
| pprofile (deterministic) | 403.67s   | 60.16x   |
| pprofile (statistical)   | 7.47s     | 1.11x    |
| yappi (CPU)              | 127.53s   | 19.01x   |
| yappi (wallclock)        | 21.45s    | 3.2x     |
| py-spy                   | 7.25s     | 1.08x    |
| memory_profiler          | > 2 hours | > 1000x  |
| scalene (CPU only)       | 6.98s     | 1.04x    |
| scalene (CPU + memory)   | 7.68s     | 1.14x    |
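As a rough sketch of how such a comparison can be reproduced (assuming the benchmark script is available at the path shown above), time the program on its own and then under Scalene:
% time python benchmarks/julia1_nopil.py
% time scalene --cpu-only benchmarks/julia1_nopil.py
% time scalene benchmarks/julia1_nopil.py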
And this table compares the features of other profilers vs. Scalene.
| Profiler                 | Line-level? | CPU? | Wall clock vs. CPU time? | Python vs. native? | Memory? | Unmodified code? | Threads? |
|--------------------------|-------------|------|--------------------------|--------------------|---------|------------------|----------|
| cProfile                 |             | ✔    | wall clock               |                    |         | ✔                |          |
| Profile                  |             | ✔    | CPU time                 |                    |         | ✔                |          |
| pyinstrument             |             | ✔    | wall clock               |                    |         | ✔                |          |
| line_profiler            | ✔           | ✔    | wall clock               |                    |         |                  |          |
| pprofile (deterministic) | ✔           | ✔    | wall clock               |                    |         | ✔                | ✔        |
| pprofile (statistical)   | ✔           | ✔    | wall clock               |                    |         | ✔                | ✔        |
| yappi (CPU)              |             | ✔    | CPU time                 |                    |         | ✔                | ✔        |
| yappi (wallclock)        |             | ✔    | wall clock               |                    |         | ✔                | ✔        |
| py-spy                   | ✔           | ✔    | both                     |                    |         | ✔                | ✔        |
| memory_profiler          | ✔           |      |                          |                    | ✔       |                  |          |
| scalene (CPU only)       | ✔           | ✔    | both                     | ✔                  |         | ✔                | ✔        |
| scalene (CPU + memory)   | ✔           | ✔    | both                     | ✔                  | ✔       | ✔                | ✔        |
Scalene prints annotated source code for the program being profiled and any modules it uses in the same directory or subdirectories. Here is a snippet from pystone.py, just using CPU profiling:
benchmarks/pystone.py: % of CPU time = 100.00% out of 3.66s.
| CPU % | CPU % |
Line | (Python) | (native) | [benchmarks/pystone.py]
--------------------------------------------------------------------------------
[... lines omitted ...]
137 | 0.27% | 0.14% | def Proc1(PtrParIn):
138 | 1.37% | 0.11% | PtrParIn.PtrComp = NextRecord = PtrGlb.copy()
139 | 0.27% | 0.22% | PtrParIn.IntComp = 5
140 | 1.37% | 0.77% | NextRecord.IntComp = PtrParIn.IntComp
141 | 2.47% | 0.93% | NextRecord.PtrComp = PtrParIn.PtrComp
142 | 1.92% | 0.78% | NextRecord.PtrComp = Proc3(NextRecord.PtrComp)
143 | 0.27% | 0.17% | if NextRecord.Discr == Ident1:
144 | 0.82% | 0.30% | NextRecord.IntComp = 6
145 | 2.19% | 0.79% | NextRecord.EnumComp = Proc6(PtrParIn.EnumComp)
146 | 1.10% | 0.39% | NextRecord.PtrComp = PtrGlb.PtrComp
147 | 0.82% | 0.06% | NextRecord.IntComp = Proc7(NextRecord.IntComp, 10)
148 | | | else:
149 | | | PtrParIn = NextRecord.copy()
150 | 0.82% | 0.32% | NextRecord.PtrComp = None
151 | | | return PtrParIn
And here is an example with memory profiling enabled. The "sparklines" summarize memory consumption over time (at the top, for the whole program).
Memory usage: ▂▂▁▁▁▁▁▁▁▁▁▅█▅ (max: 1617.98MB)
phylliade/test2-2.py: % of CPU time = 40.68% out of 4.60s.
| CPU % | CPU % | Net | Memory usage | Copy |
Line | (Python) | (native) | (MB) | over time / % | (MB/s)| [phylliade/test2-2.py]
--------------------------------------------------------------------------------
1 | | | | | | import numpy as np
2 | | | | | |
3 | | | | | | @profile
4 | | | | | | def main():
5 | | | 92 | ▁▁▁▁▁▁▁▁▁ 11% | | x = np.array(range(10**7))
6 | 0.43% | 40.24% | 762 | ▁▁▄█▄ 89% | 168 | y = np.array(np.random.uniform(0, 100, size=(10**8)))
7 | | | | | |
8 | | | | | | main()
Positive net memory numbers indicate total memory allocation in megabytes; negative net memory numbers indicate memory reclamation.
The memory usage sparkline and copy volume make it easy to spot unnecessary copying in line 6.
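As a minimal sketch (not part of the profiled program above), the extra copy on line 6 comes from wrapping the ndarray that np.random.uniform already returns in another np.array call; dropping that wrapper avoids the copy:

```python
import numpy as np

def main():
    x = np.array(range(10**7))
    # np.random.uniform already returns a NumPy ndarray, so there is no need
    # to wrap it in np.array, which forces an extra full copy (roughly 800 MB
    # for 10**8 float64 values).
    y = np.random.uniform(0, 100, size=10**8)

main()
```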
If you use Scalene to successfully debug a performance problem, please add a comment to this issue!
Logo created by Sophia Berger.