The profiling package is an interactive continuous Python profiler. It is inspired by the Unity 3D profiler.
Install the latest release via PyPI:
$ pip install profiling
To profile a single program, simply run the profiling command:
$ profiling your-program.py
Then an interactive viewer will launch.
If your program uses greenlets, choose the greenlet timer:
$ profiling --timer=greenlet your-program.py
With the --dump option, it saves the profiling result to a file. You can browse the saved result using the view subcommand:
$ profiling --dump=your-program.prf your-program.py
$ profiling view your-program.prf
If your script reads sys.argv, append your arguments after --. This isolates your arguments from the profiling command:
$ profiling your-program.py -- --your-flag --your-param=42
If your program has a long lifetime, like a web server, a profiling result at the end of the program is not helpful enough. You probably need a continuous profiler, which is provided by the live-profile subcommand:
$ profiling live-profile webserver.py
There's also a live-profiling server. The server doesn't profile the program at ordinary times; when a client connects, it starts profiling and reports the results to all connected clients.
Start a profiling server with the remote-profile subcommand:
$ profiling remote-profile webserver.py --bind 127.0.0.1:8912
Then run a client for the server with the view subcommand:
$ profiling view 127.0.0.1:8912
TracingProfiler, the default profiler, implements a deterministic profiler with a deep call graph. Of course, it has heavy overhead, which can pollute your profiling result or slow your application down.
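Deterministic profiling is the same approach the stdlib cProfile takes: every call and return is intercepted, which is exactly where the overhead comes from. As a stdlib-only illustration of the idea (not profiling's own API):

```python
import cProfile
import io
import pstats

def fib(n):
    # Deliberately recursive: a deterministic profiler records every
    # one of these calls, so deep call graphs cost the most overhead.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

profiler = cProfile.Profile()
profiler.enable()
fib(15)
profiler.disable()

# Summarize call counts plus exclusive (tottime) and inclusive
# (cumtime) time, comparable to the viewer's OWN and DEEP columns.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats('fib')
report = buf.getvalue()
print(report)
```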
In contrast, SamplingProfiler implements a statistical profiler. Like other statistical profilers, it has very low overhead. You can choose it with the --sampling (or -S for short) option:
$ profiling live-profile -S webserver.py
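The sampling approach can be sketched in pure stdlib Python — a conceptual toy assuming a POSIX system, not profiling's actual implementation: a CPU-time timer signal periodically interrupts the program and records which function is on top of the stack, so the program runs at nearly full speed between samples.

```python
import signal
import time
from collections import Counter

samples = Counter()

def take_sample(signum, frame):
    # Record the function that was executing when the signal fired.
    samples[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, take_sample)
# Fire roughly every 5 ms of consumed CPU time.
signal.setitimer(signal.ITIMER_PROF, 0.005, 0.005)

def busy():
    # Burn about 0.2 s of CPU time so several samples land here.
    deadline = time.process_time() + 0.2
    while time.process_time() < deadline:
        pass

busy()
signal.setitimer(signal.ITIMER_PROF, 0)  # stop sampling

# Functions where the most CPU time was spent appear most often.
print(samples.most_common())
```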
Do you use timeit to check the performance of your code?
$ python -m timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
1000 loops, best of 3: 722 usec per loop
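The same kind of measurement can be made from Python with the stdlib timeit module (a generic statement is used here as a stand-in, since the trueskill example requires that package to be installed):

```python
import timeit

# Time a small statement programmatically, analogous to the
# command-line usage above.
elapsed = timeit.timeit('sorted(data)',
                        setup='data = list(range(1000))[::-1]',
                        number=1000)
print('%.1f usec per loop' % (elapsed / 1000 * 1e6))
```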
If you want to profile the timed code, simply use the timeit subcommand:
$ profiling timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
You can also profile your program with profiling.tracing.TracingProfiler or profiling.sampling.SamplingProfiler directly:
from profiling.tracing import TracingProfiler

# profile your program.
profiler = TracingProfiler()
profiler.start()
...  # run your program.
profiler.stop()

# or use a context manager.
with profiler:
    ...  # run your program.

# view and interact with the result.
profiler.run_viewer()

# or save profile data to a file.
profiler.dump('path/to/file')
The viewer shows these statistics columns:

- FUNCTION - The function name with its location, e.g. my_func (my_code.py:42) or my_func (my_module:42); or only the location, e.g. my_code.py or my_module.
- CALLS - Total call count of the function.
- OWN (Exclusive Time) - Total time spent in the function, excluding sub calls.
- /CALL after OWN - Exclusive time per call.
- % after OWN - Exclusive time per total spent time.
- DEEP (Inclusive Time) - Total time spent in the function.
- /CALL after DEEP - Inclusive time per call.
- % after DEEP - Inclusive time per total spent time.

With the sampling profiler:

- OWN (Exclusive Samples) - Number of samples collected during the direct execution of the function.
- % after OWN - Exclusive samples per total number of samples.
- DEEP (Inclusive Samples) - Number of samples collected during the execution of the function.
- % after DEEP - Inclusive samples per total number of samples.

There are some additional requirements to run the test code, which can be installed with the following command:
$ pip install $(python test/fit_requirements.py test/requirements.txt)
Then you should be able to run pytest:
$ pytest -v
You can select a subset of the tests with pytest's -m option.

Written by Heungsub Lee at What! Studio in Nexon, and distributed under the BSD 3-Clause license.