Valgrind is an open-source tool for finding memory-management problems in Linux-x86 executables. It is licensed under the GNU General Public License, version 2.
As of version 1.0.4, there is a FAQ.txt in the source distribution, which may help with some common problems.
Valgrind is closely tied to details of the CPU and operating system, and to a lesser extent, the compiler and basic C libraries. This makes it difficult to port, so I have chosen at the outset to concentrate on what I believe to be a widely used platform: Linux on x86s. Valgrind uses the standard Unix ./configure, make, make install mechanism, and I have attempted to ensure that it works on machines with kernel 2.2 or 2.4 and glibc 2.1.X or 2.2.X. This should cover the vast majority of modern Linux installations.
Valgrind is licensed under the GNU General Public License, version 2. Read the file COPYING in the source distribution for details. Some of the PThreads test cases, test/pth_*.c, are taken from "Pthreads Programming" by Bradford Nichols, Dick Buttlar & Jacqueline Proulx Farrell, ISBN 1-56592-115-1, published by O'Reilly & Associates, Inc.
To run a program under Valgrind, place the word valgrind at the start of the command line normally used to run the program. So, for example, if you want to run the command ls -l on Valgrind, simply issue the command: valgrind ls -l.
Valgrind takes control of your program before it starts. Debugging information is read from the executable and associated libraries, so that error messages can be phrased in terms of source code locations. Your program is then run on a synthetic x86 CPU which checks every memory access. All detected errors are written to a log. When the program finishes, Valgrind searches for and reports on leaked memory.
You can run pretty much any dynamically linked ELF x86 executable using Valgrind. Programs run 25 to 50 times slower, and take a lot more memory, than they usually would. It works well enough to run large programs. For example, the Konqueror web browser from the KDE Desktop Environment, version 3.0, runs slowly but usably on Valgrind.
Valgrind simulates every single instruction your program executes.
Because of this, it finds errors not only in your application but also
in all supporting dynamically-linked (.so-format) libraries, including the GNU C library, the X client libraries, Qt (if you work with KDE), and so on. That often includes libraries, for example the GNU C library, which contain memory access violations, but which you cannot or do not want to fix.
Rather than swamping you with errors in which you are not interested, Valgrind allows you to selectively suppress errors, by recording them in a suppressions file which is read when Valgrind starts up. The build mechanism attempts to select suppressions which give reasonable behaviour for the libc and XFree86 versions detected on your machine.
Section 6 shows an example of use.
First, compile your program with debugging information (the -g flag). You don't have to do this, but doing so helps Valgrind produce more accurate and less confusing error reports. Chances are you're set up like this already, if you intended to debug your program with GNU gdb or some other debugger. A plausible compromise is to use -g -O.
Optimisation levels above -O have been observed, on very rare occasions, to cause gcc to generate code which fools Valgrind's error tracking machinery into wrongly reporting uninitialised value errors. -O gets you the vast majority of the benefits of higher optimisation levels anyway, so you don't lose much there.
Valgrind understands both the older "stabs" debugging format, used by gcc versions prior to 3.1, and the newer DWARF2 format used by gcc 3.1 and later.
Then just run your application, but place the word valgrind in front of your usual command-line invocation. Note that you should run the real (machine-code) executable here. If your application is started by, for example, a shell or perl script, you'll need to modify it to invoke Valgrind on the real executable. Running such scripts directly under Valgrind will result in error reports pertaining to /bin/sh, /usr/bin/perl, or whatever interpreter you're using. This almost certainly isn't what you want and can be confusing.
All lines in the commentary are of the following form:
==12345== some-message-from-Valgrind
The 12345 is the process ID. This scheme makes it easy to distinguish program output from Valgrind commentary, and also easy to differentiate commentaries from different processes which have become merged together, for whatever reason.
By default, Valgrind writes only essential messages to the commentary,
so as to avoid flooding you with information of secondary importance.
If you want more information about what is happening, re-run, passing the -v flag to Valgrind.
==25832== Invalid read of size 4
==25832==    at 0x8048724: BandMatrix::ReSize(int, int, int) (bogon.cpp:45)
==25832==    by 0x80487AF: main (bogon.cpp:66)
==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
This message says that the program did an illegal 4-byte read of
address 0xBFFFF74C, which, as far as it can tell, is not a valid stack
address, nor corresponds to any currently malloc'd or free'd blocks.
The read is happening at line 45 of bogon.cpp, called from line 66 of the same file, etc. For errors associated with an identified malloc'd/free'd block, for example reading free'd memory, Valgrind reports not only the location where the error happened, but also where the associated block was malloc'd/free'd.
Valgrind remembers all error reports. When an error is detected, it is compared against old reports, to see if it is a duplicate. If so, the error is noted, but no further commentary is emitted. This avoids you being swamped with bazillions of duplicate error reports.
If you want to know how many times each error occurred, run with the -v option. When execution finishes, all the reports are printed out, sorted by their occurrence counts.
This makes it easy to see which errors have occurred most frequently.
Errors are reported before the associated operation actually happens. For example, if your program decides to read from address zero, Valgrind will emit a message to this effect, and the program will then duly die with a segmentation fault.
In general, you should try and fix errors in the order that they are reported. Not doing so can be confusing. For example, a program which copies uninitialised values to several memory locations, and later uses them, will generate several error messages. The first such error message may well give the most direct clue to the root cause of the problem.
The process of detecting duplicate errors is quite an expensive
one and can become a significant performance overhead if your program
generates huge quantities of errors. To avoid serious problems here,
Valgrind will simply stop collecting errors after 300 different errors
have been seen, or 30000 errors in total have been seen. In this
situation you might as well stop your program and fix it, because
Valgrind won't tell you anything else useful after this. Note that the 300/30000 limits apply after suppressed errors are removed. These limits are defined in vg_include.h and can be increased if necessary.
To avoid this cutoff you can use the --error-limit=no flag. Then Valgrind will always show errors, regardless of how many there are. Use this flag carefully, since it may have a dire effect on performance.
A default suppressions file is created for your system by the ./configure script.
You can modify and add to the suppressions file at your leisure, or, better, write your own. Multiple suppression files are allowed. This is useful if part of your project contains errors you can't or don't want to fix, yet you don't want to continuously be reminded of them.
Each error to be suppressed is described very specifically, to minimise the possibility that a suppression directive inadvertently suppresses a bunch of similar errors which you did want to see. The suppression mechanism is designed to allow precise yet flexible specification of errors to suppress.
If you use the -v flag, at the end of execution Valgrind prints out one line for each used suppression, giving its name and the number of times it got used. Here are the suppressions used by a run of ls -l:
--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getgrgid_r
--27579-- supp: 1 socketcall.connect(serv_addr)/__libc_connect/__nscd_getpwuid_r
--27579-- supp: 6 strrchr/_dl_map_object_from_fd/_dl_map_object
valgrind [options-for-Valgrind] your-prog [options for your-prog]
Note that Valgrind also reads options from the environment variable $VALGRIND_OPTS, and processes them before the command-line options.
Valgrind's default settings succeed in giving reasonable behaviour in most cases. Available options, in no particular order, are as follows:
--help / --version
The usual deal.
-v --verbose
Be more verbose. Gives extra information on various aspects of your program, such as: the shared objects loaded, the suppressions used, the progress of the instrumentation engine, and warnings about unusual behaviour.
-q --quiet
Run silently, and only print error messages. Useful if you are running regression tests or have some other automated test machinery.
--demangle=no / --demangle=yes [the default]
Disable/enable automatic demangling (decoding) of C++ names. Enabled by default. When enabled, Valgrind will attempt to translate encoded C++ procedure names back to something approaching the original. The demangler handles symbols mangled by g++ versions 2.X and 3.X.
An important fact about demangling is that function names mentioned in suppressions files should be in their mangled form. Valgrind does not demangle function names when searching for applicable suppressions, because to do otherwise would make suppressions file contents dependent on the state of Valgrind's demangling machinery, and would also be slow and pointless.
--num-callers=<number> [default=4]
By default, Valgrind shows four levels of function call names to help you identify program locations. You can change that number with this option. This can help in determining the program's location in deeply-nested call chains. Note that errors are commoned up using only the top three function locations (the place in the current function, and that of its two immediate callers), so this doesn't affect the total number of errors reported.
The maximum value for this is 50. Note that higher settings will make Valgrind run a bit more slowly and take a bit more memory, but can be useful when working with programs with deeply-nested call chains.
--gdb-attach=no [the default] / --gdb-attach=yes
When enabled, Valgrind will pause after every error shown,
and print the line
---- Attach to GDB ? --- [Return/N/n/Y/y/C/c] ----
Pressing Ret, or N Ret or n Ret, causes Valgrind not to start GDB for this error. Y Ret or y Ret causes Valgrind to start GDB for the program at this point. When you have finished with GDB, quit from it, and the program will continue. Trying to continue from inside GDB doesn't work. C Ret or c Ret causes Valgrind not to start GDB, and not to ask again.
--gdb-attach=yes conflicts with --trace-children=yes. You can't use them together; Valgrind refuses to start up in this situation. 1 May 2002: this is a historical relic which could be easily fixed if it gets in your way. Mail me and complain if this is a problem for you.
--partial-loads-ok=yes [the default] / --partial-loads-ok=no
Controls how Valgrind handles word (4-byte) loads from addresses for which some bytes are addressable and others are not. When yes (the default), such loads do not elicit an address error. Instead, the loaded V bytes corresponding to the illegal addresses indicate undefined, and those corresponding to legal addresses are loaded from shadow memory, as usual.
When no, loads from partially invalid addresses are treated the same as loads from completely invalid addresses: an illegal-address error is issued, and the resulting V bytes indicate valid data.
--sloppy-malloc=no [the default] / --sloppy-malloc=yes
When enabled, all requests for malloc/calloc are rounded up to a whole number of machine words -- in other words, made divisible by 4. For example, a request for 17 bytes of space would result in a 20-byte area being made available. This works around bugs in sloppy libraries which assume that they can safely rely on malloc/calloc requests being rounded up in this fashion. Without the workaround, these libraries tend to generate large numbers of errors when they access the ends of these areas.
Valgrind snapshots dated 17 Feb 2002 and later are cleverer about this problem, and you should no longer need to use this flag. To put it bluntly, if you do need to use this flag, your program violates the ANSI C semantics defined for malloc and free, even if it appears to work correctly, and you should fix it, at least if you hope for maximum portability.
--alignment=<number> [default: 4]
By default Valgrind's malloc, realloc, etc, return 4-byte aligned addresses. These are suitable for any accesses on x86 processors.
Some programs might however assume that malloc et al return 8-byte or more strongly aligned memory. These programs are broken and should be fixed, but if this is impossible for whatever reason the alignment can be increased using this parameter. The supplied value must be between 4 and 4096 inclusive, and must be a power of two.
--trace-children=no [the default] / --trace-children=yes
When enabled, Valgrind will trace into child processes. This is confusing and usually not what you want, so it is disabled by default. As of 1 May 2002, tracing into a child process from a parent which uses libpthread.so is probably broken and likely to cause problems. Please report any such problems to me.
--freelist-vol=<number> [default: 1000000]
When the client program releases memory using free (in C) or delete (C++), that memory is not immediately made available for re-allocation. Instead it is marked inaccessible and placed in a queue of freed blocks. The purpose is to delay the point at which freed-up memory comes back into circulation. This increases the chance that Valgrind will be able to detect invalid accesses to blocks for some significant period of time after they have been freed.
This flag specifies the maximum total size, in bytes, of the blocks in the queue. The default value is one million bytes. Increasing this increases the total amount of memory used by Valgrind but may detect invalid uses of freed blocks which would otherwise go undetected.
--logfile-fd=<number> [default: 2, stderr]
Specifies the file descriptor on which Valgrind communicates
all of its messages. The default, 2, is the standard error
channel. This may interfere with the client's own use of
stderr. To dump Valgrind's commentary in a file without using
stderr, something like the following works well (sh/bash
syntax):
valgrind --logfile-fd=9 my_prog 9> logfile
That is: tell Valgrind to send all output to file descriptor 9,
and ask the shell to route file descriptor 9 to "logfile".
--suppressions=<filename> [default: $PREFIX/lib/valgrind/default.supp]
Specifies an extra file from which to read descriptions of errors to suppress. You may use as many extra suppressions files as you like.
--leak-check=no [default] / --leak-check=yes
When enabled, search for memory leaks when the client program finishes. A memory leak means a malloc'd block, which has not yet been free'd, but to which no pointer can be found. Such a block can never be free'd by the program, since no pointer to it exists. Leak checking is disabled by default because it tends to generate dozens of error messages.
--show-reachable=no [default] / --show-reachable=yes
When disabled, the memory leak detector only shows blocks to which it cannot find a pointer at all, or can only find a pointer to the middle of. These blocks are prime candidates for memory leaks. When enabled, the leak detector also reports on blocks to which it could find a pointer. Your program could, at least in principle, have freed such blocks before exit. Contrast this to blocks for which no pointer, or only an interior pointer, could be found: they are more likely to indicate memory leaks, because you do not actually have a pointer to the start of the block which you could hand to free, even if you wanted to.
--leak-resolution=low [default] / --leak-resolution=med / --leak-resolution=high
When doing leak checking, determines how willing Valgrind is to consider different backtraces to be the same. When set to low, the default, only the first two entries need match. When med, four entries have to match. When high, all entries need to match.
For hardcore leak debugging, you probably want to use --leak-resolution=high together with --num-callers=40 or some such large number. Note however that this can give an overwhelming amount of information, which is why the defaults are 4 callers and low-resolution matching.
Note that the --leak-resolution= setting does not affect Valgrind's ability to find leaks. It only changes how the results are presented.
--workaround-gcc296-bugs=no [default] / --workaround-gcc296-bugs=yes
When enabled, Valgrind assumes that reads and writes some small distance below the stack pointer %esp are due to bugs in gcc 2.96, and does not report them. The "small distance" is 256 bytes by default. Note that gcc 2.96 is the default compiler on some popular Linux distributions (RedHat 7.X, Mandrake) and so you may well need to use this flag. Do not use it if you do not have to, as it can cause real errors to be overlooked. Another option is to use a gcc/g++ which does not generate accesses below the stack pointer; 2.95.3 seems to be a good choice in this respect.
Unfortunately (27 Feb 02) it looks like g++ 3.0.4 has a similar bug, so you may need to issue this flag if you use 3.0.4. A while later (early Apr 02) this is confirmed as a scheduling bug in g++-3.0.4.
--error-limit=yes [default] / --error-limit=no
When enabled, valgrind stops reporting errors after 30000 in total, or 300 different ones, have been seen. This is to stop the error tracking machinery from becoming a huge performance overhead in programs with many errors.
--avoid-strlen-errors=yes [default] / --avoid-strlen-errors=no
When enabled, Valgrind inspects each basic block it instruments for some tell-tale literals (0xFEFEFEFF, 0x80808080, 0x00008080) which suggest that the block is part of an inlined strlen() function. In many cases such functions cause spurious uninitialised-value errors to be reported -- their code is too clever for the instrumentation scheme. This horrible hack works around the problem, at the expense of hiding any genuine uninitialised-value errors which might appear in such blocks. It is enabled by default because it is needed to get sensible behaviour on code compiled by gcc-3.1 and above.
--cachesim=no [default] / --cachesim=yes
When enabled, turns off memory checking, and turns on cache profiling. Cache profiling is described in detail in Section 7.
--weird-hacks=hack1,hack2,...
Pass miscellaneous hints to Valgrind which slightly modify the
simulated behaviour in nonstandard or dangerous ways, possibly
to help the simulation of strange features. By default no hacks
are enabled. Use with caution! Currently known hacks are:
ioctl-VTIME
Use this if you have a program which sets readable file descriptors to have a timeout, by doing ioctl on them with a TCSETA-style command and a non-zero VTIME timeout value. This is considered potentially dangerous and therefore is not engaged by default, because it is (remotely) conceivable that it could cause threads doing read to incorrectly block the entire process.
You probably want to try this one if you have a program which unexpectedly blocks in a read from a file descriptor which you know to have been messed with by ioctl. This could happen, for example, if the descriptor is used to read input from some kind of screen handling library.
To find out if your program is blocking unexpectedly in the read system call, run with the --trace-syscalls=yes flag.
truncate-writes
Use this if you have a threaded program which appears to unexpectedly block whilst writing into a pipe. The effect is to modify all calls to write() so that requests to write more than 4096 bytes are treated as if they only requested a write of 4096 bytes. Valgrind does this by changing the count argument of write(), as passed to the kernel, so that it is at most 4096. The amount of data written will then be less than the client program asked for, but the client should have a loop around its write() call to check whether the requested number of bytes have been written. If not, it should issue further write() calls until all the data is written.
This all sounds pretty dodgy to me, which is why I've made this behaviour only happen on request. It is not the default behaviour. At the time of writing this (30 June 2002) I have only seen one example where this is necessary, so either the problem is extremely rare or nobody is using Valgrind :-)
On experimentation I see that truncate-writes doesn't interact well with ioctl-VTIME, so you probably don't want to try both at once.
As above, to find out if your program is blocking unexpectedly in the write() system call, you may find the --trace-syscalls=yes --trace-sched=yes flags useful.
lax-ioctls
Reduce accuracy of ioctl checking. This doesn't require the full buffer to be initialised when writing. Without it, using some device drivers with a large number of strange ioctl commands becomes very tiresome. You can use this as a quick hack to work around unimplemented ioctls. A better long-term solution is to write a proper wrapper for the ioctl; this is quite easy -- for details read README_MISSING_SYSCALL_OR_IOCTL in the source distribution.
--single-step=no [default] / --single-step=yes
When enabled, each x86 insn is translated separately into instrumented code. When disabled, translation is done on a per-basic-block basis, giving much better translations.
--optimise=no / --optimise=yes [the default]
When enabled, various improvements are applied to the intermediate code, mainly aimed at allowing the simulated CPU's registers to be cached in the real CPU's registers over several simulated instructions.
--instrument=no / --instrument=yes [default]
When disabled, the translations don't actually contain any instrumentation.
--cleanup=no / --cleanup=yes [default]
When enabled, various improvements are applied to the post-instrumented intermediate code, aimed at removing redundant value checks.
--trace-syscalls=no [default] / --trace-syscalls=yes
Enable/disable tracing of system call intercepts.
--trace-signals=no [default] / --trace-signals=yes
Enable/disable tracing of signal handling.
--trace-sched=no [default] / --trace-sched=yes
Enable/disable tracing of thread scheduling events.
--trace-pthread=none [default] / --trace-pthread=some / --trace-pthread=all
Specifies amount of trace detail for pthread-related events.
--trace-symtab=no [default] / --trace-symtab=yes
Enable/disable tracing of symbol table reading.
--trace-malloc=no [default] / --trace-malloc=yes
Enable/disable tracing of malloc/free (et al) intercepts.
--stop-after=<number> [default: infinity, more or less]
After <number> basic blocks have been executed, shut down Valgrind and switch back to running the client on the real CPU.
--dump-error=<number> [default: inactive]
After the program has exited, show gory details of the translation of the basic block containing the <number>'th error context. When used with --single-step=yes, can show the exact x86 instruction causing an error. This is all fairly dodgy and doesn't work at all if threads are involved.
Invalid read of size 4
   at 0x40F6BBCC: (within /usr/lib/libpng.so.2.1.0.9)
   by 0x40F6B804: (within /usr/lib/libpng.so.2.1.0.9)
   by 0x40B07FF4: read_png_image__FP8QImageIO (kernel/qpngio.cpp:326)
   by 0x40AC751B: QImageIO::read() (kernel/qimage.cpp:3621)
Address 0xBFFFF0E0 is not stack'd, malloc'd or free'd
This happens when your program reads or writes memory at a place which Valgrind reckons it shouldn't. In this example, the program did a 4-byte read at address 0xBFFFF0E0, somewhere within the system-supplied library libpng.so.2.1.0.9, which was called from somewhere else in the same library, called from line 326 of qpngio.cpp, and so on.
Valgrind tries to establish what the illegal address might relate to, since that's often useful. So, if it points into a block of memory which has already been freed, you'll be informed of this, and also where the block was freed. Likewise, if it should turn out to be just off the end of a malloc'd block, a common result of off-by-one errors in array subscripting, you'll be informed of this fact, and also where the block was malloc'd.
In this example, Valgrind can't identify the address. Actually the address is on the stack, but, for some reason, this is not a valid stack address -- it is below the stack pointer, %esp, and that isn't allowed. In this particular case it's probably caused by gcc generating invalid code, a known bug in various flavours of gcc.
Note that Valgrind only tells you that your program is about to access memory at an illegal address. It can't stop the access from happening. So, if your program makes an access which normally would result in a segmentation fault, your program will still suffer the same fate -- but you will get a message from Valgrind immediately prior to this. In this particular example, reading junk on the stack is non-fatal, and the program stays alive.
Conditional jump or move depends on uninitialised value(s)
   at 0x402DFA94: _IO_vfprintf (_itoa.h:49)
   by 0x402E8476: _IO_printf (printf.c:36)
   by 0x8048472: main (tests/manuel1.c:8)
   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
An uninitialised-value use error is reported when your program uses a value which hasn't been initialised -- in other words, is undefined. Here, the undefined value is used somewhere inside the printf() machinery of the C library. This error was reported when running the following small program:
#include <stdio.h>

int main()
{
  int x;
  printf ("x = %d\n", x);
}
It is important to understand that your program can copy around junk (uninitialised) data to its heart's content. Valgrind observes this and keeps track of the data, but does not complain. A complaint is issued only when your program attempts to make use of uninitialised data. In this example, x is uninitialised. Valgrind observes the value being passed to _IO_printf and thence to _IO_vfprintf, but makes no comment. However, _IO_vfprintf has to examine the value of x so it can turn it into the corresponding ASCII string, and it is at this point that Valgrind complains.
Sources of uninitialised data tend to be:
Invalid free()
   at 0x4004FFDF: free (ut_clientmalloc.c:577)
   by 0x80484C7: main (tests/doublefree.c:10)
   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
   by 0x80483B1: (within tests/doublefree)
Address 0x3807F7B4 is 0 bytes inside a block of size 177 free'd
   at 0x4004FFDF: free (ut_clientmalloc.c:577)
   by 0x80484C7: main (tests/doublefree.c:10)
   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
   by 0x80483B1: (within tests/doublefree)
Valgrind keeps track of the blocks allocated by your program with malloc/new, so it can know exactly whether or not the argument to free/delete is legitimate. Here, this test program has freed the same block twice. As with the illegal read/write errors, Valgrind attempts to make sense of the address freed. If, as here, the address is one which has previously been freed, you will be told that -- making duplicate frees of the same block easy to spot.
In the following example, a block allocated with new[] has wrongly been deallocated with free:
Mismatched free() / delete / delete []
   at 0x40043249: free (vg_clientfuncs.c:171)
   by 0x4102BB4E: QGArray::~QGArray(void) (tools/qgarray.cpp:149)
   by 0x4C261C41: PptDoc::~PptDoc(void) (include/qmemarray.h:60)
   by 0x4C261F0E: PptXml::~PptXml(void) (pptxml.cc:44)
Address 0x4BB292A8 is 0 bytes inside a block of size 64 alloc'd
   at 0x4004318C: __builtin_vec_new (vg_clientfuncs.c:152)
   by 0x4C21BC15: KLaola::readSBStream(int) const (klaola.cc:314)
   by 0x4C21C155: KLaola::stream(KLaola::OLENode const *) (klaola.cc:416)
   by 0x4C21788F: OLEFilter::convert(QCString const &) (olefilter.cc:272)

The following was told to me by the KDE 3 developers. I didn't know any of it myself. They also implemented the check itself.
In C++ it's important to deallocate memory in a way compatible with how it was allocated. The deal is:
If allocated with malloc, calloc, realloc, valloc or memalign, you must deallocate with free.
If allocated with new[], you must deallocate with delete[].
If allocated with new, you must deallocate with delete.
Pascal Massimino adds the following clarification: delete[] must be called for memory allocated with new[], because the compiler stores the size of the array and the pointer-to-member to the destructor of the array's content just before the pointer actually returned. This implies a variable-sized overhead in what's returned by new or new[]. It is rather surprising how robust compilers [Ed: runtime-support libraries?] are to mismatched new/delete and new[]/delete[].
Here's an example of a system call with an invalid parameter:
#include <stdlib.h>
#include <unistd.h>

int main( void )
{
  char* arr = malloc(10);
  (void) write( 1 /* stdout */, arr, 10 );
  return 0;
}
You get this complaint ...
Syscall param write(buf) contains uninitialised or unaddressable byte(s)
   at 0x4035E072: __libc_write
   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
   by 0x80483B1: (within tests/badwrite)
   by <bogus frame pointer> ???
Address 0x3807E6D0 is 0 bytes inside a block of size 10 alloc'd
   at 0x4004FEE6: malloc (ut_clientmalloc.c:539)
   by 0x80484A0: main (tests/badwrite.c:6)
   by 0x402A6E5E: __libc_start_main (libc-start.c:129)
   by 0x80483B1: (within tests/badwrite)
... because the program has tried to write uninitialised junk from the malloc'd block to the standard output.
Finally, here are some other messages you might see (some only when running with -v):
More than 50 errors detected. Subsequent errors
will still be recorded, but in less detail than before.
More than 300 errors detected. I'm not reporting any more.
Final error counts may be inaccurate. Go fix your
program!
Warning: client switching stacks?
Warning: client attempted to close Valgrind's logfile fd <number>
Use the --logfile-fd=<number> option to specify a different logfile file-descriptor number.
Warning: noted but unhandled ioctl <number>
Valgrind observed a call to one of the vast family of ioctl system calls, but did not modify its memory status info (because I have not yet got round to it). The call will still have gone through, but you may get spurious errors after this as a result of the non-update of the memory info.
Warning: set address range perms: large range <number>
The suppressions file read at startup is, by default, $PREFIX/lib/valgrind/default.supp. You can ask to add suppressions from another file by specifying --suppressions=/path/to/file.supp.
Each suppression has the following components:
The nature of the error to suppress. Either: Value1, Value2, Value4 or Value8, meaning an uninitialised-value error when using a value of 1, 2, 4 or 8 bytes. Or Cond (or its old name, Value0), meaning use of an uninitialised CPU condition code. Or: Addr1, Addr2, Addr4 or Addr8, meaning an invalid address during a memory access of 1, 2, 4 or 8 bytes respectively. Or Param, meaning an invalid system call parameter error. Or Free, meaning an invalid or mismatching free. Or PThread, meaning any kind of complaint to do with the PThreads API.
For Free errors, the name of the function doing the freeing (eg, free, __builtin_vec_delete, etc).
Locations may be either names of shared objects or wildcards matching function names. They begin obj: and fun: respectively. Function and object names to match against may use the wildcard characters * and ?.
A suppression only suppresses an error when the error matches all the
details in the suppression. Here's an example:
{
   __gconv_transform_ascii_internal/__mbrtowc/mbtowc
   Value4
   fun:__gconv_transform_ascii_internal
   fun:__mbr*toc
   fun:mbtowc
}
What it means is: suppress a use-of-uninitialised-value error, when the data size is 4, when it occurs in the function __gconv_transform_ascii_internal, when that is called from any function of name matching __mbr*toc, when that is called from mbtowc. It doesn't apply under any other circumstances. The string by which this suppression is identified to the user is __gconv_transform_ascii_internal/__mbrtowc/mbtowc.
Another example:
{
   libX11.so.6.2/libX11.so.6.2/libXaw.so.7.0
   Value4
   obj:/usr/X11R6/lib/libX11.so.6.2
   obj:/usr/X11R6/lib/libX11.so.6.2
   obj:/usr/X11R6/lib/libXaw.so.7.0
}
Suppress any size-4 uninitialised-value error which occurs anywhere
in libX11.so.6.2, when called from anywhere in the same
library, when called from anywhere in libXaw.so.7.0. The
inexact specification of locations is regrettable, but is about all
you can hope for, given that the X11 libraries shipped with Red Hat
7.2 have had their symbol tables removed.
Note -- since the above two examples did not make it clear -- that
you can freely mix the obj: and fun:
styles of description within a single suppression record.
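For instance, a hypothetical suppression mixing both styles might look like this (the suppression name, function pattern and library path are all invented for illustration):

```
{
   my_prog/libfoo_cond
   Cond
   fun:do_stuff_*
   obj:/usr/lib/libfoo.so.1
}
```

This would suppress uninitialised-condition-code errors occurring in any function whose name matches do_stuff_*, when called from anywhere in libfoo.so.1.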
For your convenience, a subset of these so-called client requests is provided to allow you to tell Valgrind facts about the behaviour of your program, and conversely to make queries. In particular, your program can tell Valgrind about changes in memory range permissions that Valgrind would not otherwise know about, and so allows clients to get Valgrind to do arbitrary custom checks.
Clients need to include the header file valgrind.h
to
make this work. The macros therein have the magical property that
they generate code in-line which Valgrind can spot. However, the code
does nothing when not run on Valgrind, so you are not forced to run
your program on Valgrind just because you use the macros in this file.
Also, you are not required to link your program with any extra
supporting libraries.
A brief description of the available macros:
VALGRIND_MAKE_NOACCESS
,
VALGRIND_MAKE_WRITABLE
and
VALGRIND_MAKE_READABLE
. These mark address
ranges as completely inaccessible, accessible but containing
undefined data, and accessible and containing defined data,
respectively. Subsequent errors may have their faulting
addresses described in terms of these blocks. Returns a
"block handle". Returns zero when not run on Valgrind.
VALGRIND_DISCARD
: At some point you may want
Valgrind to stop reporting errors in terms of the blocks
defined by the previous three macros. To do this, the above
macros return a small-integer "block handle". You can pass
this block handle to VALGRIND_DISCARD
. After
doing so, Valgrind will no longer be able to relate
addressing errors to the user-defined block associated with
the handle. The permissions settings associated with the
handle remain in place; this just affects how errors are
reported, not whether they are reported. Returns 1 for an
invalid handle and 0 for a valid handle (although passing
invalid handles is harmless). Always returns 0 when not run
on Valgrind.
VALGRIND_CHECK_NOACCESS
,
VALGRIND_CHECK_WRITABLE
and
VALGRIND_CHECK_READABLE
: check immediately
whether or not the given address range has the relevant
property, and if not, print an error message. Also, for the
convenience of the client, returns zero if the relevant
property holds; otherwise, the returned value is the address
of the first byte for which the property is not true.
Always returns 0 when not run on Valgrind.
VALGRIND_CHECK_DEFINED: a quick and easy way
to find out whether Valgrind thinks a particular variable
(lvalue, to be precise) is addressible and defined. Prints
an error message if not. Returns no value.
VALGRIND_MAKE_NOACCESS_STACK
: a highly
experimental feature. Similarly to
VALGRIND_MAKE_NOACCESS
, this marks an address
range as inaccessible, so that subsequent accesses to an
address in the range give an error. However, this macro
does not return a block handle. Instead, all annotations
created like this are reviewed at each client ret
(subroutine return) instruction, and those which now
describe an address range below the client's stack
pointer register (%esp) are automatically
deleted.
In other words, this macro allows the client to tell Valgrind about red-zones on its own stack. Valgrind automatically discards this information when the stack retreats past such blocks. Beware: hacky and flaky, and probably interacts badly with the new pthread support.
RUNNING_ON_VALGRIND
: returns 1 if running on
Valgrind, 0 if running on the real CPU.
VALGRIND_DO_LEAK_CHECK
: run the memory leak detector
right now. Returns no value. I guess this could be used to
incrementally check for leaks between arbitrary places in the
program's execution. Warning: not properly tested!
VALGRIND_DISCARD_TRANSLATIONS
: discard translations
of code in the specified address range. Useful if you are
debugging a JITter or some other dynamic code generation system.
After this call, attempts to execute code in the invalidated
address range will cause valgrind to make new translations of that
code, which is probably the semantics you want. Note that this is
implemented naively, and involves checking all 200191 entries in
the translation table to see if any of them overlap the specified
address range. So try not to call it often, or performance will
nosedive. Note that you can be clever about this: you only need
to call it when an area which previously contained code is
overwritten with new code. You can choose to write code into
fresh memory, and just call this occasionally to discard large
chunks of old code all at once.
Warning: minimally tested, especially for the cache simulator.
It works as follows: threaded apps are (dynamically) linked against
libpthread.so
. Usually this is the one installed with
your Linux distribution. Valgrind, however, supplies its own
libpthread.so
and automatically connects your program to
it instead.
The fake libpthread.so
and Valgrind cooperate to
implement a user-space pthreads package. This approach avoids the
horrible implementation problems of implementing a truly
multiprocessor version of Valgrind, but it does mean that threaded
apps run only on one CPU, even if you have a multiprocessor machine.
Valgrind schedules your threads in a round-robin fashion, with all threads having equal priority. It switches threads every 50000 basic blocks (typically around 300000 x86 instructions), which means you'll get a much finer interleaving of thread executions than when run natively. This in itself may cause your program to behave differently if you have some kind of concurrency, critical race, locking, or similar, bugs.
The current (valgrind-1.0 release) state of pthread support is as follows:
pthread_once
, reader-writer locks, semaphores,
cleanup stacks, cancellation and thread detaching currently work.
Various attribute-like calls are handled but ignored; you get a
warning message.
write, read, nanosleep, sleep, select, poll, recvmsg and accept.
pthread_sigmask, pthread_kill, sigwait and raise are now implemented.
Each thread has its own signal mask, as POSIX requires.
It's a bit kludgey -- there's a system-wide pending signal set,
rather than one for each thread. But hey.
Valgrind uses the standard Unix ./configure,
make, make install mechanism, and I have
attempted to ensure that it works on machines with kernel 2.2 or 2.4
and glibc 2.1.X or 2.2.X. I don't think there is much else to say.
There are no options apart from the usual --prefix that
you should give to ./configure.
The configure script tests the version of the X server
currently indicated by $DISPLAY. This is a
known bug. The intention was to detect the version of the current
XFree86 client libraries, so that correct suppressions could be
selected for them, but instead the test checks the server version.
This is just plain wrong.
If you are building a binary package of Valgrind for distribution,
please read README_PACKAGERS
. It contains some important
information.
Apart from that there is no excitement here. Let me know if you have build problems.
See Section 4 for the known limitations of Valgrind, and for a list of programs which are known not to work on it.
The translator/instrumentor has a lot of assertions in it. They are permanently enabled, and I have no plans to disable them. If one of these breaks, please mail me!
If you get an assertion failure on the expression
chunkSane(ch)
in vg_free()
in
vg_malloc.c
, this may have happened because your program
wrote off the end of a malloc'd block, or before its beginning.
Valgrind should have emitted a proper message to that effect before
dying in this way. This is a known problem which I should fix.
As of version 1.0.4, there is a FAQ.txt
in the source
distribution. This might help in some common problem situations.
Each byte in the system therefore has 8 V bits which follow it wherever it goes. For example, when the CPU loads a word-size item (4 bytes) from memory, it also loads the corresponding 32 V bits from a bitmap which stores the V bits for the process' entire address space. If the CPU should later write the whole or some part of that value to memory at a different address, the relevant V bits will be stored back in the V-bit bitmap.
In short, each bit in the system has an associated V bit, which
follows it around everywhere, even inside the CPU. Yes, the CPU's
(integer and %eflags
) registers have their own V bit
vectors.
Copying values around does not cause Valgrind to check for, or report on, errors. However, when a value is used in a way which might conceivably affect the outcome of your program's computation, the associated V bits are immediately checked. If any of these indicate that the value is undefined, an error is reported.
Here's an (admittedly nonsensical) example:
int i, j;
int a[10], b[10];
for (i = 0; i < 10; i++) {
   j = a[i];
   b[i] = j;
}
Valgrind emits no complaints about this, since it merely copies
uninitialised values from a[]
into b[]
, and
doesn't use them in any way. However, if the loop is changed to
for (i = 0; i < 10; i++) {
   j += a[i];
}
if (j == 77)
   printf("hello there\n");

then Valgrind will complain, at the if, that the
condition depends on uninitialised values.
Most low level operations, such as adds, cause Valgrind to use the V bits for the operands to calculate the V bits for the result. Even if the result is partially or wholly undefined, it does not complain.
Checks on definedness only occur in two places: when a value is used to generate a memory address, and where a control-flow decision needs to be made. Also, when a system call is detected, Valgrind checks the definedness of parameters as required.
If a check should detect undefinedness, an error message is issued. The resulting value is subsequently regarded as well-defined. To do otherwise would give long chains of error messages. In effect, we say that undefined values are non-infectious.
This sounds overcomplicated. Why not just check all reads from memory, and complain if an undefined value is loaded into a CPU register? Well, that doesn't work well, because perfectly legitimate C programs routinely copy uninitialised values around in memory, and we don't want endless complaints about that. Here's the canonical example. Consider a struct like this:
struct S { int x; char c; };
struct S s1, s2;

s1.x = 42;
s1.c = 'z';
s2 = s1;
The question to ask is: how large is struct S
, in
bytes? An int is 4 bytes and a char one byte, so perhaps a struct S
occupies 5 bytes? Wrong. All (non-toy) compilers I know of will
round the size of struct S
up to a whole number of words,
in this case 8 bytes. Not doing this forces compilers to generate
truly appalling code for subscripting arrays of struct
S
's.
So s1 occupies 8 bytes, yet only 5 of them will be initialised.
For the assignment s2 = s1
, gcc generates code to copy
all 8 bytes wholesale into s2
without regard for their
meaning. If Valgrind simply checked values as they came out of
memory, it would yelp every time a structure assignment like this
happened. So the more complicated semantics described above is
necessary. This allows gcc to copy s1
into
s2
any way it likes, and a warning will only be emitted
if the uninitialised values are later used.
One final twist to this story. The above scheme allows garbage to pass through the CPU's integer registers without complaint. It does this by giving the integer registers V tags, passing these around in the expected way. This is complicated and computationally expensive to do, but is necessary. Valgrind is more simplistic about floating-point loads and stores. In particular, V bits for data read as a result of floating-point loads are checked at the load instruction. So if your program uses the floating-point registers to do memory-to-memory copies, you will get complaints about uninitialised values. Fortunately, I have not yet encountered a program which (ab)uses the floating-point registers in this way.
As described above, every bit in memory or in the CPU has an associated valid-value (V) bit. In addition, all bytes in memory, but not in the CPU, have an associated valid-address (A) bit. This indicates whether or not the program can legitimately read or write that location. It does not give any indication of the validity of the data at that location -- that's the job of the V bits -- only whether or not the location may be accessed.
Every time your program reads or writes memory, Valgrind checks the A bits associated with the address. If any of them indicate an invalid address, an error is emitted. Note that the reads and writes themselves do not change the A bits, only consult them.
So how do the A bits get set/cleared? Like this:
This apparently strange choice reduces the amount of confusing information presented to the user. It avoids the unpleasant phenomenon in which memory is read from a place which is both unaddressible and contains invalid values, and, as a result, you get not only an invalid-address (read/write) error, but also a potentially large set of uninitialised-value errors, one for every time the value is used.
There is a hazy boundary case to do with multi-byte loads from
addresses which are partially valid and partially invalid. See the
description of the --partial-loads-ok flag for details.
Under the hood, dealing with signals is a real pain, and Valgrind's simulation leaves much to be desired. If your program does way-strange stuff with signals, bad things may happen. If so, let me know. I don't promise to fix it, but I'd at least like to be aware of it.
For each such block, Valgrind scans the entire address space of the process, looking for pointers to the block. One of three situations may result:
The precise area of memory in which Valgrind searches for pointers is: all naturally-aligned 4-byte words for which all A bits indicate addressibility and all V bits indicate that the stored value is actually valid.
Valgrind will run x86-GNU/Linux ELF dynamically linked binaries, on a kernel 2.2.X or 2.4.X system, subject to the following constraints:
libpthread.so
, so that Valgrind can
substitute its own implementation at program startup time. If
you're statically linked against it, things will fail
badly.
__pthread_clock_gettime
and
__pthread_clock_settime
. This appears to be due to
/lib/librt-2.2.5.so
needing them. Unfortunately I
do not understand enough about this problem to fix it properly,
and I can't reproduce it on my test RedHat 7.3 system. Please
mail me if you have more information / understanding.
-fno-builtin-strlen
in
the meantime. Or use an earlier gcc.
The dynamic linker allows each .so in the process image to have an initialisation function which is run before main(). It also allows each .so to have a finalisation function run after main() exits.
When valgrind.so's initialisation function is called by the dynamic linker, the synthetic CPU starts up. The real CPU remains locked in valgrind.so for the entire rest of the program, but the synthetic CPU returns from the initialisation function. Startup of the program now continues as usual -- the dynamic linker calls all the other .so's initialisation routines, and eventually runs main(). This all runs on the synthetic CPU, not the real one, but the client program cannot tell the difference.
Eventually main() exits, so the synthetic CPU calls valgrind.so's finalisation function. Valgrind detects this, and uses it as its cue to exit. It prints summaries of all errors detected, possibly checks for memory leaks, and then exits the finalisation routine, but now on the real CPU. The synthetic CPU has now lost control -- permanently -- so the program exits back to the OS on the real CPU, just as it would have done anyway.
On entry, Valgrind switches stacks, so it runs on its own stack. On exit, it switches back. This means that the client program continues to run on its own stack, so we can switch back and forth between running it on the simulated and real CPUs without difficulty. This was an important design decision, because it makes it easy (well, significantly less difficult) to debug the synthetic CPU.
Valgrind no longer directly supports detection of self-modifying code. Such checking is expensive, and in practice (fortunately) almost no applications need it. However, to help people who are debugging dynamic code generation systems, there is a Client Request (basically a macro you can put in your program) which directs Valgrind to discard translations in a given address range. So Valgrind can still work in this situation provided the client tells it when code has become out-of-date and needs to be retranslated.
The JITter translates basic blocks -- blocks of straight-line-code -- as single entities. To minimise the considerable difficulties of dealing with the x86 instruction set, x86 instructions are first translated to a RISC-like intermediate code, similar to sparc code, but with an infinite number of virtual integer registers. Initially each insn is translated separately, and there is no attempt at instrumentation.
The intermediate code is improved, mostly so as to try and cache the simulated machine's registers in the real machine's registers over several simulated instructions. This is often very effective. Also, we try to remove redundant updates of the simulated machine's condition-code register.
The intermediate code is then instrumented, giving more intermediate code. There are a few extra intermediate-code operations to support instrumentation; it is all refreshingly simple. After instrumentation there is a cleanup pass to remove redundant value checks.
This gives instrumented intermediate code which mentions arbitrary numbers of virtual registers. A linear-scan register allocator is used to assign real registers and possibly generate spill code. All of this is still phrased in terms of the intermediate code. This machinery is inspired by the work of Reuben Thomas (Mite).
Then, and only then, is the final x86 code emitted. The intermediate code is carefully designed so that x86 code can be generated from it without need for spare registers or other inconveniences.
The translations are managed using a traditional LRU-based caching scheme. The translation cache has a default size of about 14MB.
When such a signal arrives, Valgrind's own handler catches it, and notes the fact. At a convenient safe point in execution, Valgrind builds a signal delivery frame on the client's stack and runs its handler. If the handler longjmp()s, there is nothing more to be said. If the handler returns, Valgrind notices this, zaps the delivery frame, and carries on where it left off before delivering the signal.
The purpose of this nonsense is that setting signal handlers essentially amounts to giving callback addresses to the Linux kernel. We can't allow this to happen, because if it did, signal handlers would run on the real CPU, not the simulated one. This means the checking machinery would not operate during the handler run, and, worse, memory permissions maps would not be updated, which could cause spurious error reports once the handler had returned.
An even worse thing would happen if the signal handler longjmp'd rather than returned: Valgrind would completely lose control of the client program.
Upshot: we can't allow the client to install signal handlers directly. Instead, Valgrind must catch, on behalf of the client, any signal the client asks to catch, and must deliver it to the client on the simulated CPU, not the real one. This involves considerable gruesome fakery; see vg_signals.c for details.
sewardj@phoenix:~/newmat10$ ~/Valgrind-6/valgrind -v ./bogon
==25832== Valgrind 0.10, a memory error detector for x86 RedHat 7.1.
==25832== Copyright (C) 2000-2001, and GNU GPL'd, by Julian Seward.
==25832== Startup, with flags:
==25832== --suppressions=/home/sewardj/Valgrind/redhat71.supp
==25832== reading syms from /lib/ld-linux.so.2
==25832== reading syms from /lib/libc.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libgcc_s.so.0
==25832== reading syms from /lib/libm.so.6
==25832== reading syms from /mnt/pima/jrs/Inst/lib/libstdc++.so.3
==25832== reading syms from /home/sewardj/Valgrind/valgrind.so
==25832== reading syms from /proc/self/exe
==25832== loaded 5950 symbols, 142333 line number locations
==25832==
==25832== Invalid read of size 4
==25832==    at 0x8048724: _ZN10BandMatrix6ReSizeEiii (bogon.cpp:45)
==25832==    by 0x80487AF: main (bogon.cpp:66)
==25832==    by 0x40371E5E: __libc_start_main (libc-start.c:129)
==25832==    by 0x80485D1: (within /home/sewardj/newmat10/bogon)
==25832==    Address 0xBFFFF74C is not stack'd, malloc'd or free'd
==25832==
==25832== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==25832== malloc/free: in use at exit: 0 bytes in 0 blocks.
==25832== malloc/free: 0 allocs, 0 frees, 0 bytes allocated.
==25832== For a detailed leak analysis, rerun with: --leak-check=yes
==25832==
==25832== exiting, did 1881 basic blocks, 0 misses.
==25832== 223 translations, 3626 bytes in, 56801 bytes out.
The GCC folks fixed this about a week before gcc-3.0 shipped.
Also, since one instruction cache read is performed per instruction executed, you can find out how many instructions are executed per line, which can be useful for traditional profiling and test coverage.
Any feedback, bug-fixes, suggestions, etc, welcome.
-g
flag). But by contrast with normal Valgrind use, you
probably do want to turn optimisation on, since you should profile your
program as it will be normally run.
The two steps are:
cachegrind
in front of the
normal command line invocation. When the program finishes,
Valgrind will print summary cache statistics. It also collects
line-by-line information in a file cachegrind.out
.
This step should be done every time you want to collect information about a new program, a changed program, or about the same program with different input.
--auto=yes
option. You can annotate C/C++
files or assembly language files equally easily.
This step can be performed as many times as you like for each Step 2. You may want to do multiple annotations showing different information each time.
The more specific characteristics of the simulation are as follows.
--I1, --D1 and --L2 options.

Other noteworthy behaviour:
Instructions that modify a memory location (eg. inc and
dec) are counted as doing just a read, ie. a single data
reference. This may seem strange, but since the write can never cause a
miss (the read guarantees the block is in the cache) it's not very
interesting. Thus it measures not the number of times the data cache is
accessed, but the number of times a data cache miss could occur.
vg_cachesim_I1.c, vg_cachesim_D1.c,
vg_cachesim_L2.c and vg_cachesim_gen.c. We'd be
interested to hear from anyone who does.
--cachesim=yes
option to the valgrind
shell script. Alternatively, it
is probably more convenient to use the cachegrind
script.
Either way automatically turns off Valgrind's memory checking functions,
since the cache simulation is slow enough already, and you probably
don't want to do both at once.
To gather cache profiling information about the program ls
-l
, type:
cachegrind ls -l
The program will execute (slowly). Upon completion, summary statistics
that look like this will be printed:
==31751== I refs:        27,742,716
==31751== I1 misses:            276
==31751== L2 misses:            275
==31751== I1 miss rate:         0.0%
==31751== L2i miss rate:        0.0%
==31751==
==31751== D refs:        15,430,290 (10,955,517 rd + 4,474,773 wr)
==31751== D1 misses:         41,185 (    21,905 rd +    19,280 wr)
==31751== L2 misses:         23,085 (     3,987 rd +    19,098 wr)
==31751== D1 miss rate:         0.2% (       0.1%   +       0.4%)
==31751== L2d miss rate:        0.1% (       0.0%   +       0.4%)
==31751==
==31751== L2 misses:         23,360 (     4,262 rd +    19,098 wr)
==31751== L2 miss rate:         0.0% (       0.0%   +       0.4%)

Cache accesses for instruction fetches are summarised first, giving the number of fetches made (this is the number of instructions executed, which can be useful to know in its own right), the number of I1 misses, and the number of L2 instruction (L2i) misses.
Cache accesses for data follow. The information is similar to that of the
instruction fetches, except that the values are also shown split between reads
and writes (note each row's rd
and wr
values add up
to the row's total).
Combined instruction and data figures for the L2 cache follow that.
cachegrind.out. This file is human-readable, but is best
interpreted by the accompanying program vg_annotate,
described in the next section.
Things to note about the cachegrind.out file: it is
written every time valgrind --cachesim=yes or
cachegrind is run, and will overwrite any existing
cachegrind.out in the current directory.
ls -l generates a file of about 350KB. Browsing a few
files and web pages with a Konqueror built with full debugging
information generates a file of around 15 MB.

The interesting cache-simulation specific options are:
--I1=<size>,<associativity>,<line_size>
--D1=<size>,<associativity>,<line_size>
--L2=<size>,<associativity>,<line_size>
[default: uses CPUID for automagic cache configuration]

Manually specifies the I1/D1/L2 cache configuration, where
size and line_size are measured in bytes. The
three items must be comma-separated, but with no spaces, eg:

cachegrind --I1=65536,2,64
You can specify one, two or three of the I1/D1/L2 caches. Any level not
manually specified will be simulated using the configuration found in the
normal way (via the CPUID instruction, or failing that, via defaults).
vg_annotate
, it is worth widening your
window to be at least 120-characters wide if possible, as the output
lines can be quite long.
To get a function-by-function summary, run vg_annotate in
the directory containing a cachegrind.out file. The output
looks like this:
--------------------------------------------------------------------------------
I1 cache:         65536 B, 64 B, 2-way associative
D1 cache:         65536 B, 64 B, 2-way associative
L2 cache:         262144 B, 64 B, 8-way associative
Command:          concord vg_to_ucode.c
Events recorded:  Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Events shown:     Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Event sort order: Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
Threshold:        99%
Chosen for annotation:
Auto-annotation:  on
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw
--------------------------------------------------------------------------------
27,742,716 276 275 10,955,517 21,905 3,987 4,474,773 19,280 19,098  PROGRAM TOTALS
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw  file:function
--------------------------------------------------------------------------------
8,821,482 5 5 2,242,702 1,621 73 1,794,230 0 0  getc.c:_IO_getc
5,222,023 4 4 2,276,334 16 12 875,959 1 1  concord.c:get_word
2,649,248 2 2 1,344,810 7,326 1,385 . . .  vg_main.c:strcmp
2,521,927 2 2 591,215 0 0 179,398 0 0  concord.c:hash
2,242,740 2 2 1,046,612 568 22 448,548 0 0  ctype.c:tolower
1,496,937 4 4 630,874 9,000 1,400 279,388 0 0  concord.c:insert
897,991 51 51 897,831 95 30 62 1 1  ???:???
598,068 1 1 299,034 0 0 149,517 0 0  ../sysdeps/generic/lockfile.c:__flockfile
598,068 0 0 299,034 0 0 149,517 0 0  ../sysdeps/generic/lockfile.c:__funlockfile
598,024 4 4 213,580 35 16 149,506 0 0  vg_clientmalloc.c:malloc
446,587 1 1 215,973 2,167 430 129,948 14,057 13,957  concord.c:add_existing
341,760 2 2 128,160 0 0 128,160 0 0  vg_clientmalloc.c:vg_trap_here_WRAPPER
320,782 4 4 150,711 276 0 56,027 53 53  concord.c:init_hash_table
298,998 1 1 106,785 0 0 64,071 1 1  concord.c:create
149,518 0 0 149,516 0 0 1 0 0  ???:tolower@@GLIBC_2.0
149,518 0 0 149,516 0 0 1 0 0  ???:fgetc@@GLIBC_2.0
95,983 4 4 38,031 0 0 34,409 3,152 3,150  concord.c:new_word_node
85,440 0 0 42,720 0 0 21,360 0 0  vg_clientmalloc.c:vg_bogus_epilogue

First up is a summary of the annotation options:
Ir: I cache reads (ie. instructions executed)
I1mr: I1 cache read misses
I2mr: L2 cache instruction read misses
Dr: D cache reads (ie. memory reads)
D1mr: D1 cache read misses
D2mr: L2 cache data read misses
Dw: D cache writes (ie. memory writes)
D1mw: D1 cache write misses
D2mw: L2 cache data write misses
Note that D1 total misses are given by D1mr +
D1mw, and that L2 total misses are given by
I2mr + D2mr + D2mw.
--show
option.
Functions are sorted from highest Ir
counts to lowest. If two functions have identical
Ir counts, they will then be sorted by I1mr
counts, and so on. This order can be adjusted with the
--sort option.
Note that this dictates the order the functions appear. It is not
the order in which the columns appear; that is dictated by the "events
shown" line (and can be changed with the --show
option).
vg_annotate by default omits functions
that cause very low numbers of misses to avoid drowning you in
information. In this case, vg_annotate summarises the
functions that account for 99% of the Ir counts;
Ir is chosen as the threshold event since it is the
primary sort event. The threshold can be adjusted with the
--threshold option.
--auto=yes
option. In this case no.
cachegrind
.
Then follows function-by-function statistics. Each function is
identified by a file_name:function_name
pair. If a column
contains only a dot it means the function never performs
that event (eg. the third row shows that strcmp()
contains no instructions that write to memory). The name
???
is used if the file name and/or function name
could not be determined from debugging information. If most of the
entries have the form ???:???
the program probably wasn't
compiled with -g
. If any code was invalidated (either due to
self-modifying code or unloading of shared objects) its counts are aggregated
into a single cost centre written as (discarded):(discarded)
.
It is worth noting that functions will come from three types of source files:

the program's own source files (concord.c in this example);
library source files (eg. getc.c);
Valgrind's own source files (eg. vg_clientmalloc.c:malloc). These are recognisable because
the filename begins with vg_, and is probably one of
vg_main.c, vg_clientmalloc.c or
vg_mylibc.c.
--auto=yes
option. To do it
manually, just specify the filenames as arguments to
vg_annotate
. For example, the output from running
vg_annotate concord.c
for our example produces the same
output as above followed by an annotated version of
concord.c
, a section of which looks like:
--------------------------------------------------------------------------------
-- User-annotated source: concord.c
--------------------------------------------------------------------------------
Ir I1mr I2mr Dr D1mr D2mr Dw D1mw D2mw

[snip]

      .  .  .      .  .  .      .  .  .  void init_hash_table(char *file_name, Word_Node *table[])
      3  1  1      .  .  .      1  0  0  {
      .  .  .      .  .  .      .  .  .     FILE *file_ptr;
      .  .  .      .  .  .      .  .  .     Word_Info *data;
      1  0  0      .  .  .      1  1  1     int line = 1, i;
      .  .  .      .  .  .      .  .  .
      5  0  0      .  .  .      3  0  0     data = (Word_Info *) create(sizeof(Word_Info));
      .  .  .      .  .  .      .  .  .
  4,991  0  0  1,995  0  0    998  0  0     for (i = 0; i < TABLE_SIZE; i++)
  3,988  1  1  1,994  0  0    997 53 52        table[i] = NULL;
      .  .  .      .  .  .      .  .  .
      .  .  .      .  .  .      .  .  .     /* Open file, check it. */
      6  0  0      1  0  0      4  0  0     file_ptr = fopen(file_name, "r");
      2  0  0      1  0  0      .  .  .     if (!(file_ptr)) {
      .  .  .      .  .  .      .  .  .        fprintf(stderr, "Couldn't open '%s'.\n", file_name);
      1  1  1      .  .  .      .  .  .        exit(EXIT_FAILURE);
      .  .  .      .  .  .      .  .  .     }
      .  .  .      .  .  .      .  .  .
165,062  1  1 73,360  0  0 91,700  0  0     while ((line = get_word(data, line, file_ptr)) != EOF)
146,712  0  0 73,356  0  0 73,356  0  0        insert(data->word, data->line, table);
      .  .  .      .  .  .      .  .  .
      4  0  0      1  0  0      2  0  0     free(data);
      4  0  0      1  0  0      2  0  0     fclose(file_ptr);
      3  0  0      2  0  0      .  .  .  }

(Although column widths are automatically minimised, a wide terminal is clearly useful.)
Each source file is clearly marked (User-annotated source
) as
having been chosen manually for annotation. If the file was found in one of
the directories specified with the -I
/--include
option, the directory and file are both given.
Each line is annotated with its event counts. Events not applicable for a line are represented by a `.'; this is useful for distinguishing between an event which cannot happen, and one which can but did not.
Sometimes only a small section of a source file is executed. To minimise uninteresting output, Valgrind only shows annotated lines and lines within a small distance of annotated lines. Gaps are marked with the line numbers so you know which part of a file the shown code comes from, eg:
(figures and code for line 704)
-- line 704 ----------------------------------------
-- line 878 ----------------------------------------
(figures and code for line 878)

The amount of context to show around annotated lines is controlled by the --context option.
To get automatic annotation, run vg_annotate --auto=yes
.
vg_annotate will automatically annotate every source file it can find that is
mentioned in the function-by-function summary. Therefore, the files chosen for
auto-annotation are affected by the --sort
and
--threshold
options. Each source file is clearly marked
(Auto-annotated source
) as being chosen automatically. Any files
that could not be found are mentioned at the end of the output, eg:
--------------------------------------------------------------------------------
The following files chosen for auto-annotation could not be found:
--------------------------------------------------------------------------------
  getc.c
  ctype.c
  ../sysdeps/generic/lockfile.c

This is quite common for library files, since libraries are usually compiled with debugging information but the source files are often not present on a system. If a file is chosen for annotation both manually and automatically, it is marked as User-annotated source.
Use the -I/--include
option to tell Valgrind where to look for
source files if the filenames found from the debugging information aren't
specific enough.
Beware that vg_annotate can take some time to digest large
cachegrind.out
files, eg. 30 seconds or more. Also beware that
auto-annotation can produce a lot of output if your program is large!
Valgrind can also annotate assembly code programs. To do this, you just need to assemble your .s files with assembler-level debug information. gcc doesn't do this, but you can use the GNU assembler with the --gstabs option to generate object files with this information, eg:
as --gstabs foo.s
You can then profile and annotate source files in the same way as for C/C++
programs.
vg_annotate options

-h, --help
-v, --version
Help and version, as usual.
--sort=A,B,C [default: order in cachegrind.out]
Specifies the events upon which the sorting of the function-by-function
entries will be based. Useful if you want to concentrate on eg. I cache
misses (--sort=I1mr,I2mr
), or D cache misses
(--sort=D1mr,D2mr
), or L2 misses
(--sort=D2mr,I2mr
).
--show=A,B,C [default: all, using order in cachegrind.out]
Specifies which events to show (and the column order). Default is to use
all present in the cachegrind.out
file (and use the order in
the file).
--threshold=X [default: 99%]
Sets the threshold for the function-by-function summary. Functions are
shown that account for more than X% of the primary sort event. If
auto-annotating, also affects which files are annotated.
Note: thresholds can be set for more than one event by appending a colon and a number to events in the --sort option (no spaces, though). E.g. to see the functions that cover 99% of L2 read misses and 99% of L2 write misses, use this option:
--sort=D2mr:99,D2mw:99
--auto=no [default]
--auto=yes
When enabled, automatically annotates every file mentioned in the function-by-function summary that can be found. Also gives a list of those that couldn't be found.
--context=N [default: 8]
Print N lines of context before and after each annotated line. Avoids printing large sections of source files that were not executed. Use a large number (eg. 10,000) to show all source lines.
-I=<dir>, --include=<dir> [default: empty string]
Adds a directory to the list in which to search for files. Multiple -I/--include options can be given to add multiple directories.
A warning is issued if a source file is more recent than the cachegrind.out file. This is because the information in cachegrind.out is only recorded with line numbers, so if the line numbers change at all in the source (eg. lines added, deleted, swapped), any annotations will be incorrect.
A warning is also issued if the debug information mentions line numbers past the end of a source file; this can happen if the file changed after the cachegrind.out file was recorded. If this happens, the figures for the bogus lines are printed anyway (clearly marked as bogus) in case they are important.
  1    0    0   .    .    .   .    .    .       leal -12(%ebp),%eax
  1    0    0   .    .    .   1    0    0       movl %eax,84(%ebx)
  2    0    0   0    0    0   1    0    0       movl $1,-20(%ebp)
  .    .    .   .    .    .   .    .    .       .align 4,0x90
  1    0    0   .    .    .   .    .    .       movl $.LnrB,%eax
  1    0    0   .    .    .   1    0    0       movl %eax,-16(%ebp)

How can the third instruction be executed twice when the others are executed only once? As it turns out, it isn't. Here's a dump of the executable, using objdump -d:
 8048f25:  8d 45 f4                lea    0xfffffff4(%ebp),%eax
 8048f28:  89 43 54                mov    %eax,0x54(%ebx)
 8048f2b:  c7 45 ec 01 00 00 00    movl   $0x1,0xffffffec(%ebp)
 8048f32:  89 f6                   mov    %esi,%esi
 8048f34:  b8 08 8b 07 08          mov    $0x8078b08,%eax
 8048f39:  89 45 f0                mov    %eax,0xfffffff0(%ebp)

Notice the extra
mov %esi,%esi
instruction. Where did this
come from? The GNU assembler inserted it to serve as the two bytes of
padding needed to align the movl $.LnrB,%eax
instruction on
a four-byte boundary, but pretended it didn't exist when adding debug
information. Thus when Valgrind reads the debug info it thinks that the
movl $0x1,0xffffffec(%ebp)
instruction covers the address
range 0x8048f2b--0x8048f33 by itself, and attributes the counts for the
mov %esi,%esi
to it.
If a function inline_me()
is defined in
foo.h
and inlined in the functions f1()
,
f2()
and f3()
in bar.c
, there will
not be a foo.h:inline_me()
function entry. Instead, there
will be separate function entries for each inlining site, ie.
foo.h:f1()
, foo.h:f2()
and
foo.h:f3()
. To find the total counts for
foo.h:inline_me()
, add up the counts from each entry.
The reason for this is that although the debug info output by gcc
indicates the switch from bar.c
to foo.h
, it
doesn't indicate the name of the function in foo.h
, so
Valgrind keeps using the old one.
Sometimes the same source file is referred to by two different names in the debug information, eg. an absolute and a relative one such as /home/user/proj/proj.h
and ../proj.h
. In this
case, if you use auto-annotation, the file will be annotated twice with
the counts split between the two.
Files with more than 65,535 lines cause problems, because the line number in struct nlist, defined in a.out.h under Linux, is only a 16-bit value. Valgrind can handle some files with more than 65,535 lines
correctly by making some guesses to identify line number overflows. But
some cases are beyond it, in which case you'll get a warning message
explaining that annotations for the file might be incorrect.
If you compile some files with -g and some without, some
events that take place in a file without debug info could be attributed
to the last line of a file with debug info (whichever one gets placed
before the non-debug-info file in the executable).
Note: stabs is not an easy format to read. If you come across bizarre annotations that look like they might be caused by a bug in the stabs reader, please let us know.
Valgrind replaces the program's allocation functions with its own; its malloc() will allocate memory in different ways to the standard malloc()
, which could warp the results.
The instructions bts
, btr
and btc
will incorrectly be counted as doing a data read if both the arguments
are registers, eg:
btsl %eax, %edx
This should only happen rarely.
FPU instructions that access large amounts of state (eg. fsave) are treated as though they only access 16 bytes.
These instructions seem to be rare so hopefully this won't affect
accuracy much.
Results are sensitive to the size of the valgrind.so file, the size of the program being profiled, and even the length of its name; any of these can perturb the results. Variations will be small, but don't expect perfectly repeatable results if your program changes at all. While these factors mean you shouldn't trust the results to be super-accurate, hopefully they should be close enough to be useful.