Copyright (C) 1995-2004 The University of Melbourne.
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided also that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions.
This guide describes the compilation environment of Mercury -- how to build and debug Mercury programs.
This document describes the compilation environment of Mercury. It describes how to use mmc, the Mercury compiler; how to use mmake, the "Mercury make" program, a tool built on top of ordinary or GNU make to simplify the handling of Mercury programs; how to use mdb, the Mercury debugger; and how to use mprof, the Mercury profiler.

We strongly recommend that programmers use mmake rather than invoking mmc directly, because mmake is generally easier to use and avoids unnecessary recompilation.
Mercury source files must be named *.m. Each Mercury source file should contain a single Mercury module whose module name should be the same as the filename without the .m extension.
The Mercury implementation uses a variety of intermediate files, which
are described below. But all you really need to know is how to name
source files. For historical reasons, the default behaviour is for
intermediate files to be created in the current directory, but if you
use the --use-subdirs option to mmc or mmake, all these intermediate files will be created in a Mercury subdirectory, where you can happily ignore them. Thus you may wish to skip the rest of this chapter.
In cases where the source file name and module name don't match, the names for intermediate files are based on the name of the module from which they are derived, not on the source file name.
Files ending in .int, .int0, .int2 and .int3 are interface files; these are generated automatically by the compiler, using the --make-interface (or --make-int), --make-private-interface (or --make-priv-int), and --make-short-interface (or --make-short-int) options.
Files ending in .opt are interface files used in inter-module optimization, and are created using the --make-optimization-interface (or --make-opt-int) option. Similarly, files ending in .trans_opt are interface files used in transitive inter-module optimization, and are created using the --make-transitive-optimization-interface (or --make-trans-opt-int) option.
Since the interface of a module changes less often than its implementation, the .int, .int0, .int2, .int3, .opt, and .trans_opt files will remain unchanged on many compilations. To avoid unnecessary recompilations of the clients of the module, the timestamps on these files are updated only if their contents change.
The .date, .date0, .date3, .optdate, and .trans_opt_date files associated with a module are used as timestamp files; they are used when deciding whether the interface files need to be regenerated.
The .c_date, .il_date, .java_date, .s_date and .pic_s_date files perform a similar function for .c, .il, .java, .s and .pic_s files respectively. When smart recompilation (see Auxiliary output options) works out that a module does not need to be recompiled, the timestamp file for the target file is updated, and the timestamp of the target file is left unchanged.
.used files contain dependency information for smart recompilation (see Auxiliary output options).
Files ending in .d are automatically-generated Makefile fragments which contain the dependencies for a module. Files ending in .dep are automatically-generated Makefile fragments which contain the rules for an entire program. Files ending in .dv are automatically-generated Makefile fragments which contain variable definitions for an entire program.
As usual, .c files are C source code, and .o files are object code. In addition, .pic_o files are object code files that contain position-independent code (PIC). .lpic_o files are object code files that can be linked with shared libraries, but don't necessarily contain position-independent code themselves. .mh and .mih files are C header files generated by the Mercury compiler; the non-standard extensions are necessary to avoid conflicts with system header files. .s files and .pic_s files are assembly language. .java, .class and .jar files are Java source code, Java bytecode and Java archives respectively. .il files are Intermediate Language (IL) files for the .NET Common Language Runtime. Files ending in .rlo are Aditi-RL bytecode files, which are executed by the Aditi deductive database system (see Using Aditi).
Following a long Unix tradition, the Mercury compiler is called mmc (for "Melbourne Mercury Compiler"). Some of its options (e.g. -c, -o, and -I) have a similar meaning to that in other Unix compilers.

Arguments to mmc may be either file names (ending in .m), or module names, with . (rather than __ or :) as the module qualifier. For a module name such as foo.bar.baz, the compiler will look for the source in files foo.bar.baz.m, bar.baz.m, and baz.m, in that order.
Note that if the file name does not include all the module qualifiers (e.g. if it is bar.baz.m or baz.m rather than foo.bar.baz.m), then the module name in the :- module declaration for that module must be fully qualified.
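For instance, if the sub-module foo.bar.baz from the example above is stored in the file baz.m, then that file must begin with the fully qualified declaration:

```
:- module foo.bar.baz.
```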
To make the compiler look in another file for a module, use mmc -f sources-files to generate a mapping from module name to file name, where sources-files is the list of source files in the directory (see Output options).
To compile a program which consists of just a single source file,
use the command
mmc filename.m
Unlike traditional Unix compilers, however, mmc will put the executable into a file called filename, not a.out.
For programs that consist of more than one source file, we strongly recommend that you use Mmake (see Using Mmake). Mmake will perform all the steps listed below, using automatic dependency analysis to ensure that things are done in the right order, and that steps are not repeated unnecessarily. If you use Mmake, then you don't need to understand the details of how the Mercury implementation goes about building programs. Thus you may wish to skip the rest of this chapter.
To compile a source file to object code without creating an executable,
use the command
mmc -c filename.m
mmc will put the object code into a file called module.o, where module is the name of the Mercury module defined in filename.m. It will also leave the intermediate C code in a file called module.c.
If the source file contains nested modules, then each sub-module will get
compiled to separate C and object files.
Before you can compile a module,
you must make the interface files
for the modules that it imports (directly or indirectly).
You can create the interface files for one or more source files
using the following commands:
mmc --make-short-int filename1.m filename2.m ...
mmc --make-priv-int filename1.m filename2.m ...
mmc --make-int filename1.m filename2.m ...
If you are going to compile with --intermodule-optimization
enabled,
then you also need to create the optimization interface files.
mmc --make-opt-int filename1.m filename2.m ...
If you are going to compile with --transitive-intermodule-optimization
enabled, then you also need to create the transitive optimization files.
mmc --make-trans-opt filename1.m filename2.m ...
Given that you have made all the interface files,
one way to create an executable for a multi-module program
is to compile all the modules at the same time
using the command
mmc filename1.m filename2.m ...
This will by default put the resulting executable in filename1, but you can use the -o filename option to specify a different name for the output file, if you so desire.
The other way to create an executable for a multi-module program
is to compile each module separately using mmc -c
,
and then link the resulting object files together.
The linking is a two stage process.
First, you must create and compile an initialization file,
which is a C source file
containing calls to automatically generated initialization functions
contained in the C code of the modules of the program:
c2init module1.c module2.c ... > main_module_init.c
mgnuc -c main_module_init.c
The c2init command line must contain the name of the C file of every module in the program. The order of the arguments is not important. The mgnuc command is the Mercury GNU C compiler; it is a shell script that invokes the GNU C compiler gcc with the options appropriate for compiling the C programs generated by Mercury.
You then link the object code of each module
with the object code of the initialization file to yield the executable:
ml -o main-module module1.o module2.o ... main_module_init.o
ml, the Mercury linker, is another shell script that invokes a C compiler with options appropriate for Mercury, this time for linking. ml also pipes any error messages from the linker through mdemangle, the Mercury symbol demangler, so that error messages refer to predicate and function names from the Mercury source code rather than to the names used in the intermediate C code.

The above command puts the executable in the file main-module. The same command line without the -o option would put the executable into the file a.out.
mmc and ml both accept a -v (verbose) option. You can use that option to see what is actually going on. For the full set of options of mmc, see Invocation.
Once you have created an executable for a Mercury program,
you can go ahead and execute it. You may however wish to specify
certain options to the Mercury runtime system.
The Mercury runtime accepts options via the MERCURY_OPTIONS environment variable.
The most useful of these are the options that set the size of the stacks.
(For the full list of available options, see Environment.)
The det stack and the nondet stack
are allocated fixed sizes at program start-up.
The default size is 4096k for the det stack and 128k for the nondet stack,
but these can be overridden with the --detstack-size and --nondetstack-size options, whose arguments are the desired sizes of the det and nondet stacks respectively, in units of kilobytes.
On operating systems that provide the appropriate support,
the Mercury runtime will ensure that stack overflow
is trapped by the virtual memory system.
With conservative garbage collection (the default), the heap will start out with a zero size, and will be dynamically expanded as needed. When not using conservative garbage collection, the heap has a fixed size like the stacks. The default size is 4 Mb, but this can be overridden with the --heap-size option.
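For example, to run a program (here a hypothetical executable ./hello) with an 8192k det stack and a 256k nondet stack, you could set MERCURY_OPTIONS in the shell before invoking it; a sketch:

```shell
# Stack sizes are in kilobytes; the option names come from the
# Environment section's list of runtime options.
MERCURY_OPTIONS="--detstack-size 8192 --nondetstack-size 256"
export MERCURY_OPTIONS
echo "$MERCURY_OPTIONS"
# ./hello arg1 arg2 ...   (the runtime reads MERCURY_OPTIONS at start-up)
```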
Mmake, short for "Mercury Make", is a tool for building Mercury programs that is built on top of ordinary or GNU Make. With Mmake, building even a complicated Mercury program consisting of a number of modules is as simple as

mmc -f source-files
mmake main-module.depend
mmake main-module
Mmake only recompiles those files that need to be recompiled,
based on automatically generated dependency information.
Most of the dependencies are stored in .d files that are automatically recomputed every time you recompile, so they are never out-of-date. A little bit of the dependency information is stored in .dep and .dv files which are more expensive to recompute. The mmake main-module.depend command which recreates the main-module.dep and main-module.dv files needs to be repeated only when you add or remove a module from your program, and there is no danger of getting an inconsistent executable if you forget this step -- instead you will get a compile or link error.

The mmc -f step above is only required if there are any source files for which the file name does not match the module name. mmc -f generates a file Mercury.modules containing a mapping from module name to source file. The Mercury.modules file must be updated when a source file for which the file name does not match the module name is added to or removed from the directory.
mmake allows you to build more than one program in the same directory. Each program must have its own .dep and .dv files, and therefore you must run mmake program.depend for each program. The Mercury.modules file is used for all programs in the directory.
If there is a file called Mmake or Mmakefile in the current directory, Mmake will include that file in its automatically-generated Makefile. The Mmake file can override the default values of various variables used by Mmake's builtin rules, or it can add additional rules, dependencies, and actions.
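A small Mmake file might, for instance, override a variable and add an extra rule; in this illustrative sketch the module name hello and the dist action are assumptions, not built-in conventions:

```
MCFLAGS = -O5 --intermodule-optimization

dist: hello
	tar -cf hello.tar *.m Mmakefile
```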
Mmake's builtin rules are defined by the file prefix/lib/mercury/mmake/Mmake.rules (where prefix is /usr/local/mercury-version by default, and version is the version number, e.g. 0.6), as well as by the rules and variables in the automatically-generated .dep and .dv files.
These rules define the following targets:

main-module.depend
Creates the files main-module.dep and main-module.dv from main-module.m and the modules it imports. This step must be performed first. It is also required whenever you wish to change the level of inter-module optimization performed (see Overall optimization options).

main-module.ints
Makes the interface files for main-module and the modules it imports. (If the underlying make program does not handle transitive dependencies, this step may be necessary before attempting to make main-module or main-module.check; if the underlying make is GNU Make, this step should not be necessary.)

main-module.check
Checks main-module and the modules it imports for errors, placing any error messages in the corresponding .err files.

main-module
Compiles and links main-module, placing any error messages in the corresponding .err files.

main-module.split
Like the main-module target, but compiles with the --split-c-files option enabled. For more information about --split-c-files, see Output-level (LLDS -> C) optimization options.

main-module.javas
Compiles main-module and the modules it imports to Java source code (*.java).

main-module.classes
Compiles main-module and the modules it imports to Java bytecode (*.class).

main-module.ils
Compiles main-module and the modules it imports to Intermediate Language files (*.il) for the .NET Common Language Runtime.

libmain-module
Builds a library whose top-level module is main-module (see Libraries).

libmain-module.install
Builds and installs a library whose top-level module is main-module, in each of the grades listed in the LIBGRADES variable. It will also build and install the necessary interface files. The variable INSTALL specifies the name of the command to use to install each file, by default cp. The variable INSTALL_MKDIR specifies the command to use to create directories, by default mkdir -p. For more information, see Installing libraries.
main-module.clean
Removes the automatically generated files that contain the compiled code of the program and the error messages produced by the compiler: the .c, .s, .o, .pic_o, .prof, .no, .ql, .used, .mih, .derived_schema, .base_schema and .err files belonging to the named main-module or its imported modules. Use this target whenever you wish to change the compilation model (see Compilation model options). This target is also recommended, in addition to the mandatory main-module.depend, whenever you wish to change the level of inter-module optimization performed (see Overall optimization options).
main-module.realclean
Removes all the automatically generated files: in addition to the files removed by main-module.clean, this removes the .int, .int0, .int2, .int3, .opt, .trans_opt, .date, .date0, .date3, .optdate, .trans_opt_date, .rlo, .mh and .d files belonging to one of the modules of the program, and also the various possible executables, libraries and dependency files for the program as a whole -- main-module, main-module.split, libmain-module.a, libmain-module.so, main-module.split.a, main-module.init, main-module.dep and main-module.dv.
clean
Performs main-module.clean for every main-module for which there is a main-module.dep file in the current directory, as well as deleting the profiling files Prof.CallPair, Prof.Counts, Prof.Decl, Prof.MemWords and Prof.MemCells.
realclean
Performs main-module.realclean for every main-module for which there is a main-module.dep file in the current directory, as well as deleting the profiling files as per the clean target.
The variables used by the builtin rules (and their default values) are defined in the file prefix/lib/mercury/mmake/Mmake.vars; however, these may be overridden by user Mmake files. Some of the more useful variables are:
MAIN_TARGET
The name of the default target to build when mmake is invoked without any target explicitly named on the command line.

MC
The executable that invokes the Mercury compiler; the default is mmc.

GRADEFLAGS and EXTRA_GRADEFLAGS
Compilation model options (see Compilation model options), which are passed to all of the compilation tools (mmc, mgnuc, ml, and c2init).

MCFLAGS and EXTRA_MCFLAGS
Options to pass to the Mercury compiler. (Note that compilation model options should be specified in GRADEFLAGS, not in MCFLAGS.)

MGNUC
The executable that invokes the C compiler; the default is mgnuc.

MGNUCFLAGS and EXTRA_MGNUCFLAGS
Options to pass to the mgnuc script.

CFLAGS and EXTRA_CFLAGS
Options to pass to the C compiler.

JAVACFLAGS and EXTRA_JAVACFLAGS
Options to pass to the Java compiler.

MS_CLFLAGS and EXTRA_MS_CLFLAGS
Options to pass to the Microsoft CL compiler.

MS_CL_NOASM
Set this to :noAssembly to turn off assembly generation; leave it empty to turn on assembly generation. The default is to leave this variable empty.

ML
The executable that invokes the Mercury linker; the default is ml.

LINKAGE
Set this to shared to link with shared libraries, or static to always link statically. The default is shared. This variable only has an effect with mmc --make.

MERCURY_LINKAGE
Set this to shared to link with shared Mercury libraries, or static to always link with the static versions of Mercury libraries. The default is system dependent. This variable only has an effect with mmc --make.

MLFLAGS and EXTRA_MLFLAGS
Options to pass to the ml script. (Note that compilation model options should be specified in GRADEFLAGS, not in MLFLAGS.)

LDFLAGS and EXTRA_LDFLAGS
Options to pass to the command used to link executables (use ml --print-link-command to find out what command is used, usually the C compiler).

LD_LIBFLAGS and EXTRA_LD_LIBFLAGS
Options to pass to the command used to link shared libraries (use ml --print-shared-lib-link-command to find out what command is used, usually the C compiler or the system linker, depending on the platform).

MLLIBS and EXTRA_MLLIBS
A list of -l options specifying libraries used by the program (or library) that you are building. See Using libraries.

MLOBJS and EXTRA_MLOBJS
A list of extra object files to link into the program or library that you are building.

C2INITFLAGS and EXTRA_C2INITFLAGS
C2INITFLAGS and EXTRA_C2INITFLAGS are obsolete synonyms for MLFLAGS and EXTRA_MLFLAGS (ml and c2init take the same set of options). (Note that compilation model options and extra files to be processed by c2init should not be specified in C2INITFLAGS -- they should be specified in GRADEFLAGS and C2INITARGS, respectively.)

C2INITARGS and EXTRA_C2INITARGS
Extra files to be processed by c2init. These variables should not be used for specifying options (use MLFLAGS), since they are also used to derive extra dependency information.

EXTRA_LIBRARIES
A list of Mercury libraries used by the program (or library) that you are building. Each library should be referred to by its base name, without any lib prefix or extension. For example the library including the files libfoo.a and foo.init would be referred to as just foo. See Using libraries.

EXTRA_LIB_DIRS
A list of directories containing installed Mercury libraries; each should be of the form some-prefix/lib/mercury. See Using libraries.

INSTALL_PREFIX
The path to the root of the directory hierarchy in which libraries should be installed.

INSTALL
The command used to install each file of a library; the default is cp.

INSTALL_MKDIR
The command used to create directories when installing; the default is mkdir -p.

LIBGRADES
The list of grades in which libraries should be installed. Note that GRADEFLAGS settings will also be applied when the library is built in each of the listed grades, so you may not get what you expect if those options are not subsumed by each of the grades listed.
Other variables also exist -- see
prefix/lib/mercury/mmake/Mmake.vars
for a complete list.
If you wish to temporarily change the flags passed to an executable, rather than setting the various FLAGS variables directly, you can set an EXTRA_ variable. This is particularly intended for use where a shell script needs to call mmake and add an extra parameter, without interfering with the flag settings in the Mmakefile.

For each of the variables for which there is a version with an EXTRA_ prefix, there is also a version with an ALL_ prefix that is defined to include both the ordinary and the EXTRA_ version. If you wish to use the values of any of these variables in your Mmakefile (as opposed to setting the values), then you should use the ALL_ version.
It is also possible to override these variables on a per-file basis. For example, if you have a module called, say, bad_style.m, which triggers lots of compiler warnings, and you want to disable the warnings just for that file, but keep them for all the other modules, then you can override MCFLAGS just for that file. This is done by setting the variable MCFLAGS-bad_style, as shown here:
MCFLAGS-bad_style = --inhibit-warnings
Mmake has a few options, including --use-subdirs, --use-mmc-make, --save-makefile, --verbose, and --no-warn-undefined-vars. For details about these options, see the man page or type mmake --help.
Finally, since Mmake is built on top of Make or GNU Make, you can also make use of the features and options supported by the underlying Make. In particular, GNU Make has support for running jobs in parallel, which is very useful if you have a machine with more than one CPU.
As an alternative to Mmake, the Mercury compiler now contains a significant part of the functionality of Mmake, accessed via mmc's --make option. The advantages of mmc --make over Mmake are that there is no mmake depend step and the dependencies are more accurate. Parallel builds are not yet supported. Note that --use-subdirs is automatically enabled if you specify mmc --make.
The Mmake variables above can be used by mmc --make if they are set in a file called Mercury.options. The Mercury.options file has the same syntax as an Mmakefile, but only variable assignments and include directives are allowed. All variables in Mercury.options are treated as if they are assigned using :=. Variables may also be set in the environment, overriding settings in options files.
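For example, a Mercury.options file might contain (the module name bad_style here is illustrative):

```
# Options read by `mmc --make'.
MCFLAGS = --intermodule-optimization
MCFLAGS-bad_style = --inhibit-warnings
```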
mmc --make can be used in conjunction with Mmake. This is useful for projects which include source code written in languages other than Mercury. The --use-mmc-make Mmake option disables Mmake's Mercury-specific rules. Mmake will then process source files written in other languages, but all Mercury compilation will be done by mmc --make. The following variables can be set in the Mmakefile to control the use of mmc --make.
MERCURY_MAIN_MODULES
The main modules of the programs or libraries being built. This must be set to tell mmc --make to rebuild the targets for the main modules even if those files already exist.

MC_BUILD_FILES
A list of files which should be built with mmc --make. This should only be necessary for header files generated by the Mercury compiler which are included by the user's C source files.

MC_MAKE_FLAGS and EXTRA_MC_MAKE_FLAGS
Options to pass to mmc --make.
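Putting these together, an Mmakefile for use with --use-mmc-make might look like the following sketch; the program name myprog and the assumption that hand-written C code includes the generated header myprog.mh are purely illustrative:

```
MERCURY_MAIN_MODULES = myprog

# A generated header included by this project's hand-written C code.
MC_BUILD_FILES = myprog.mh

MC_MAKE_FLAGS = --verbose
```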
Often you will want to use a particular set of Mercury modules in more than one program. The Mercury implementation includes support for developing libraries, i.e. sets of Mercury modules intended for reuse. It allows separate compilation of libraries and, on many platforms, it supports shared object libraries.
A Mercury library is identified by a top-level module, which should contain all of the modules in that library as sub-modules. It may be as simple as this mypackage.m file:

:- module mypackage.
:- interface.
:- include_module foo, bar, baz.

This defines a module mypackage containing sub-modules mypackage.foo, mypackage.bar, and mypackage.baz.
It is also possible to build libraries of unrelated
modules, so long as the top-level module imports all
the necessary modules. For example:
:- module blah.
:- import_module fee, fie, foe, fum.

This example defines a module blah, which has no functionality of its own, and which is just used for grouping the unrelated modules fee, fie, foe, and fum.
Generally it is better style for each library to consist of a single module which encapsulates its sub-modules, as in the first example, rather than just a group of unrelated modules, as in the second example.
Generally Mmake will do most of the work of building libraries automatically. Here's a sample Mmakefile for creating a library:

MAIN_TARGET = libmypackage
depend: mypackage.depend

The Mmake target libfoo is a built-in target for creating a library whose top-level module is foo.m. The automatically generated Mmake rules for the target libfoo will create all the files needed to use the library. (You will need to run mmake foo.depend first to generate the module dependency information.)
Mmake will create static (non-shared) object libraries
and, on most platforms, shared object libraries;
however, we do not yet support the creation of dynamic link
libraries (DLLs) on Windows.
Static libraries are created using the standard tools ar
and ranlib
.
Shared libraries are created using the --make-shared-lib
option to ml
.
The automatically-generated Make rules for libmypackage
will look something like this:
libmypackage: libmypackage.a libmypackage.so \
		$(mypackage.ints) $(mypackage.int3s) \
		$(mypackage.opts) $(mypackage.trans_opts) mypackage.init

libmypackage.a: $(mypackage.os)
	rm -f libmypackage.a
	$(AR) $(ARFLAGS) libmypackage.a $(mypackage.os) $(MLOBJS)
	$(RANLIB) $(RANLIBFLAGS) mypackage.a

libmypackage.so: $(mypackage.pic_os)
	$(ML) $(MLFLAGS) --make-shared-lib -o libmypackage.so \
		$(mypackage.pic_os) $(MLPICOBJS) $(MLLIBS)

libmypackage.init:
	...

clean:
	rm -f libmypackage.a libmypackage.so
If necessary, you can override the default definitions of variables such as ML, MLFLAGS, MLPICOBJS, and MLLIBS to customize the way shared libraries are built. Similarly, AR, ARFLAGS, MLOBJS, RANLIB, and RANLIBFLAGS control the way static libraries are built. (The MLOBJS variable is supposed to contain a list of additional object files to link into the library, while the MLLIBS variable should contain a list of -l options naming other libraries used by this library. MLPICOBJS is described below.)
Note that to use a library, as well as the shared or static object library, you also need the interface files. That's why the libmypackage target builds $(mypackage.ints) and $(mypackage.int3s). If the people using the library are going to use intermodule optimization, you will also need the intermodule optimization interfaces. The libmypackage target will build $(mypackage.opts) if --intermodule-optimization is specified in your MCFLAGS variable (this is recommended). Similarly, if the people using the library are going to use transitive intermodule optimization, you will also need the transitive intermodule optimization interfaces ($(mypackage.trans_opt)). These will be built if --trans-intermod-opt is specified in your MCFLAGS variable.

In addition, with certain compilation grades, programs will need to execute some startup code to initialize the library; the mypackage.init file contains information about initialization code for the library. The libmypackage target will build this file.
On some platforms, shared objects must be created using position independent code (PIC), which requires passing some special options to the C compiler. On these platforms, Mmake will create .pic_o files, and $(mypackage.pic_os) will contain a list of the .pic_o files for the library whose top-level module is mypackage. In addition, $(MLPICOBJS) will be set to $MLOBJS with all occurrences of .o replaced with .pic_o. On other platforms, position independent code is the default, so $(mypackage.pic_os) will just be the same as $(mypackage.os), which contains a list of the .o files for that module, and $(MLPICOBJS) will be the same as $(MLOBJS).
mmake has support for alternative library directory hierarchies. These have the same structure as the prefix/lib/mercury tree, including the different subdirectories for different grades and different machine architectures. In order to support the installation of a library into such a tree, you simply need to specify (e.g. in your Mmakefile) the path prefix and the list of grades to install:

INSTALL_PREFIX = /my/install/dir
LIBGRADES = asm_fast asm_fast.gc.tr.debug
This specifies that libraries should be installed in /my/install/dir/lib/mercury, in the default grade plus asm_fast and asm_fast.gc.tr.debug. If INSTALL_PREFIX is not specified, mmake will attempt to install the library in the same place as the standard Mercury libraries. If LIBGRADES is not specified, mmake will use the Mercury compiler's default set of grades, which may or may not correspond to the actual set of grades in which the standard Mercury libraries were installed.
To actually install a library libfoo, use the mmake target libfoo.install. This also installs all the needed interface files, and (if intermodule optimisation is enabled) the relevant intermodule optimisation files. One can override the list of grades to install for a given library libfoo by setting the LIBGRADES-foo variable, or add to it by setting EXTRA_LIBGRADES-foo. The command used to install each file is specified by INSTALL; if INSTALL is not specified, cp will be used. The command used to create directories is specified by INSTALL_MKDIR; if INSTALL_MKDIR is not specified, mkdir -p will be used.
Note that currently it is not possible to set the installation prefix on a library-by-library basis.
Once a library is installed, using it is easy. Suppose the user wishes to use the library mypackage (installed in the tree rooted at /some/directory/mypackage) and the library myotherlib (installed in the tree rooted at /some/directory/myotherlib). The user need only set the following Mmake variables:

EXTRA_LIB_DIRS = /some/directory/mypackage/lib/mercury \
		/some/directory/myotherlib/lib/mercury
EXTRA_LIBRARIES = mypackage myotherlib
When using --intermodule-optimization with a library which uses the C interface, it may be necessary to add -I options to MGNUCFLAGS so that the C compiler can find any header files used by the library's C code.
Mmake will ensure that the appropriate directories are searched for the relevant interface files, module initialisation files, compiled libraries, etc.
To use a library when invoking mmc directly, use the --mld and --ml options (see Link options). You can also specify whether to link executables with the shared or static versions of Mercury libraries using --mercury-linkage shared or --mercury-linkage static (shared libraries are always linked with the shared versions of libraries).

Beware that the directory name that you must use in EXTRA_LIB_DIRS or as the argument of the --mld option is not quite the same as the name that was specified in the INSTALL_PREFIX when the library was installed -- the name needs to have /lib/mercury appended.
One can specify extra libraries to be used on a program-by-program basis. For instance, if the program foo also uses the library mylib4foo, but the other programs governed by the Mmakefile don't, then one can declare:
EXTRA_LIBRARIES-foo = mylib4foo
Libraries are handled a little differently for the Java grade. Instead of compiling object code into a static or shared library, the class files are added to a jar (Java ARchive) file of the form library-name.jar. To create or install a Java library, simply specify that you want to use the java grade, either by setting GRADE=java in your Mmakefile, or by including --java or --grade java in your GRADEFLAGS, then follow the instructions as above.

Java libraries are installed to the directory prefix/lib/mercury/lib/java. To include them in a program, in addition to the instructions above, you will need to include the installed jar file in your CLASSPATH, which you can set using --java-classpath jarfile in MCFLAGS.
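For instance, to build in the java grade and link against an installed jar, the Mmakefile settings might look like the following; the install prefix and library name mypackage are assumptions:

```
GRADE = java
MCFLAGS = --java-classpath \
	/usr/local/mercury-1.0/lib/mercury/lib/java/mypackage.jar
```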
This section gives a quick and simple guide to getting started with the debugger. The remainder of this chapter contains more detailed documentation.
To use the debugger, you must first compile your program with debugging enabled. You can do this by using the --debug option when invoking mmc, or by including GRADEFLAGS = --debug in your Mmakefile.
bash$ mmc --debug hello.m
Once you've compiled with debugging enabled, you can use the mdb command to invoke your program under the debugger:
bash$ mdb ./hello arg1 arg2 ...
Any arguments (such as arg1 arg2 ... in this example) that you pass after the program name will be given as arguments to the program.
The debugger will print a start-up message and will then show you the first trace event, namely the call to main/2:

      1:      1  1 CALL pred hello:main/2-0 (det) hello.m:13
mdb>
By hitting enter at the mdb> prompt, you can step through the execution of your program to the next trace event:
      2:      2  2 CALL pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
Hello, world
      3:      2  2 EXIT pred io:write_string/3-0 (det) io.m:2837 (hello.m:14)
mdb>
For each trace event, the debugger prints out several pieces of information. The three numbers at the start of the display are the event number, the call sequence number, and the call depth. (You don't really need to pay too much attention to those.) They are followed by the event type (e.g. CALL or EXIT). After that comes the identification of the procedure in which the event occurred, consisting of the module-qualified name of the predicate or function to which the procedure belongs, followed by its arity, mode number and determinism. This may sometimes be followed by a "path" (see Tracing of Mercury programs). At the end is the file name and line number of the called procedure and (if available) also the file name and line number of the call.
The most useful mdb
commands have single-letter abbreviations.
The alias
command will show these abbreviations:
mdb> alias
?      =>   help
EMPTY  =>   step
NUMBER =>   step
P      =>   print *
b      =>   break
c      =>   continue
d      =>   stack
f      =>   finish
g      =>   goto
h      =>   help
p      =>   print
r      =>   retry
s      =>   step
v      =>   vars
The P
or print *
command will display the values
of any live variables in scope.
The f
or finish
command can be used if you want
to skip over a call.
The b
or break
command can be used to set break-points.
The d
or stack
command will display the call stack.
The quit
command will exit the debugger.
That should be enough to get you started.
But if you have GNU Emacs installed, you should strongly
consider using the Emacs interface to mdb
-- see
the following section.
For more information about the available commands,
use the ?
or help
command, or see Debugger commands.
As well as the command-line debugger, mdb, there is also an Emacs interface to this debugger. Note that the Emacs interface only works with GNU Emacs, not with XEmacs.
With the Emacs interface, the debugger will display your source code as you trace through it, marking the line that is currently being executed, and allowing you to easily set breakpoints on particular lines in your source code. You can have separate windows for the debugger prompt, the source code being executed, and for the output of the program being executed. In addition, most of the mdb commands are accessible via menus.
To start the Emacs interface, you first need to put the following
text in the file .emacs
in your home directory,
replacing "/usr/local/mercury-1.0" with the directory
that your Mercury implementation was installed in.
(setq load-path (cons (expand-file-name "/usr/local/mercury-1.0/lib/mercury/elisp") load-path))
(autoload 'mdb "gud" "Invoke the Mercury debugger" t)
Build your program with debugging enabled, as described
in Quick overview or Preparing a program for debugging.
Then start up Emacs, e.g. using the command emacs
,
and type M-x mdb <RET>. Emacs will then prompt you for
the mdb command to invoke:
Run mdb (like this): mdb
and you should type in the name of the program that you want to debug
and any arguments that you want to pass to it:
Run mdb (like this): mdb ./hello arg1 arg2 ...
Emacs will then create several "buffers": one for the debugger prompt, one for the input and output of the program being executed, and one or more for the source files. By default, Emacs will split the display into two parts, called "windows", so that two of these buffers will be visible. You can use the command C-x o to switch between windows, and you can use the command C-x 2 to split a window into two windows. You can use the "Buffers" menu to select which buffer is displayed in each window.
If you're using X-Windows, then it is a good idea
to set the Emacs variable pop-up-frames
to t
before starting mdb, since this will cause each buffer to be
displayed in a new "frame" (i.e. a new X window).
You can set this variable interactively using the
set-variable
command, i.e.
M-x set-variable <RET> pop-up-frames <RET> t <RET>.
Or you can put (setq pop-up-frames t)
in the .emacs
file in your home directory.
For more information on buffers, windows, and frames, see the Emacs documentation.
Another useful Emacs variable is gud-mdb-directories
.
This specifies the list of directories to search for source files.
You can use a command such as
M-x set-variable <RET> gud-mdb-directories <RET> (list "/foo/bar" "../other" "/home/guest") <RET>
to set it interactively, or you can put a command like
(setq gud-mdb-directories (list "/foo/bar" "../other" "/home/guest"))
in your .emacs
file.
At each trace event, the debugger will search for the
source file corresponding to that event, first in the
same directory as the program, and then in the directories
specified by the gud-mdb-directories
variable.
It will display the source file, with the line number
corresponding to that trace event marked by
an arrow (=>
) at the start of the line.
Several of the debugger features can be accessed by moving the cursor to the relevant part of the source code and then selecting a command from the menu. You can set a break point on a line by moving the cursor to the appropriate line in your source code (e.g. with the arrow keys, or by clicking the mouse there), and then selecting the "Set breakpoint on line" command from the "Breakpoints" sub-menu of the "MDB" menu. You can set a breakpoint on a procedure by moving the cursor over the procedure name and then selecting the "Set breakpoint on procedure" command from the same menu. And you can display the value of a variable by moving the cursor over the variable name and then selecting the "Print variable" command from the "Data browsing" sub-menu of the "MDB" menu. Most of the menu commands also have keyboard short-cuts, which are displayed on the menu.
Note that mdb's context
command should not be used if
you are using the Emacs interface, otherwise the Emacs
interface won't be able to parse the file names and
line numbers that mdb outputs, and so it won't be able to
highlight the correct location in the source code.
The Mercury debugger is based on a modified version of the box model on which the four-port debuggers of most Prolog systems are based. Such debuggers abstract the execution of a program into a sequence, also called a trace, of execution events of various kinds. The four kinds of events supported by most Prolog systems (their ports) are call, exit, redo and fail.
Mercury also supports these four kinds of events, but not all events can occur for every procedure call. Which events can occur for a procedure call, and in what order, depend on the determinism of the procedure. The possible event sequences for procedures of the various determinisms are as follows.
In addition to these four event types, Mercury supports exception events. An exception event occurs when an exception has been thrown inside a procedure, and control is about to propagate this exception to the caller. An exception event can replace the final exit or fail event in the event sequences above or, in the case of erroneous procedures, can come after the call event.
Besides the event types call, exit, redo, fail and exception, which describe the interface of a call, Mercury also supports several types of events that report on what is happening internal to a call. Each of these internal event types has an associated parameter called a path. The internal event types are:
A path is a sequence of path components separated by semicolons. Each path component is one of the following:
cnum
The num'th conjunct of a conjunction.
dnum
The num'th disjunct of a disjunction.
snum
The num'th arm of a switch.
?
The condition of an if-then-else.
t
The then part of an if-then-else.
e
The else part of an if-then-else.
~
The goal inside a negation.
q
The goal inside an existential quantification.
A path describes the position of a goal
inside the body of a procedure definition.
For example, if the procedure body is a disjunction
in which each disjunct is a conjunction,
then the path d2;c3;
denotes the third conjunct
within the second disjunct.
If the third conjunct within the second disjunct is an atomic goal
such as a call or a unification,
then this will be the only goal whose path has d2;c3;
as a prefix.
If it is a compound goal,
then its components will all have paths that have d2;c3;
as a prefix,
e.g. if it is an if-then-else,
then its three components will have the paths
d2;c3;?;
, d2;c3;t;
and d2;c3;e;
.
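As a hypothetical illustration (the predicate and all names in it are invented, and the actual paths depend on the compiler's internal form of the procedure, as discussed below), a body consisting of a disjunction of conjunctions, where the third conjunct of the second disjunct is an if-then-else, would have paths like these:

```mercury
% Invented example: a disjunction whose disjuncts are conjunctions.
classify(X, Y) :-
    (
        X = 0,              % path d1;c1;
        Y = "zero"          % path d1;c2;
    ;
        X \= 0,             % path d2;c1;
        A = X * X,          % path d2;c2;
        ( A > 100 ->        % path d2;c3;?;  (the condition)
            Y = "large"     % path d2;c3;t;  (the then part)
        ;
            Y = "small"     % path d2;c3;e;  (the else part)
        )
    ).
```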
Paths refer to the internal form of the procedure definition.
When debugging is enabled
(and the option --trace-optimized
is not given),
the compiler will try to keep this form
as close as possible to the source form of the procedure,
in order to make event paths as useful as possible to the programmer.
Due to the compiler's flattening of terms,
and its introduction of extra unifications to implement calls in implied modes,
the number of conjuncts in a conjunction will frequently differ
between the source and internal form of a procedure.
This is rarely a problem, however, as long as you know about it.
Mode reordering can be a bit more of a problem,
but it can be avoided by writing single-mode predicates and functions
so that producers come before consumers.
The compiler transformation that
potentially causes the most trouble in the interpretation of goal paths
is the conversion of disjunctions into switches.
In most cases, a disjunction is transformed into a single switch,
and it is usually easy to guess, just from the events within a switch arm,
which disjunct the switch arm corresponds to.
Some cases are more complex;
for example, it is possible for a single disjunction
to be transformed into several switches,
possibly with other, smaller disjunctions inside them.
In such cases, making sense of goal paths
may require a look at the internal form of the procedure.
You can ask the compiler to generate a file
with the internal forms of the procedures in a given module
by including the options -dfinal -Dpaths
on the command line
when compiling that module.
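With Mmake, one way to request this dump for a single module foo.m is through the per-module variable described later in this chapter:

```
# Dump the internal form (with goal paths) of the procedures in foo.m.
MCFLAGS-foo = -dfinal -Dpaths
```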
When you compile a Mercury program, you can specify whether you want to be able to run the Mercury debugger on the program or not. If you do, the compiler embeds calls to the Mercury debugging system into the executable code of the program, at the execution points that represent trace events. At each event, the debugging system decides whether to give control back to the executable immediately, or whether to first give control to you, allowing you to examine the state of the computation and issue commands.
Mercury supports two broad ways of preparing a program for debugging.
The simpler way is to compile a program in a debugging grade,
which you can do directly by specifying a grade
that includes the word "debug" (e.g. asm_fast.gc.debug
),
or indirectly by specifying the --debug
grade option to the compiler, linker, and other tools
(in particular mmc
, mgnuc
, ml
, and c2init
).
If you follow this way,
and accept the default settings of the various compiler options
that control the selection of trace events (which are described below),
you will be assured of being able to get control
at every execution point that represents a potential trace event,
which is very convenient.
The two drawbacks of using a debugging grade
are the large size of the resulting executables,
and the fact that often you discover that you need to debug a big program
only after having built it in a non-debugging grade.
This is why Mercury also supports another way
to prepare a program for debugging,
one that does not require the use of a debugging grade.
With this way, you can decide, individually for each module,
which of four trace levels,
none
, shallow
, deep
, and rep
you want to compile it with:
none
A procedure compiled with trace level none
will never generate any events.
deep
A procedure compiled with trace level deep
will always generate all the events requested by the user.
By default, this is all possible events,
but you can tell the compiler that
you are not interested in some kinds of events
via compiler options (see below).
However, declarative debugging requires all events to be generated
if it is to operate properly,
so do not disable the generation of any event types
if you want to use declarative debugging.
For more details see Declarative debugging.
rep
This trace level is the same as deep,
except that a representation of the module is stored in the executable
along with the usual debugging information.
The declarative debugger can use this extra information
to help it avoid asking unnecessary questions,
so this trace level has the effect of better declarative debugging
at the cost of increased executable size.
shallow
A procedure compiled with trace level shallow
will generate interface events
if it is called from a procedure compiled with trace level deep
,
but it will never generate any internal events,
and it will not generate any interface events either
if it is called from a procedure compiled with trace level shallow
.
If it is called from a procedure compiled with trace level none
,
the way it will behave is dictated by whether
its nearest ancestor whose trace level is not none
has trace level deep
or shallow
.
The intended uses of these trace levels are as follows.
deep
You should compile a module with trace level deep
if you suspect there may be a bug in the module,
or if you think that being able to examine what happens inside that module
can help you locate a bug.
rep
You should compile a module with trace level rep
if you suspect there may be a bug in the module,
you wish to use the full power of the declarative debugger,
and you are not concerned about the size of the executable.
shallow
You should compile a module with trace level shallow
if you believe the code of the module is reliable and unlikely to have bugs,
but you still want to be able to get control at calls to and returns from
any predicates and functions defined in the module,
and if you want to be able to see the arguments of those calls.
none
You should compile a module with trace level none
only if you are reasonably confident that the module is reliable,
and if you believe that knowing what calls other modules make to this module
would not significantly benefit you in your debugging.
In general, it is a good idea for most or all modules
that can be called from modules compiled with trace level
deep
or rep
to be compiled with at least trace level shallow
.
You can control what trace level a module is compiled with by giving one of the following compiler options:
--trace shallow
This option sets the trace level to shallow.
--trace deep
This option sets the trace level to deep.
--trace rep
This option sets the trace level to rep.
--trace minimum
In debugging grades, this option sets the trace level to shallow;
in non-debugging grades, it sets the trace level to none
.
--trace default
In debugging grades, this option sets the trace level to deep;
in non-debugging grades, it sets the trace level to none
.
As the name implies, the last alternative is the default,
which is why by default you get
no debugging capability in non-debugging grades
and full debugging capability in debugging grades.
The table also shows that in a debugging grade,
no module can be compiled with trace level none
.
Important note:
If you are not using a debugging grade, but you compile some modules with
a trace level other than none,
then you must also pass the --trace
(or -t
) option
to c2init and to the Mercury linker.
If you're using Mmake, then you can do this by including --trace
in the MLFLAGS
variable.
If you're using Mmake, then you can also set the compilation options
for a single module named Module by setting the Mmake variable
MCFLAGS-Module
. For example, to compile the file
foo.m
with deep tracing, bar.m
with shallow tracing,
and everything else with no tracing, you could use the following:
MLFLAGS = --trace
MCFLAGS-foo = --trace deep
MCFLAGS-bar = --trace shallow
By default, all trace levels other than none
turn off all compiler optimizations
that can affect the sequence of trace events generated by the program,
such as inlining.
If you are specifically interested in
how the compiler's optimizations affect the trace event sequence,
you can specify the option --trace-optimized
,
which tells the compiler that it does not have to disable those optimizations.
(A small number of low-level optimizations
have not yet been enhanced to work properly in the presence of tracing,
so the compiler disables these even if --trace-optimized
is given.)
The executables of Mercury programs
by default do not invoke the Mercury debugger
even if some or all of their modules were compiled with some form of tracing,
and even if the grade of the executable is a debugging grade.
This is similar to the behaviour of executables
created by the implementations of other languages;
for example the executable of a C program compiled with -g
does not automatically invoke gdb, dbx, etc. when it is executed.
Unlike those other language implementations,
when you invoke the Mercury debugger mdb
,
you invoke it not just with the name of an executable
but with the command line you want to debug.
If something goes wrong when you execute the command
prog arg1 arg2 ...
and you want to find the cause of the problem,
you must execute the command
mdb prog arg1 arg2 ...
because you do not get a chance to specify the command line of the program later.
When the debugger starts up, as part of its initialization it executes commands from the following three sources, in order:
The file named by the MERCURY_DEBUGGER_INIT
environment variable.
Usually, mdb
sets this variable to point to a file
that provides documentation for all the debugger commands
and defines a small set of aliases.
However, if MERCURY_DEBUGGER_INIT
is already defined
when mdb
is invoked, it will leave its value unchanged.
You can use this override ability to provide alternate documentation.
If the file named by MERCURY_DEBUGGER_INIT
cannot be read,
mdb
will print a warning,
since in that case, the usual online documentation will not be available.
The file .mdbrc
in your home directory.
You can put your usual aliases and settings here.
The file .mdbrc
in the current working directory.
You can put program-specific aliases and settings here.
The operation of the Mercury debugger mdb
is based on the following concepts.
The effect of a break point depends on the state of the break point.
If the break point's state is stop,
execution will stop and user interaction will start
at any event within the procedure that matches the invocation conditions,
unless the current debugger command has specifically disabled this behaviour
(see the concept strict commands
below).
If the break point's state is print,
the debugger will print any event within the procedure
that matches the invocation conditions,
unless the current debugger command has specifically disabled this behaviour
(see the concept print level
below).
Neither of these will happen if the break point is disabled.
stop
applies.
By default, the debugger will stop at such events.
However, if the debugger is executing a strict command,
it will not stop at an event
just because a break point in the stop state applies to it.
If the debugger receives an interrupt (e.g. if the user presses control-C),
it will stop at the next event regardless of what command it is executing
at the time.
If the print level is none,
none of the stepped over events will be printed.
If the print level is all,
all the stepped over events will be printed.
If the print level is some,
the debugger will print the event only if a break point applies to the event.
Regardless of the print level, the debugger will print
any event that causes execution to stop and user interaction to start.
The default print level is some,
but this value can be overridden by the user.
The up
, down
and level
commands
can set the current environment
to refer to one of the ancestors of the current call.
This will then be the current environment until another of these commands
changes the environment yet again or execution continues to another event.
Some commands, such as break,
require a parameter that specifies a procedure.
Such a procedure specification has
the following components in the following order:
pred*
or func*
that specifies whether the procedure belongs to a predicate or a function.
module:
or module__
that specifies the name of the module that defines
the predicate or function to which the procedure belongs.
/arity
that specifies the arity of the predicate or function
to which the procedure belongs.
-modenum
that specifies the mode number of the procedure
within the predicate or function to which the procedure belongs.
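Putting these components together, break point commands using such procedure specifications might look like the following (the module and predicate names here are invented):

```
mdb> break pred hello:main/2-0
mdb> break queue:put/3
```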
In Mercury, predicates that want to do I/O must take a di/uo pair of I/O state arguments. Some of these predicates call other predicates to do I/O for them, but some are I/O primitives, i.e. they perform the I/O themselves. The Mercury standard library provides a large set of these primitives, and programmers can write their own through the foreign language interface. An I/O action is the execution of one call to an I/O primitive.
In debugging grades, the Mercury implementation has the ability
to automatically record, for every I/O action,
the identity of the I/O primitive involved in the action
and the values of all its arguments.
The size of the table storing this information
is proportional to the number of tabled I/O actions,
which are the I/O actions whose details are entered into the table.
Therefore the tabling of I/O actions is never turned on automatically;
instead, users must ask for I/O tabling to start
with the table_io start
command in mdb.
The purpose of I/O tabling is to enable transparent retries across I/O actions.
(The mdb retry
command
restores the computation to a state it had earlier,
allowing the programmer to explore code that the program has already executed;
see its documentation in the Debugger commands section below.)
In the absence of I/O tabling,
retries across I/O actions can have bad consequences.
Retry of a goal that reads some input requires that input to be provided twice;
retry of a goal that writes some output generates duplicate output.
Retry of a goal that opens a file leads to a file descriptor leak;
retry of a goal that closes a file can lead to errors
(duplicate closes, reads from and writes to closed files).
I/O tabling avoids these problems by making I/O primitives idempotent. This means that they will generate their desired effect when they are first executed, but reexecuting them after a retry won't have any further effect. The Mercury implementation achieves this by looking up the action (which is identified by an I/O action number) in the table and returning the output arguments stored in the table for the given action without executing the code of the primitive.
Starting I/O tabling when the program starts execution
and leaving it enabled for the entire program run
will work well for program runs that don't do lots of I/O.
For program runs that do lots of I/O,
the table can fill up all available memory.
In such cases, the programmer may enable I/O tabling with table_io start
just before the program enters the part they wish to debug
and in which they wish to be able to perform
transparent retries across I/O actions,
and turn it off with table_io stop
after execution leaves that part.
The commands table_io start
and table_io stop
can each be given only once during an mdb session.
They divide the execution of the program into three phases:
before table_io start
,
between table_io start
and table_io stop
,
and after table_io stop
.
Retries across I/O will be transparent only in the middle phase.
When the debugger (as opposed to the program being debugged) is interacting with the user, the debugger prints a prompt and reads in a line of text, which it will interpret as its next command line. A command line consists of a single command, or several commands separated by semicolons. Each command consists of several words separated by white space. The first word is the name of the command, while any other words give options and/or parameters to the command.
A word may itself contain semicolons or whitespace if it is
enclosed in single quotes (').
This is useful for commands that have other commands as parameters,
for example view -w 'xterm -e'
.
Characters that have special meaning to mdb
will be treated like
ordinary characters if they are escaped with a backslash (\).
It is possible to escape single quotes, whitespace, semicolons, newlines
and the escape character itself.
Some commands take a number as their first parameter. For such commands, users can type `number command' as well as `command number'. The debugger will treat the former as the latter, even if the number and the command are not separated by white space.
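For example, the following command lines are all interpreted as `step 3':

```
mdb> step 3
mdb> 3 step
mdb> 3step
```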
query module1 module2 ...
cc_query module1 module2 ...
io_query module1 module2 ...
These commands allow you to type in queries (goals) interactively
in the debugger. When you use one of these commands, the debugger
will respond with a query prompt (?-
or run <--
),
at which you can type in a goal; the debugger will then compile
and execute the goal and display the answer(s).
You can return from the query prompt to the mdb>
prompt
by typing the end-of-file indicator (typically control-D or control-Z),
or by typing quit.
The module names module1, module2, ... specify
which modules will be imported. Note that you can also
add new modules to the list of imports directly at the query prompt,
by using a command of the form [module]
, e.g. [int]
.
You need to import all the modules that define symbols used in your query.
Queries can only use symbols that are exported from a module;
entities declared only in a module's implementation section
cannot be used.
The three variants differ in what kind of goals they allow.
For goals which perform I/O, you need to use io_query
;
this lets you type in the goal using DCG syntax.
For goals which don't do I/O, but which have determinism
cc_nondet
or cc_multi
, you need to use cc_query
;
this finds only one solution to the specified goal.
For all other goals, you can use plain query
, which
finds all the solutions to the goal.
For query
and cc_query
, the debugger will print
out all the variables in the goal using io__write
.
The goal must bind all of its variables to ground terms,
otherwise you will get a mode error.
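A session might look like the following (the goals shown are illustrative):

```
mdb> query int
?- X = 2 + 3.
?- [list]
?- L = [1, 2, 3].
```

After executing each goal, the debugger prints the values of the goal's variables using io__write, then returns to the ?- prompt.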
The current implementation works by compiling the queries on-the-fly
and then dynamically linking them into the program being debugged.
Thus it may take a little while for your query to be executed.
Each query will be written to a file named mdb_query.m
in the current
directory, so make sure you don't name your source file mdb_query.m
.
Note that dynamic linking may not be supported on some systems;
if you are using a system for which dynamic linking is not supported,
you will get an error message when you try to run these commands.
You may also need to build your program using shared libraries
for interactive queries to work.
With Linux on the Intel x86 architecture, the default is for
executables to be statically linked, which means that dynamic
linking won't work, and hence interactive queries won't work either
(the error message is rather obscure: the dynamic linker complains
about the symbol __data_start
being undefined).
To build with shared libraries, you can use
MGNUCFLAGS=--pic-reg
and MLFLAGS=--shared
in your
Mmakefile. See the README.Linux
file in the Mercury
distribution for more details.
step [-NSans] [num]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is not strict, and it uses the default print level.
A command line containing only a number num is interpreted as
if it were `step num'.
An empty command line is interpreted as `step 1'.
goto [-NSans] num
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
next [-NSans] [num]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
finish [-NSans] [num]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
exception [-NSans]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
return [-NSans]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
forward [-NSans]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
mindepth [-NSans] depth
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
maxdepth [-NSans] depth
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is strict, and it uses the default print level.
continue [-NSans]
The options -n
or --none
, -s
or --some
,
-a
or --all
specify the print level to use
for the duration of the command,
while the options -S
or --strict
and -N
or --nostrict
specify
the strictness of the command.
By default, this command is not strict. The print level used
by the command by default depends on the final strictness level:
if the command is strict, it is none
, otherwise it is some
.
retry [-fio] [num]
The command will report an error unless
the values of all the input arguments of the selected call are available
at the return site at which control would reenter the selected call.
(The compiler will keep the values
of the input arguments of traced predicates as long as possible,
but it cannot keep them beyond the point where they are destructively updated.)
The exception is values of type `io__state';
the debugger can perform a retry if the only missing value is of
type `io__state' (there can be only one io__state at any given time).
Retries over I/O actions are guaranteed to be safe
only if the events at which the retry starts and ends
are both within the I/O tabled region of the program's execution.
If the retry is not guaranteed to be safe,
the debugger will normally ask the user if they really want to do this.
The option -f
or --force
suppresses the question,
telling the debugger that retrying over I/O is OK;
the option -o
or --only-if-safe
suppresses the question,
telling the debugger that retrying over I/O is not OK;
the option -i
or --interactive
restores the question
if a previous option suppressed it.
vars
print [-fpv] name
print [-fpv] num
browse
command (see below). Various settings
which affect the way that terms are printed out
(including e.g. the maximum term depth) can be set using
the set
command.
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for printing.
print [-fpv] *
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for printing.
print [-fpv]
print [-fpv] goal
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for printing.
print [-fpv] exception
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for printing.
print [-fpv] action num
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for printing.
browse [-fpv] name
browse [-fpv] num
The interactive term browser allows you
to selectively examine particular subterms.
The depth and size of printed terms
may be controlled.
The displayed terms may also be clipped to fit
within a single screen.
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for browsing.
For further documentation on the interactive term browser,
invoke the browse
command from within mdb
and then
type help
at the browser>
prompt.
browse [-fpv]
browse [-fpv] goal
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for browsing.
browse [-fpv] exception
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for browsing.
browse [-fpv] action num
The options -f
or --flat
, -p
or --pretty
,
and -v
or --verbose
specify the format to use for browsing.
stack [-d] [num]
The option -d
or --detailed
specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
The optional number, if present,
specifies that only the topmost num stack frames should be printed.
This command will report an error if there is no stack trace
information available about any ancestor.
up [-d] [num]
If num is not specified, the default value is one.
This command will report an error
if the current environment doesn't have the required number of ancestors,
or if there is no execution trace information about the requested ancestor,
or if there is no stack trace information about any of the ancestors
between the current environment and the requested ancestor.
The option -d or --detailed specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
down [-d] [num]
If num is not specified, the default value is one.
This command will report an error
if there is no execution trace information about the requested descendant.
The option -d or --detailed specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
level [-d] [num]
This command will report an error
if the current environment doesn't have the required number of ancestors,
or if there is no execution trace information about the requested ancestor,
or if there is no stack trace information about any of the ancestors
between the current environment and the requested ancestor.
The option -d or --detailed specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
current
set [-APBfpv] param value
The parameters that can be set are
format, depth, size, width and lines.
format can be set to flat, pretty or verbose
to change the output style of the browser.
depth is the maximum depth to which subterms will be displayed.
Subterms at the depth limit may be abbreviated as functor/arity,
or (in lists) may be replaced by an ellipsis (...).
The principal functor of any term has depth zero.
For subterms which are not lists,
the depth of any argument of the functor is one greater than the
depth of the functor.
For subterms which are lists,
the depth of each element of the list
is one greater than the depth of the list.
size is the suggested maximum number of functors to display.
Beyond this limit, subterms may be abbreviated as functor/arity,
or (in lists) may be replaced by an ellipsis (...).
For the purposes of this parameter,
the size of a list is one greater than
the sum of the sizes of the elements in the list.
width is the width of the screen in characters.
lines is the maximum number of lines of one term to display.
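The depth rule above can be illustrated with a small sketch (Python used here purely for illustration; this is not the browser's implementation). Terms are modelled as (functor, args) tuples and Mercury lists as Python lists; subterms at the depth limit are abbreviated as functor/arity, or as an ellipsis for lists:

```python
# A sketch of the browser's depth rule (not mdb's actual code):
# the principal functor has depth zero; each argument of a non-list
# functor is one level deeper; each element of a list is one level
# deeper than the list itself.

def show(term, max_depth, depth=0):
    if isinstance(term, list):
        if depth == max_depth:
            return "..."                       # lists abbreviate to an ellipsis
        return "[" + ", ".join(show(e, max_depth, depth + 1) for e in term) + "]"
    functor, args = term
    if not args:
        return functor                         # atoms are always printed
    if depth == max_depth:
        return "%s/%d" % (functor, len(args))  # abbreviate as functor/arity
    return "%s(%s)" % (functor,
                       ", ".join(show(a, max_depth, depth + 1) for a in args))

print(show(("foo", [("bar", [("baz", [])]), ("quux", [])]), 1))
# -> foo(bar/1, quux)
```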
The browser maintains separate configuration parameters
for the three commands print *, print var, and browse var.
A single set command can modify the parameters
for more than one of these;
the options -A or --print-all, -P or --print, and -B or --browse
select which commands will be affected by the change.
If none of these options is given,
the default is to affect all commands.
The browser also maintains separate configuration parameters
for the three different output formats.
This applies to all parameters except for the format itself.
The options -f or --flat, -p or --pretty, and -v or --verbose
select which formats will be affected by the change.
If none of these options is given,
the default is to affect all formats.
In the case that the format itself is being set,
these options are ignored.
view [-vf2] [-w window-cmd] [-s server-cmd] [-n server-name] [-t timeout]
view -c [-v] [-s server-cmd] [-n server-name]
vim
compiled with the client/server option enabled.
The debugger only updates one window at a time.
If you try to open a new source window when there is already one open,
this command aborts with an error message.
The variant with -c (or --close) does not open a new window
but instead attempts to close a currently open source window.
The attempt may fail if, for example,
the user has modified the source file without saving.
The option -v (or --verbose)
prints the underlying system calls before running them,
and prints any output the calls produced.
This is useful to find out what is wrong if the server does not start.
The option -f (or --force)
stops the command from aborting if there is already a window open.
Instead it attempts to close that window first.
The option -2 (or --split-screen)
starts the vim server with two windows,
which allows both the callee as well as the caller
to be displayed at interface events.
The lower window shows what would normally be seen
if the split-screen option was not used,
which at interface events is the caller.
At these events,
the upper window shows the callee definition.
At internal events,
the lower window shows the associated source,
and the view in the upper window
(which is not interesting at these events)
remains unchanged.
The option -w (or --window-command) specifies
the command to open a new window.
The default is xterm -e.
The option -s (or --server-command) specifies
the command to start the server.
The default is vim.
The option -n (or --server-name) specifies
the name of an existing server.
Instead of starting up a new server,
mdb will attempt to connect to the existing one.
The option -t (or --timeout) specifies
the maximum number of seconds to wait for the server to start.
break [-PS] [-Eignore-count] [-Iignore-count] filename:linenumber
The options -P or --print, and -S or --stop
specify the action to be taken at the break point.
The options -Eignore-count and --ignore-entry ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of a call event
that matches the breakpoint.
The options -Iignore-count and --ignore-interface ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of interface events
that match the breakpoint.
By default, the initial state of the break point is stop,
and the ignore count is zero.
break [-AOPSaei] [-Eignore-count] [-Iignore-count] proc-spec
The options -A or --select-all, and -O or --select-one
select the action to be taken
if the specification matches more than one procedure.
If you have specified option -A or --select-all,
mdb will put a breakpoint on all matched procedures,
whereas if you have specified option -O or --select-one,
mdb will report an error.
By default, mdb will ask you whether you want to put a breakpoint
on all matched procedures or just one, and if so, which one.
The options -P or --print, and -S or --stop
specify the action to be taken at the break point.
The options -a or --all, -e or --entry, and -i or --interface
specify the invocation conditions of the break point.
If none of these options are specified,
the default is the one indicated by the current scope
(see the scope command below).
The initial scope is interface.
The options -Eignore-count and --ignore-entry ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of a call event
that matches the breakpoint.
The options -Iignore-count and --ignore-interface ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of interface events
that match the breakpoint.
By default, the action of the break point is stop,
its invocation condition is interface,
and the ignore count is zero.
break [-PS] [-Eignore-count] [-Iignore-count] here
The options -P or --print, and -S or --stop
specify the action to be taken at the break point.
The options -Eignore-count and --ignore-entry ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of a call event
that matches the breakpoint.
The options -Iignore-count and --ignore-interface ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of interface events
that match the breakpoint.
By default, the initial state of the break point is stop,
and the ignore count is zero.
break info
ignore [-Eignore-count] [-Iignore-count] num
The options -Eignore-count and --ignore-entry ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of a call event
that matches the breakpoint with the specified number.
The options -Iignore-count and --ignore-interface ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of interface events
that match the breakpoint with the specified number.
If neither option is given,
the default is to ignore one call event
that matches the breakpoint with the specified number.
Reports an error if there is no break point with the specified number.
ignore [-Eignore-count] [-Iignore-count]
The options -Eignore-count and --ignore-entry ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of a call event
that matches the most recently added breakpoint.
The options -Iignore-count and --ignore-interface ignore-count
tell the debugger to ignore the breakpoint
until after ignore-count occurrences of interface events
that match the most recently added breakpoint.
If neither option is given,
the default is to ignore one call event
that matches the most recently added breakpoint.
Reports an error if the most recently added breakpoint has since been deleted.
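The effect of an ignore count can be modelled as a simple counter. The sketch below (Python, a hypothetical model rather than mdb's implementation) shows a breakpoint with ignore count 2 letting the first two matching events pass silently:

```python
# Sketch of ignore-count semantics: a breakpoint with ignore count N
# stays silent for the first N matching events and triggers afterwards.

class Breakpoint:
    def __init__(self, ignore_count=0):
        self.ignore_count = ignore_count

    def matches(self, event):
        # Hypothetical: a real breakpoint matches on procedure name,
        # file:line, event kind, and so on.
        return True

    def should_trigger(self, event):
        if not self.matches(event):
            return False
        if self.ignore_count > 0:
            self.ignore_count -= 1          # ignored; count down
            return False
        return True

bp = Breakpoint(ignore_count=2)             # e.g. `ignore -E2'
fired = [bp.should_trigger(e) for e in ["call1", "call2", "call3"]]
print(fired)
# -> [False, False, True]
```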
disable num
disable *
disable
enable num
enable *
enable
delete num
delete *
delete
modules
procedures module
register
table_io
table_io start
table_io stop
table_io stats
mmc_options option1 option2 ...
These options will be passed to mmc to compile your query
when you use one of the query commands:
query, cc_query, or io_query.
For example, if a query results in a compile error,
it may sometimes be helpful to use mmc_options --verbose-error-messages.
printlevel none
Sets the default print level to none.
printlevel some
Sets the default print level to some.
printlevel all
Sets the default print level to all.
printlevel
echo on
echo off
echo
scroll on
--more--
prompt.
You may type an empty line, which allows the debugger
to continue to print the next screenful of event reports.
By typing a line that starts with a, s or n,
you can override the print level of the current command,
setting it to all, some or none respectively.
By typing a line that starts with q,
you can abort the current debugger command
and get back control at the next event.
scroll off
scroll size
--more-- prompt after every size - 1 events.
The default value of size is the value of the LINES environment variable,
which should correspond to the number of lines available on the terminal.
scroll
context none
context before
context after
context prevline
context nextline
context
scope all
scope interface
scope entry
scope
alias name command [command-parameter ...]
If name is the upper-case word EMPTY,
the debugger will substitute the given command and parameters
whenever the user types in an empty command line.
If name is the upper-case word NUMBER,
the debugger will insert the given command and parameters
before the command line
whenever the user types in a command line that consists of a single number.
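The special EMPTY and NUMBER aliases can be sketched as follows (Python, a hypothetical model of the expansion rules described above, not mdb's code):

```python
# Sketch of mdb alias expansion: EMPTY replaces an empty command line,
# NUMBER is prepended to a line consisting of a single number, and an
# ordinary alias replaces the first word of the line.

def expand(line, aliases):
    words = line.split()
    if not words:
        return aliases.get("EMPTY", [])
    if len(words) == 1 and words[0].isdigit():
        return aliases.get("NUMBER", []) + words
    if words[0] in aliases:
        return aliases[words[0]] + words[1:]
    return words

aliases = {"EMPTY": ["step"], "NUMBER": ["goto"], "s": ["step"]}
print(expand("", aliases))      # -> ['step']
print(expand("42", aliases))    # -> ['goto', '42']
print(expand("s -a", aliases))  # -> ['step', '-a']
```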
unalias name
document_category slot category
end
.
The list of category summaries printed in response to the command help
is ordered on the integer slot numbers of the categories involved.
document category slot item
end
.
The list of items printed in response to the command help category
is ordered on the integer slot numbers of the items involved.
help category item
help word
help
histogram_all filename
histogram_exp filename
clear_histogram
Clears the histogram printed by histogram_exp,
i.e. sets the counts for all depths to zero.
source [-i] filename
The option -i or --ignore-errors tells mdb
not to complain if the named file does not exist or is not readable.
save filename
Saves the current breakpoints and aliases to the named file,
as a sequence of break and alias commands.
Sourcing the file will recreate the current breakpoints and aliases.
dd
quit [-y]
If -y is not present, asks for confirmation first.
Any answer starting with y, or end-of-file, is considered confirmation.
End-of-file on the debugger's input is considered a quit command.
The following commands are intended for use by the developers
of the Mercury implementation.
flag flagname
flag flagname on
flag flagname off
subgoal n
consumer n
gen_stack
cut_stack
pneg_stack
mm_stacks
nondet_stack [-d] [num]
If the -d or --detailed option is given,
it will also print the names and values of the live variables in them.
The optional number, if present,
specifies that only the topmost num stack frames should be printed.
stack_regs
all_regs
debug_vars
proc_stats
proc_stats filename
label_stats
label_stats filename
var_name_stats
var_name_stats filename
print_optionals
print_optionals on
print_optionals off
unhide_events
unhide_events on
unhide_events off
dd_dd
This command works like the dd command,
except that it does not turn off the events
generated by the declarative debugger itself.
table proc [num1 ...]
For now, this command is supported only for procedures
whose arguments are all either integers, floats or strings.
If the user specifies one or more integers on the command line,
the output is restricted to the entries in the call table in which
the nth argument is equal to the nth number on the command line.
type_ctor [-fr] modulename typectorname arity
If the -r or --print-rep option is given,
it also prints the name of the type representation scheme
used by the type constructor
(known as its `type_ctor_rep' in the implementation).
If the -f or --print-functors option is given,
it also prints the names and arities
of function symbols defined by the type constructor.
all_type_ctors [-fr] [modulename]
If the -r or --print-rep option is given,
it also prints the name of the type representation scheme
of each type constructor
(known as its `type_ctor_rep' in the implementation).
If the -f or --print-functors option is given,
it also prints the names and arities
of function symbols defined by each type constructor.
class_decl [-im] modulename typeclassname arity
If the -m or --print-methods option is given,
it also lists all the methods of the type class.
If the -i or --print-instance option is given,
it also lists all the instances of the type class.
all_class_decls [-im] [modulename]
If the -m or --print-methods option is given,
it also lists all the methods of each type class.
If the -i or --print-instance option is given,
it also lists all the instances of each type class.
The debugger incorporates a declarative debugger which can be accessed from its command line. Starting from an event that exhibits a bug, e.g. an event giving a wrong answer, the declarative debugger can find a bug which explains that behaviour using knowledge of the intended interpretation of the program only.
Note that this is a work in progress, so there are some limitations in the implementation. The main limitations are pointed out in the following sections.
Every CALL event corresponds to an atomic goal, the one printed by the "print" command at that event. This atom has the actual arguments in the input argument positions and distinct free variables in the output argument positions (including the return value for functions). We refer to this as the call atom of the event.
The same view can be taken of EXIT events, although in this case the outputs as well as the inputs will be bound. We refer to this as the exit atom of the event. The exit atom is always an instance of the call atom for the corresponding CALL event.
Using these concepts, it is possible to interpret
the events at which control leaves a procedure
as assertions about the semantics of the program.
These assertions may be true or false, depending on whether or not
the program's actual semantics are consistent with its intended semantics.
If one of these assertions is wrong,
then we consider the event to represent incorrect behaviour of the program.
If the user encounters an event for which the assertion is wrong,
then they can request the declarative debugger to
diagnose the incorrect behaviour by giving the dd
command
to the procedural debugger at that event.
Once the dd
command has been given,
the declarative debugger asks the user
a series of questions about the truth of various assertions
in the intended interpretation.
The first question in this series will be about
the validity of the event for which the dd
command was given.
The answer to this question will nearly always be "no",
since the user has just implied the assertion is false
by giving the dd
command.
Later questions will be about other events
in the execution of the program,
not all of them necessarily of the same kind as the first.
The user is expected to act as an "oracle" and provide answers to these questions based on their knowledge of the intended interpretation. The debugger provides some help here: previous answers are remembered and used where possible, so questions are not repeated unnecessarily. Commands are available to provide answers, as well as to browse the arguments more closely or to change the order in which the questions are asked. See the next section for details of the commands that are available.
When seeking to determine the validity of
the assertion corresponding to an EXIT event,
the declarative debugger prints the exit atom
followed by the question Valid?
for the user to answer.
The atom is printed using
the same mechanism that the debugger uses to print values,
which means some arguments may be abbreviated if they are too large.
When seeking to determine the validity of
the assertion corresponding to a FAIL event,
the declarative debugger prints the call atom, prefixed by Call,
followed by each of the exit atoms
(indented, and on multiple lines if need be),
and prints the question Complete?
for the user to answer.
Note that the user is not required to provide any missing instance
in the case that the answer is no.
(A limitation of the current implementation is that
it is difficult to browse a specific exit atom.
This will hopefully be addressed in the near future.)
When seeking to determine the validity of
the assertion corresponding to an EXCP event,
the declarative debugger prints the call atom
followed by the exception that was thrown,
and prints the question Expected?
for the user to answer.
In some circumstances the declarative debugger provides a default answer to the question. If this is the case, the default answer will be shown in square brackets immediately after the question, and simply pressing return is equivalent to giving that answer.
At the above mentioned prompts, the following commands may be given.
Each command (with the exception of pd)
can also be abbreviated to just its first letter.
yes
no
skip
restart
browse n
pd
dd
command
in the procedural debugger.
abort
dd
command was given.
help
It is also legal to press return without specifying a command. If there is a default answer (see Oracle questions), pressing return is equivalent to giving that answer. If there is no default answer, pressing return is equivalent to the skip command.
If the oracle keeps providing answers to the asked questions, then the declarative debugger will eventually locate a bug. A "bug", for our purposes, is an assertion about some call which is false, but for which the assertions about every child of that call are not false. There are three different classes of bugs that this debugger can diagnose, one associated with each kind of assertion.
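The definition of a bug given above (an assertion that is false, while the assertions about every child call are not false) can be illustrated by a search over a call tree, with the oracle modelled as a table of answers. This is a sketch of the definition only, not the declarative debugger's actual search strategy:

```python
# Sketch: locate a "bug", i.e. a call whose assertion is false but all
# of whose children's assertions are true.  A call tree node is
# (name, children); the oracle maps call names to truth values.

def find_bug(tree, oracle):
    name, children = tree
    if oracle[name]:              # assertion valid: no bug in this call
        return None
    for child in children:
        bug = find_bug(child, oracle)
        if bug is not None:       # a false descendant explains the error
            return bug
    return name                   # false, but every child is valid: a bug

# Hypothetical program: main calls sort, which calls perm and insert.
tree = ("main", [("sort", [("perm", []), ("insert", [])])])
oracle = {"main": False, "sort": False, "perm": True, "insert": False}
print(find_bug(tree, oracle))
# -> insert
```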
Assertions about EXIT events lead to a kind of bug we call an "incorrect contour". This is a contour (an execution path through the body of a clause) which results in a wrong answer for that clause. When the debugger diagnoses a bug of this kind, it displays the exit atom for the event at the end of the contour. The program event associated with this bug, which we call the "bug event", is the exit event at the end of the contour. (The current implementation does not yet display a representation of which contour was at fault.)
Assertions about FAIL events lead to a kind of bug we call a "partially uncovered atom". This is a call atom which has some instance which is valid, but which is not covered by any of the applicable clauses. When the debugger diagnoses a bug of this kind, it displays the call atom; it does not, however, provide an actual instance that satisfies the above condition. The bug event in this case is the fail event reached after all the solutions were exhausted.
Assertions about EXCP events lead to a kind of bug we call an "unhandled exception". This is a contour which throws an exception that needs to be handled but which is not handled. When the debugger diagnoses a bug of this kind, it displays the call atom followed by the exception which was not handled. The bug event in this case is the exception event for the call in question.
After the diagnosis is displayed, the user is asked to confirm
that the event located by the declarative debugger
does in fact represent a bug.
The user can answer yes or y to confirm the bug,
no or n to reject the bug,
or abort or a to abort the diagnosis.
If the user confirms the diagnosis, they are returned to the procedural debugger at the event which was found to be the bug event. This gives the user an opportunity, if they need it, to investigate (procedurally) the events in the neighbourhood of the bug.
If the user rejects the diagnosis, which implies that some of their earlier answers may have been mistakes, diagnosis is resumed from some earlier point determined by the debugger. The user may now be asked questions they have already answered, with the previous answer they gave being the default, or they may be asked entirely new questions.
If the user aborts the diagnosis,
they are returned to the event at which the dd command was given.
The Mercury compiler allows compilation of predicates for execution using the Aditi2 deductive database system. There are several sources of useful information:
$ADITI_HOME/doc/aditi.m
$ADITI_HOME/demos
As an alternative to compiling stand-alone programs, you can execute queries using the Aditi query shell.
The Aditi interface library is installed as part of the Aditi
installation process. To use the Aditi library in your programs, use
the Mmakefile in $ADITI_HOME/demos/transactions
as a template.
To obtain the best trade-off between productivity and efficiency, programmers should not spend too much time optimizing their code until they know which parts of the code are really taking up most of the time. Only once the code has been profiled should the programmer consider making optimizations that would improve efficiency at the expense of readability or ease of maintenance. A good profiler is therefore a tool that should be part of every software engineer's toolkit.
Mercury programs can be analyzed using two distinct profilers.
The Mercury profiler mprof is a conventional call-graph profiler
(or graph profiler for short) in the style of gprof.
The Mercury deep profiler mdprof
is a new kind of profiler
that associates a lot more context with each measurement.
mprof can be used to profile either time or space,
but not both at the same time;
mdprof can profile both time and space at the same time.
To enable profiling, your program must be built with profiling enabled. The two different profilers require different support, and thus you must choose which one to enable when you build your program.
To enable time profiling with mprof,
pass the -p (--profiling) option to mmc
(and also to mgnuc and ml, if you invoke them separately).
To enable memory profiling with mprof,
pass the --memory-profiling option to mmc, mgnuc and ml.
To enable deep profiling (with mdprof),
pass the --deep-profiling option to mmc, mgnuc and ml.
If you are using Mmake,
then you pass these options to all the relevant programs
by setting the GRADEFLAGS variable in your Mmakefile,
e.g. by adding the line GRADEFLAGS=--profiling.
(For more information about the different grades,
see Compilation model options.)
Enabling profiling has several effects.
First, it causes the compiler to generate slightly modified code,
which counts the number of times each predicate or function is called,
and for every call, records the caller and callee.
With deep profiling, there are other modifications as well,
the most important impact of which is the loss of tail-recursion
for groups of mutually tail-recursive predicates
(self-tail-recursive predicates stay tail-recursive).
Second, your program will be linked with versions of the library and runtime
that were compiled with the same kind of profiling enabled.
Third, if you enable graph profiling,
the compiler will generate for each source file
the static call graph for that file in module.prof
.
Once you have created a profiled executable, you can gather profiling information by running the profiled executable on some test data that is representative of the intended uses of the program. The profiling version of your program will collect profiling information during execution, and save this information at the end of execution, provided execution terminates normally and not via an abort.
Executables compiled with --profiling save profiling data
in the files Prof.Counts, Prof.Decls, and Prof.CallPair.
(Prof.Decls contains the names
of the procedures and their associated addresses,
Prof.CallPair records the number of times
each procedure was called by each different caller,
and Prof.Counts records the number of times
that execution was in each procedure when a profiling interrupt occurred.)
Executables compiled with --memory-profiling
will use two of those files (Prof.Decls and Prof.CallPair)
and two others: Prof.MemoryWords and Prof.MemoryCells.
Executables compiled with --deep-profiling
save profiling data in a single file, Deep.data.
It is also possible to combine mprof profiling results
from multiple runs of your program.
You can do so by running your program several times,
and typing mprof_merge_counts after each run.
It is not (yet) possible to combine mdprof profiling results
from multiple runs of your program.
Due to a known timing-related bug in our code,
you may occasionally get segmentation violations
when running your program with mprof
profiling enabled.
If this happens, just run it again -- the problem occurs only very rarely.
The same vulnerability does not occur with mdprof
profiling.
With both profilers,
you can control whether time profiling measures
real (elapsed) time, user time plus system time, or user time only,
by including the options -Tr, -Tp, or -Tv respectively
in the environment variable MERCURY_OPTIONS
when you run the program to be profiled.
Currently, the -Tp and -Tv options don't work on Windows,
so on Windows you must explicitly specify -Tr.
The default is user time plus system time, which counts all time spent executing the process, including time spent by the operating system working on behalf of the process, but not including time that the process was suspended (e.g. due to time slicing, or while waiting for input). When measuring real time, profiling counts even periods during which the process was suspended. When measuring user time only, profiling does not count time inside the operating system at all.
To display the graph profile information
gathered from one or more profiling runs,
just type mprof or mprof -c.
(For programs built with --high-level-code,
you also need to pass the --no-demangle option to mprof.)
Note that mprof can take quite a while to execute
(especially with -c),
and will usually produce quite a lot of output,
so you will usually want to redirect the output into a file
with a command such as mprof > mprof.out.
The output of mprof -c consists of three major sections,
named the call graph profile,
the flat profile and the alphabetic listing.
The output of mprof contains
the flat profile and the alphabetic listing only.
The call graph profile presents the local call graph of each procedure. For each procedure it shows the parents (callers) and children (callees) of that procedure, and shows the execution time and call counts for each parent and child. It is sorted on the total amount of time spent in the procedure and all of its descendents (i.e. all of the procedures that it calls, directly or indirectly.)
The flat profile presents just the execution time spent in each procedure. It does not count the time spent in descendents of a procedure.
The alphabetic listing just lists the procedures in alphabetical order, along with their index number in the call graph profile, so that you can quickly find the entry for a particular procedure in the call graph profile.
The profiler works by interrupting the program at frequent intervals, and each time recording the currently active procedure and its caller. It uses these counts to determine the proportion of the total time spent in each procedure. This means that the figures calculated for these times are only a statistical approximation to the real values, and so they should be treated with some caution. In particular, if the profiler's assumption that calls to a procedure from different callers have roughly similar costs is not true, the graph profile can be quite misleading.
The time spent in a procedure and its descendents is calculated by propagating the times up the call graph, assuming that each call to a procedure from a particular caller takes the same amount of time. This assumption is usually reasonable, but again the results should be treated with caution. (The deep profiler does not make such an assumption, and hence its output is significantly more reliable.)
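The propagation step can be sketched numerically (Python, with made-up figures; this illustrates the assumption, not mprof's implementation): if a procedure's total time is T and a caller made k of its n calls, that caller is charged k/n of T.

```python
# Sketch of mprof's propagation assumption: every call to a procedure
# from a given caller is assumed to cost the same, so a caller making
# k of the n total calls is charged k/n of the callee's total time.

def charge_parents(total_time, calls_from_parent):
    n = sum(calls_from_parent.values())
    return {parent: total_time * k / n
            for parent, k in calls_from_parent.items()}

# Hypothetical figures: procedure p accumulated 0.60s over 4 calls,
# 3 from caller a and 1 from caller b.
charges = charge_parents(0.60, {"a": 3, "b": 1})
print(charges)  # a is charged three quarters of the time, b one quarter
```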
Note that any time spent in a C function
(e.g. time spent in GC_malloc(),
which does memory allocation and garbage collection)
is credited to the Mercury procedure that called that C function.
Here is a small portion of the call graph profile from an example program.
                                  called/total      parents
index  %time    self descendents  called+self   name                     index
                                  called/total      children

                                                <spontaneous>
[1]    100.0    0.00      0.75        0         call_engine_label [1]
                0.00      0.75        1/1           do_interpreter [3]
-----------------------------------------------
                0.00      0.75        1/1       do_interpreter [3]
[2]    100.0    0.00      0.75        1         io__run/0(0) [2]
                0.00      0.00        1/1           io__init_state/2(0) [11]
                0.00      0.74        1/1           main/2(0) [4]
-----------------------------------------------
                0.00      0.75        1/1       call_engine_label [1]
[3]    100.0    0.00      0.75        1         do_interpreter [3]
                0.00      0.75        1/1           io__run/0(0) [2]
-----------------------------------------------
                0.00      0.74        1/1       io__run/0(0) [2]
[4]     99.9    0.00      0.74        1         main/2(0) [4]
                0.00      0.74        1/1           sort/2(0) [5]
                0.00      0.00        1/1           print_list/3(0) [16]
                0.00      0.00        1/10          io__write_string/3(0) [18]
-----------------------------------------------
                0.00      0.74        1/1       main/2(0) [4]
[5]     99.9    0.00      0.74        1         sort/2(0) [5]
                0.05      0.65        1/1           list__perm/2(0) [6]
                0.00      0.09    40320/40320       sorted/1(0) [10]
-----------------------------------------------
                                      8         list__perm/2(0) [6]
                0.05      0.65        1/1       sort/2(0) [5]
[6]     86.6    0.05      0.65        1+8       list__perm/2(0) [6]
                0.00      0.60     5914/5914        list__insert/3(2) [7]
                                      8             list__perm/2(0) [6]
-----------------------------------------------
                0.00      0.60     5914/5914    list__perm/2(0) [6]
[7]     80.0    0.00      0.60     5914         list__insert/3(2) [7]
                0.60      0.60     5914/5914        list__delete/3(3) [8]
-----------------------------------------------
                                  40319         list__delete/3(3) [8]
                0.60      0.60     5914/5914    list__insert/3(2) [7]
[8]     80.0    0.60      0.60     5914+40319   list__delete/3(3) [8]
                                  40319             list__delete/3(3) [8]
-----------------------------------------------
                0.00      0.00        3/69283   tree234__set/4(0) [15]
                0.09      0.09    69280/69283   sorted/1(0) [10]
[9]     13.3    0.10      0.10    69283         compare/3(0) [9]
                0.00      0.00        3/3           __Compare___io__stream/0(0) [20]
                0.00      0.00    69280/69280       builtin_compare_int/3(0) [27]
-----------------------------------------------
                0.00      0.09    40320/40320   sort/2(0) [5]
[10]    13.3    0.00      0.09    40320         sorted/1(0) [10]
                0.09      0.09    69280/69283       compare/3(0) [9]
-----------------------------------------------
The first entry is call_engine_label and its parent is <spontaneous>,
meaning that it is the root of the call graph.
(The first three entries, call_engine_label, do_interpreter,
and io__run/0 are all part of the Mercury runtime;
main/2 is the entry point to the user's program.)
Each entry of the call graph profile consists of three sections: the parent procedures, the current procedure, and the child procedures.
Reading across from the left, the fields for the current procedure are:
The predicate or function names are followed not just by their arity but
also by their mode, in brackets. A mode of zero corresponds to the first mode
declaration of that predicate in the source code. For example,
list__delete/3(3)
corresponds to the (out, out, in)
mode
of list__delete/3
.
For the parent and child procedures, the self and descendent times have slightly different meanings. For each parent procedure, the self and descendent times represent the proportion of the current procedure's self and descendent time that is due to calls from that parent. These times are estimated under the assumption that each call contributes equally to the total time of the current procedure.
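A profile like the one above is produced in two steps: build and run the program with time profiling enabled, then run mprof with the -c (--call-graph) option in the directory containing the Prof.* count files. The module and file names in this sketch are only placeholders:

```shell
% mmc --profiling sort.m        # or put GRADEFLAGS=--profiling in your Mmakefile
% ./sort < data.in > data.out   # each run updates the Prof.* count files
% mprof -c > sort.profile       # -c adds the call graph profile to the flat profile
```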
To create a memory profile, you can invoke mprof
with the -m
(--profile memory-words
) option.
This will profile the amount of memory allocated, measured in units of words.
(A word is 4 bytes on a 32-bit architecture,
and 8 bytes on a 64-bit architecture.)
Alternatively, you can use mprof
's -M
(--profile memory-cells
) option.
This will profile memory in units of "cells".
A cell is a group of words allocated together in a single allocation,
to hold a single object.
Selecting this option will therefore profile
the number of memory allocations,
while ignoring the size of each allocation.
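For example, to memory-profile a hypothetical module alloc.m, the session might look like this:

```shell
% mmc --memory-profiling alloc.m   # builds the program in a memory profiling grade
% ./alloc                          # running it writes the profiling count files
% mprof -m > alloc.words.profile   # allocation profile, measured in words
% mprof -M > alloc.cells.profile   # allocation profile, measured in cells
```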
With memory profiling, just as with time profiling,
you can use the -c
(--call-graph
) option to display
call graph profiles in addition to flat profiles.
Note that Mercury's memory profiler will only tell you about allocation,
not about deallocation (garbage collection).
It can tell you how much memory was allocated by each procedure,
but it won't tell you how long the memory was live for,
or how much of that memory was garbage-collected.
This is also true for mdprof
.
To display the information contained in a deep profiling data file
(which will be called Deep.data
unless you renamed it),
start up your browser and give it a URL of the form
http://server.domain.name/cgi-bin/mdprof_cgi?/full/path/name/Deep.data
.
The server.domain.name
part should be the name of a machine
with the following qualifications:
it should have a web server running on it,
and it should have the mdprof_cgi
program installed
in its /usr/lib/cgi-bin
directory.
The /full/path/name/Deep.data
part
should be the full path name of the deep profiling data file
whose data you wish to explore.
The name of this file must not have percent signs in it.
On some operating systems,
Mercury's profiling doesn't work properly with shared libraries.
The symptom is errors (map__lookup failed
) or warnings from mprof
.
On some systems, the problem occurs because the C implementation
fails to conform to the semantics specified by the ISO C standard
for programs that use shared libraries.
For other systems, we have not been able to analyze the cause of the failure
(but we suspect that the cause may be the same as on those systems
where we have been able to analyze it).
If you get errors or warnings from mprof
,
and your program is dynamically linked,
try rebuilding your application statically linked,
e.g. by using MLFLAGS=--static
in your Mmakefile.
Another work-around that sometimes works is to set the environment variable
LD_BIND_NOW
to a non-null value before running the program.
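For a hypothetical program myprog, the two work-arounds look like this:

```shell
# In the Mmakefile: link statically instead of dynamically.
#   MLFLAGS = --static
% mmake myprog.depend && mmake myprog

# Alternatively, make the dynamic linker resolve all symbols at startup.
% LD_BIND_NOW=1 ./myprog
```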
This section contains a brief description of all the options
available for mmc
, the Mercury compiler.
Sometimes this list is a little out-of-date;
use mmc --help
to get the most up-to-date list.
mmc
is invoked as
mmc [options] arguments
Arguments can be either module names or file names.
Arguments ending in .m
are assumed to be file names,
while other arguments are assumed to be module names, with
.
(rather than __
or :
) as module qualifier.
If you specify a module name such as foo.bar.baz
,
the compiler will look for the source in files foo.bar.baz.m
,
bar.baz.m
, and baz.m
, in that order.
Options are either short (single-letter) options preceded by a single -
,
or long options preceded by --
.
Options are case-sensitive.
We call options that do not take arguments flags.
Single-letter flags may be grouped with a single -
, e.g. -vVc
.
Single-letter flags may be negated
by appending another trailing -
, e.g. -v-
.
Long flags may be negated by preceding them with no-
,
e.g. --no-verbose
.
-w
--inhibit-warnings
--halt-at-warn
--halt-at-syntax-error
--inhibit-accumulator-warnings
--introduce-accumulators
.
--no-warn-singleton-variables
--no-warn-missing-det-decls
pred
or mode
declaration does not have a determinism annotation.
Use this option if you want the compiler to perform automatic
determinism inference for non-exported predicates.
--no-warn-det-decls-too-lax
--no-warn-inferred-erroneous
--no-warn-nothing-exported
--warn-unused-args
--warn-interface-imports
--warn-missing-opt-files
.opt
files that cannot be opened.
--warn-missing-trans-opt-files
.trans_opt
files that cannot be opened.
--warn-non-stratification
--no-warn-simple-code
--warn-duplicate-calls
--no-warn-missing-module-name
:- module
declaration.
--no-warn-wrong-module-name
:- module
declaration
does not match the module's file name.
--no-warn-smart-recompilation
--no-warn-undefined-options-variables
--make
.
--warn-non-tail-recursion
--high-level-code
.
--no-warn-target-code
--no-warn-up-to-date
--make
are already up to date.
--no-warn-stubs
--allow-stubs
option (see Language semantics options)
is enabled.
--warn-dead-procs
-v
--verbose
-V
--very-verbose
-E
--verbose-error-messages
--no-verbose-make
--make
option.
--output-compile-error-lines n
--make
, output the first n lines of the .err
file after compiling a module (default: 15).
--verbose-commands
--verbose
.
--verbose-recompilation
--smart-recompilation
, output messages
explaining why a module needs to be recompiled.
--find-all-recompilation-reasons
--verbose-recompilation
.
-S
--statistics
--no-trad-passes
,
so you get information at the boundaries between phases of the compiler.
-T
--debug-types
-N
--debug-modes
--debug-det
--debug-determinism
--debug-opt
--debug-opt-pred-id predid
--debug-opt
, restrict the debugging traces
to the optimization of the predicate or function with the specified pred id.
--debug-pd
--debug-rl-gen
--debug-rl-opt
--debug-liveness <n>
--debug-make
These options are mutually exclusive. If more than one of these options is specified, only the first in this list will apply. If none of these options are specified, the default action is to compile and link the modules named on the command line to produce an executable.
-f
--generate-source-file-mapping
Mercury.modules
. This must be done before
mmc --generate-dependencies
if there are any modules
for which the file name does not match the module name.
If there are no such modules, the mapping need not be
generated.
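For example, if the program's top-level module main_module (a placeholder name) or one of its submodules lives in a file whose name does not match its module name, generate the mapping first:

```shell
% mmc -f *.m                               # writes the mapping to Mercury.modules
% mmc --generate-dependencies main_module  # dependency generation uses the mapping
```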
-M
--generate-dependencies
module.dep
, module.dv
and the
relevant .d
files.
--generate-module-order
module.order
.
Implies --generate-dependencies
.
--generate-mmc-deps
--generate-mmc-make-module-dependencies
mmc --make
even
when using Mmake. This is recommended when building a
library for installation.
-i
--make-int
--make-interface
module.int
.
Also write the short interface to module.int2
.
--make-short-int
--make-short-interface
module.int3
.
--make-priv-int
--make-private-interface
module.int0
.
--make-opt-int
--make-optimization-interface
module.opt
.
--make-trans-opt
--make-transitive-optimization-interface
module.trans_opt
file. This file is used to store
information used for inter-module optimization. The information is read
in when the compiler is invoked with the
--transitive-intermodule-optimization
option.
The file is called the "transitive" optimization interface file
because a .trans_opt
file may depend on other
.trans_opt
and .opt
files. In contrast,
a .opt
file can only hold information derived directly
from the corresponding .m
file.
-P
--pretty-print
--convert-to-mercury
module.ugly
.
This option acts as a Mercury ugly-printer.
(It would be a pretty-printer, except that comments are stripped
and nested if-then-elses are indented too much -- so the result
is rather ugly.)
--typecheck-only
-e
--errorcheck-only
-C
--target-code-only
module.c
,
assembler in module.s
or module.pic_s
,
IL in module.il
or Java in module.java
),
but not object code.
-c
--compile-only
module.c
and object code in module.o
but do not attempt to link the named modules.
--aditi-only
module.rlo
and do not compile to C
(see Using Aditi).
--output-grade-string
--output-link-command
--output-shared-lib-link-command
--smart-recompilation
--smart-recompilation
does
not yet work with --intermodule-optimization
.
--no-assume-gmake
.d
, .dep
and .dv
files,
generate Makefile fragments that use only the features of standard make;
do not assume the availability of GNU Make extensions.
This can make these files significantly larger.
--trace-level level
none
, shallow
, deep
, rep
and default
.
See Debugging.
--trace-optimized
--no-delay-death
--stack-trace-higher-order
--generate-bytecode
--auto-comments
module.c
file.
This is primarily useful for trying to understand
how the generated C code relates to the source code,
e.g. in order to debug the compiler.
The code may be easier to understand if you also use the
--no-llds-optimize
option.
-n-
--no-line-numbers
--convert-to-mercury
).
--show-dependency-graph
-d stage
--dump-hlds stage
module.hlds_dump.num-name
.
Stage numbers range from 1 to 99; not all stage numbers are valid.
If a stage number is followed by a plus sign,
all stages after the given stage will be dumped as well.
The special stage name all
causes the dumping of all stages.
Multiple dump options accumulate.
--dump-hlds-options options
--dump-hlds
, include extra detail in the dump.
Each type of detail is included in the dump
if its corresponding letter occurs in the option argument.
These details are:
a - argument modes in unifications,
b - builtin flags on calls,
c - contexts of goals and types,
d - determinism of goals,
f - follow_vars sets of goals,
g - goal feature lists,
i - variables whose instantiation changes,
l - pred/mode ids and unify contexts of called predicates,
m - mode information about clauses,
n - nonlocal variables of goals,
p - pre-birth, post-birth, pre-death and post-death sets of goals,
r - resume points of goals,
s - store maps of goals,
t - results of termination analysis,
u - unification categories and other implementation details of unifications,
v - variable numbers in variable names,
A - argument passing information,
C - clause information,
D - instmap deltas of goals,
G - compile-time garbage collection information,
I - imported predicates,
M - mode and inst information,
P - path information,
T - type and typeclass information,
U - unify and compare predicates.
--dump-hlds-pred-id predid
--dump-hlds
, restrict the output
to the HLDS of the predicate or function with the specified pred id.
--dump-mlds stage
module.c_dump.num-name
and module.h_dump.num-name
.
Stage numbers range from 1 to 99; not all stage numbers are valid.
The special stage name all
causes the dumping of all stages.
Multiple dump options accumulate.
--verbose-dump-mlds stage
module.mlds_dump.num-name
.
--dump-rl
module.rl_dump
(see Using Aditi).
--dump-rl-bytecode
module.rla
. Aditi-RL bytecodes are directly
executed by the Aditi system (see Using Aditi).
--generate-schemas
module.base_schema
and for Aditi derived
relations to module.derived_schema
. A schema
string is a representation of the types of the attributes
of a relation (see Using Aditi).
See the Mercury language reference manual for detailed explanations of these options.
--no-reorder-conj
--no-reorder-disj
--fully-strict
error/1
.
--allow-stubs
Allow procedures to have no clauses.
Any calls to such procedures will raise an exception at run-time.
This option is sometimes useful during program development.
(See also the documentation for the --warn-stubs
option
in Warning options.)
--infer-all
--infer-types --infer-modes --infer-det
.
--infer-types
--infer-modes
--no-infer-det
--no-infer-determinism
--type-inference-iteration-limit n
--mode-inference-iteration-limit n
For detailed explanations, see the "Termination analysis" section of the "Implementation-dependent extensions" chapter in the Mercury Language Reference Manual.
--enable-term
--enable-termination
terminates
,
does_not_terminate
and check_termination
pragmas have no effect unless termination analysis is enabled. When
using termination, --intermodule-optimization
should be enabled,
as it greatly improves the accuracy of the analysis.
--chk-term
--check-term
--check-termination
--verb-chk-term
--verb-check-term
--verbose-check-termination
--term-single-arg limit
--termination-single-argument-analysis limit
--termination-norm norm
simple
norm says that size is always one.
The total
norm says that it is the number of words in the cell.
The num-data-elems
norm says that it is the number of words in
the cell that contain something other than pointers to cells of
the same type.
--term-err-limit limit
--termination-error-limit limit
--term-path-limit limit
--termination-path-limit limit
The following compilation options affect the generated
code in such a way that the entire program must be
compiled with the same setting of these options,
and it must be linked to a version of the Mercury
library which has been compiled with the same setting.
(Attempting to link object files compiled with different
settings of these options will generally result in an error at
link time, typically of the form undefined symbol MR_grade_...
or symbol MR_runtime_grade multiply defined
.)
The options below must be passed to mgnuc
, c2init
and ml
as well as to mmc
.
If you are using Mmake, then you should specify
these options in the GRADEFLAGS
variable rather than specifying
them in MCFLAGS
, MGNUCFLAGS
and MLFLAGS
.
-s grade
--grade grade
The grade is a .-separated list of the
grade options to set. The grade options may be given in any order.
The available options each belong to a set of mutually
exclusive alternatives governing a single aspect of the compilation model.
The set of aspects and their alternatives are:
none
, reg
, jump
, asm_jump
,
fast
, asm_fast
, hl
, hlc
, il
and java
(the default is system dependent).
gc
, and agc
(the default is no garbage collection).
prof
,
memprof
, and profdeep
(the default is no profiling).
tr
(the default is no trailing).
rt
(the default is no reserved tag)
debug
(the default is no debugging features).
par
(the default is a non-thread-safe environment).
The default grade is system-dependent; it is chosen at installation time
by configure
, the auto-configuration script, but can be overridden
with the environment variable MERCURY_DEFAULT_GRADE
if desired.
Depending on your particular installation, only a subset
of these possible grades will have been installed.
Attempting to use a grade which has not been installed
will result in an error at link time.
(The error message will typically be something like
ld: can't find library for -lmercury
.)
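As an illustration, a grade string is a base grade followed by zero or more modifiers from the sets above, so an installed grade might be selected like this (hello.m is a placeholder):

```shell
% mmc -s asm_fast.gc.tr hello.m    # gcc extensions, conservative gc, trailing
% mmc --grade hlc.gc.prof hello.m  # high-level C, conservative gc, time profiling
```

With Mmake, the same effect is obtained by setting GRADEFLAGS=--grade asm_fast.gc.tr in the Mmakefile.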
The tables below show the options that are selected by each base grade and grade modifier; they are followed by descriptions of those options.
none
--target c --no-gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
.
reg
--target c --gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels
.
jump
--target c --no-gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
.
fast
--target c --gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels
.
asm_jump
--target c --no-gcc-global-registers --gcc-nonlocal-gotos --asm-labels
.
asm_fast
--target c --gcc-global-registers --gcc-nonlocal-gotos --asm-labels
.
hlc
--target c --high-level-code
.
hl
--target c --high-level-code --high-level-data
.
il
--target il --high-level-code --high-level-data
.
java
--target java --high-level-code --high-level-data
.
.gc
--gc boehm
.
.mps
--gc mps
.agc
--gc accurate
.
.prof
--profiling
.
.memprof
--memory-profiling
.
.profdeep
--deep-profiling
.
.tr
--use-trail
.
.rt
--reserve-tag
.
.debug
--debug
.
--target c
(grades: none, reg, jump, fast, asm_jump, asm_fast, hl, hlc)
--target asm
(grades: hlc)
--il
, --target il
(grades: il)
--java
, --target java
(grades: java)
--high-level-code
.
--il-only
--target il --target-code-only
.
Generate IL assembler code in module.il
, but do not invoke
ilasm to produce IL object code.
--dotnet-library-version version-number
--no-support-ms-clr
--support-rotor-clr
--compile-to-c
--compile-to-C
--target c --target-code-only
.
Generate C code in module.c
, but do not invoke the
C compiler to generate object code.
--java-only
--target java --target-code-only
.
Generate Java code in module.java
, but do not invoke
the Java compiler to produce Java bytecode.
--gcc-global-registers
(grades: reg, fast, asm_fast)
--no-gcc-global-registers
(grades: none, jump, asm_jump)
--high-level-code
option is enabled.
--gcc-non-local-gotos
(grades: jump, fast, asm_jump, asm_fast)
--no-gcc-non-local-gotos
(grades: none, reg)
--high-level-code
option is enabled.
--asm-labels
(grades: asm_jump, asm_fast)
--no-asm-labels
(grades: none, reg, jump, fast)
--high-level-code
option is enabled.
--pic-reg
(grades: any grade containing `.pic_reg')
--high-level-code
option is enabled.
-H
, --high-level-code
(grades: hl, hlc, il, java)
--high-level-data
(grades: hl, il, java)
--debug
(grades: any grade containing .debug
)
mdb
(see Debugging).
This option is not yet supported for the --high-level-code
back-ends.
--profiling
, --time-profiling
(grades: any grade containing .prof
)
module.prof
. See Profiling.
This option is not supported for the IL and Java back-ends.
--memory-profiling
(grades: any grade containing .memprof
)
module.prof
. See Using mprof for memory profiling.
This option is not supported for the IL and Java back-ends.
--deep-profiling
(grades: any grade containing .profdeep
)
--gc {none, boehm, mps, accurate, automatic}
--garbage-collection {none, boehm, mps, accurate, automatic}
java
or il
use --gc automatic
,
grades containing .gc
use --gc boehm
,
grades containing .mps
use --gc mps
,
other grades use --gc none
.
conservative
or boehm
is Hans Boehm et al's conservative
garbage collector.
accurate
is our own type-accurate copying collector.
It requires --high-level-code
.
mps
is another conservative collector based on Ravenbrook Limited's
MPS (Memory Pool System) kit.
automatic
means the target language provides it.
This is the case for the IL and Java back-ends, which always use
the underlying IL or Java implementation's garbage collector.
--use-trail
(grades: any grade containing .tr
)
Of the options listed below, the --num-tag-bits
option
may be useful for cross-compilation, but apart from that
these options are all experimental and are intended for
use by developers of the Mercury implementation rather than by
ordinary Mercury programmers.
--tags {none, low, high}
--num-tag-bits n
--tags high
.
With --tags low
, the default number of tag bits to use
is determined by the auto-configuration script.
--num-reserved-addresses n
--num-reserved-objects n
Note that reserved objects will only be used if reserved addresses
(see --num-reserved-addresses
) are not available, since the
latter are more efficient.
--reserve-tag
(grades: any grade containing .rt
)
--no-type-layout
functor
,
arg
). Using such code will result in undefined behaviour at
runtime. The C code also needs to be compiled with
-DNO_TYPE_LAYOUT
.
--low-level-debug
--pic
--target asm
back-end.
The generated assembler will be written to module.pic_s
rather than to module.s
.
--no-trad-passes
--trad-passes
completely processes each predicate
before going on to the next predicate.
This option tells the compiler
to complete each phase of code generation on all predicates
before going on to the next phase for all predicates.
--no-reclaim-heap-on-nondet-failure
--no-reclaim-heap-on-semidet-failure
--no-reclaim-heap-on-failure
--fact-table-max-array-size size
pragma fact_table
data array (default: 1024).
The data for fact tables is placed into multiple C arrays, each with a
maximum size given by this option. The reason for doing this is that
most C compilers have trouble compiling very large arrays.
--fact-table-hash-percent-full percentage
pragma fact_table
hash tables should be
allowed to get. Given as an integer percentage (valid range: 1 to 100,
default: 90). A lower value means that the compiler will use
larger tables, but there will generally be fewer hash collisions,
so lookups may be faster.
The following options allow the Mercury compiler to optimize the generated C code based on the characteristics of the expected target architecture. The default values of these options will be whatever is appropriate for the host architecture that the Mercury compiler was installed on, so normally there is no need to set these options manually. They might come in handy if you are cross-compiling. But even when cross-compiling, it's probably not worth bothering to set these unless efficiency is absolutely paramount.
--have-delay-slot
--num-real-r-regs n
--num-real-f-regs n
--num-real-r-temps n
--num-real-f-temps n
-O n
--opt-level n
--optimization-level n
In general, there is a trade-off between compilation speed and the speed of the generated code. When developing, you should normally use optimization level 0, which aims to minimize compilation time. It enables only those optimizations that in fact usually reduce compilation time. The default optimization level is level 2, which delivers reasonably good optimization in reasonable time. Optimization levels higher than that give better optimization, but take longer, and are subject to the law of diminishing returns. The difference in the quality of the generated code between optimization level 5 and optimization level 6 is very small, but using level 6 may increase compilation time and memory requirements dramatically.
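In an Mmakefile, this advice typically reduces to a fragment like the following:

```shell
# While developing: minimize compilation time.
MCFLAGS = -O0

# For a release build: higher optimization, plus cross-module
# optimization, which no -O level enables by itself.
# MCFLAGS = -O5 --intermodule-optimization
```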
Note that if you want the compiler to perform cross-module
optimizations, then you must enable them separately;
the cross-module optimizations are not enabled by any -O
level, because they affect the compilation process in ways
that require special treatment by mmake
.
--opt-space
--optimize-space
--intermodule-optimization
--trans-intermod-opt
--transitive-intermodule-optimization
module.trans_opt
files
to make intermodule optimizations. The module.trans_opt
files
are different to the module.opt
files as .trans_opt
files may depend on other .trans_opt
files, whereas each
.opt
file may only depend on the corresponding .m
file.
--no-read-opt-files-transitively
--use-opt-files
.opt
files which are
already built, e.g. those for the standard library, but do not build any
others.
--use-trans-opt-files
.trans_opt
files which are
already built, e.g. those for the standard library, but do not build any
others.
--intermodule-analysis
--split-c-files
--optimize-dead-procs
,
except that it works globally at link time, rather than
over a single module, so it does a much better job of
eliminating unused procedures.
This option significantly increases compilation time,
link time, and intermediate disk space requirements,
but in return reduces the size of the final
executable, typically by about 10-20%.
This option is only useful with --procs-per-c-function 1
,
so this option automatically sets --procs-per-c-function 1
.
The --high-level-code
back-end does not support
--split-c-files
.
N.B. When using mmake
, the --split-c-files
option should
not be placed in the MCFLAGS
variable. Instead, use the
MODULE.split
target, i.e. type mmake foo.split
rather than mmake foo
.
These optimizations are high-level transformations on our HLDS (high-level data structure).
--no-inlining
--no-inline-simple
--no-inline-builtins
--no-inline-single-use
--inline-compound-threshold threshold
--inline-simple-threshold threshold
--intermod-inline-simple-threshold threshold
.opt
files. Note that changing this
between writing the .opt
file and compiling to C may cause link errors,
and too high a value may result in reduced performance.
--inline-vars-threshold threshold
--loop-invariants
--optimize-rl-invariants
.)
--no-common-struct
--no-common-goal
--constraint-propagation
--local-constraint-propagation
--no-follow-code
--optimize-unused-args
--intermod-unused-args
--optimize-unused-args
and
--intermodule-optimization
.
--unneeded-code
--unneeded-code-copy-limit
--optimize-higher-order
--type-specialization
--user-guided-type-specialization
--higher-order-size-limit
--optimize-higher-order
and --type-specialization
.
Goal size is measured as the number of calls, unifications
and branched goals.
--higher-order-arg-limit
--higher-order-arg-limit
--optimize-higher-order
and
--type-specialization
.
--optimize-constant-propagation
--introduce-accumulators
--optimize-constructor-last-call
--optimize-dead-procs
--excess-assign
--optimize-duplicate-calls
--delay-constructs
--optimize-saved-vars
--deforestation
--deforestation-depth-limit
--deforestation-vars-threshold
--deforestation-size-threshold
These optimizations are applied to the medium-level intermediate code.
--no-mlds-optimize
--no-optimize-tailcalls
--no-optimize-initializations
--eliminate-local-variables
These optimizations are applied during the process of generating low-level intermediate code from our high-level data structure.
--no-static-ground-terms
--no-smart-indexing
--dense-switch-req-density percentage
--dense-switch-size size
--lookup-switch-req-density percentage
--lookup-switch-size size
--string-switch-size size
--tag-switch-size size
--try-switch-size size
--binary-switch-size size
--no-middle-rec
--no-simple-neg
These optimizations are transformations that are applied to our low-level intermediate code before emitting C code.
--no-common-data
--no-llds-optimize
--no-optimize-peep
--no-optimize-jumps
--no-optimize-fulljumps
--pessimize-tailcalls
--checked-nondet-tailcalls
--use-local-vars
--no-optimize-labels
--optimize-dups
--no-optimize-frames
--no-optimize-delay-slot
--optimize-reassign
--optimize-repeat n
These optimizations are applied during the process of generating C intermediate code from our low-level data structure.
--no-emit-c-loops
--use-macro-for-redo-fail
--procs-per-c-function n
These optimizations are applied to the Aditi-RL code produced
for predicates with :- pragma aditi(...)
declarations
(see Using Aditi).
--optimize-rl
--optimize-rl-cse
--optimize-rl-invariants
--optimize-rl-index
--detect-rl-streams
-m
--make
mmc
as files to
make, rather than source files. Create the specified files,
if they are not already up-to-date.
(Note that this option also enables --use-subdirs
.)
-r
--rebuild
--make
, but always rebuild the target files
even if they are up to date.
--pre-link-command command
mmc --make
.
This can be used to compile C source files which rely on
header files generated by the Mercury compiler.
The command will be passed the names of all of the source files in
the program or library, with the source file containing the main
module given first.
--extra-init-command command
.init
file for a library.
The command will be passed the names of all of the source files in
the program or library, with the source file containing the main
module given first.
-k
--keep-going
--make
keep going as far as
possible even if an error is detected.
--install-prefix dir
--install-command command
command source target
to install each file in a Mercury library.
The default command is cp
.
--libgrade grade
--options-file file
-
, an options file will be read from the
standard input. By default the file Mercury.options
in the current directory will be read.
See Using Mmake for a description of the syntax of options files.
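For instance, a Mercury.options file might look like this (the module name parser is illustrative; variables of the form MCFLAGS-module apply only to the named module):

```shell
# Flags applied when compiling every module.
MCFLAGS = --intermodule-optimization

# Extra flags for the module named parser only.
MCFLAGS-parser = --infer-all
```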
--config-file file
--config-file
option is not set, a default configuration
will be used, unless --no-mercury-stdlib-dir
is passed to mmc.
The configuration file is just an options file (see Using Mmake).
--options-search-directory dir
--mercury-configuration-directory dir
--mercury-config-dir dir
-I dir
--search-directory dir
--intermod-directory dir
.opt
files.
--use-search-directories-for-intermod
.opt
files.
--use-subdirs
Mercury
subdirectory,
rather than in the current directory.
--use-grade-subdirs
Mercury
subdirectory,
laid out so that multiple grades can be built simultaneously.
Executables and libraries will be symlinked or copied into the
current directory.
--use-grade-subdirs
does not work with Mmake (it does
work with mmc --make
).
-?
-h
--help
--filenames-from-stdin
--aditi
--aditi-user
:- pragma owner(...)
declaration is given.
The owner field is used along with module, name and arity to identify
predicates, and is also used for security checks. Defaults to the value
of the USER
environment variable. If USER
is not set,
--aditi-user
defaults to the string "guest".
If you are using Mmake, you need to pass these options
to the target code compiler (e.g. mgnuc
) rather
than to mmc
.
--target-debug
--c-debug
(see below).
If the target language is IL, this causes the compiler to
pass /debug
to the IL assembler.
--cc compiler-name
--c-include-directory dir
MERCURY_MC_ALL_C_INCL_DIRS
environment variable to a sequence of --c-include-directory
options.
--c-debug
-g
flag to the C compiler, to enable debugging
of the generated C code, and also disable stripping of C debugging
information from the executable.
Since the generated C code is very low-level, this option is not likely
to be useful to anyone except the Mercury implementors, except perhaps
for debugging code that uses Mercury's C interface extensively.
--no-c-optimize
--no-ansi-c
--inline-alloc
GC_malloc()
.
This can improve performance a fair bit,
but may significantly increase code size.
This option has no effect if --gc boehm
is not set or if the C compiler is not GNU C.
--cflags options
--cflag option
--cflag
should be used for single words which need
to be quoted when passed to the shell.
--javac compiler-name
--java-compiler compiler-name
javac
.
--java-interpreter interpreter-name
java
.
--java-flags options
--java-flag option
--java-flag
should be used for single words which need
to be quoted when passed to the shell.
--java-classpath dir
--java-object-file-extension extension
.class
.
-o filename
--output-file filename
.m
extension.)
This option is ignored by mmc --make
.
--ld-flags options
--ld-flags option
mmc --output-link-command
to find out
which command is used.
--ld-flag
should be used for single words which need
to be quoted when passed to the shell.
--ld-libflags options
--ld-libflag option
mmc --output-shared-lib-link-command
to find out which command is used.
--ld-libflag
should be used for single words which need
to be quoted when passed to the shell.
-L directory
--library-directory directory
-R directory
--runtime-library-directory directory
-l library
--library library
--link-object object
--mld directory
--mercury-library-directory directory
--search-directory
, --library-directory
,
--init-file-directory
and --c-include-directory
options as needed. See Using libraries.
--ml library
--mercury-library library
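For example, to build a program against a hypothetical installed library mylib, assuming it was installed with --install-prefix /usr/local/mylib so that its files live under /usr/local/mylib/lib/mercury:

```shell
% mmc --make --mld /usr/local/mylib/lib/mercury --ml mylib myprog
```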
--mercury-standard-library-directory directory
--mercury-stdlib-dir directory
--mercury-library-directory directory
and --mercury-configuration-directory directory
.
--no-mercury-standard-library-directory
--no-mercury-stdlib-dir
--no-mercury-configuration-directory
.
--init-file-directory directory
.init
files by c2init
.
--init-file file
.init
files
to be passed to c2init
.
--trace-init-file file
.init
files
to be passed to c2init
when tracing is enabled.
--linkage {shared,static}
--linkage shared
.
--mercury-linkage {shared,static}
--mercury-linkage shared
.
--no-strip
Don't strip debugging information from the executable.
--no-demangle
Don't pipe the output of the linker through the Mercury demangler.
--no-main
Don't generate a C main() function; the user's code must provide one.
--allow-undefined
Allow undefined symbols in shared libraries.
--no-use-readline
Disable use of the readline library in the debugger.
--runtime-flags flags
Specify flags to pass to the Mercury runtime.
--extra-initialization-functions
--extra-inits
Search .c files for extra initialization functions. (This may be necessary if the C files contain hand-coded C code with INIT comments, rather than containing only C code that was automatically generated by the Mercury compiler.)
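By way of illustration, several of the link options above might be combined in a single invocation; the program, library, and directory names here are hypothetical:

```
mmc --make myprog \
    --mld /usr/local/mercury-libs --ml mylib \
    -L /usr/local/lib -l curses \
    --linkage static -o myprog
```

Here --mld and --ml locate and link a Mercury library, while -L and -l handle an ordinary C library.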
The shell scripts in the Mercury compilation environment will use the following environment variables if they are set. There should be little need to use these, because the default values will generally work fine.
MERCURY_DEFAULT_GRADE
Specifies the grade to use if no --grade option is specified.
MERCURY_STDLIB_DIR
Specifies the directory containing the Mercury standard library. Any --mercury-stdlib-dir options passed to the mmc, ml, mgnuc and c2init scripts override the setting of the MERCURY_STDLIB_DIR environment variable.
MERCURY_NONSHARED_LIB_DIR
Specifies the directory containing the non-shared versions of the system libraries, needed when compiling with -mno-abicalls. See the file README.IRIX-5 in the Mercury source distribution.
MERCURY_OPTIONS
Specifies a list of --runtime-flags options to be passed to ml and c2init.
The Mercury runtime accepts the following options.
-C size
Tells the runtime system that the size of the data cache is size kilobytes.
-D debugger
Enables execution tracing of the program: via the internal debugger if debugger is i, and via the external debugger if debugger is e. (The mdb script works by including -Di in MERCURY_OPTIONS.) The external debugger is not yet available.
-p
Disables profiling. This only has an effect if the executable was compiled in a profiling grade.
-P num
Tells the runtime system to use num threads for executing Mercury code, if the executable was compiled in a parallel grade.
-T time-method
Specifies the method used to measure time for time profiling. time-method must be one of:
r: real (elapsed) time
p: user plus system time
v: virtual (user) time
Currently, the -Tp and -Tv options don't work on Windows, so on Windows you must explicitly specify -Tr.
--heap-size size
Sets the size of the heap to size kilobytes.
--detstack-size size
Sets the size of the det stack to size kilobytes.
--nondetstack-size size
Sets the size of the nondet stack to size kilobytes.
--solutions-heap-size size
Sets the size of the solutions heap to size kilobytes.
--trail-size size
Sets the size of the trail to size kilobytes.
-i filename
--mdb-in filename
Read debugger input from filename.
-o filename
--mdb-out filename
Write debugger output to filename.
-e filename
--mdb-err filename
Write debugger error messages to filename.
-m filename
--mdb-tty filename
Redirect all three debugger I/O streams to filename.
--debug-threads
Output information to the standard error stream about the locking and unlocking occurring in each module which has been compiled with the C macro symbol MR_DEBUG_THREADS defined.
MERCURY_COMPILER
Specifies the executable to be used for the Mercury compiler.
MERCURY_MKINIT
Specifies the executable used to create the *_init.c file.
MERCURY_DEBUGGER_INIT
Specifies the name of the file containing default settings for the Mercury debugger.
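As an illustrative sketch, the runtime options above can be supplied for a single run through the MERCURY_OPTIONS environment variable; the program name and stack size here are hypothetical:

```
MERCURY_OPTIONS="--detstack-size 4096 -Tr" ./myprog
```

Since MERCURY_OPTIONS is read when the program starts, this changes runtime behaviour without recompiling or relinking.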
The Mercury compiler takes special advantage of certain extensions provided by GNU C to generate much more efficient code. We therefore recommend that you use GNU C for compiling Mercury programs. However, if for some reason you wish to use another compiler, it is possible to do so. Here's what you need to do.
1. Run the mercury_config script, specifying the different C compiler, e.g. mercury_config --output-prefix=/usr/local/mercury-cc --with-cc=cc.
2. Add the bin directory of the new configuration to the beginning of your PATH.
3. Use only grades that do not rely on GNU C extensions, i.e. grades whose base grade is none, hlc or hl (e.g. hlc.gc).
You can specify the grade in one of three ways: by setting the MERCURY_DEFAULT_GRADE environment variable, by adding a line GRADE=... to your Mmake file, or by using the --grade option to mmc. (You will also need to install those grades of the Mercury library, if you have not already done so.)
--no-static-ground-terms.
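For example, the three ways of specifying a grade look like this (the grade hlc.gc and module name are illustrative):

```
export MERCURY_DEFAULT_GRADE=hlc.gc    # environment variable
GRADE = hlc.gc                         # line in your Mmake file
mmc --grade hlc.gc myprog.m            # command-line option to mmc
```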
The Mercury foreign language interface allows pragma foreign_proc to specify multiple implementations (in different foreign programming languages) for a procedure.
If the compiler generates code for a procedure using a back-end for which there are multiple applicable foreign languages, it will choose the foreign language to use for each procedure according to a builtin ordering.
If the language specified in a foreign_proc is not available for a particular back-end, it will be ignored.
If there are no suitable foreign_proc clauses for a particular procedure but there are Mercury clauses, they will be used instead.
The builtin ordering of foreign languages is:
C
C#
IL
Managed C++
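As a sketch of this mechanism, a procedure can supply foreign_proc implementations in more than one language, and the compiler picks the one appropriate to the back-end; the predicate name here is illustrative:

```
:- pred double(int::in, int::out) is det.

:- pragma foreign_proc("C",
    double(X::in, Y::out),
    [will_not_call_mercury, promise_pure],
"
    Y = 2 * X;
").

:- pragma foreign_proc("C#",
    double(X::in, Y::out),
    [will_not_call_mercury, promise_pure],
"
    Y = 2 * X;
").
```

A C back-end uses the "C" clause; the IL back-end ignores it and uses the "C#" clause instead.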
--aditi
: Miscellaneous options
--aditi-only
: Output options
--aditi-user
: Miscellaneous options
--allow-stubs
: Language semantics options
--allow-undefined
: Link options
--asm-labels
: LLDS back-end compilation model options, Grades and grade components
--assume-gmake
: Auxiliary output options
--auto-comments
: Auxiliary output options
--binary-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--c-debug
: Target code compilation options
--c-include-directory
: Target code compilation options
--cc
: C compilers, Target code compilation options
--cflag
: Target code compilation options
--cflags
: Target code compilation options
--check-term
: Termination analysis options
--check-termination
: Termination analysis options
--checked-nondet-tailcalls
: Low-level (LLDS -> LLDS) optimization options
--chk-term
: Termination analysis options
--common-data
: Low-level (LLDS -> LLDS) optimization options
--common-goal
: High-level (HLDS -> HLDS) optimization options
--common-struct
: High-level (HLDS -> HLDS) optimization options
--compile-only
: Output options
--compile-to-c
: Target options
--config-file
: Build system options
--constraint-propagation
: High-level (HLDS -> HLDS) optimization options
--convert-to-mercury
: Output options
--debug
: Optional features compilation model options, Quick overview
--debug-det
: Verbosity options
--debug-determinism
: Verbosity options
--debug-liveness
: Verbosity options
--debug-make
: Verbosity options
--debug-modes
: Verbosity options
--debug-opt
: Verbosity options
--debug-opt-pred-id
: Verbosity options
--debug-pd
: Verbosity options
--debug-rl-gen
: Verbosity options
--debug-rl-opt
: Verbosity options
--debug-threads (runtime option)
: Environment
--debug-types
: Verbosity options
--deep-profiling
: Optional features compilation model options, Grades and grade components
--deforestation
: High-level (HLDS -> HLDS) optimization options
--deforestation-depth-limit
: High-level (HLDS -> HLDS) optimization options
--deforestation-size-threshold
: High-level (HLDS -> HLDS) optimization options
--deforestation-vars-threshold
: High-level (HLDS -> HLDS) optimization options
--delay-constructs
: High-level (HLDS -> HLDS) optimization options
--delay-death
: Auxiliary output options
--demangle
: Using mprof for time profiling
--dense-switch-req-density
: Medium-level (HLDS -> LLDS) optimization options
--dense-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--detect-rl-streams
: Aditi-RL optimization options
--detstack-size
: Running
--detstack-size (runtime option)
: Environment
--dotnet-library-version
: Target options
--dump-hlds
: Auxiliary output options
--dump-hlds-options
: Auxiliary output options
--dump-hlds-pred-id
: Auxiliary output options
--dump-mlds
: Auxiliary output options
--dump-rl
: Auxiliary output options
--dump-rl-bytecode
: Auxiliary output options
--eliminate-local-variables
: MLDS backend (MLDS -> MLDS) optimization options
--enable-term
: Termination analysis options
--enable-termination
: Termination analysis options
--errorcheck-only
: Output options
--excess-assign
: High-level (HLDS -> HLDS) optimization options
--extra-init-command
: Build system options
--extra-initialization-functions
: Link options
--extra-inits
: Link options
--fact-table-hash-percent-full
: Code generation options
--fact-table-max-array-size size
: Code generation options
--filenames-from-stdin
: Miscellaneous options
--find-all-recompilation-reasons
: Verbosity options
--follow-code
: High-level (HLDS -> HLDS) optimization options
--fully-strict
: Language semantics options
--garbage-collection
: Optional features compilation model options
--gc
: Optional features compilation model options, Grades and grade components
--gcc-global-registers
: LLDS back-end compilation model options, Grades and grade components
--gcc-non-local-gotos
: LLDS back-end compilation model options
--gcc-nonlocal-gotos
: Grades and grade components
--generate-bytecode
: Auxiliary output options
--generate-dependencies
: Output options
--generate-mmc-deps
: Output options
--generate-mmc-make-module-dependencies
: Output options
--generate-schemas
: Auxiliary output options
--generate-source-file-mapping
: Output options
--grade
: Grades and grade components
--halt-at-syntax-error
: Warning options
--halt-at-warn
: Warning options
--have-delay-slot
: Code generation target options
--heap-size (runtime option)
: Environment
--help
: Miscellaneous options, Invocation
--high-level-code
: Overall optimization options, MLDS back-end compilation model options, Grades and grade components, Using mprof for time profiling
--high-level-data
: MLDS back-end compilation model options
--higher-order-size-limit
: High-level (HLDS -> HLDS) optimization options
--il
: Grades and grade components
--il-only
: Target options
--infer-all
: Language semantics options
--infer-det
: Language semantics options
--infer-determinism
: Language semantics options
--infer-modes
: Language semantics options
--infer-types
: Language semantics options
--inhibit-accumulator-warnings
: Warning options
--inhibit-warnings
: Warning options
--init-file
: Link options
--init-file-directory
: Link options
--inline-alloc
: Target code compilation options
--inline-compound-threshold
: High-level (HLDS -> HLDS) optimization options
--inline-simple
: High-level (HLDS -> HLDS) optimization options
--inline-simple-threshold
: High-level (HLDS -> HLDS) optimization options
--inline-single-use
: High-level (HLDS -> HLDS) optimization options
--inline-vars-threshold
: High-level (HLDS -> HLDS) optimization options
--inlining
: High-level (HLDS -> HLDS) optimization options
--install-command
: Build system options
--install-prefix
: Build system options
--intermod-directory
: Build system options
--intermod-inline-simple-threshold
: High-level (HLDS -> HLDS) optimization options
--intermod-unused-args
: High-level (HLDS -> HLDS) optimization options
--intermodule-analysis
: Overall optimization options
--intermodule-optimization
: Overall optimization options, Using libraries, Building libraries
--introduce-accumulators
: High-level (HLDS -> HLDS) optimization options
--java
: Grades and grade components
--java-classpath
: Target code compilation options
--java-compiler
: Target code compilation options
--java-flag
: Target code compilation options
--java-flags
: Target code compilation options
--java-interpreter
: Target code compilation options
--java-object-file-extension
: Target code compilation options
--java-only
: Target options
--javac
: Target code compilation options
--keep-going
: Build system options
--ld-flag
: Link options
--ld-flags
: Link options
--ld-libflag
: Link options
--ld-libflags
: Link options
--libgrade
: Build system options
--library
: Link options
--library-directory
: Link options
--line-numbers
: Auxiliary output options
--link-object
: Link options
--linkage
: Link options
--llds-optimize
: Low-level (LLDS -> LLDS) optimization options, Auxiliary output options
--local-constraint-propagation
: High-level (HLDS -> HLDS) optimization options
--lookup-switch-req-density
: Medium-level (HLDS -> LLDS) optimization options
--lookup-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--loop-invariants
: High-level (HLDS -> HLDS) optimization options
--low-level-debug
: Code generation options
--make
: Build system options, Output options, Verbosity options, Warning options, Using Mmake
--make-int
: Output options, Using mmc, Filenames
--make-interface
: Output options, Filenames
--make-opt-int
: Output options, Using mmc
--make-optimization-interface
: Output options, Filenames
--make-priv-int
: Output options, Using mmc
--make-priv-interface
: Filenames
--make-private-interface
: Output options, Filenames
--make-short-int
: Output options, Using mmc, Filenames
--make-short-interface
: Output options, Filenames
--make-trans-opt
: Output options, Using mmc
--make-trans-opt-int
: Filenames
--make-transitive-optimization-interface
: Output options, Filenames
--mdb-err (runtime option)
: Environment
--mdb-in (runtime option)
: Environment
--mdb-out (runtime option)
: Environment
--mdb-tty (runtime option)
: Environment
--memory-profiling
: Optional features compilation model options, Grades and grade components
--mercury-config-dir
: Build system options
--mercury-configuration-directory
: Build system options
--mercury-library
: Link options, Using libraries
--mercury-library-directory
: Link options, Using libraries
--mercury-linkage
: Link options
--mercury-standard-library-directory
: Link options
--mercury-stdlib-dir
: Link options
--middle-rec
: Medium-level (HLDS -> LLDS) optimization options
--ml
: Link options, Using libraries
--mld
: Link options, Using libraries
--mlds-optimize
: MLDS backend (MLDS -> MLDS) optimization options
--no-
: Invocation overview
--no-ansi-c
: Target code compilation options
--no-asm-labels
: LLDS back-end compilation model options, Grades and grade components
--no-assume-gmake
: Auxiliary output options
--no-c-optimize
: Target code compilation options
--no-common-data
: Low-level (LLDS -> LLDS) optimization options
--no-common-goal
: High-level (HLDS -> HLDS) optimization options
--no-common-struct
: High-level (HLDS -> HLDS) optimization options
--no-delay-death
: Auxiliary output options
--no-demangle
: Link options, Using mprof for time profiling
--no-eliminate-local-variables
: MLDS backend (MLDS -> MLDS) optimization options
--no-emit-c-loops
: Output-level (LLDS -> C) optimization options
--no-follow-code
: High-level (HLDS -> HLDS) optimization options
--no-gcc-global-registers
: LLDS back-end compilation model options, Grades and grade components
--no-gcc-non-local-gotos
: LLDS back-end compilation model options
--no-gcc-nonlocal-gotos
: Grades and grade components
--no-high-level-code
: Grades and grade components
--no-infer-det
: Language semantics options
--no-infer-determinism
: Language semantics options
--no-inline-builtins
: High-level (HLDS -> HLDS) optimization options
--no-inline-simple
: High-level (HLDS -> HLDS) optimization options
--no-inline-single-use
: High-level (HLDS -> HLDS) optimization options
--no-inlining
: High-level (HLDS -> HLDS) optimization options
--no-line-numbers
: Auxiliary output options
--no-llds-optimize
: Low-level (LLDS -> LLDS) optimization options, Auxiliary output options
--no-main
: Link options
--no-mercury-standard-library-directory
: Link options
--no-mercury-stdlib-dir
: Link options
--no-middle-rec
: Medium-level (HLDS -> LLDS) optimization options
--no-mlds-optimize
: MLDS backend (MLDS -> MLDS) optimization options
--no-optimize-delay-slot
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-frames
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-fulljumps
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-initializations
: MLDS backend (MLDS -> MLDS) optimization options
--no-optimize-jumps
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-labels
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-peep
: Low-level (LLDS -> LLDS) optimization options
--no-optimize-tailcalls
: MLDS backend (MLDS -> MLDS) optimization options
--no-read-opt-files-transitively
: Overall optimization options
--no-reclaim-heap-on-failure
: Code generation options
--no-reclaim-heap-on-nondet-failure
: Code generation options
--no-reclaim-heap-on-semidet-failure
: Code generation options
--no-reorder-conj
: Language semantics options
--no-reorder-disj
: Language semantics options
--no-simple-neg
: Medium-level (HLDS -> LLDS) optimization options
--no-smart-indexing
: Medium-level (HLDS -> LLDS) optimization options
--no-static-ground-terms
: Medium-level (HLDS -> LLDS) optimization options
--no-strip
: Link options
--no-support-ms-clr
: Target options
--no-trad-passes
: Code generation options, Verbosity options
--no-type-layout
: Developer compilation model options
--no-use-readline
: Link options
--no-verbose-make
: Verbosity options
--no-warn-det-decls-too-lax
: Warning options
--no-warn-inferred-erroneous
: Warning options
--no-warn-missing-det-decls
: Warning options
--no-warn-missing-module-name
: Warning options
--no-warn-nothing-exported
: Warning options
--no-warn-simple-code
: Warning options
--no-warn-singleton-variables
: Warning options
--no-warn-smart-recompilation
: Warning options
--no-warn-stubs
: Warning options
--no-warn-target-code
: Warning options
--no-warn-undefined-options-variables
: Warning options
--no-warn-up-to-date
: Warning options
--no-warn-wrong-module-name
: Warning options
--nondetstack-size
: Running
--nondetstack-size (runtime option)
: Environment
--num-real-f-regs
: Code generation target options
--num-real-f-temps
: Code generation target options
--num-real-r-regs
: Code generation target options
--num-real-r-temps
: Code generation target options
--num-tag-bits
: Developer compilation model options
--opt-level
: Overall optimization options
--opt-space
: Overall optimization options
--optimization-level
: Overall optimization options
--optimize-constant-propagation
: High-level (HLDS -> HLDS) optimization options
--optimize-constructor-last-call
: High-level (HLDS -> HLDS) optimization options
--optimize-dead-procs
: High-level (HLDS -> HLDS) optimization options, Overall optimization options
--optimize-delay-slot
: Low-level (LLDS -> LLDS) optimization options
--optimize-duplicate-calls
: High-level (HLDS -> HLDS) optimization options
--optimize-dups
: Low-level (LLDS -> LLDS) optimization options
--optimize-frames
: Low-level (LLDS -> LLDS) optimization options
--optimize-fulljumps
: Low-level (LLDS -> LLDS) optimization options
--optimize-higher-order
: High-level (HLDS -> HLDS) optimization options
--optimize-initializations
: MLDS backend (MLDS -> MLDS) optimization options
--optimize-jumps
: Low-level (LLDS -> LLDS) optimization options
--optimize-labels
: Low-level (LLDS -> LLDS) optimization options
--optimize-peep
: Low-level (LLDS -> LLDS) optimization options
--optimize-reassign
: Low-level (LLDS -> LLDS) optimization options
--optimize-repeat
: Low-level (LLDS -> LLDS) optimization options
--optimize-rl
: Aditi-RL optimization options
--optimize-rl-cse
: Aditi-RL optimization options
--optimize-rl-index
: Aditi-RL optimization options
--optimize-rl-invariants
: Aditi-RL optimization options
--optimize-saved-vars
: High-level (HLDS -> HLDS) optimization options
--optimize-space
: Overall optimization options
--optimize-tailcalls
: MLDS backend (MLDS -> MLDS) optimization options
--optimize-unused-args
: High-level (HLDS -> HLDS) optimization options
--options-file
: Build system options
--options-search-directory
: Build system options
--output-compile-error-lines
: Verbosity options
--output-file
: Link options
--output-grade-string
: Output options
--output-link-command
: Output options
--output-shared-lib-link-command
: Output options
--pessimize-tailcalls
: Low-level (LLDS -> LLDS) optimization options
--pic
: Code generation options
--pic-reg
: LLDS back-end compilation model options
--pre-link-command
: Build system options
--pretty-print
: Output options
--procs-per-c-function
: Output-level (LLDS -> C) optimization options
--profiling
: Optional features compilation model options, Grades and grade components
--rebuild
: Build system options
--reclaim-heap-on-failure
: Code generation options
--reclaim-heap-on-nondet-failure
: Code generation options
--reclaim-heap-on-semidet-failure
: Code generation options
--reorder-conj
: Language semantics options
--reorder-disj
: Language semantics options
--reserve-tag
: Developer compilation model options
--reserved-addresses
: Developer compilation model options
--runtime-flags
: Link options
--runtime-library-directory
: Link options
--search-directory
: Build system options
--show-dependency-graph
: Auxiliary output options
--simple-neg
: Medium-level (HLDS -> LLDS) optimization options
--smart-indexing
: Medium-level (HLDS -> LLDS) optimization options
--smart-recompilation
: Auxiliary output options, Filenames
--solutions-heap-size (runtime option)
: Environment
--split-c-files
: Overall optimization options
--stack-trace-higher-order
: Auxiliary output options
--static-ground-terms
: Medium-level (HLDS -> LLDS) optimization options
--statistics
: Verbosity options
--string-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--support-rotor-clr
: Target options
--tag-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--tags
: Developer compilation model options
--target
: Grades and grade components
--target-code-only
: Output options
--target-debug
: Target code compilation options
--term-err-limit
: Termination analysis options
--term-path-limit
: Termination analysis options
--term-single-arg limit
: Termination analysis options
--termination-error-limit
: Termination analysis options
--termination-norm
: Termination analysis options
--termination-path-limit
: Termination analysis options
--termination-single-argument-analysis
: Termination analysis options
--time-profiling
: Optional features compilation model options
--trace-init-file
: Link options
--trace-level level
: Auxiliary output options
--trace-optimized
: Auxiliary output options
--trad-passes
: Code generation options, Verbosity options
--trail-size
: Environment
--trans-intermod-opt
: Overall optimization options, Building libraries
--transitive-intermodule-optimization
: Overall optimization options, Using mmc
--try-switch-size
: Medium-level (HLDS -> LLDS) optimization options
--type-inference-iteration-limit
: Language semantics options
--type-layout
: Developer compilation model options
--type-specialization
: High-level (HLDS -> HLDS) optimization options
--typecheck-only
: Output options
--unneeded-code
: High-level (HLDS -> HLDS) optimization options
--unneeded-code-copy-limit
: High-level (HLDS -> HLDS) optimization options
--use-grade-subdirs
: Build system options
--use-local-vars
: Low-level (LLDS -> LLDS) optimization options
--use-macro-for-redo-fail
: Output-level (LLDS -> C) optimization options
--use-opt-files
: Overall optimization options
--use-search-directories-for-intermod
: Build system options
--use-subdirs
: Build system options, Filenames
--use-trail
: Optional features compilation model options
--use-trans-opt-files
: Overall optimization options
--user-guided-type-specialization
: High-level (HLDS -> HLDS) optimization options
--verb-check-term
: Termination analysis options
--verb-chk-term
: Termination analysis options
--verbose
: Verbosity options
--verbose-check-termination
: Termination analysis options
--verbose-commands
: Verbosity options
--verbose-dump-mlds
: Auxiliary output options
--verbose-error-messages
: Verbosity options
--verbose-recompilation
: Verbosity options
--very-verbose
: Verbosity options
--warn-dead-procs
: Warning options
--warn-det-decls-too-lax
: Warning options
--warn-duplicate-calls
: Warning options
--warn-inferred-erroneous
: Warning options
--warn-interface-imports
: Warning options
--warn-missing-det-decls
: Warning options
--warn-missing-module-name
: Warning options
--warn-missing-opt-files
: Warning options
--warn-missing-trans-opt-files
: Warning options
--warn-non-stratification
: Warning options
--warn-non-tail-recursion
: Warning options
--warn-nothing-exported
: Warning options
--warn-simple-code
: Warning options
--warn-singleton-variables
: Warning options
--warn-smart-recompilation
: Warning options
--warn-stubs
: Warning options
--warn-unused-args
: Warning options
--warn-up-to-date
: Warning options
--warn-wrong-module-name
: Warning options
-?
: Miscellaneous options
-c
: Output options
-C
: Output options
-c
: Using mmc
-C (runtime option)
: Environment
-d
: Auxiliary output options
-D (runtime option)
: Environment
-e
: Output options
-E
: Verbosity options
-e (runtime option)
: Environment
-fpic
: LLDS back-end compilation model options
-h
: Miscellaneous options
-H
: MLDS back-end compilation model options
-I
: Build system options
-i
: Output options
-i (runtime option)
: Environment
-k
: Build system options
-l
: Link options
-L
: Link options
-m
: Build system options
-M
: Output options
-m (runtime option)
: Environment
-N
: Verbosity options
-n-
: Auxiliary output options
-o
: Link options
-O
: Overall optimization options
-o
: Using mmc
-o (runtime option)
: Environment
-P
: Output options
-P (runtime option)
: Environment
-p (runtime option)
: Environment
-R
: Link options
-r
: Build system options
-s
: Grades and grade components
-S
: Verbosity options
-T
: Verbosity options
-T (runtime option)
: Environment
-V
: Verbosity options
-v
: Verbosity options
-w
: Warning options
/debug
: Target code compilation options
alias (mdb command)
: Parameter commands
all_class_decls (mdb command)
: Developer commands
all_regs (mdb command)
: Developer commands
all_type_ctors (mdb command)
: Developer commands
AR
: Building libraries
ARFLAGS
: Building libraries
break (mdb command)
: Breakpoint commands
browse (mdb command)
: Browsing commands
c2init
: Using mmc
C2INITARGS
: Using Mmake
C2INITFLAGS
: Using Mmake
cc_query (mdb command)
: Interactive query commands
CFLAGS
: Using Mmake
class_decl (mdb command)
: Developer commands
clear_histogram (mdb command)
: Experimental commands
consumer (mdb command)
: Developer commands
context (mdb command)
: Parameter commands
continue (mdb command)
: Forward movement commands
current (mdb command)
: Browsing commands
cut_stack (mdb command)
: Developer commands
debug_vars (mdb command)
: Developer commands
delete (mdb command)
: Breakpoint commands
depth (mdb command)
: Browsing commands
disable (mdb command)
: Breakpoint commands
document (mdb command)
: Help commands
document_category (mdb command)
: Help commands
down (mdb command)
: Browsing commands
echo (mdb command)
: Parameter commands
enable (mdb command)
: Breakpoint commands
exception (mdb command)
: Forward movement commands
EXTRA_C2INITARGS
: Using Mmake
EXTRA_C2INITFLAGS
: Using Mmake
EXTRA_CFLAGS
: Using Mmake
EXTRA_GRADEFLAGS
: Using Mmake
EXTRA_JAVACFLAGS
: Using Mmake
EXTRA_LD_LIBFLAGS
: Using Mmake
EXTRA_LDFLAGS
: Using Mmake
EXTRA_LIB_DIRS
: Using libraries, Using Mmake
EXTRA_LIBRARIES
: Using libraries, Using Mmake
EXTRA_MC_MAKE_FLAGS
: Using Mmake
EXTRA_MCFLAGS
: Using Mmake
EXTRA_MGNUCFLAGS
: Using Mmake
EXTRA_MLFLAGS
: Using Mmake
EXTRA_MLLIBS
: Using Mmake
EXTRA_MLOBJS
: Using Mmake
EXTRA_MS_CLFLAGS
: Using Mmake
finish (mdb command)
: Forward movement commands
flag (mdb command)
: Developer commands
format (mdb command)
: Browsing commands
forward (mdb command)
: Forward movement commands
gen_stack (mdb command)
: Developer commands
goto (mdb command)
: Forward movement commands
GRADEFLAGS
: Compilation model options, Using Mmake
help (mdb command)
: Help commands
histogram_all (mdb command)
: Experimental commands
histogram_exp (mdb command)
: Experimental commands
INSTALL
: Installing libraries, Using Mmake
INSTALL_MKDIR
: Installing libraries, Using Mmake
INSTALL_PREFIX
: Installing libraries, Using Mmake
io_query (mdb command)
: Interactive query commands
JAVACFLAGS
: Using Mmake
label_stats (mdb command)
: Developer commands
LD_BIND_NOW
: Profiling and shared libraries
LD_LIBFLAGS
: Using Mmake
LDFLAGS
: Using Mmake
level (mdb command)
: Browsing commands
LIBGRADES
: Installing libraries, Using Mmake
lines (mdb command)
: Browsing commands
LINKAGE
: Using Mmake
MAIN_TARGET
: Using Mmake
make --- see Mmake
: Using Mmake
maxdepth (mdb command)
: Forward movement commands
MC
: Using Mmake
MC_BUILD_FILES
: Using Mmake
MC_MAKE_FLAGS
: Using Mmake
MCFLAGS
: Compilation model options, Using Mmake
mdb
: Quick overview, Debugging
mdprof
: Using mdprof, Creating profiles, Building profiled applications, Profiling
Mercury
subdirectory: Build system options
MERCURY_COMPILER
: Environment
MERCURY_DEBUGGER_INIT
: Environment, Mercury debugger invocation
MERCURY_DEFAULT_GRADE
: C compilers, Environment, Grades and grade components
MERCURY_LINKAGE
: Using Mmake
MERCURY_MAIN_MODULES
: Using Mmake
MERCURY_MKINIT
: Environment
MERCURY_NONSHARED_LIB_DIR
: Environment
MERCURY_OPTIONS
: Environment, Running
MERCURY_STDLIB_DIR
: Environment
MGNUC
: Using Mmake
mgnuc
: Using mmc
MGNUCFLAGS
: Compilation model options, Using Mmake
mindepth (mdb command)
: Forward movement commands
ML
: Building libraries, Using Mmake
ml
: Using mmc
MLFLAGS
: Compilation model options, Building libraries, Using Mmake
MLLIBS
: Building libraries, Using Mmake
MLOBJS
: Building libraries, Using Mmake
MLPICOBJS
: Building libraries
mm_stacks (mdb command)
: Developer commands
mmake
: Using Mmake
mmc
: Using mmc
mmc_options (mdb command)
: Parameter commands
modules (mdb command)
: Breakpoint commands
mprof
: Profiling and shared libraries, Using mprof for memory profiling, Using mprof for time profiling, Creating profiles, Building profiled applications, Profiling
MS_CL_NOASM
: Using Mmake
MS_CLFLAGS
: Using Mmake
next (mdb command)
: Forward movement commands
nondet_stack (mdb command)
: Developer commands
Optimizing code size
: Overall optimization options
Optimizing space
: Overall optimization options
pneg_stack (mdb command)
: Developer commands
print (mdb command)
: Browsing commands
print_optionals (mdb command)
: Developer commands
printlevel (mdb command)
: Parameter commands
proc_stats (mdb command)
: Developer commands
procedures (mdb command)
: Breakpoint commands
query (mdb command)
: Interactive query commands
quit (mdb command)
: Miscellaneous commands
RANLIB
: Building libraries
RANLIBFLAGS
: Building libraries
register (mdb command)
: Breakpoint commands
retry (mdb command)
: Backward movement commands
return (mdb command)
: Forward movement commands
save (mdb command)
: Miscellaneous commands
scroll (mdb command)
: Parameter commands
set (mdb command)
: Browsing commands
size (mdb command)
: Browsing commands
source (mdb command)
: Miscellaneous commands
stack (mdb command)
: Browsing commands
stack_regs (mdb command)
: Developer commands
step (mdb command)
: Forward movement commands
subgoal (mdb command)
: Developer commands
table (mdb command)
: Developer commands
table_io (mdb command)
: I/O tabling commands
type_ctor (mdb command)
: Developer commands
unalias (mdb command)
: Parameter commands
unhide_events (mdb command)
: Developer commands
up (mdb command)
: Browsing commands
var_name_stats (mdb command)
: Developer commands
vars (mdb command)
: Browsing commands
view (mdb command)
: Browsing commands
width (mdb command)
: Browsing commands
We might eventually add support for ordinary "Make" programs, but currently only GNU Make is supported.