Installing HPCToolkit with Spack
================================
1 Introduction
2 Prerequisites
3 Spack Notation
4 Clone Spack and HPCToolkit
5 Config.yaml
6 Modules.yaml
7 Packages.yaml
7.1 External Packages
7.2 Micro-Architecture Targets
7.3 Require, Reuse and Fresh
8 Bootstrapping Clingo
9 Compilers and compilers.yaml
10 Python
11 Spack Install
12 Manual Install
13 Advanced Options
13.1 CUDA
13.2 Level Zero
13.3 ROCM
13.4 OpenCL
13.5 MPI
13.6 PAPI vs Perfmon
13.7 Python
14 Platform Specific Notes
14.1 Cray
15 HPCToolkit GUI Interface (Hpcviewer)
15.1 Spack Install
15.2 Manual Install
16 Building a New Compiler
16.1 Using the New Compiler
16.2 Bootstrapping Environment Modules
17 Spack Mirrors
18 Common Problems
18.1 Unable to fetch tar file
18.2 New releases break the build
18.3 Failure to load modules
1 Introduction
==============
These notes describe how to build and install HPCToolkit and hpcviewer
and their prerequisites with Spack. HPCToolkit proper (hpcrun,
hpcstruct and hpcprof) is used to measure and analyze an application's
performance and then produce a database for hpcviewer. HPCToolkit is
supported on the following platforms. IBM Blue Gene is no longer
supported.
1. Linux (64-bit) on x86_64, little-endian powerpc (power8 and 9) and
ARM (aarch64). Big endian powerpc is no longer supported.
2. Cray on x86_64 and Compute Node Linux.
We provide binary distributions for hpcviewer and hpctraceviewer on
Linux (x86_64, ppc64le and aarch64), Windows (x86_64) and macOS
(x86_64 and Apple M1/M2). HPCToolkit databases are platform-independent
and
it is common to run hpcrun on one machine and then view the results on
another machine.
We build HPCToolkit and its prerequisite libraries from source.
HPCToolkit has some 20-25 base prerequisites (more for cuda or rocm) and
we now use spack to build them. It is possible to use spack to install
all of hpctoolkit or build just the prerequisites and then build
hpctoolkit with the traditional 'configure ; make ; make install' method
from autotools. Developers will probably want to run 'configure' and
'make' manually, but both methods are supported.
These notes are written mostly from the view of using spack to build
hpctoolkit and its dependencies. If you are a more experienced spack
user, especially if you want to use spack to build hpctoolkit plus
several other packages, then you will want to adapt these directions to
your own needs.
Spack documentation is available at:
<https://spack.readthedocs.io/en/latest/index.html>
The current status of using Spack for HPCToolkit is at:
<http://hpctoolkit.org/spack-issues.html>
Last revised: July 11, 2023.
2 Prerequisites
===============
Building HPCToolkit requires the following prerequisites. A quick
version check for these tools is shown after the list.
1. The GNU gcc, g++ and gfortran compilers version 8.x or later with
support for C++17. On systems with older compilers, you can use
spack to build a later version of gcc.
2. GNU glibc version 2.16 or later. Note: Red Hat 6.x uses glibc 2.12
which is too old.
3. Basic build tools: make, ld, ar, objcopy, nm, etc, and shell
utilities: bash, sed, awk, grep, etc. Most Linux systems have
these tools, or else you couldn't compile anything. If you are
building inside a container, you may need to add them.
4. Cmake version 3.14 or later, perl version 5.x, and python version
3.8 or later. On systems that are missing these tools or have
versions that are too old, you can use spack to build a later
version.
5. Git and curl for downloading files.
6. (optional) Environment (TCL) or LUA (Lmod) modules, if you want to
make HPCToolkit available as a module. Again, spack can install
these packages if they are missing from your system.
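You can check what is already installed by asking each tool for its
version; program names may differ on your system (eg, 'python' vs
'python3').
gcc --version
g++ --version
cmake --version
perl --version
python3 --version
git --version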
Hpcviewer and hpctraceviewer require Java 11 or later. Spack can
install Java, if needed. On Linux, the viewers also require GTK+
version 3.20 or later.
3 Spack Notation
================
Spack uses a special notation for specifying the version, variants,
compilers and dependencies when describing how to build a package. This
combination of version, variants, etc is called a 'spec' and is used
both on the command line and in config files.
1. '@' specifies the package version. 'spack info <package>' shows
the available versions and variants for a package. In most cases,
spaces are optional between elements of a spec. For example:
boost@1.77.0 hpctoolkit @develop papi @=6.0.0 "libiberty@2.40"
Note: 'foo@2.1' includes all versions beginning with '2.1',
including '2.1', '2.1.0', '2.1.4', '2.1.9.9.9', '2.1.stable', etc.
If you want version exactly '2.1', then use the notation 'foo@=2.1'
to differentiate '2.1' from '2.1.*'.
Note: dotted version numbers with exactly two fields where the second
field ends in 0 ('@1.10', '@2.40', etc) should be quoted so that yaml
does not treat a version such as '2.40' as the floating-point number
2.4.
2. '+', '-', '~' specify boolean (on/off) variants. Note: '-' (dash)
and '~' (tilde) both mean 'off'. Use dash after a space and tilde
after a non-space. For example:
elfutils+bzip2~nls elfutils +bzip2 -nls elfutils@0.186 +bzip2~nls
3. 'name=value' specifies a non-boolean variant, for example:
dyninst+openmp build_type=RelWithDebInfo xerces-c@3.2.2 transcoder=iconv
4. '%' specifies the build compiler and its version, for example:
hpctoolkit@develop %gcc@8.5.0
5. 'cflags', 'cxxflags', 'fflags', 'cppflags', 'ldflags' and 'ldlibs'
are special name/value variants for compiler flags. These are
normally not needed, but if you do need to add a flag to the build,
then one example might be:
amg2013 cflags='-O2 -mavx512pf'
6. '^' represents a dependency spec. The spec for a dependency
package is a full spec and may include its own version, variants,
etc. For example:
hpctoolkit@develop ^dyninst@12.1.0+openmp
7. 'arch', 'platform', 'os' and 'target' are special options for the
system architecture and machine type. Platform is normally
'linux', or else 'cray' (or even 'darwin'). OS (or
'operating_system') is the Linux distribution, 'rhel8', 'sles15',
etc, and target is the machine type, 'x86_64', 'ppc64le', etc.
Arch is a triple of platform, os and target separated by dashes,
for example 'linux-rhel8-x86_64'.
Normally, a system has only one arch type and you don't need to
specify this. However, for systems with separate front and
back-end types, the default is the back end. For example, if you
wanted to build for the front end on Cray, then you could use
something like this.
python@3.7.4 arch=cray-sles15-x86_64 boost os=fe
Now that spack has implemented microarchitecture targets (haswell,
ivybridge, etc), you can use 'target' to build for a generic x86_64
or a specific CPU type. For example:
amg2013 target=x86_64 lulesh target=ivybridge
You can use 'spack arch' to display the generic, top-level families
and the micro-arch targets.
spack arch --known-targets
The following command gives a summary of spack spec syntax.
spack help --spec
When writing a spec (for 'spack spec', 'spack install', etc), spack will fully
resolve all possible choices for the package and all of its dependencies
and create a unique hash value for that exact configuration. This
process is called 'concretization.' To see how spack would concretize a
spec, use 'spack spec'.
spack spec hpctoolkit@develop ^elfutils@0.187 ^boost@1.77.0
<https://spack.readthedocs.io/en/latest/basic_usage.html#specs-dependencies>
4 Clone Spack and HPCToolkit
============================
Spack is available via git clone from GitHub. This includes the core
spack machinery and recipes for building over 7,000 packages (and
growing). You should also clone HPCToolkit for the 'packages.yaml' file
which is used to configure the spack build. Note: spack is on GitHub,
but hpctoolkit has moved to GitLab.
git clone https://github.com/spack/spack.git
git clone https://gitlab.com/hpctoolkit/hpctoolkit.git
After cloning, add the 'spack/bin' directory to your PATH, or else
source the spack 'setup-env' script.
(bash)  . /path/to/spack/share/spack/setup-env.sh
(csh)   setenv SPACK_ROOT /path/to/spack/root
        source $SPACK_ROOT/share/spack/setup-env.csh
It suffices to add 'spack/bin' to your PATH (or even symlink the spack
launch script). Sourcing the 'setup-env' script adds extra support for
modules built by spack.
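For example, in bash you could add the launch script to your PATH like
this (adjust the path for your clone):
export PATH=/path/to/spack/bin:$PATH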
5 Config.yaml
=============
'config.yaml' is the top-level spack config file. This specifies the
directory layout for installed files and the top-level spack parameters.
By default, spack installs packages inside the spack repository at
'spack/opt/spack'. To use another location, set the 'root' field under
'install_tree' in 'config.yaml'. Normally, you will want to set this.
config:
  install_tree:
    root: /path/to/top-level/install/directory
There are a few other fields that you may want to set for your local
system. These are all in 'config.yaml'; a combined example follows the
list.
1. 'build_stage' - the location where spack builds packages (default
is in '/tmp').
2. 'source_cache' - where spack stores downloaded source tar files.
3. 'connect_timeout' - some download sites, especially sourceforge are
often slow to connect. If you find that connections are timing
out, try increasing this time to 30 or 60 seconds (default is 10
seconds).
4. 'url_fetch_method' - by default, spack uses a python library
(urllib) to fetch source files. If you have trouble downloading
files, try changing this to 'curl'.
5. 'build_jobs' - by default, spack uses all available hardware
threads for parallel make, up to a limit of 16. If you want to use
a different number, then set this.
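For example, a 'config.yaml' fragment that sets all of these fields
might look like the following (the values are illustrative, not
recommendations):
config:
  install_tree:
    root: /path/to/top-level/install/directory
  build_stage:
  - /tmp/$user/spack-stage
  source_cache: /path/to/source/cache
  connect_timeout: 30
  url_fetch_method: curl
  build_jobs: 8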
The default 'config.yaml' file is in the spack repository at
'spack/etc/spack/defaults'. The simplest solution is to copy this file
one directory up and then edit the copy (don't edit the default file
directly).
cd spack/etc/spack
cp defaults/config.yaml .
vi config.yaml
Alternatively, you could put this file in a separate directory, outside
of the spack repository and then use '-C/--config-scope dir' on the
spack command line. (The '-C' option goes before the spack command
name.) This is useful if you maintain multiple config files for
different machines.
spack -C dir install ...
Note: if you put 'config.yaml' in 'spack/etc/spack', then it will apply
to every spack command for that repository (and you won't forget).
Putting it in a separate directory is more flexible because you can
support multiple configurations from the same repository. But then you
must use '-C dir' with every spack command or else you will get
inconsistent results.
You can view the current configuration and see where each entry comes
from with 'spack config'.
spack [-C dir] config get config
spack [-C dir] config blame config
See the spack docs on 'Configuration Files' and 'Basic Settings'.
<https://spack.readthedocs.io/en/latest/configuration.html>
<https://spack.readthedocs.io/en/latest/config_yaml.html>
6 Modules.yaml
==============
Spack supports creating module files, but does not install them by
default. If you want to install module files, then you need to edit
'modules.yaml' to specify which type of modules to use (TCL or Lmod) and
the install path.
modules:
  default:
    roots:
      # normally, need only one of these
      tcl: /path/to/top-level/tcl-module/directory
      lmod: /path/to/top-level/lmod-module/directory
    enable:
    - tcl (or lmod)
For hpctoolkit, you should also turn off autoload for dependencies.
By default, autoload loads the modules for hpctoolkit's dependencies,
but hpctoolkit does not need this, and loading them may interfere with
an application's dependencies. Do this for both tcl and lmod modules.
modules:
  default:
    tcl:
      hpctoolkit:
        autoload: none
      all:
        autoload: direct
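After editing 'modules.yaml', you can regenerate the module files for
packages that are already installed. For example, for tcl modules
(substitute 'lmod' for 'tcl' if you use Lmod):
spack module tcl refresh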
7 Packages.yaml
===============
The 'packages.yaml' file specifies the versions and variants for the
packages that spack installs and serves as a common reference point for
HPCToolkit's prerequisites. This file also specifies the paths or
modules for system build tools (cmake, python, etc) to avoid rebuilding
them. Put this file in the same directory as 'config.yaml'. A sample
'packages.yaml' file is available in the 'spack' directory of the
hpctoolkit repository.
There are two main sections to 'packages.yaml'. The first specifies
the versions and variants for hpctoolkit's prereqs. By default, spack
will choose the latest version of each package (plus any constraints
from hpctoolkit's 'package.py' file). In most cases, this will work,
but not always. If you need to specify a different version or variant,
then set this in 'packages.yaml'. For example:
packages:
  elfutils:
    version: [0.189]
    variants: ~nls
Note: the versions and variants specified in hpctoolkit's 'package.py'
file are hard constraints and should not be changed. Variants in
'packages.yaml' are preferences that may be modified for your local
system. (But don't report a bug until you have first tried the versions
from 'packages.yaml' that we supply.)
7.1 External Packages
---------------------
The other sections in 'packages.yaml' specify paths or modules for other
packages and system build tools. Building hpctoolkit's prerequisites
requires cmake 3.14 or later, perl 5.x and python 3.8 or later. There
are three ways to satisfy these requirements: a system installed version
(eg, /usr), a pre-built module or build from scratch.
By default, spack will rebuild these from scratch, even if your local
version is perfectly fine. If you already have an installed version and
prefer to use that instead, then you can specify this in
'packages.yaml'.
The easiest way to use a pre-built package is to let spack find the
package itself. Make sure the program is on your PATH and run 'spack
external'. For example, to search for 'cmake', use:
spack external find cmake
This does not work for every spack package, but it does work with
'cmake', 'perl' and 'python'. Note: spack puts these entries in
'packages.yaml' in the '.spack' subdirectory of your home directory.
You can also add these entries manually to 'packages.yaml'. For
example, this entry says that cmake 3.7.2 is available from module
'CMake/3.7.2'. 'buildable: False' is optional and means that spack must
find a matching external spec or else fail the build.
cmake:
  externals:
  - spec: cmake@3.7.2
    modules:
    - CMake/3.7.2
  buildable: False
This example says that python2 and python3 are both available in
'/usr/bin'. Note that the 'prefix' entry is the parent directory of
'bin', not the bin directory itself.
python:
  externals:
  - spec: python@2.7.18
    prefix: /usr
  - spec: python@3.6.8
    prefix: /usr
Note: as a special rule for python, use package name 'python', even
though the program name is python2 or python3.
Warning: It is OK to use spack externals for build utilities that
exist on your system (cmake, perl, python). However, we strongly
recommend that you rebuild all prereq packages that link code into
hpctoolkit (dyninst, elfutils, etc).
7.2 Micro-Architecture Targets
------------------------------
Spack implements a hierarchy of micro-architecture targets, where
'target' is a specific architecture (eg, haswell, ivybridge, etc)
instead of a generic family (x86_64, ppc64le or aarch64). This allows
the compiler to optimize code for the specific target.
You will notice this choice in two main places: the 'spack spec' and
the path for the install directory. For example, 'linux-rhel7-x86_64'
might become 'linux-rhel7-broadwell'. You can use 'spack arch' to see
the list of generic families and micro-architecture targets.
spack arch --known-targets
If you prefer a generic install, you can use the 'target' option to
specify a generic family (x86_64, ppc64le or aarch64) instead of a
micro-architecture target. This would be useful for a shared install
that needs to work across multiple machines with different micro-arch
types. For example:
spack install hpctoolkit ... target=x86_64
You can also specify preferences for 'target', 'compilers' and
'providers' in the 'all:' section of 'packages.yaml'. Note: these are
only preferences, they can be overridden on the command line.
packages:
  all:
    target: [x86_64]
    compiler: [gcc@9.3.0]
    providers:
      mpi: [openmpi]
See the spack docs on 'Build Customization' and 'Specs and
Dependencies'.
<https://spack.readthedocs.io/en/latest/build_settings.html>
<https://spack.readthedocs.io/en/latest/basic_usage.html#specs-dependencies>
7.3 Require, Reuse and Fresh
----------------------------
It is important to understand that specifications in 'packages.yaml' are
only preferences, not requirements. There are other choices that spack
ranks higher. In particular, spack will prefer to reuse an existing
package that doesn't conform to 'packages.yaml' rather than rebuild a
newer version.
For example, suppose you previously installed hpctoolkit with dyninst
12.1.0. Then, some months later, you update your spack repo and want to
install a new hpctoolkit with dyninst 12.3.0. By default, spack will
prefer to reuse the old 12.1.0 rather than rebuild the new version.
The solution is to use 'require:' to force spack to build the new
version.
packages:
  dyninst:
    require: "@12.3.0"
Note:
1. The value for 'require:' is a full spec (so include '@' for
version) and supersedes both version and variants.
2. The value for 'require:' should be a singleton spec (not a list)
and should be quoted.
By default, spack install uses '--reuse' which prefers reusing an
already installed package. You can change this with '--fresh' which
prefers to rebuild the latest version of a package. But '--reuse' and
'--fresh' apply to all package versions. The advantage of 'require:' is
that you can selectively choose the version and variants on a package by
package basis.
There are two extensions to 'require:' that are sometimes useful.
'any_of' requires one or more from a list of specs, and 'one_of'
requires exactly one from a list of specs. For example,
packages:
  boost:
    require:
    - one_of: ["@1.75.0", "@1.77.0"]
  elfutils:
    require:
    - any_of: ["+bzip2", "+xz"]
You can require the target, compiler or providers in 'packages.yaml' as
follows. Recall that the field for 'require:' is a spec in quotes.
packages:
  all:
    require: "%gcc@9.3.0 target=x86_64"
  mpi:
    require: "mpich@4.0"
<https://spack.readthedocs.io/en/latest/build_settings.html#package-requirements>
8 Bootstrapping Clingo
======================
The 'concretizer' is the part of spack that converts a partial spec into
a full spec with values for the version and variants of every package in
the spec plus all dependencies. The new concretizer for spack (clingo)
is a third-party python library for solving answer-set logic problems
(eg, satisfiability). Normally, this only needs to be set up once per
machine, the first time you run spack.
The easiest way to install clingo is to use spack's pre-built
libraries. These are available for Linux (x86_64, ppc64le, aarch64) and
macOS/Darwin (x86_64) for python 3.7 or later. The macOS version also
requires macOS 10.13 or later and the Xcode developer package (for
python and other programs).
By default, spack will automatically install (bootstrap) clingo the
first time you run a command that uses it ('spec' or 'solve'). However,
if this fails or you want to verify the steps yourself, then follow
these steps.
In 'config.yaml', set 'concretizer' to 'clingo'.
config:
  concretizer: clingo
Spack needs at least one compiler configured (see below). If this is
your first time running spack on this machine, then use 'compiler find'
to detect a compiler. Finally, use 'spack solve' to trigger
bootstrapping.
spack compiler list (to display known compilers)
spack compiler find (to add a compiler, if needed)
spack solve zlib
==> Bootstrapping clingo from pre-built binaries
...
zlib@1.2.11%gcc@8.4.1+optimize+pic+shared arch=linux-rhel8-zen
Spack stores the clingo bootstrap files in '~/.spack/bootstrap'. You
can check on the status of these files or clean (reset) them with the
'find' or 'clean' commands.
spack find -b (displays the status of the bootstrap files)
spack clean -b (erases the current bootstrap files)
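Depending on your spack version, there is also a dedicated 'spack
bootstrap' command. For example, the following reports whether the
bootstrap dependencies are already in place (see 'spack bootstrap
--help' for the subcommands your version supports).
spack bootstrap status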
If the binary bootstrap fails, then try the 'solve' step with debugging
turned on.
spack -d solve zlib
If the binary bootstrap fails or if your system is not supported, then
you will need to let spack build clingo from source. Reset
'spack-install' to true and rerun 'spack solve zlib'. This requires a
compiler with support for C++14 and takes maybe 30-45 minutes to install
all the packages.
<https://spack.readthedocs.io/en/latest/getting_started.html#bootstrapping-clingo>
<https://github.com/alalazo/spack-bootstrap-mirrors#supported-platforms>
9 Compilers and compilers.yaml
==============================
Building HPCToolkit requires GNU gcc/g++ version 8.x or later with C++17
support. By default, spack uses the latest available version of gcc,
but you can specify a different compiler, if one is available.
Spack uses a separate file, 'compilers.yaml' to store information
about available compilers. This file is normally in your home directory
at '~/.spack/platform' where 'platform' is normally 'linux' (or else
'cray').
The first time you use spack, or after adding a new compiler, you
should run 'spack compiler find' to have spack search your system for
available compilers. If a compiler is provided as a module, then you
should load the module before running 'find'. Normally, you only need
to run 'find' once, unless you want to add or delete a compiler. You
can also run 'spack compiler list' and 'spack compiler info' to see what
compilers spack knows about.
For example, on one power8 system running Red Hat 7.3, /usr/bin/gcc is
version 4.8.5, but gcc 8.3.0 is available as module 'GCC/8.3.0'.
module load GCC/8.3.0
spack compiler find
==> Added 2 new compilers to /home/krentel/.spack/linux/compilers.yaml
gcc@8.3.0 gcc@4.8.5
==> Compilers are defined in the following files:
/home/krentel/.spack/linux/compilers.yaml
spack compiler list
==> Available compilers
-- gcc rhel7-ppc64le --------------------------------------------
gcc@8.3.0 gcc@4.8.5
spack compiler info gcc@8.3
gcc@8.3.0:
paths:
cc = /opt/apps/software/Core/GCCcore/8.3.0/bin/gcc
cxx = /opt/apps/software/Core/GCCcore/8.3.0/bin/g++
f77 = /opt/apps/software/Core/GCCcore/8.3.0/bin/gfortran
fc = /opt/apps/software/Core/GCCcore/8.3.0/bin/gfortran
modules = ['GCC/8.3.0']
operating system = rhel7
Note: for compilers from modules, spack does not fill in the 'modules:'
field in the 'compilers.yaml' file. You need to do this manually. In
the above example, after running 'find', I edited 'compilers.yaml' to
add 'GCC/8.3.0' to the 'modules:' field as below. This is important to
how spack manipulates the build environment.
- compiler:
    modules: [GCC/8.3.0]
    operating_system: rhel7
    spec: gcc@8.3.0
    ...
Spack uses '%' syntax to specify the build compiler and '@' syntax to
specify the version. For example, suppose you had gcc versions 8.5.0,
9.3.0 and 10.2.0 available and you wanted to use 9.3.0. You could write
this as:
spack install package %gcc@9.3.0
You can also set the choice of compiler in the 'all:' section of
'packages.yaml'.
packages:
  all:
    compiler: [gcc@9.3.0]
See the spack docs on 'Compiler Configuration'.
<https://spack.readthedocs.io/en/latest/getting_started.html#compiler-configuration>
10 Python
=========
Spack uses Python for two things. First, to run the Spack scripts
written in Python, and second, to use as a dependency for other spack
packages. These do not have to be the same python version or install.
Currently, Spack requires at least Python 3.7 just to run at all.
But 3.7 is deprecated and support for it will be removed in a few
months, so the best plan is to upgrade to Python 3.8 or later now.
If python 3.8 or later is not available on your system, then your
options to install it are: (1) load a module for a later version, (2)
use yum or apt to install a later version, (3) ask your sysadmin to
install a later version, or (4) as a last resort, compile a later
version from source.
If a later python is available on your system but not first in your
PATH or under a different name, you can set the environment variable
'SPACK_PYTHON' to the python3 binary. For example, suppose
'/usr/bin/python3' is too old, but python 3.8 is available as
'/usr/bin/python3.8', then you could use:
export SPACK_PYTHON=/usr/bin/python3.8
If set, 'SPACK_PYTHON' is the path to the Python interpreter used to run
Spack.
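To confirm which interpreter spack is actually running under, one
option is a one-liner through 'spack python', which accepts '-c' like
the regular python interpreter.
spack python -c 'import sys; print(sys.executable)'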
11 Spack Install
================
First, set up your 'config.yaml', 'modules.yaml', 'packages.yaml' and
'compilers.yaml' files as above and edit them for your system. You can
see how spack will build hpctoolkit with 'spack spec'.
spack spec hpctoolkit
Then, the "one button" method uses spack to install everything.
spack install hpctoolkit
Tip: Spack fetch is somewhat fragile and sometimes has transient
problems downloading files. You can use 'spack fetch -D' to pre-fetch
all of the tar files and resolve any downloading problems before
starting the full install.
spack fetch -D hpctoolkit
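After the install finishes, you can display the install prefix or add
hpctoolkit to your environment with standard spack commands. For
example:
spack find -p hpctoolkit
spack load hpctoolkit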
12 Manual Install
=================
See README.md for up-to-date instructions for how to do this.
13 Advanced Options
===================
13.1 CUDA
---------
Beginning with release 2020.03.01, HPCToolkit supports profiling CUDA
binaries (NVIDIA only). For best results, use CUDA 10.1 or later and
Dyninst 10.1 or later. Note: in addition to a CUDA
installation, you also need the CUDA system drivers installed. This
normally requires root access and is outside the scope of spack.
For a spack install with CUDA, use the '+cuda' variant.
spack install hpctoolkit +cuda
For a manual install, either download and install CUDA or use an
existing module, and then use the '--with-cuda' configure option.
configure \
    --prefix=/path/to/hpctoolkit/install/prefix \
    --with-spack=/path/to/spack/install/dir \
    --with-cuda=/path/to/cuda/install/prefix \
    ...
If you installed CUDA with spack in the same directory as the rest of
the prerequisites, then the '--with-spack' option should find it
automatically (but check the summary at the end of the configure
output). If you are using CUDA from a separate system module, then you
will need the '--with-cuda' option.
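If your system already has a CUDA installation that you want spack to
use rather than rebuild, you can describe it as an external in
'packages.yaml'. A sketch follows; the version and prefix are
illustrative and should match your installation.
packages:
  cuda:
    externals:
    - spec: cuda@11.8.0
      prefix: /usr/local/cuda-11.8
    buildable: False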
13.2 Level Zero
---------------
HPCToolkit supports profiling Intel GPUs through the Intel Level Zero
and Intel GTPin interfaces. For basic support (start and stop times for
GPU kernels) add the '+level_zero' variant. For advanced support inside
the GPU kernel, also add the '+gtpin' variant. But we recommend always
compiling with gtpin and then deciding at runtime which options to use.
spack install hpctoolkit +level_zero +gtpin
GTPin requires the 'oneapi-igc' package, which is an external-only
spack package, normally installed in '/usr'. You should add this
manually as a spack externals entry (it is not currently searchable
with 'spack external find') and let spack build the rest. For example:
packages:
  oneapi-igc:
    externals:
    - spec: oneapi-igc@1.0.10409
      prefix: /usr
For an autotools build, use the options:
configure \
    --with-level0=/path/to/oneapi-level-zero/prefix \
    --with-gtpin=/path/to/intel-gtpin/prefix \
    --with-igc=/usr (or oneapi-igc prefix) \
    ...
13.3 ROCM
---------
HPCToolkit supports profiling AMD GPU binaries through the HIP/ROCM
interface, and beginning with version 2022.04.15, we support building
hpctoolkit plus rocm with a fully integrated spack build. We require
ROCM 5.x or later, and the ROCM version should match the version the
application uses. This is still somewhat fluid and subject to change.
There are two ways to build HPCToolkit plus ROCM with spack.
HPCToolkit uses four ROCM prerequisites (hip, hsa-rocr-dev,
roctracer-dev and rocprofiler-dev). If you have AMD's all-in-one ROCM
package installed in '/opt', then specify all four prereqs in
'packages.yaml'. For example, if ROCM 5.0.0 is installed at
'/opt/rocm-5.0.0', then you would use:
packages:
  hip:
    externals:
    - spec: hip@5.0.0
      prefix: /opt/rocm-5.0.0
  hsa-rocr-dev:
    externals:
    - spec: hsa-rocr-dev@5.0.0
      prefix: /opt/rocm-5.0.0
  roctracer-dev:
    externals:
    - spec: roctracer-dev@5.0.0
      prefix: /opt/rocm-5.0.0
  rocprofiler-dev:
    externals:
    - spec: rocprofiler-dev@5.0.0
      prefix: /opt/rocm-5.0.0
Currently, with AMD's directory layout, the hip and hsa-rocr-dev
prefixes could be specified either as '/opt/rocm-5.0.0' or
'/opt/rocm-5.0.0/hip' (and '/opt/rocm-5.0.0/hsa'). But roctracer-dev
and rocprofiler-dev require '/opt/rocm-5.0.0'. Also, the rocm packages
do not support 'spack external find'. But all this is fluid and subject
to change.
Alternatively, if ROCM is not installed in '/opt/rocm', or if you
want to build a different version, then omit the externals definitions
in 'packages.yaml' (but be prepared for spack to build an extra 80-90
packages). In either case, install hpctoolkit with:
spack install hpctoolkit +rocm ...
For developers building with autotools, use the following configure
options. If '/opt/rocm' is available, then use the '--with-rocm'
option. Otherwise, use the other four options.
configure \
    --with-rocm=/opt/rocm \ (for all-in-one /opt/rocm)
    --with-rocm-hip=/path/to/hip/prefix \
    --with-rocm-hsa=/path/to/hsa-rocr-dev/prefix \
    --with-rocm-tracer=/path/to/roctracer-dev/prefix \
    --with-rocm-profiler=/path/to/rocprofiler-dev/prefix \
    ...
You may mix the all-in-one option with the individual package options;
the more specific option overrides the general one.
13.4 OpenCL
-----------
For all three GPU types, an application can access the GPU through the
native interface (CUDA, ROCM, Level Zero) or through the OpenCL
interface. To add support for OpenCL, add the '+opencl' variant in
addition to the native interface. We recommend adding opencl support
for all GPU types. For example, with CUDA:
spack install hpctoolkit +cuda +opencl
For an autotools build, use the '--with-opencl' option.
configure \
    --with-cuda=/path/to/cuda/prefix \
    --with-opencl=/path/to/opencl-c-headers/prefix \
    ...
13.5 MPI
--------
HPCToolkit always supports profiling MPI applications. For hpctoolkit,
the spack variant '+mpi' is for building hpcprof-mpi, the MPI version of
hpcprof. If you want to build hpcprof-mpi, then you need to supply an
installation of MPI.
spack install hpctoolkit +mpi
Normally, for systems with compute nodes, you should use an existing MPI
module that was built for the correct interconnect for your system and
add this to 'packages.yaml'. The MPI module should be built with the
same version of GNU gcc/g++ used to build hpctoolkit (to keep the C++
libraries in sync). For example,
packages:
  mpich:
    externals:
    - spec: mpich@4.0
      modules:
      - mpich/4.0
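With an entry like that in place, you can pin the MPI implementation
on the command line with a dependency spec. For example:
spack install hpctoolkit +mpi ^mpich@4.0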
13.6 PAPI vs Perfmon
--------------------
HPCToolkit can access the Hardware Performance Counters with either PAPI
(default) or Perfmon (libpfm4). PAPI runs on top of the perfmon library
and uses its own internal (but slightly out-of-date) copy of perfmon.
So, building with '+papi' allows accessing the counters with either PAPI
or perfmon events.
If you want to disable PAPI and use the latest Perfmon instead, then
build hpctoolkit with '~papi'.
spack install hpctoolkit ~papi
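Either way, you choose events at measurement time with 'hpcrun -e'.
For example, to sample total cycles via the PAPI preset name or the
native event name (run 'hpcrun -L' to list the events available on
your system):
hpcrun -e PAPI_TOT_CYC ./app
hpcrun -e CYCLES ./app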
13.7 Python
-----------
Beginning with the 2023 release, HPCToolkit can profile Python scripts
and attribute samples to python source functions instead of the python
interpreter. This requires Python 3.10 or later; it need not be the
same python that runs the spack scripts, but it should be the same
python used to run the application.
spack install hpctoolkit +python
When building with autotools, use the '--enable-python' argument with
the path to the 'python-config' command.
configure \
    --enable-python=/path/to/python-config \
    ...
14 Platform Specific Notes
==========================
14.1 Cray
---------
There are two ways to build 'hpcprof-mpi' on Cray systems depending on
how old the system is and what MPI wrapper is available. Newer Crays
have an 'mpicxx' wrapper from the 'cray-mpich' module (but it may not be
on your PATH). Older Crays use the 'CC' wrapper from the 'craype'
module.
On either type of system, start by switching to the 'PrgEnv-gnu'
module and unloading the Darshan module, if it exists. Darshan is a
profiling tool that monitors an application's use of I/O, but it
conflicts with hpctoolkit.
module swap PrgEnv-cray PrgEnv-gnu
module unload darshan
Next, we need the front-end GCC compiler that is compatible with the MPI
compiler. The gcc compiler should use the front-end operating system
type (sles, not cnl) and should be version 8.x or later (preferably 9.x
or later). The 'cc' and 'cxx' compilers should be gcc and g++, not the
cc and CC wrappers, and the modules should include at least 'PrgEnv-gnu'
and 'gcc'.
For example, I have the following on Crusher at ORNL in my
'compilers.yaml' file (your versions may differ). Note that spack may
report the front-end arch type as either cray or linux.
compilers:
- compiler:
    spec: gcc@11.2.0
    paths:
      cc: /opt/cray/pe/gcc/11.2.0/bin/gcc
      cxx: /opt/cray/pe/gcc/11.2.0/bin/g++
      f77: /opt/cray/pe/gcc/11.2.0/bin/gfortran
      fc: /opt/cray/pe/gcc/11.2.0/bin/gfortran
    modules:
    - PrgEnv-gnu/8.3.3
    - gcc/11.2.0
    operating_system: sles15
    target: x86_64
    ...
New Cray
The preferred method for newer Crays is to use the '+mpi' option and
the 'cray-mpich' module. This requires the 'mpicxx' wrapper, although
it may not be on your PATH; look in the '$MPICH_DIR' or
'$CRAY_MPICH_DIR' directory for it. For example, on Crusher it is at
the following path (your path may differ).
/opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/bin/mpicxx
If this is available, then add a spack externals entry for 'cray-mpich'
and the 'mpi' virtual package to 'packages.yaml'. For example, I used
this entry on Crusher; your versions may differ (put the specs in
quotes).
packages:
  mpi:
    require: "cray-mpich@8.1.17"
  cray-mpich:
    externals:
    - spec: "cray-mpich@8.1.17"
      prefix: /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1
      modules:
      - cray-mpich/8.1.17
Then, build with '+mpi' for the front-end arch type (with arch or os).
If the front and back-end arch types are the same, then you don't need
to specify that. For example,
spack install hpctoolkit +mpi os=fe (or arch=cray-sles15-x86_64)
Cray's use of modules is complex and requires several modules to be
loaded at compile time. You will likely find that the above recipe
fails with undefined references to symbols from libraries supplied by
modules that are not loaded. For example,
/usr/bin/ld: warning: libfabric.so.1, needed by /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/lib/libmpi_gnu_91.so,
not found (try using -rpath or -rpath-link)
/usr/bin/ld: /opt/cray/pe/mpich/8.1.17/ofi/gnu/9.1/lib/libmpi_gnu_91.so:
undefined reference to `fi_strerror@FABRIC_1.0'
There are two solutions. One, you could search the failing build log to
identify the missing modules and add them to the compiler entry. This
may require several modules. For example on Crusher, I added these
modules to the compiler entry and then the build succeeded.
modules:
- PrgEnv-gnu/8.3.3
- gcc/11.2.0
- craype/2.7.16
- cray-mpich/8.1.17