Install

Part 1: externals


CCIN2P3/NERSC users: you do not need to perform these steps (somebody nice already did it for you), but you do need to define the setup as explained here: ccin2p3Setup, NERSCSetup, DECSetup


On this page...

  1.  Preliminaries
  2.  CMT
  3.  CLASS
  4.  Planck data
    4.1  Hillipop likelihood
    4.2  clik support [optional]
  5.  PICO
  6.  CAMEL_DATA

We explain here which external data and libraries are needed to run CAMEL. When working within a group, this installation only needs to be performed once and can then be shared by all users.

CAMEL relies on a limited set of external libraries, namely

  • CLASS
  • CFITSIO
  • CBLAS/LAPACK (for the JLA likelihood) [optional]
  • clik (for Planck likelihoods) [optional]
  • pypico (for PICO) [optional]

The last three packages are optional, so don't waste time installing them if you don't need them.

CAMEL uses CMT to perform the compilation, so don't worry about editing Makefiles: CMT will generate them for you. CMT also allows easy switching between different versions of CAMEL/CLASS ("versioning").
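To give an idea of how this versioning works in practice, selecting a particular CLASS installation boils down to a single statement in a CMT requirements file; the line below is only an illustration of the mechanism (the actual statement lives in the CAMEL requirements file):

use class v2.4.4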

1.  Preliminaries

We list below a few aspects you should consider before starting the installation:

  • choose a given C/C++ compiler and stick to it everywhere. While gcc is a good choice, we noticed better performance for CLASS (the CPU-intensive part) with the Intel compilers, namely icc
  • installing the clik support requires some MKL version. Keep track of which version was used and reuse it in the CAMEL requirements file (see the sketch just below)
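As a concrete (purely illustrative) way of keeping track of these choices, you can record them once in the environment you use for every build; the values below are assumptions to adapt to your site:

# sketch: pick one compiler and one MKL installation and reuse them everywhere
export CC=icc                        # or gcc, but be consistent across CLASS, clik and CAMEL
export CXX=icpc                      # the matching C++ compiler (g++ if you chose gcc)
export MKLROOT=/usr/local/intel/mkl  # the MKL that will also be used to build clik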

2.  CMT

First, install CMT on your system. It should be pretty straightforward if you follow the instructions given in the Install section of http://www.cmtsite.net. Do not use the pre-compiled binaries but the source kit. The version tested is v1r26. Remember to always initialize the environment with something like

source /path/to/CMT/v1r26/mgr/setup.sh
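For reference, a from-source installation typically boils down to the steps sketched below; the tarball name and destination directory are assumptions, so follow the cmtsite.net instructions for your platform:

cd /path/to                  # wherever you keep external software
tar -xzf CMTv1r26.tar.gz     # source kit downloaded from http://www.cmtsite.net
cd CMT/v1r26/mgr
./INSTALL
source setup.sh
make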

3.  CLASS

CAMEL uses CLASS, the high-quality Boltzmann solver written in C. CLASS needs to be installed within the CMT framework, so you should not use its own Makefile but the procedure below.

For the sake of versioning (i.e., knowing precisely which version you use), it is probably a good idea to install the latest tagged CLASS version. To see the list of available tags, run:

git ls-remote https://github.com/lesgourg/class_public 
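If the full list of refs is too verbose, you can restrict the output to the tags only:

git ls-remote --tags https://github.com/lesgourg/class_public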

Suppose you choose v2.4.4 from the list. You must clone this version into the class/v2.4.4 directory with:

git clone -b v2.4.4 https://github.com/lesgourg/class_public class/v2.4.4

then create a cmt-like structure with:

cmt create class v2.4.4

and go into the cmt directory

cd class/v2.4.4/cmt/

TIP: if you prefer to use the master branch, replace v2.4.4 by HEAD as the CMT tag.

The main file in cmt/ is the so-called requirements file. Unfortunately, the default one is not suitable. Copy this one (if your compiler is gcc) or that one (if you instead use icc) into your cmt/ directory and rename it so that it overrides the requirements file.

Then run:

cmt config 

This creates a directory named after your CMTCONFIG value, located next to cmt/ (i.e., directly under class/v2.4.4/). It will contain the various build outputs (libraries, executables, ...).
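At this point the package layout should look roughly like the sketch below (the CMTCONFIG value shown is just an example; yours depends on your OS and compiler):

class/v2.4.4/
├── cmt/              # requirements file, CMT-generated setup scripts and Makefile
└── x86_64-linux/     # = $CMTCONFIG: build outputs (libs, execs, ...)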

Then, still in cmt/:

make 

This creates the libClass library, which is sufficient for CAMEL.
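You can quickly check that the library is indeed there (the .a/.so suffix depends on the requirements file you copied):

ls ../$CMTCONFIG/libClass*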

You may also want to build and run some test executables:

make test
cd ..
./$CMTCONFIG/class explanatory.ini
./$CMTCONFIG/test_loop

Finally, define the CMTCLASS variable to point to the directory that contains the class directory (this is what will be added to your CMTPATH); e.g., if you have the structure /path/to/somewhere/class/vXX, it should be:

export CMTCLASS=/path/to/somewhere
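If nothing else in your setup does it for you, one way to put that directory on the CMT search path is the line below; this is only a sketch, as the CAMEL setup may well handle CMTPATH for you:

export CMTPATH=$CMTCLASS${CMTPATH:+:$CMTPATH}   # prepend to any pre-existing CMTPATH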

4.  Planck data

4.1  Hillipop likelihood

Hillipop is a Planck high-ℓ likelihood with some nice properties, such as giving a correct Alens value when combined with the ACT/SPT likelihoods (see arXiv:1510.07600). It is natively coded in C++ and embedded within CAMEL, so, good news, there is nothing for you to do!

4.2  clik support [optional]

The Planck data and likelihood code are provided through the Planck Legacy Archive; see the description here.

Basically, you will need to download both the data and the code and build the libraries. We suggest using the waf script. If you use an MKL version (which is strongly suggested), keep track of it in the MKLROOT variable. You may also install CFITSIO. Both can be useful later when building CAMEL.

So typically you download the data (COM_Likelihood_Data-baseline_R2.00.tar.gz) and the code (COM_Likelihood_Code-v2.0_R2.00.tar.bz2). After unpacking you will have the plc_2.0 directory, which contains the data, and plc-2.0, where the code sits and needs to be built.

We give below, just as an example, how it was done at CCIN2P3 using the icc compiler. Of course you need to adapt it to your site, following the Planck instructions.

MKLROOT=/usr/local/intel/mkl
./waf configure --icc --ifort --install_all_deps --lapack_mkl=$MKLROOT
./waf install

Variables: the following variables need to be defined (a consolidated sketch is given below):

  • export PLANCK_DATA=/path/to/planck/data
  • export CLIKDIR=/path/to/the/likelihood/code/you/just/built (i.e., plc-2.0)
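Putting it together, a typical login-script fragment might look like the sketch below; the paths are placeholders, and the clik_profile.sh line assumes the profile script generated by the waf install (check the bin/ directory of your own installation):

export PLANCK_DATA=/path/to/plc_2.0           # the unpacked likelihood data
export CLIKDIR=/path/to/plc-2.0               # the likelihood code built above
source $CLIKDIR/bin/clik_profile.sh           # sets library/python paths, if present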

5.  PICO

PICO is a fast engine that interpolates Cℓ spectra from a training file. Only a limited number of parameters is available, so first check that the ones you want to study are present in the PICO datafiles. Also, it is essentially devoted to CMB studies (i.e., Cℓ's), so if you plan to use a likelihood (such as Lensing, BAO, JLA...) that requires more cosmological information, don't waste your time here and skip this part.

The installation is pretty straightforward; we suggest using the GitHub repository:

  git clone https://github.com/marius311/pypico 

then:

  python setup.py build 
  python setup.py install --prefix=/some/path 

Then define the PICO_CODE environment variable to point to the directory that contains both pico.h and libpico.a (check the install output to see where they end up), i.e., export PICO_CODE=/some/path/...

Finally, get your PICO datafile from there and point the PICO_DATA variable to it, i.e., export PICO_DATA=/the/full/path/to/the/file
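Once pypico is installed and the two variables are defined, you can verify that the parameters you are interested in are covered by the datafile; this snippet is only a sketch, assuming the pypico Python API (load_pico, inputs, outputs) as described in its documentation:

python -c "import pypico; p = pypico.load_pico('$PICO_DATA'); print(p.inputs()); print(p.outputs())"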

6.  CAMEL_DATA

Only the CAMEL code is distributed through git. The data must simply be downloaded and put in a fixed place. The archive is quite heavy (presently ~1 GB), so you should have a good internet connection. You can get it with your preferred tool (filezilla, wget, curl...), for instance:

wget http://camel.in2p3.fr/data/camel_data.tar

After unpacking the file (tar -xvf camel_data.tar), set the CAMEL_DATA environment variable to the unpacked CAMEL_DATA directory, which in bash reads:

 
export CAMEL_DATA=/your/path/to/CAMEL_DATA
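Putting the whole step together, a minimal sketch (assuming the archive unpacks into a CAMEL_DATA directory and using a placeholder target directory) reads:

cd /your/path/to
wget http://camel.in2p3.fr/data/camel_data.tar
tar -xvf camel_data.tar                 # should create the CAMEL_DATA directory
export CAMEL_DATA=/your/path/to/CAMEL_DATA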