<a id="title" class="anchor" href="#title" aria-hidden="true"><span class="octicon octicon-link"></span></a>Technical Information - biomed VO

DIRAC service

The French NGI, France-Grilles, offers a DIRAC service to the biomed VO. This is the recommended solution for accessing the computing resources of the VO.

DIRAC provides a pilot-job execution mechanism. You may be interested in using it if:

  • you experience long queuing delays when submitting with glite-wms-job-submit
  • you use glite-ce-job-submit, but have difficulty selecting the CE to submit your jobs to
  • you are using your own pilot-job system, but have difficulty maintaining it at production level

To get started, read the usage instructions.
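
As a rough sketch, a typical session with the standard DIRAC client commands looks like the following (the group name biomed_user is an assumption and may differ on the France-Grilles instance):

 # Create a DIRAC proxy for the biomed VO (the group name is an assumption)
 $ dirac-proxy-init -g biomed_user
 # Submit a job described in a JDL file and note the returned job ID
 $ dirac-wms-job-submit hello.jdl
 # Check the job status, then retrieve its output sandbox
 $ dirac-wms-job-status <JobID>
 $ dirac-wms-job-get-output <JobID>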

An on-going discussion about the adoption of DIRAC in the VO is also available.

CVMFS

What is it?

CVMFS is a convenient alternative to the VO software area (VO_BIOMED_SW_DIR): with CVMFS, VO software is deployed in one place only and is automatically replicated and made available through a mount point to all worker nodes that support the service.

The RAL Tier-1 (UK) now hosts a CVMFS stratum 0 repository for biomed. As of today, half of the grid sites supporting biomed have the CVMFS client configured for biomed. Computing elements are identified with the tag VO-biomed-CVMFS. On their worker nodes, the CVMFS biomed repository is accessed at either /cvmfs/biomed.egi.eu or /cvmfs/biomed.gridpp.ac.uk. The /cvmfs/biomed.gridpp.ac.uk path is planned for retirement and the egi.eu replacement should take over in time; until then, jobs should check which path actually exists on the worker node where they land (see the sketch in the usage section below).

How to use it?

1. You first need to deploy your files on CVMFS. To do this, contact Catalin Condurache (catalin.condurache@stfc.ac.uk) and send him your DN so that he can give you access to the repository. Don't forget to mention that you are a biomed user.

2. Once validated, you will be allowed to connect to the server cvmfs-upload01.gridpp.rl.ac.uk: create a proxy certificate (voms-proxy-init --voms biomed), then:
$ gsissh -p 1975 cvmfs-upload01.gridpp.rl.ac.uk

Change to the cvmfs_repo directory (a link to /cvmfs-mirror/biomed.gridpp.ac.uk), create your own folder, then deploy your files with gsiscp.
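
For example, assuming a tarball mytool.tgz and a personal folder myfolder (both names are illustrative), the deployment could look like:

 # Copy the tarball to your folder on the upload server (same port as gsissh)
 $ gsiscp -P 1975 mytool.tgz cvmfs-upload01.gridpp.rl.ac.uk:cvmfs_repo/myfolder/
 # Log in and extract it in place (see the recommendation under "Known limitations")
 $ gsissh -p 1975 cvmfs-upload01.gridpp.rl.ac.uk
 $ cd cvmfs_repo/myfolder && tar zxf mytool.tgz && rm mytool.tgz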

3. Once the files are replicated (a matter of hours), submit grid jobs to computing elements where the CVMFS client is configured for biomed, using the VO software tag VO-biomed-CVMFS:
Requirements = Member("VO-biomed-CVMFS", other.GlueHostApplicationSoftwareRunTimeEnvironment)

Files are accessed under mount point /cvmfs/biomed.egi.eu or /cvmfs/biomed.gridpp.ac.uk; the job needs to check which one actually exists.
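
A job script can pick the right mount point with a check along these lines:

 # Use whichever biomed CVMFS repository path exists on this worker node
 if [ -d /cvmfs/biomed.egi.eu ]; then
     CVMFS_BIOMED=/cvmfs/biomed.egi.eu
 elif [ -d /cvmfs/biomed.gridpp.ac.uk ]; then
     CVMFS_BIOMED=/cvmfs/biomed.gridpp.ac.uk
 else
     echo "No biomed CVMFS repository found" >&2
     exit 1
 fi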

How to have more sites supporting CVMFS for biomed?

There are three kinds of sites: (i) sites not supporting CVMFS at all, (ii) sites supporting CVMFS for some VOs, but not for biomed, and (iii) sites supporting CVMFS for biomed. Getting from (ii) to (iii) should be relatively simple, since it only requires a change in the CVMFS configuration by the site admins.

The biomed support team has run a lobbying campaign mostly targeting sites from the second category. We received many positive answers: as of today, the service is provided to biomed by 44 sites, accounting for 66 CEs and 110 CE queues.

Contact the biomed technical support team if you would like specific sites to provide biomed with CVMFS.

Known limitations

CVMFS space is public and anyone can access it: do not deploy sensitive material.

Copyrighted software is not acceptable, unless you have a proper license (which is unlikely) that would apply to any biomed user.

Uploading big files may hamper CVMFS performance: big files are likely not to be cached on local Squids, so they would be downloaded from the Stratum-1 each time they are needed. If the uploaded files are tarballs, it is strongly recommended to extract them locally on the repository, as shown in the deployment example above.

VirtualBox EMI2 UI Image

A VirtualBox image containing a fully functional EMI2 user interface running CentOS 6 is available for testing.

Image download and installation

The EMI2 VirtualBox image is available on the biomed LFC. Assuming that your VirtualBox VM directory is at ${HOME}/VirtualBox\ VMs:

 cd ${HOME}/VirtualBox\ VMs
 lcg-cp lfn:/grid/biomed/emi2-ui-biomed.tgz file:emi2-ui-biomed.tgz
 tar zxvf emi2-ui-biomed.tgz

You should now have an "EMI2 UI - biomed" image in your VirtualBox.

Accounts

You can login as user "biomed", with password "biomed". The root password is "biomed2012".

UI testing

You will have to install your own biomed grid credentials. The following commands have been tested:

voms-proxy-init -voms biomed
lfc-*
lcg-cr, lcg-cp, lcg-del
glite-wms-job-submit
glite-wms-job-status
glite-wms-job-logging-info
glite-wms-job-output (you will have to create /tmp/jobOutput if the command is run with no options)
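
For example, a quick data-management round trip against the biomed LFC could look like this (the LFN and file names are illustrative):

 # Register a local file on the default SE under a logical file name
 lcg-cr --vo biomed -l lfn:/grid/biomed/mytests/test.txt file:$PWD/test.txt
 # Copy it back from the grid
 lcg-cp --vo biomed lfn:/grid/biomed/mytests/test.txt file:$PWD/test-copy.txt
 # Delete all replicas and the catalogue entry
 lcg-del --vo biomed -a lfn:/grid/biomed/mytests/test.txt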

A sample JDL file is available at ${HOME}/hello.jdl.
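
If you prefer to write your own, a minimal JDL in the same spirit (the exact content of hello.jdl may differ) is:

 [
   Executable = "/bin/echo";
   Arguments = "Hello biomed";
   StdOutput = "std.out";
   StdError = "std.err";
   OutputSandbox = {"std.out", "std.err"};
 ]

It can be submitted with glite-wms-job-submit -a hello.jdl.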

UI configuration

If you install your own UI, here are some configuration parameters you will need to access the biomed VO services:

/opt/glite/etc/biomed/glite_wmsui.conf

[
NSAddresses = {"egee-wms-01.cnaf.infn.it:7443"};
LBAddresses = {{"egee-wms-01.cnaf.infn.it:9003"}};
WMProxyEndPoints = {"https://marwms.in2p3.fr:7443/glite_wms_wmproxy_server"};
OutputStorage  =  "/tmp/jobOutput";
JdlDefaultAttributes =  [
   RetryCount  =  3;
   rank  = - other.GlueCEStateEstimatedResponseTime;
   PerusalFileEnable  =  false;
   AllowZippedISB  =  true;
   requirements  =  other.GlueCEStateStatus == "Production";
   ShallowRetryCount  =  10;
   SignificantAttributes  =  {"Requirements", "Rank", "FuzzyRank"};
   MyProxyServer  =  "lxn1179.cern.ch";
   ];
]

/opt/glite/etc/biomed/glite_wms.conf

[
NSAddresses = {"egee-wms-01.cnaf.infn.it:7443"};
LBAddresses = [[Template:"egee-wms-01.cnaf.infn.it:9003"]];
WMProxyEndPoints = {"https://marwms.in2p3.fr:7443/glite_wms_wmproxy_server"};
OutputStorage  =  "/tmp/jobOutput";
JdlDefaultAttributes =  [
   RetryCount  =  3;
   rank  = - other.GlueCEStateEstimatedResponseTime;
   PerusalFileEnable  =  false;
   AllowZippedISB  =  true;
   requirements  =  other.GlueCEStateStatus == "Production";
   ShallowRetryCount  =  10;
   SignificantAttributes  =  {"Requirements", "Rank", "FuzzyRank"};
   MyProxyServer  =  "lxn1179.cern.ch";
   ];
]

/opt/glite/etc/vomses/biomed-cclcgvomsli01.in2p3.fr

"biomed" "cclcgvomsli01.in2p3.fr" "15000" "/O=GRID-FR/C=FR/O=CNRS/OU=CC-IN2P3/CN=cclcgvomsli01.in2p3.fr" "biomed" "24"

Environment variables

LFC_HOST=lfc-biomed.in2p3.fr
LCG_GFAL_INFOSYS=cclcgtopbdii02.in2p3.fr:2170
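
With these variables set, a quick sanity check is to list the top of the biomed LFC namespace:

 lfc-ls /grid/biomed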

User Interface configuration using YAIM

When using YAIM to configure a UI, you can use the following in your site-info.def configuration file:

 RB_HOST="boszwijn.nikhef.nl"
 LB_HOST="boszwijn.nikhef.nl"
 WMS_HOST="egee-wms-01.cnaf.infn.it"
 PX_HOST="myproxy.cern.ch"
 BDII_HOST="cclcgtopbdii02.in2p3.fr"
 REG_HOST="lcgic01.gridpp.rl.ac.uk"
 CA_REPOSITORY="rpm http://linuxsoft.cern.ch/ LCG-CAs/current production"
 VO_BIOMED_VOMS_SERVERS="'vomss://voms-biomed.in2p3.fr:8443/voms/biomed?/biomed/'"
 VO_BIOMED_VOMSES="'biomed cclcgvomsli01.in2p3.fr 15000 /O=GRID-FR/C=FR/O=CNRS/OU=CC-IN2P3/CN=cclcgvomsli01.in2p3.fr biomed 24'"
 VO_BIOMED_VOMS_CA_DN="'/C=FR/O=CNRS/CN=GRID2-FR'"

Some /etc/profile.d scripts can be useful to set user environment variables needed by some tools (e.g. lfc-ls(1)):

echo 'export LFC_HOST="lfc-biomed.in2p3.fr"' > /etc/profile.d/lfc-host.sh
echo 'setenv LFC_HOST "lfc-biomed.in2p3.fr"' > /etc/profile.d/lfc-host.csh
chmod +x /etc/profile.d/lfc-host.*

Security configuration

Secure SSH

Secure your certificate(s) and proxies

  • Don't export a certificate without a passphrase
  • Don't store the passphrase of your certificate in a file
  • Don't share your certificate(s) with other users
  • Don't generate long-lived proxies (see the example below)
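
For instance, an explicit, short validity can be requested when creating a VOMS proxy:

 # Request a proxy limited to 12 hours rather than an unnecessarily long one
 voms-proxy-init --voms biomed --valid 12:00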

Take care of user accounts

  • Don't create accounts for users who are not supposed to access the infrastructure
  • Remove obsolete user accounts
  • Don't share certificates or proxies between user accounts

Restrict your firewall

  • Restrict inbound connectivity. Most of the UI clients for job and file management don't require any open port.

Keep your system up-to-date

Update your software packages regularly:
 yum update

Secure NTP

See documentation at https://www.team-cymru.org/ReadingRoom/Templates/secure-ntp-template.html, in particular section "UNIX ntpd".