Latest News

I-cluster2 has been permanently shut down.


List of Research Projects using I-cluster2

Apache, ID Laboratory

Brigitte Plateau
Gregory Mounie, Olivier Richard, Yves Denneulin

The Apache project at the ID Laboratory will use the cluster as an experimental platform for a wide range of research activities: parallel computing, resource discovery in grid computing environments, global synchronization, optimization of collective communication algorithms, scheduling, design and implementation of a server for dynamic bioinformatics requests, distributed file systems (NFSP), and scalable solutions for cloning and managing cluster nodes (Ka-tools).

Apache, ID Laboratory - MOVI, INRIA

Parallel 3D Modeling

Bruno Raffin, Edmond Boyer

This project aims to implement and validate distributed algorithms for multi-camera 3D modeling (visual hull computation).
The i-cluster2 will be coupled with the GrImage platform: GrImage will handle multi-camera data acquisition and multi-projector visualization, while the heavy computations will be performed on the i-cluster2.


Emmanuel Cecchet


Dream is a framework dedicated to the construction of asynchronous middleware. It allows the configuration, deployment, and administration of distributed services communicating through asynchronous message passing. It combines asynchronous communications (MOM, for Message-Oriented Middleware), recognized as one means to achieve scalability, extensibility, and openness, with component-based technologies, which provide flexibility and configurability.


The aim of the Proboscis project is to study the sharing of storage devices in clusters, where the storage devices are distributed across the nodes in the cluster and accessed with efficient networking technologies.


Frédéric Desprez


The aim of DIET is to provide transparent access to a pool of computational servers, focusing on offering such a service at a very large scale. A client with a problem to solve should be able to obtain a reference to the server best suited for it. DIET is designed to accommodate several schedulers (a plug-in scheduler model) and uses data location information when scheduling jobs.
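The kind of server selection a DIET agent performs can be sketched under simplifying assumptions (completion time estimated from advertised speed and current queue delay only; all names are illustrative, not DIET's API):

```python
def pick_server(servers, problem_size):
    """Pick the server with the lowest estimated completion time,
    combining advertised speed with current queue delay.
    (Illustrative only: DIET's plug-in schedulers are far richer.)"""
    def eta(s):
        return s["queue_s"] + problem_size / s["flops"]
    return min(servers, key=eta)

# hypothetical resource descriptions, in arbitrary work units
servers = [
    {"name": "n1", "flops": 2.0, "queue_s": 5.0},
    {"name": "n2", "flops": 1.0, "queue_s": 0.0},
]
best = pick_server(servers, problem_size=4.0)
```

Note that the best server depends on the problem size: a small request goes to the idle slow node, while a large one is worth waiting for the fast node.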


In the GRAAL project, we also work on parallel sparse direct solvers, which require significant computing resources (memory, floating-point operations) depending on the size of the problems to be solved. Research aspects include performance optimization, scheduling, memory optimization, out-of-core extensions, and coupling with DIET. Access to the i-cluster2 will also allow us to experiment with and validate the MUMPS software package on this architecture.

'Grand Large' INRIA - LRI, Orsay

Franck Cappello


MPICH-V is a research effort combining theoretical studies, experimental evaluations, and pragmatic implementations, aiming to provide an MPI implementation based on MPICH that features multiple fault-tolerance protocols.
Our goal is to use the I-Cluster2 platform to test and compare different implementations of our tool, and to determine which checkpoint policy is best suited to different problems and cluster sizes.
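The trade-off behind comparing checkpoint policies can be illustrated with Young's classical first-order approximation of the optimal checkpoint interval (a textbook formula, not part of MPICH-V itself; names are illustrative):

```python
import math

def young_interval(ckpt_cost_s, mtbf_s):
    """Young's approximation of the optimal interval between periodic
    checkpoints: sqrt(2 * checkpoint_cost * MTBF).  Checkpointing too
    often wastes time writing state; too rarely wastes time recomputing
    lost work after a failure."""
    return math.sqrt(2.0 * ckpt_cost_s * mtbf_s)

# e.g. a 60 s checkpoint on a cluster failing every 2 hours on average
interval = young_interval(60.0, 2 * 3600.0)
```

As the cluster grows, the aggregate MTBF shrinks, pushing the optimal interval down, which is one reason the best policy depends on cluster size.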

Grenoble Observatory

Scientific Computation Service

Francoise Roch

Evaluation of the HP-Itanium2 architecture on various applications of the Observatory.

Multi-wavelength modeling of protoplanetary disks

Francois Menard

During the first stage of their evolution, stars are surrounded by a disk, within which planet formation is thought to occur. The aim of this project is to constrain both the disk geometry and the dust grain properties using a multiple-scattering Monte Carlo code. The code is parallelized using OpenMP, and a multi-parametric scheme is used to compare our models with recent observations.
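The multiple-scattering Monte Carlo idea can be illustrated with a toy 1D slab version (all names are hypothetical; the real code is a full 3D radiative-transfer model): photons random-walk through a medium, scattering or being absorbed at each interaction.

```python
import math
import random

def mc_slab_escape(n_photons, tau, albedo, seed=0):
    """Toy 1D Monte Carlo radiative transfer: photons random-walk through
    a slab of optical depth tau; at each interaction they scatter with
    probability `albedo` or are absorbed.  Returns the escape fraction."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_photons):
        pos, mu = 0.0, 1.0                             # enter at the bottom, moving up
        while True:
            pos += mu * -math.log(1.0 - rng.random())  # exponential free path
            if pos >= tau:
                escaped += 1                           # leaves through the top
                break
            if pos <= 0.0:
                break                                  # exits through the bottom
            if rng.random() > albedo:
                break                                  # absorbed
            mu = rng.choice((-1.0, 1.0))               # isotropic (1D) re-emission
    return escaped / n_photons

frac = mc_slab_escape(10000, tau=2.0, albedo=0.9)
```

Accuracy grows only with the number of photons, which is what makes such codes natural candidates for parallelization: independent photon packets can be distributed across OpenMP threads.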


Laurent Desbat

The CIMENT initiative is an effort to federate the scientific computing needs of the Grenoble universities. CIMENT provides a centralized access point to several distinct computing clusters, allowing those resources to be shared by researchers from all scientific disciplines.
The i-cluster2 platform will join this initiative under a 'best effort' policy.

Instituto de Informática, Group of Parallel and Distributed Processing, Porto Alegre, Brazil

Performance Evaluation of the Parallel Gadget Software

Philippe Navaux

Numerical simulations of three-dimensional self-gravitating systems have become an indispensable tool in astrophysics. They are now routinely used to study the non-linear gravitational clustering of dark matter, the formation of clusters of galaxies, the interactions of isolated galaxies, etc.
Without numerical techniques, the immense progress made in these fields would have been nearly impossible, since analytic calculations are often restricted to idealized problems of high symmetry or to approximate treatments of inherently nonlinear problems. Advances in numerical simulation have been made possible both by the rapid growth of computer performance and by the implementation of ever more sophisticated numerical algorithms. The development of powerful simulation codes remains a primary task if one wants to take full advantage of new computer technologies.

The GADGET software aims to model isolated self-gravitating systems including gas and stars. The phase fluid is represented by N particles, which are integrated along the characteristic curves of the collisionless Boltzmann equation. In essence, this is a Monte Carlo approach whose accuracy depends crucially on a sufficiently high number of particles. The N-body problem is thus the task of following Newton's equations of motion for a large number of particles under their own self-gravity. The GADGET code is written in standard ANSI C and should run on all parallel platforms that support MPI.
Although well suited to numerical N-body simulation, GADGET has some known issues, notably I/O bottlenecks due to its traditional centralized file-server approach (NFS). Our goals here are a performance evaluation and scalability study of the GADGET program.
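The N-body integration described above can be illustrated with a naive direct-summation sketch (GADGET itself uses tree and mesh algorithms in C/MPI; this toy Python version, with hypothetical names, only shows the leapfrog scheme and softened Newtonian forces):

```python
import math

G = 1.0  # gravitational constant in code units

def accelerations(pos, mass, eps=1e-3):
    """Direct O(N^2) summation of softened Newtonian gravity."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps   # Plummer softening
            f = G * mass[j] / (r2 * math.sqrt(r2))
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step, the symplectic integrator
    commonly used for collisionless N-body dynamics."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]    # half kick
            pos[i][k] += dt * vel[i][k]          # drift
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += 0.5 * dt * acc[i][k]    # half kick
    return pos, vel
```

The O(N^2) force loop is exactly what tree codes like GADGET replace with hierarchical approximations, since Monte Carlo accuracy demands particle counts far beyond what direct summation can handle.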

Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre, Brazil

The Parallel Research and Development Application Center

Pedro Velho

The Research and Development Center for Parallel Applications (CAP - PUCRS/HP Brazil) is dedicated to the study of techniques and methodologies for the development of parallel solutions for applications needing high-performance computation. CAP has two main research lines: support techniques for parallel program design, and the development of high-performance applications for distributed memory architectures.

Support techniques for the design of parallel programs are not directly related to the parallel solution of a given problem, but they offer ways to simplify and optimize the process of developing parallel systems. In this area, the CAP group is interested in research topics such as: (a) analytical modeling of parallel programs using Stochastic Automata Networks (SAN); (b) formal verification of parallel and distributed program properties using Object-Based Graph Grammars (GGBO); (c) load-balancing algorithms for dynamic irregular applications on distributed memory platforms; (d) structural test methodologies for parallel programs.

The development of parallel applications for different categories of distributed memory architectures, such as heterogeneous clusters or grids, is the other major research line of the CAP group. Our focus is on developing new high-performance algorithms and programs for scientific or industrial problems. Recently, the CAP group has been working on parallel solutions for three different applications: (a) a document rendering engine for high-speed printers (FOP - Formatting Objects Processor); (b) visualization of medical data for image-based diagnosis; (c) simulation of electron trajectories in field emission displays.

University of Toronto, Computer Engineering Research Group

Caching and Replication for Dynamic Content Web Sites

Cristiana Amza

As commercial use of the Internet continues to grow, the performance and availability of dynamic content web sites become increasingly important.
Consequently, caching and cluster servers, the traditional tools for scaling, are becoming a focus of attention in both the industrial and academic communities.

In this project, we study dynamic content caching and clustering techniques based on content replication, providing both high performance and strict consistency guarantees to the user.
Our system combines ideas from content-aware static servers, relaxed consistency in shared-memory systems, and eager replication with serializability from database research.

To evaluate our replication and caching techniques, we use the standard TPC-W benchmark from the Transaction Processing Performance Council. This benchmark is designed to be representative of an e-commerce workload, specifically an on-line bookstore. It specifies the site's data and the possible interactions with the data.

In a previous collaboration with Emmanuel Cecchet and Julie Marguerite at INRIA, we implemented a web site meeting the TPC-W specification using several popular platforms for the business logic: PHP, Java servlets, and EJBs.
These environments have become a de facto standard, at least on Unix platforms. A previous simulation study of ours shows promising scaling results, and we plan to investigate the scalability limits experimentally on this large cluster.

Laboratory of Glaciology and Environmental Geophysics - CNRS - UJF Grenoble

Local Ice Flow modeling

Olivier Gagliardini

The aim of the project is to study the flow of ice around the drilling sites in Antarctica and Greenland. The model computes both the velocity of the ice and the evolution of its fabric. Both transient and steady-state simulations will be performed.


Gérard d'Aubigny

Data Analysis



Stephane Frenot

The project aims at deploying a collection of OSGi gateways in order to distribute computation.


Image Processing

Andrei Doncescu

Particle physics and cosmology - College de France

Tristan Beau

LSR-IMAG Laboratory


Claudia Roncancio

The NODS project (Networked Open Database Services) aims at defining an open, adaptable, evolutionary architecture that can be extended and customized on a per-application basis. A database system is seen as an infrastructure of cooperating, adaptable, and extensible services from which applications can build their customized NODS database components. Furthermore, the configuration of services or database systems can be adapted at run time (e.g., adding new services or changing services' internal policies) according to environmental changes.

Laboratory of Molecular Biology of the Cell - ENS Lyon


Vincent Laudet

The LBMC is undertaking research aimed at clarifying the molecular basis for the functioning and the fate of cells (division, proliferation, apoptosis, senescence, differentiation).

PRiSM : Université de Versailles St-Quentin en Yvelines Laboratory


Van-Dat CUNG

Software prototypes for combinatorial optimization applications.
Evaluation of grid platform computing power.


Rachid Guerraoui

Evaluate and test different total-order broadcast algorithms on a high-performance cluster.
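One classical family of total-order broadcast algorithms is the fixed-sequencer approach; a minimal sketch in Python (names and the simulated out-of-order delivery are illustrative, not one of the algorithms under evaluation):

```python
from itertools import count

class Sequencer:
    """Fixed-sequencer total-order broadcast: every message goes through
    one node that assigns global sequence numbers, so all receivers
    deliver in the same order."""
    def __init__(self):
        self._next = count()
    def order(self, msg):
        return (next(self._next), msg)

class Receiver:
    """Delivers messages strictly in sequence-number order, buffering
    out-of-order arrivals until the gap is filled."""
    def __init__(self):
        self.expected = 0
        self.buffer = {}
        self.delivered = []
    def receive(self, seq, msg):
        self.buffer[seq] = msg
        while self.expected in self.buffer:
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1

seq = Sequencer()
ordered = [seq.order(m) for m in ("a", "b", "c")]
r = Receiver()
for s, m in reversed(ordered):   # simulate out-of-order network arrival
    r.receive(s, m)
```

The sequencer is a throughput bottleneck and single point of failure, which is precisely why alternatives (token-based, causal-history, consensus-based) are worth comparing on a real cluster.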

INRIA Sophia Antipolis


Jean-Daniel Boissonnat

CGAL is the Computational Geometry Algorithms Library, written in C++ and developed (in part) by the GEOMETRICA group at INRIA.
CGAL contains many algorithms and data structures such as various kinds of triangulations.
Some of its strengths are robustness, genericity and efficiency.


Isabelle Attali

In the domain of distributed applications, networks (Internet and intranets), smartcards, and terminals, our goal is to propose fundamental principles, techniques, and tools for the construction, analysis, validation, verification, and maintenance of reliable systems. ProActive is a 100% Java library for parallel, distributed, and concurrent computing on the Grid.

INRIA Futurs


Olivier Coulaud

Our purpose is to analyze, design, and develop a software environment for steering distributed numerical simulations through visualization. This environment should combine the facilities of virtual reality with the capabilities of existing high-performance simulations. Simple integration of an existing simulation should allow the end user to visualize its evolution and interact with the numerical scheme based on intermediate results.


Marc Schoenauer

This project aims at studying SBGE, a stack-based genetic encoding designed during Samuel Landau's PhD thesis. This genetic encoding is used to evolve structures and, to date, has only been applied to the evolution of finite-state automata. During the project, SBGE will be applied to other kinds of networked structures and compared with other genetic encodings. These comparisons will rely on the analysis of experimental results, which is why computing power is needed.

IRISA Rennes


Francois Bodin

The Caps team studies both hardware and software issues in the design of high-performance computer systems. Peak computer performance grows steadily, but this increase is obtained through ever-rising hardware complexity. Several levels of parallelism are now used in hardware, and high performance can only be reached by exploiting all of these levels simultaneously in applications; tuning application performance has therefore become a highly technical activity. Research in the Caps team aims at efficiently exploiting the various levels of parallelism available in machines while hiding most of the hardware complexity from the user.



Olivier Festor

The goal of the MADYNES research team is to design, validate, and deploy novel management and control paradigms, as well as software architectures able to cope with the growing dynamicity and scalability issues induced by the ubiquitous Internet. We use I-cluster2 for JMX benchmarks.

INRIA Rhône-Alpes


Cyril Soler

RealReflect is an endeavour to increase the realism of Virtual Reality technology to levels where it can be used for meaningful qualitative reviews of virtual prototypes and scenes.


Bill Triggs

Learning methods used will include support vector machines and other discriminant methods, statistical models including large mixture models, Markov Random Fields and similar network models. The learning task reduces essentially to large scale continuous optimization or mathematical programming calculations.


Toan Nguyen

The project has several objectives:
to analyze mathematically coupled PDE systems involving one or more disciplines in the perspective of geometrical optimization or control;
to construct, analyze and experiment numerical algorithms for the efficient solution of PDEs (coupling algorithms, model reduction), or multi-criterion optimization of discretized PDEs (gradient-based methods, evolutionary algorithms, hybrid methods, artificial neural networks, game strategies);
to develop software platforms for code-coupling and for parallel and distributed computing. Major applications include the multi-disciplinary optimization of aerodynamic configurations (wings in particular) in partnership with Dassault Aviation and Piaggio Aero France, and the geometrical optimization of antennas in partnership with France Télécom and Thalès Air Défense (see Opratel Virtual Lab.).


Vincent Roch

The main objective of the group is to propose and study new architectures, services and protocols that will enable seamless mobility, enhanced services support and multicast communication through the Internet. We have designed a large block Low-Density Parity-Check (LDPC) codec, and a simpler variant, a Low-Density Generator Matrix (LDGM) codec, both capable of operating on source blocks that are several tens of megabytes long.
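The erasure-coding idea behind LDGM-style codecs can be sketched as follows (a toy illustration with hypothetical names, not the group's codec): each parity symbol is the XOR of a sparse random subset of source symbols, and a lost symbol covered by a parity can be recovered by XORing the parity with the remaining covered symbols.

```python
import random
from functools import reduce

def xor_bytes(blocks):
    """Bytewise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def make_parity(source, degree, rng):
    """One parity symbol: the XOR of a small random subset of source
    symbols, as in sparse (low-density) generator matrices."""
    idx = rng.sample(range(len(source)), degree)
    return idx, xor_bytes([source[i] for i in idx])

def repair(idx, parity, source, lost):
    """Recover one lost source symbol from a parity that covers it,
    provided every other covered symbol is still available."""
    return xor_bytes([parity] + [source[i] for i in idx if i != lost])

rng = random.Random(1)
source = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
idx, parity = make_parity(source, degree=3, rng=rng)
```

Keeping the subsets small (low density) is what makes encoding and iterative decoding cheap enough for source blocks tens of megabytes long.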


Andre Freyssinet

JORAM incorporates a 100% pure Java implementation of JMS (the Java Message Service API released by Sun Microsystems, Inc.). It provides access to a MOM (Message-Oriented Middleware) built on top of the ScalAgent agent-based distributed platform.

