Contribution to the Deployment of a Distributed and Hierarchical Middleware

Preliminaries on Grid Computing

In this chapter, we present the background and related work needed to understand the first part of this dissertation, on the deployment of a Grid-RPC middleware. We first present, in the next section, the concept of grid computing: we recall the definition of a grid, and give examples of several projects providing computing infrastructures. In Section 1.2, we give an overview of existing grid middleware, i.e., the means of accessing grids' computing power. We particularly focus on one simple yet efficient approach to building grid middleware, the Grid-RPC approach, and we present the architecture of one of its implementations: the Diet middleware. Finally, in Section 1.3, we present related work on the problems addressed in the next chapter, as well as the optimization techniques we make use of.

Grid Computing

Molecular biology, astrophysics, and high energy physics are only a few examples among the numerous research fields that need tremendous computing power to execute simulations or analyze data. Increasing the computing power of individual machines to cope with these endlessly growing needs has its limits. The natural evolution was to divide the work among several processing units. Parallelism was first introduced with monolithic parallel machines, but the arrival of high-speed networks, and especially Wide Area Networks (WAN), made possible the concept of clusters of machines, which were further extended to large-scale distributed platforms, leading to a new field in computer science: grid computing.

The first definition of a grid was given by Foster and Kesselman in [95]: a grid is a distributed platform resulting from the aggregation of heterogeneous resources. They draw an analogy with the electrical power grid: the computing power provided by a grid should be transparently available from everywhere, and for everyone. The ultimate purpose is to provide scientific communities, governments, and industries with unlimited computing power, in a transparent manner. This raises many research challenges, due to the complexity of the infrastructure. Heterogeneity is present at all levels, from the hardware (computing power, available memory, interconnection network) to the software (operating system, available libraries and software), through the administration policies.

From this definition, several kinds of architectures were born. One of the most commonly used, referred to as remote cluster computing, is the aggregation of many loosely coupled networked computers, usually grouped into clusters of homogeneous and well-connected machines. These infrastructures are often dedicated to scientific or industrial needs, and thus provide a large amount of computing resources and quite good stability. Users from multiple administrative domains can collaborate and share resources by creating a Virtual Organization (VO): a VO grants a group of users access to a subset of the available machines. We will now present several grid projects, as well as examples of applications that require huge amounts of computing power and can make the most of distributed infrastructures.

Examples of Grid Infrastructures

In this section, we present six national or international grid infrastructures that are representative of existing platforms. We divide them into two categories: research grids and production grids. Research grids aim at providing computer scientists with a platform on which they can confront their theoretical research with real executions in a controlled environment, whereas production grids are stable environments aimed at executing applications from other research fields.

Research grids

ALADDIN-Grid’5000 [51] is a French project, supported by the French ministry of research, regional councils, INRIA, CNRS, and universities, whose goal is to provide an experimental testbed for research on grid computing. Since 2003, it has provided a nationwide platform, distributed over 9 sites, containing more than 6,200 cores in 30 clusters. All sites are interconnected through 10 Gb/s links, supported by the Renater Research and Educational Network [21]. As grids are complex environments, researchers needed an experimental platform to study the behavior of their algorithms, protocols, etc.
The particularity of Grid’5000 is to provide a fully controllable platform on which all layers of the grid can be customized, from the network to the operating systems. It also provides an advanced metrology framework for measuring data transfers, CPU, memory, and disk consumption, as well as power consumption. It is one of the most advanced research grids, and has served as a model and starting point for building other grids, such as the American project FutureGrid. Grid’5000 has also already been interconnected with several other grids, such as DAS-3 [4] in the Netherlands and NAREGI [16] in Japan, and new sites outside France, such as in Brazil and Luxembourg, will be connected in the future.

FutureGrid [7] is a recent American project, started in October 2009, mimicking the Grid’5000 infrastructure. FutureGrid’s goal is to provide a fully configurable platform to support grid and cloud research. It contains about 5,400 cores spread over 6 sites in the USA. One of the goals of the project is to understand the behavior and utility of cloud computing approaches. FutureGrid will form part of the National Science Foundation’s (NSF) TeraGrid high-performance production grid, and will extend its current capabilities by giving access to the whole grid infrastructure stack: networking, virtualization, software, etc.

OneLab [17] is a European project, currently in its second phase. The first phase, from September 2006 to August 2008, consisted in building an autonomous European testbed for research on the future Internet; the resulting platform is PlanetLab Europe [20]. In its second phase, running until November 2010, the project aims at extending the infrastructure with new kinds of testbeds, including wireless (NITOS) and high-precision measurement (ETOMIC) testbeds. It also aims at interconnecting PlanetLab Europe with other PlanetLab sites (Japan and the USA) and with other infrastructures. PlanetLab hosts many projects around Peer-to-Peer (P2P) systems. P2P technologies allow robust, fault-tolerant tools for content distribution. They rely on totally decentralized systems in which all basic entities, called peers, are equivalent and perform the same task. Though initially outside the scope of grid computing, P2P has progressively gained a major place in grid research. This convergence between P2P systems and grid computing has been highlighted in [94].

Production grids

EGEE (Enabling Grids for E-sciencE) [5] is a project started in 2004 and supported by the European Commission. It aims at providing researchers in academia and business with access to a production-level grid infrastructure. EGEE has been developed around three main principles: (i) provide a secure and robust computing and storage grid platform, (ii) continuously improve software quality in order to provide reliable services to end users, and (iii) attract users from both the scientific and industrial communities. It currently provides around 150,000 cores spread over more than 260 sites in 55 countries, as well as 28 petabytes of disk storage and 41 petabytes of long-term tape storage, to more than 14,000 users. Whereas the primary platform usage initially focused on high energy physics and biomedical applications, there are now more than 15 application domains making use of the EGEE platform. Since the end of April 2010, the EGEE project is no longer active; the distributed computing infrastructure is now supported by the European Grid Infrastructure, EGI [6].

TeraGrid [23] is an American project supported by the NSF.
It is an open scientific production grid federating eleven partner sites to create an integrated and persistent computational resource. Sites are interconnected through a dedicated high-speed 40 Gb/s network. The available machines are quite heterogeneous: one can find clusters of PCs, vector machines, parallel SMP machines, and even supercomputers. On the whole, more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage are available to industry and scientists.

World Community Grid [25] is a project initiated and supported by IBM. Contrary to the previously presented projects, this system does not rely on a federation of dedicated resources, but on a federation of personal computers that individuals are willing to share for research. This type of platform is quite different from dedicated computing platforms: there is no dedicated network, and the computing resources cannot be relied upon, as computations usually take place only when a computer is idle, i.e., when its owner does not use it. The research done on the World Community Grid is mainly medical research. In its current state, about 506,000 users provide a total of 1,505,860 machines. This platform belongs to another branch of grid computing: desktop computing, also known as volunteer computing. It relies on the observation that the personal computers (PCs) sold to the public are nowadays quite powerful, but rarely used to their full capabilities: most of the time, the owners leave their computers in an idle state. Thus arose in the 1990s the idea of using cycle stealing on those computers to do useful research, with two main projects: the Great Internet Mersenne Prime Search in 1996, followed by Distributed.net in 1997. The idea is to use the idle time of PCs connected to the Internet to run research software. As the performance of PCs and the number of users keep increasing, the available computing power makes it possible to solve huge problems. Using the above-mentioned platforms to solve large problems ranging from numerical simulations to life science is nowadays common practice [45, 95]. We will now give examples of such applications ported to a grid.

What Kind of Applications do we Find in Grid Computing?

Grid infrastructures can be used to solve many types of problems. An application can be computation intensive if it requires lots of computation, or I/O intensive if it requires lots of input/output operations; it can also be sequential if it needs to be executed on a single processor, or on the contrary parallel if it runs on several processors. Grid computing can thus encompass many research fields, and many applications (or services) are available. We now present some of the applications commonly found on grid infrastructures.

Many simulations, such as acoustic or electromagnetic propagation and fluid dynamics, can be represented as sets of linear equations stored in very large, but sparse, matrices. Several sparse matrix solvers [30, 74] can be executed on grid infrastructures. When not relying on linear equations, problems can also be represented by differential equations, which require a different solving approach. A widely used method is adaptive mesh refinement (AMR), used for example in universe formation simulations [110, 131, 160] (we study such cosmological simulations in Part IV of this dissertation) and in climate predictions [85].

Bioinformatics research uses different approaches in order to discover, for example, new proteins. Some rely on docking techniques [82], others on the comparison of DNA sequences against large databases using BLAST [29] (Basic Local Alignment Search Tool). Much bioinformatics research is carried out thanks to the World Community Grid, such as discovering dengue drugs, or helping to find cures for muscular dystrophy, cancers, or even AIDS.

High energy physics is one of the most data- and computation-intensive research fields. The LHC (Large Hadron Collider) [13] is the world’s largest and highest-energy particle accelerator. At full operation intensity, the LHC can produce roughly 15 petabytes of data annually. The data analysis is then realized thanks to the Worldwide LHC Computing Grid project (WLCG) [24], which federates more than 100,000 processors from over 130 sites in 34 countries.

Finally, scientific computing often relies on matrix manipulations. Several linear algebra libraries are thus available, either to be integrated directly within an application, or to be called remotely. Among those, we can cite BLAS [84], LAPACK [35], ScaLAPACK [49], and the SL3 library [153] from Sun. These libraries provide basic linear algebra subroutines, matrix factorizations, eigenvalue computations, etc. All these techniques and application domains are of course only a small subset of what can be found in grid computing.
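As a small illustration of how such libraries are called, the following C snippet uses the CBLAS interface to BLAS to compute the matrix product C = alpha*A*B + beta*C on tiny dense matrices. The sizes and values are arbitrary, chosen only to show the shape of the call.

    #include <stdio.h>
    #include <cblas.h>   /* CBLAS interface to the BLAS dgemm routine */

    int main(void) {
        /* 2x3 matrix A, 3x2 matrix B, 2x2 result C, in row-major order. */
        double A[] = {1, 2, 3,
                      4, 5, 6};
        double B[] = {1, 0,
                      0, 1,
                      1, 1};
        double C[4] = {0};

        /* C = 1.0 * A * B + 0.0 * C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 3,      /* m, n, k */
                    1.0, A, 3,    /* alpha, A, lda */
                    B, 2,         /* B, ldb */
                    0.0, C, 2);   /* beta, C, ldc */

        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

The same dgemm routine, wrapped as a remotely callable service, is a typical example of what NES environments such as those of the next section expose to their clients.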

Accessing Grid Computing Power

Grid Middleware

Hiding the complexity and heterogeneity of a grid, and providing transparent access to resources and services, is the aim of the software tools called middleware. A middleware is a layer between the end-user software and the services and resources of the grid. Its main goals are to provide data management, service execution, monitoring, and security. Two of the most important grid middleware are gLite [121], which is part of the EGEE project, and the Globus Toolkit [93]. Both are toolkits providing a framework that eases the building of grid applications. We can also cite BOINC [33], which specifically targets volunteer computing.

The Grid-RPC Approach

Among grid middleware, a simple yet efficient approach to provide transparent and productive access to resources consists in using the classical Remote Procedure Call (RPC) method. This paradigm allows an object to trigger a call to a method of another object wherever that object is, i.e., it can be a local object on the same machine, or a distant one; in the latter case the communication complexity is hidden. Many RPC implementations exist; among those, CORBA [134] and Java RMI [152] are the most used. Several grid middleware [57] are available to tackle the problems of finding services available on distributed resources, choosing a suitable server, executing the requests, and managing the data. Several environments, called Network Enabled Servers (NES) environments [125], have been proposed. Most of them share a common design based on three main components: clients, which are applications that use the NES infrastructure; agents, which are in charge of handling the clients’ requests (scheduling them) and of finding suitable servers; and finally computational servers, which provide the computing power to solve the requests. Some of these middleware rely only on a basic hierarchy of elements, a star graph, such as NetSolve/GridSolve [63, 170], Ninf-G [154], OmniRPC [145], and SmartGridRPC [53]. Others, in order to divide the load at the agent level, can have a more complex hierarchy shape, such as WebCom-G [129] and Diet [59]. RPC has been specialized in the context of grid computing and gave birth to Grid-RPC [146], thanks to the Open Grid Forum [18]. The Grid-RPC working group of the Open Grid Forum defined a standard Grid-RPC API [157], which allows clients to write their applications regardless of the underlying middleware. Currently, only five NES environments implement this standard API: GridSolve, Ninf-G, OmniRPC, SmartGridRPC, and Diet.
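To give a feel for this API from the client side, here is a minimal sketch of a Grid-RPC client in C. The call sequence (initialize, bind a function handle, call, clean up) follows the standard API [157]; the configuration file name and the "matmul" service with its argument list are hypothetical, as the actual service signatures depend on what the servers expose.

    #include <stdio.h>
    #include "grpc.h"   /* standard Grid-RPC API header */

    int main(void) {
        grpc_function_handle_t handle;
        double x[100], y[100];          /* input/output data for the call */

        /* Read the middleware-specific configuration (hypothetical file). */
        if (grpc_initialize("client.cfg") != GRPC_NO_ERROR) {
            fprintf(stderr, "grpc_initialize failed\n");
            return 1;
        }

        /* Bind the handle to the (hypothetical) "matmul" service; the
         * middleware's agents perform server discovery and scheduling. */
        grpc_function_handle_default(&handle, "matmul");

        /* Synchronous remote invocation; the argument list matches the
         * service signature exposed by the servers. */
        grpc_call(&handle, x, y, 100);

        grpc_function_handle_destruct(&handle);
        grpc_finalize();
        return 0;
    }

The standard also defines asynchronous variants (grpc_call_async, grpc_wait, and related calls), which let a client keep several requests in flight at once, a natural fit for a middleware whose agents schedule requests over many servers.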
The Diet Middleware

As this dissertation mainly focuses on the Diet middleware, we now present its architecture. Diet is based on the client-agent-server model. The particularity of Diet is that it allows a hierarchy of agents, so as to distribute the scheduling load among them. A Diet hierarchy, as presented in Figure 1.1, starts with a Master Agent (MA). This is the entry point: every request has to flow through the MA, which a client contacts via the CORBA naming service. The MA can then rely on a tree of Local Agents (LAs) to forward the requests down the tree until they reach the servers. An agent has essentially two roles: first it forwards incoming requests, then it aggregates the replies and does partial scheduling based on some scheduling policy (shortest completion time first, round-robin, heterogeneous earliest finish time, etc.). Finally, at the bottom of the hierarchy are the computational servers. They are hidden behind Server Daemons (SeDs). A SeD encapsulates a computational server, typically on a single computer or on the gateway of a cluster. A SeD implements a list of available services; this list is exposed to the parent of the SeD in the Diet hierarchy. A SeD also provides performance prediction metrics, which are sent along with the reply whenever a request arrives. These metrics are the building blocks for scheduling policies. As the MA is the only entry point of the hierarchy, it could become a bottleneck. To cope with this problem, several Diet hierarchies can be deployed alongside each other and interconnected using CORBA connections.

[Figure 1.1: Diet middleware architecture. Clients contact a Master Agent (MA), which forwards requests through trees of Local Agents (LAs) down to the SeDs; internal Diet tree connections, as well as connections between several Diet trees, go through CORBA.]

Deployment Software

Another concern is how an element is “really” deployed: how are the binaries and libraries sent to the relevant machines, and how are they remotely executed? This problem appears whenever an application needs to be remotely executed: how is it deployed? We can distinguish three kinds of deployment. The lowest-level one consists in deploying system images, i.e., installing a new operating system, along with the relevant software, on a machine. This process requires the ability to remotely reboot the node and install a whole system image on it. Kadeploy [12] is an example of software for deploying system images on grid environments. At an intermediate level, we find the deployment of virtual machines, which basically consists in deploying a system on top of another system. Several virtual machines can then be hosted by a single physical machine. This kind of deployment offers different levels of virtualization, as it can either just virtualize the system without hiding the hardware, or emulate another kind of hardware. We can cite Xen [38] and QEMU [43] as examples. Finally, the last level of deployment is software deployment. It consists in installing, configuring, and executing a piece of software on resources, on top of an already existing operating system. Deployment is particularly relevant for Grid-RPC middleware, as the deployment needs to be coordinated (servers need to be launched once the agents are ready), and many elements need to be deployed (an agent and/or server per machine). Moreover, some deployment software provides autonomic management, which can be of great help when managing a platform over a long period.

Several solutions exist, ranging from application-specific to totally generic software. ADAGE [119, 120] is a generic deployment software. It relies on modules specific to each software deployment. Its design decouples the description of the application from the description of the platform, and allows specific planning algorithms to be plugged in. ADAGE targets static deployment (i.e., “one-shot” deployment with no further modifications); it can however be used in conjunction with CoRDAGe [69] to add basic dynamic adaptation capabilities. ADEM [108] is a generic deployment software relying on the Globus Toolkit. It aims at automatically installing and executing applications on the Open Science Grid (OSG), and can transparently retrieve platform descriptions and information. DeployWare [91, 92] is also a generic deployment software; it relies on the Fractal [137] component model.
The mapping between the components and the machines has to be provided, as no automatic planning capabilities are offered. GoDiet [58] is a tool specifically designed to deploy the Diet middleware; an XML file describing the whole deployment has to be provided. TUNe [54] is a generic autonomic deployment software. Like DeployWare, it relies on the Fractal component model. TUNe targets autonomic management of software, i.e., dynamic adaptation depending on external events. Sekitei [114] is not exactly a “deployment software”, as it only provides an artificial intelligence algorithm to solve the component placement problem. It is meant to be used as a mapping component by other deployment software. Finally, Weevil [162, 163] aims at automating experiment processes on distributed systems. It allows application deployment and workload generation for the experiments. However, it does not provide automatic mapping.

Apart from the installation and the execution of the software itself, which has been a well-studied field, another important point is the planning of the deployment, i.e., the mapping between the software elements and the computational resources. Whereas deployment software can cope with the installation and execution parts, very few propose intelligent planning techniques. Hence the need for planning algorithms.

Concerning Deployment

In this section, we present related work on middleware modeling and evaluation, as well as two optimization techniques used in our approach to solve the problem of deploying a Grid-RPC middleware.

Middleware Models and Evaluations

The literature does not provide many papers on the modeling and evaluation of distributed middleware systems. In [155], Tanaka et al. present a performance evaluation of Ninf-G; however, no theoretical model is given. In [68], P.K. Chouhan presents a model for hierarchical middleware, along with algorithms to deploy a hierarchy of schedulers on cluster and grid environments, and compares the model with the Diet middleware. She models the maximum throughput of the middleware when it works in steady state, meaning that only the period when the middleware is fully loaded is taken into account, and not the initial period when the workload increases, nor the final one when it decreases. A severe limitation of this work, however, is that only one kind of service can be deployed in the hierarchy. Such a constraint is of course not desirable, as nowadays many applications rely on workflows of different services. Hence the need to extend the previous models and algorithms to cope with hierarchies supporting several services.

Several works have also been conducted on CORBA-based systems: studies of the throughput, latency, and scalability of CORBA [101, 102], and of the ORB architecture [27]. Several CORBA benchmarking frameworks have been proposed [55], and some web sites also publish benchmark results, see for example [142]. These benchmarks provide low-level metrics on CORBA implementations, such as the memory consumed for each data type, or the CPU usage for each method call. Though accurate, these metrics do not fit our study, which aims at modeling the behavior of Diet at the user level, not at the level of individual method executions.
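To fix ideas on what such a steady-state model expresses (this is a generic illustration, not the actual model of [68]), suppose an agent needs a time $w_a$ to process and forward one request, and that server $i$ needs a time $w_i$ to solve one request. In steady state, the maximum throughput $\rho$ (in requests per unit of time) that a star-shaped hierarchy with one agent and $n$ servers can sustain is bounded by both the agent and the servers:

    \[
      \rho \;=\; \min\left( \frac{1}{w_a},\; \sum_{i=1}^{n} \frac{1}{w_i} \right)
    \]

Deployment planning then amounts to choosing a hierarchy shape that maximizes this kind of bound on a given set of resources; the models developed in the next chapter refine and extend this idea.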

Tools Used in our Approach

We now introduce two techniques used in the next chapter to solve the deployment planning problems. The two methods embody quite orthogonal ways of thinking: while linear programming aims at obtaining a solution to an optimization problem through the exact resolution of a system of constraints, genetic algorithms try to find a good solution through random searches and improvements. Both techniques are illustrated by the two short sketches below.

Linear Programming

Linear Programming (LP) is a method for the optimization of a linear function (the objective function), subject to several constraints represented as linear equalities or inequalities. More formally, a linear programming problem can be stated in the following form:

    Minimize (or maximize) $c^{\top}x$
    subject to $Ax \leq b$

where $x$ is the vector of unknowns (the variables), $c$ and $b$ are vectors of constant coefficients, and $A$ is a matrix of known coefficients. When all variables can take any value within a range of real numbers, the problem can be solved in polynomial time using the ellipsoid or interior point algorithms. However, if the unknown variables are all required to be integers, we then talk about Integer Linear Programming (ILP), and the optimization problem becomes NP-hard. This technique is widely used, for example for scheduling problems in operations research, or, in our case, for deployment planning. There exist several software packages for solving LP problems, such as the GNU Linear Programming Kit (GLPK) [8], ILOG CPLEX [11], and lpsolve [14]. Linear programming is used in the next chapter, in Sections 2.3 and 2.4, to design planning algorithms for hierarchical middleware on homogeneous platforms, and on communication-homogeneous / computation-heterogeneous platforms.

Genetic Algorithm

Genetic Algorithms [126] belong to the class of meta-heuristics, like tabu search, hill climbing, and simulated annealing; more precisely, genetic algorithms are evolutionary heuristics. They were invented by John Holland in the 1970s [107], and are inspired by Darwin’s theory of evolution. The idea is to have a population of abstract representations (called chromosomes, the genotype of the genome) of candidate solutions (called individuals) to an optimization problem evolve towards better solutions. Starting from a random initial population, each step consists in stochastically selecting individuals from the current population (based on their fitness, i.e., their score towards the optimization goal), modifying them either by combining them (using techniques similar to natural crossover) or by mutating them (randomly modifying the chromosomes), and injecting them into a new population, which in turn is used for the next iteration of the algorithm. Usually, this process is stopped after a given number of iterations, or once a given fitness is attained. There exist quite a few parallel distributed evolutionary algorithm and local search frameworks, such as DREAM [36], MAFRA [116], MALLBA [28], and ParadisEO [56]. A genetic algorithm is used in the next chapter, in Section 2.5, to design a planning algorithm for hierarchical middleware on fully heterogeneous platforms.
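As a first illustration, the following C program uses GLPK (mentioned above) to solve a deliberately tiny LP instance: maximize 3x + 2y subject to x + y <= 4 and x + 3y <= 6, with x, y >= 0. The problem data is arbitrary, chosen only to show the structure of the API calls.

    #include <stdio.h>
    #include <glpk.h>

    int main(void) {
        glp_prob *lp = glp_create_prob();
        glp_set_obj_dir(lp, GLP_MAX);           /* maximize */

        /* Two constraints (rows): x + y <= 4 and x + 3y <= 6. */
        glp_add_rows(lp, 2);
        glp_set_row_bnds(lp, 1, GLP_UP, 0.0, 4.0);
        glp_set_row_bnds(lp, 2, GLP_UP, 0.0, 6.0);

        /* Two variables (columns): x, y >= 0, objective 3x + 2y. */
        glp_add_cols(lp, 2);
        glp_set_col_bnds(lp, 1, GLP_LO, 0.0, 0.0);
        glp_set_col_bnds(lp, 2, GLP_LO, 0.0, 0.0);
        glp_set_obj_coef(lp, 1, 3.0);
        glp_set_obj_coef(lp, 2, 2.0);

        /* Constraint matrix A in sparse triplet form (1-based indices). */
        int    ia[5] = {0, 1, 1, 2, 2};
        int    ja[5] = {0, 1, 2, 1, 2};
        double ar[5] = {0, 1.0, 1.0, 1.0, 3.0};
        glp_load_matrix(lp, 4, ia, ja, ar);

        glp_simplex(lp, NULL);                  /* solve with the simplex method */
        printf("z = %g, x = %g, y = %g\n",
               glp_get_obj_val(lp),
               glp_get_col_prim(lp, 1), glp_get_col_prim(lp, 2));

        glp_delete_prob(lp);
        return 0;
    }

For an ILP, one would additionally mark the columns as integer with glp_set_col_kind and call glp_intopt instead of glp_simplex, at the cost of the NP-hardness mentioned above.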
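As a second illustration, here is a minimal, self-contained genetic algorithm in C. It evolves bit-string chromosomes towards the toy OneMax objective, where fitness simply counts the number of 1 bits; tournament selection, one-point crossover, and bit-flip mutation follow the scheme described above. All parameters (population size, rates, etc.) are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define POP   30      /* population size */
    #define LEN   32      /* chromosome length (bits) */
    #define GENS  100     /* number of generations */
    #define PMUT  0.01    /* per-bit mutation probability */

    /* Toy fitness: count the 1 bits (the "OneMax" problem). */
    static int fitness(const int *c) {
        int i, f = 0;
        for (i = 0; i < LEN; i++) f += c[i];
        return f;
    }

    /* Binary tournament: pick two random individuals, keep the fitter. */
    static const int *select_one(int pop[POP][LEN]) {
        const int *a = pop[rand() % POP], *b = pop[rand() % POP];
        return fitness(a) > fitness(b) ? a : b;
    }

    int main(void) {
        static int pop[POP][LEN], next[POP][LEN];
        int g, i, j;

        for (i = 0; i < POP; i++)               /* random initial population */
            for (j = 0; j < LEN; j++) pop[i][j] = rand() % 2;

        for (g = 0; g < GENS; g++) {
            for (i = 0; i < POP; i++) {
                const int *p1 = select_one(pop), *p2 = select_one(pop);
                int cut = rand() % LEN;         /* one-point crossover */
                for (j = 0; j < LEN; j++)
                    next[i][j] = (j < cut) ? p1[j] : p2[j];
                for (j = 0; j < LEN; j++)       /* bit-flip mutation */
                    if ((double)rand() / RAND_MAX < PMUT) next[i][j] ^= 1;
            }
            memcpy(pop, next, sizeof pop);      /* next generation */
        }

        int best = 0;
        for (i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(pop[best])) best = i;
        printf("best fitness after %d generations: %d/%d\n",
               GENS, fitness(pop[best]), LEN);
        return 0;
    }

For the deployment planning problem of the next chapter, the chromosomes would instead encode candidate mappings of middleware elements onto machines, with the predicted throughput of the resulting hierarchy as the fitness.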

Table of Contents

Introduction
I Deploying Trees
1 Preliminaries on Grid Computing
1.1 Grid Computing
1.1.1 Examples of Grid Infrastructures
1.1.2 What Kind of Applications do we Find in Grid Computing?
1.2 Accessing Grid Computing Power
1.2.1 Grid Middleware
1.2.2 Deployment Software
1.3 Concerning Deployment
1.3.1 Middleware Models and Evaluations
1.3.2 Tools Used in our Approach
2 Middleware Deployment
2.1 Model Assumptions
2.1.1 Request Definition
2.1.2 Resource Architecture
2.1.3 Deployment Assumptions
2.1.4 Objective
2.2 Servers and Agents Models
2.2.1 “Global” Throughput
2.2.2 Hierarchy Elements Model
2.3 Homogeneous Platform
2.3.1 Model
2.3.2 Automatic Planning
2.3.3 Building the Whole Hierarchy
2.3.4 Comparing Diet and the Model
2.3.5 Benchmarking
2.3.6 Experimental Results
2.4 Heterogeneous Computation, Homogeneous Communication
2.4.1 Algorithm
2.4.2 Experiments
2.5 Fully Heterogeneous
2.5.1 Genetic Algorithm Approach
2.5.2 Quality of the Approach
2.5.3 Experiments
2.6 Elements Influencing the Throughput
2.6.1 Bad Scheduling
2.6.2 Logging
2.6.3 Number of Clients
2.6.4 OmniORB Configuration
2.7 Conclusion
II Organizing Nodes
3 Preliminaries on Clustering
3.1 Graph Clustering
3.1.1 General Clustering Techniques
3.1.2 Network Clustering Techniques
3.2 Distributed Clustering
3.3 Concept of Self-Stabilization
3.3.1 Self-Stabilization
3.3.2 Distributed System Assumptions
3.4 The k-clustering Problem and Self-Stabilizing Clustering
4 Dynamic Platform Clustering
4.1 System Assumptions
4.2 The k-Clustering Problem
4.3 Unfair Composition of Self-Stabilizing Algorithms
4.3.1 The Daemon
4.3.2 Input Variables
4.3.3 Combining Algorithms
4.4 Best Reachable Problem
4.4.1 Algorithm NSSBR
4.4.2 Proof of Correctness for NSSBR
4.5 Self-Stabilizing Best Reachable
4.5.1 Algorithm SSBR
4.5.2 Proof of Correctness for SSBR
4.5.3 An Example Computation of SSBR
4.6 The K-CLUSTERING Algorithm
4.6.1 The Module SSCLUSTER
4.6.2 Proof of Correctness for K-CLUSTERING
4.7 Theoretical Bounds
4.7.1 Memory Requirements
4.7.2 Number of Clusterheads
4.7.3 Number of Rounds
4.8 Simulations
4.8.1 Effect of the k Value
4.8.2 Complexity Bounds
4.9 Conclusion
III Mapping Tasks
5 Preliminaries on Scheduling
5.1 Application Models
5.1.1 Directed Acyclic Graph
5.1.2 Embarrassingly Parallel Applications
5.2 Objectives
5.2.1 Based on Completion Time
5.2.2 Based on Time Spent in the System
5.2.3 Based on Energy Consumption
5.3 Scheduling
5.3.1 Scheduling Models
5.3.2 Difficulty
5.3.3 Approaches
6 Scheduling with Utilization Constraints
6.1 Problem Statement
6.1.1 Platform
6.1.2 Applications
6.1.3 Why single node jobs?
6.1.4 Goals
6.2 Scheduling Jobs without Release Dates
6.2.1 A Linear Program for “Rough” Allocation
6.2.2 Scheduling Requests at the Server Level
6.3 Taking Into Account Release Dates
6.3.1 Problem Formulation without Maximum Cluster Utilization Constraint
6.3.2 Taking into Account the Maximum Cluster Utilization
6.3.3 Coping with the Atomic Tasks Problem
6.3.4 Obtaining a Real Schedule
6.3.5 Finding the Best Maximum Stretch
6.4 Conclusion
IV Eyes in the Sky
7 Contribution to Cosmological Simulations
7.1 Transparently Using a Grid
7.1.1 Applications Management
7.1.2 Bypassing Firewalls
7.1.3 Friendly User Interface
7.1.4 Overall Architecture
7.2 Cosmological Simulations
7.2.1 Determining Motions in the Universe
7.2.2 Initial Conditions Generation
7.2.3 Dark Matter Simulations
7.2.4 Mock Galaxies Catalogs
7.2.5 Observations/Simulations Comparison
7.3 Running Cosmological Simulations with Diet
7.3.1 Dark Matter Simulations
7.3.2 Post-Processing
7.3.3 Simulation Workflow
7.3.4 Prototype
7.3.5 Large Scale Experiment
7.4 Conclusion
Conclusion and Future Work
References
Publications
