Incidences de pluies exceptionnelles sur un aquifère libre côtier en zone semi-aride (Chaouia, Maroc)
A. Younsi, J. Mania, E. K. Lhadi et J. Mudry
The groundwaters, which are the only water resources of coastal Chaouia, are highly mineralized as a result of marine intrusion, evaporation and the recycling of salt-laden irrigation water. The spatial and temporal piezometric and hydrochemical evolution of this aquifer was monitored on 179 wells from 1991 to 1998, a span that included an exceptionally rainy period. A comparison was made with the state of the water table in 1971, when groundwater exploitation began. Monitoring showed that following the heavy rains of 1996 (943 mm), a marked rise of the water table and a more or less delayed dilution of all dissolved salts were measured in every well. Indeed, the water table, which had dropped by 10 to 20 m between 1971 and 1995, rose in 1996 by an average of 4.5 to 12.5 m, especially in the central and up-gradient parts of the plain. Water quality, which had deteriorated, improved substantially, with electrical conductivity decreases of 0.5 to more than 4 mS/cm. These facts demonstrate the high sensitivity of the water table to rainfall input, which constitutes its principal source of recharge.
A study of regional pollution in the coastal aquifer system between Oum-er-Rbia River and Bir Jdid (Coastal Moroccan Meseta, Morocco) was based on data gathered from 1995 to 1998. The study improved the hydrogeological and hydrochemical understanding of the aquifers. The study examined the effects of significant rains on the quantity and quality of coastal groundwater. These waters exhibited high sensitivity to the rain input.
Measurements undertaken since 1991 on 179 wells in the study area reveal that the groundwaters are highly mineralized - conductivity reaches more than 10 mS/cm, depending on season and well location. These results also show high chloride concentrations (more than 3500 mg/l), sodium concentrations frequently in the 500 - 1000 mg/l range and nitrates between 150 and 250 mg/l.
In coastal Chaouia, these high concentrations of dissolved mineral salts aggravate the problem of supplying quality water to the rural population for drinking or even market-garden irrigation. The area is characterized by expanding irrigated surfaces (more than 16000 hectares with 4125 m3/s) and a demographic rise that has triggered the uncontrolled boring of more than 2000 wells into a heterogeneous aquifer sensitive to salinity. Possible sources of this high salinity include:
Seawater intrusion into coastal aquifer sectors (mainly into a two-kilometre strip of coastline). The degree and the length of the marine intrusion were exacerbated by intensive pumping for irrigation needs, particularly during the dry season. Other factors such as coastal aquifer permeability, saturation zone thickness and basement depth also affect the degree and length of the marine intrusion. Seawater pollution is more marked in the southwestern coastal area.
Reuse of saline irrigation water, especially where groundwater circulates at shallow depth (down to 10 m below the soil) in the coastal and eastern sectors.
Several important factors have been highlighted concerning the origin of the chemical elements in solution and the mechanism of hydrochemical distribution: evaporation, lithology of tapped aquifers, water table depth, distance of wells from the coast, type of soil, use of fertilizers, frequency of pumping operations, and rainfall, among others.
A regular network of 179 wells was monitored and surveyed in this study. Spatial and temporal changes in the water table and in the hydrochemistry of the aquifer were monitored between 1991 and 1998, a span which included an exceptionally rainy period (1996). Data are compared with those from 1971, the year groundwater exploitation began.
The study region has a semi-arid climate, a mean annual rainfall of 391 mm (1977-1998), a mean temperature of 17.8 °C and a mean infiltration input of 142 mm. In the dry season, high temperatures combined with low rainfall and intensive pumping give rise to salinization of the shallow groundwater. Groundwater in the study area circulates in two principal hydrogeological matrices:
1. Sandy-calcareous Plio-Quaternary and Paleozoic strata in the coastal sectors and in the eastern part located between Tnine Chtouka and Bir Jdid. These strata are characterized by significant porosity and permeability and are exploited at shallow depth, generally less than 14 m, under sandy-clay soil.
2. A Cretaceous aquifer in marly limestone located between the Oum-er-Rbia River and Tnine Chtouka. This aquifer is characterized by low permeability and a water table generally deeper than 24 m.
Monitoring showed that after the heavy rains of 1996 (943 mm, of which 142 mm infiltrated), the water table rose markedly in all study wells and a lagged dilution (3 to 6 months) was noted for all dissolved mineral salts. In fact, the water table, which had dropped between 1971 and 1995 (by 10 m in the coastal sectors and 20 m in the other parts of the study region), rose by an average of 4.5 to 12.5 m in the central and up-gradient parts of the plain. Several factors were at work between 1971 and 1995: decreased pumping times (about 15 minutes) combined with sharp decreases in the thickness of the saturated zone; the appearance of a closed piezometric depression in the western region between the Oum-er-Rbia River and Tnine Chtouka; and the drying-up of 59 wells. With the aid of a polynomial model, the annual level evolution of the five coastal test piezometers showed correlation coefficients of 0.76 to 0.93 for the years 1996 and 1997. Other quantitative effects of the exceptional rains on groundwater include: increased pumping times (up from less than 15 minutes before the heavy rains of 1996); increased saturated-zone thickness; water-table rises surpassing former sea-level heights in certain wells; a general advance of the piezometric levels towards the coastline; and increased exploitable aquifer thickness stemming from longer pumping times.
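The polynomial model mentioned above, fitted to the annual level evolution of the test piezometers, can be sketched as a simple least-squares polynomial fit. The data below are synthetic and purely illustrative (the original piezometric series are not reproduced in the summary):

```python
import numpy as np

def fit_level_trend(t, levels, degree=2):
    """Fit a polynomial trend to a piezometric time series and return
    the fitted values and the correlation coefficient between observed
    and modelled levels (the 'correlation factor' of the summary)."""
    coeffs = np.polyfit(t, levels, degree)
    fitted = np.polyval(coeffs, t)
    r = np.corrcoef(levels, fitted)[0, 1]
    return fitted, r

# Hypothetical monthly depths to water (m below ground) during a recovery:
t = np.arange(12)
levels = 20.0 - 0.8 * t + 0.02 * t**2 \
    + np.random.default_rng(0).normal(0.0, 0.3, 12)
fitted, r = fit_level_trend(t, levels)
```

With a clear trend and modest noise, r falls in the same 0.76-0.93 range reported for the coastal piezometers or above it.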
In terms of quality, groundwater showed gradually increasing salinity (by a factor of 2) between 1971 and 1995 and significant improvements after the rains of 1996. Between 1971 and 1995, increased mineralization was observed mainly in the coastal and eastern parts of the aquifer between Tnine Chtouka and Bir Jdid. The conductivity of the waters in some coastal wells increased from 5 mS/cm to more than 10 mS/cm. After the exceptional rains of 1996, decreases ranging from 0.5 to more than 4 mS/cm were observed in 71 % of the 179 wells analysed. Over the same period, chloride concentrations decreased by average values ranging from 150 to 500 mg/l (the largest decreases surpassed 1000 mg/l). By contrast, the spatial distributions of conductivity and of the main mineral salts remained unchanged, and the coastal and eastern groundwaters were still the most vulnerable to high salinity.
The quantitative and qualitative effects of heavy rain point to the high sensitivity of these groundwaters to rain input, which is the principal recharge source of the studied aquifer.
P. Miháltz, Zs Csikor, P. Chatellier et B. Siklódi
An experimental study of denitrification was carried out on fluidized-bed bioreactors fed with an effluent whose nitrate content can reach 900 mg NO3-N/l. Measurements showed that, at these concentrations, there is no substrate inhibition (NO3- or ethanol). The nitrate flux removed by the reactor reached 10 kg NO3-N/m3 d, with practically 100 % nitrate removal. This flux nevertheless appears to depend on the quantity of denitrifying biomass on the sand support.
For optimal control of the process, it is essential to be able to:
- easily determine the mass concentration (G) of the biomass;
- avoid extreme values leading to excessive bed expansion and to diffusional limitations.
An earlier study by the authors served as the basis for establishing the correlation between the pressure gradient in the bed, expressed as a function of the particle content (Cp), and the values of G. A correlation was determined and its constants validated over a range of G extending up to 100 mg VS/g support.
The volumetric biomass concentration (X) was then established using parameters characterizing the hydrodynamic behaviour of the fluidized bed. For the system studied, the maximum value of X is 19-20 g VS/l, corresponding to G values of 80-100 mg VS/g support. The correlations obtained can be used for design purposes as well as for optimizing biofilm thickness during reactor operation.
Fluidized sand-bed, fixed-film denitrifying reactors were tested for the treatment of high strength waters and for the optimization and control of biofilm thickness. Two reactors with sand (0.63 - 0.8 mm) as the carrier particle were operated. Ethanol and propionic acid were used as carbon sources. Nitrate concentrations were in the range of 200-900 mg NO3- -N/l. Tests showed no substrate inhibition (NO3- or ethanol) at these concentrations (Fig. 3). The nitrate removal capacity of the reactors reached 10 kg NO3- -N/m3 d, which corresponded practically to 100 % nitrate removal efficiency (Fig. 2). Nitrite formation was only observable where other conditions (e.g., unfavourable pH) hindered nitrate removal (Fig. 4).
Since biofilm growth is a parameter of major influence on reactor performance and mechanical/hydrodynamic functioning, its control is indispensable. A method was developed for simplified determination and optimization of biofilm coverage. This method is based on the expansion coefficient (E) and specific particle volume (ɛ0) parameters. The former is defined as the slope of the bed height-fluidization rate plot (eq.3), and the specific particle volume can be calculated from the intercept (eq.4).
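The determination of E and ε0 described above amounts to a straight-line fit of bed height against fluidization rate: the slope gives the expansion coefficient, the intercept feeds the specific-particle-volume calculation (eq. 4, not reproduced in the summary). A minimal sketch with hypothetical readings:

```python
import numpy as np

def bed_expansion_fit(u, h):
    """Fit the bed height vs. fluidization rate plot.
    The slope is the expansion coefficient E (eq. 3 of the summary);
    the intercept h0 is the quantity from which the specific particle
    volume eps0 is derived (eq. 4, assumed known to the reader)."""
    E, h0 = np.polyfit(u, h, 1)
    return E, h0

# Hypothetical readings: upflow velocity (m/h) vs. expanded bed height (m)
u = np.array([20.0, 30.0, 40.0, 50.0])
h = np.array([1.10, 1.35, 1.60, 1.85])
E, h0 = bed_expansion_fit(u, h)  # E = 0.025 m per (m/h), h0 = 0.60 m
```

The units and values here are placeholders; only the slope/intercept procedure reflects the method described.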
The particle content (Cp) (Fig. 1, eqs. 2 and 6) ties these parameters to the pressure gradient measured along the fluidized bed, as introduced in a previous study (Csikor et al., 1995). The method was simplified by replacing biofilm thickness with the biofilm coverage (G), which is easy to determine gravimetrically. For the determination of fluidization and biofilm parameters, samples with differing biofilm thickness were taken from different points of the fluidized bed (Fig. 6) and tested in a small fluidized-bed reactor. G was found to be linearly correlated with E and ε0 (Figs. 8 and 9). The reliability of the linear relationship was checked by transforming the biofilm coverage data into biofilm thickness and comparing with previous results.
It was shown that differences in microbial cultures cause negligible differences in the hydrodynamics of fluidization (Fig. 5). Volumetric biomass concentration (X), which is directly related to G (Fig. 7), can thus be determined using simple hydrostatic pressure tests. It was demonstrated that X has an optimal value (Figs. 7 and 11) and can reach 19 - 20 g VS/l under normal operating conditions. This corresponds to a G between 80 - 100 mg VS/g support. Increased biofilm thickness does not improve X but increases the diffusion limitation.
The sensitivity of the Cp-based biofilm measurement is greater with thin biofilms. However, the real volumetric biomass concentration is less sensitive to changes with thick biofilms, which counterbalances this effect (Fig. 10).
J. P. Schlumpf, D. Trebouet, F. Quemeneur, J. P. Maleriat et P. Jaouen
Despite prior biological treatment, pig manure and landfill leachate both retain a residual Chemical Oxygen Demand (COD) above 500 mg O2.l-1: a value 4 to 5 times too high for direct discharge into the natural environment. Nanofiltration could provide a solution as a finishing treatment. In this experimental study, two nanofiltration membranes were operated at pilot scale to compare their efficiency in reducing the non-biodegradable COD of the two effluents. In a first stage, the study, conducted at constant concentration, showed that performance (permeation flux and permeate COD) depends mainly on the membrane-effluent pairing. In the case of pig manure, the foulant layer formed on the membrane surface is compressible and poorly structured, which makes it more sensitive to variations in hydrodynamic conditions. In the case of the leachate, the layer formed is less dependent on the operating conditions. After the best operating conditions for reducing the COD of the two effluents had been selected, the nanofiltration tests were run in concentration mode with the pressure fixed at 15 bar and the recirculation velocity at 1.5 m.s-1. Reaching a volume reduction factor of 4 led, on the one hand, to a sharper decline of the permeation fluxes for pig manure than for the leachate and, on the other hand, to a larger increase of the permeate COD for the manure. At the end of concentration, the COD value then exceeded that required for discharge into the natural environment (120 mg O2.l-1).
Pig manure and landfill leachate cannot be treated by conventional biological treatment alone, because a "refractory" COD persists, exceeding 500 mg O2.l-1: four to five times too high for direct discharge into the environment. Nanofiltration, an intermediate process between reverse osmosis and ultrafiltration, may be an interesting alternative as a final treatment: lower pressures can be used and fluxes are higher than in reverse osmosis. The present study compared the treatability of pig manure and landfill leachate after biological treatment using a pilot-scale nanofiltration plant. Performance was evaluated in terms of permeate COD and permeate flux versus operating conditions (applied pressure, crossflow velocity and recovery rate). Two tubular organic nanofiltration membranes with 450 Da cut-offs were used for pilot-scale testing: MPT-20 (polyacrylonitrile) and MPT-31 (polysulfone). Preliminary experiments carried out at constant concentration showed that performance (permeation flux and permeate COD) depends mainly on the nanofiltration membrane/effluent pairing. Permeate fluxes obtained with the MPT-20 membrane were higher than those obtained with the MPT-31. Increased crossflow velocity produced a particularly marked flux increase for pig manure. Moreover, the flux obtained with pig manure decreased at pressures above 15 bar, whereas for the landfill leachate it became constant regardless of the applied pressure. COD retention was better in the case of pig manure and increased with pressure. On the other hand, high crossflow velocity tended to reduce COD retention, particularly for pig manure. The difference stems mainly from the foulant layer on the membrane surface.
In the case of pig manure this layer is compressible and poorly organised, which may explain the influence of the hydrodynamic parameters: crossflow velocity favours the back-migration of potential foulants such as colloids from the membrane surface to the bulk liquid phase. This may explain the increased mass transfer, and the consequent reduction of COD retention, at high tangential velocities. Moreover, higher pressure generates a denser layer, which reduces mass transfer. The influence of the operating conditions was less important for the leachate, whose foulant layer may be more organised and more cohesive.
In the second part of this study, the nanofiltration pilot plant was operated in concentration mode in order to evaluate the influence of the recovery rate on flux and retention. Since COD retention is better with the MPT-31 membrane, the latter was used for the concentration experiments. The applied pressure was fixed at 15 bar and the crossflow velocity at 1.5 m.s-1. Both effluents were concentrated to a volume reduction factor of 4. This reduction of the retentate volume, however, led both to a drop in permeation flux and to a rise of the permeate COD above the environmental norm of 120 mg O2.l-1.
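The volume reduction factor used in the concentration experiments relates directly to the recovery rate. A small sketch of the standard definitions (VRF = feed volume / retentate volume), which are not spelled out in the summary:

```python
def vrf(recovery):
    """Volume reduction factor from the recovery rate Y
    (Y = permeate volume / feed volume): VRF = 1 / (1 - Y)."""
    return 1.0 / (1.0 - recovery)

def recovery_from_vrf(vrf_value):
    """Inverse relation: Y = 1 - 1 / VRF."""
    return 1.0 - 1.0 / vrf_value

# A VRF of 4, as in the experiments above, means 75 % of the feed
# volume has passed through the membrane as permeate.
```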
Adapting potentiometric measurement to the continuous estimation of the copper content of aqueous solutions exhibiting strong physicochemical variations
É. Tisserand, P. Schweitzer, P. Tisserand et J. L. Cécile
This work addresses the continuous, sampling-free measurement of the copper content of an aqueous solution under variable physicochemical conditions. The proposed estimation method is based on selective potentiometry using a copper-specific electrode. The electrode's behaviour is modelled by a non-linear law inspired by the generalized Nernst relation. The model incorporates the temperature, conductivity and pH of the solution, and makes it possible to estimate the total copper concentration even in the presence of hydroxyl complexation. The modelling phase is presented in detail, and the performance of the estimator is then evaluated and discussed.
Ion-selective electrodes (ISEs) offer an attractive solution for continuously evaluating the content of certain ionic species in aqueous media. Manufacturers offer a wide range of electrodes specific to heavy metals (Cu2+, Pb2+, …). Because they eliminate the need for sampling, are of reasonable size and have few electronic parts, ISEs seem highly appropriate for continuous monitoring in urban purification systems. Measurements obtained by these sensors in controlled laboratory media are usually precise, reliable and reproducible; this is not the case, however, in complex and uncontrolled media. This work falls within the general scope of the continuous measurement of heavy metals in wastewater. More particularly, it is devoted to describing the behaviour of a copper-selective electrode (ISECu) in a medium presenting wide physicochemical variations.
In order to study ISE behaviour, we developed an experimental platform that allowed us to reproduce in a reactor the physicochemical variations observed in wastewater, particularly with regard to salinity and acidity. The reactor was fitted with a measuring set consisting of five electrodes covering the following parameters: pH (integrated Ag/AgCl reference), redox potential, ISECu potential (ECu), temperature (T) and conductivity (σ). A computer system acquired the five signals with a 10-second sampling period. The species concentration in the reactor was determined by weighing the solutions extracted from or injected into the reactor. The temperature of the system was controlled with a cryostat. Sequential tests, carried out by successive injections of different chemical products, allowed the pH, redox potential and conductivity of the medium to be varied. The response times of the conductivity probe and of the pH and redox electrodes are shown; the short response time of the sensors (20 to 30 s) and the strong correlation between the measured pH and redox potential are noted.
The model used to explain the ISE response is based on a generalization of Nernst's law that takes into account the temperature and the activity of the free Cu2+ ions. Taking the chemical equilibria and the mass-balance equations into consideration allowed us to link the activity of the free copper ions to the total injected copper concentration |Cu2+|tot and to the pH. The redox potential, strongly correlated to pH, was ignored in the mathematical model. Since hydroxyl complexation is the major complexation reaction (compared with other copper-binding ligands), the potential measured with the ISE took the following form:
ECu = b0 + b1·T·log[ (γ2·|Cu2+|tot) / (1 + b2·γ2·10^pH + b3·γ2·10^(2·pH)) + b4 ]
The activity coefficient γ2 of the Cu2+ ions was calculated from the ionic strength (I) of the solution, using the Debye-Hückel approximation. Ionic strength was derived from conductivity corrected to 25 °C. In wastewater, the ranges of the physicochemical parameters were as follows: T from 5 to 35 °C; pH from 4 to 9; conductivity (σ) from 500 to 2000 µS/cm; redox potential from 400 to -400 mV/NHE; and copper concentrations up to 10^-3 mol/dm3.
In order to identify the bi coefficients of the model, we established an experimental design comprising 108 measurement points that covered, with a minimal number of experiments, the ranges of variation of the influential parameters. A scatter diagram of measured versus modelled values gave a linear adjustment coefficient close to 0.99 and a standard deviation of 8.8 mV, which corresponds to a 0.34-decade standard error in the concentration estimate. At 25 °C the model has a sensitivity of -26.4 mV/decade, very close to the theoretical slope of an electrode sensitive to divalent ions.
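The identified model can be evaluated directly from its closed form. The sketch below codes the expression above together with the standard Debye-Hückel approximation for a divalent ion; the numerical bi values used in the test are placeholders, not the coefficients identified in the study:

```python
import math

def activity_coefficient_cu(ionic_strength, A=0.509, z=2):
    """Debye-Hückel approximation for a divalent ion at 25 °C:
    log10(gamma) = -A * z**2 * sqrt(I) / (1 + sqrt(I))."""
    s = math.sqrt(ionic_strength)
    return 10.0 ** (-A * z * z * s / (1.0 + s))

def ise_potential(cu_tot, pH, T, gamma2, b):
    """Generalised-Nernst ISE response, b = (b0, b1, b2, b3, b4).
    The coefficients must be identified experimentally; any values
    used here for illustration are hypothetical."""
    b0, b1, b2, b3, b4 = b
    free = (gamma2 * cu_tot) / (1.0 + b2 * gamma2 * 10.0**pH
                                + b3 * gamma2 * 10.0**(2.0 * pH))
    return b0 + b1 * T * math.log10(free + b4)
```

At zero ionic strength the activity coefficient is 1, and for fixed coefficients the modelled potential decreases as pH rises, reflecting the growing hydroxyl complexation terms in the denominator.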
ISE measurement of the copper concentration with large pH variations
pH is the parameter that exerts the greatest influence on the ISE response, which is why tests simulating copper pollution with large pH variations were carried out. These tests enabled us to evaluate the performance of the model in estimating the copper content. Four solutions with total copper concentrations of 10^-6, 10^-5, 10^-4 and 10^-3 mol/dm3, respectively, were used. Their temperature was 25 °C and their conductivity was fixed at approximately 500 µS/cm. We varied the pH of each solution between 4 and 10. For the four tests, we show the estimate of the copper concentration obtained with our model from the potential measured by the ISE.
In the case of strong copper pollution (10^-3 mol/dm3), the model yields an overestimated concentration below pH 7, with a decadal error of less than 0.5. Above pH 7, the concentration is underestimated while maintaining a decadal error of less than 0.5. At pH 7, a minimal error of 0.04 decade is found. For pollution equal to or less than 10^-4 mol/dm3, the model gives good results in an acid or neutral medium, with a decadal error usually less than 0.3. In an alkaline medium, the concentration is overestimated; in this case the error increases roughly linearly with the pH and with the co-logarithm of the copper concentration. From these tests, we defined a domain of validity for ISE copper concentration measurement using our model.
In conclusion, the suggested method, although not very accurate, could be used as an indicator of the copper concentration level in wastewater. The ISE-response correction model is currently being tested under operational conditions at a water treatment plant in Nancy-Maxéville (France).
A. Terfous, A. Megnounif et A. Bouanani
Graphical analysis of the instantaneous suspended sediment discharges in the Oued Mouilah, and of their relation to water discharge during the sampling campaigns from 1977 to 1993, revealed the existence of two periods of active erosion. The suspended sediment flux in the stream varies greatly from one year to the next, and the mean annual specific degradation over the 16 years of study is estimated at 126 tonnes per km2. This value is relatively low compared with those found for other regions with a similar hydrological regime.
The extent and rates of alluvial deposit and dam siltation caused by sediment deposition from Maghreb streams have prompted a number of attempts to quantify and explain the complex mechanisms of suspended sediment transport. In Algeria, a country with scarce water resources, deposition of sediments in dams is estimated to average 20 million m3 /year, which contributes to a 0.3 % yearly loss of storage capacity from a total capacity estimated at 6.2 billion m3.
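The 0.3 % yearly loss follows directly from the two quantities quoted, as this one-line check shows:

```python
annual_siltation_m3 = 20e6    # mean sediment deposition in Algerian dams
total_capacity_m3 = 6.2e9     # total national storage capacity
yearly_loss_percent = 100 * annual_siltation_m3 / total_capacity_m3
# ~0.32 % per year, consistent with the 0.3 % figure in the text
```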
Of interest in this context are suspended sediment loads in the Mouilah River of Mediterranean Algeria, on which the Hammam Boughrara dam, with a capacity of 117 million m3, was put into service in 1998.
The Mouilah River basin, situated in northwest Algeria, covers a 2650-km2 area and has a 230-km perimeter (Table I).
The Mouilah flows for 124 km, rising at an altitude of 1250 m in Algeria and then passing into Morocco. It is ephemeral; perennial flow sets in near Oujda (Morocco), downstream of which the river re-enters Algeria near Maghnia (Figure 1).
The study zone is characterized by a semi-arid climate. From 1977 to 1993, annual mean temperature was 16.7 °C. Rainfall was relatively scarce and unequally distributed throughout the year, with an inter-annual average of 300 mm over the same period (Figure 2).
Analysis of hydrological data
The study used instantaneous water discharge values (m3/s) measured at the mouth of the Mouilah from September 1977 to August 1993 (results calculated and furnished by the National Agency for Water Resources [ANRH]). For the measured values, suspended loads (g/l) were evaluated using samples taken from the river; total suspended loads were calculated as the product of these concentrations and the water discharge. The number of samples was adapted to the hydrological regime: they were taken every other day or, during flood periods, as frequently as every quarter-hour.
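The load calculation described above is a direct product once units are aligned, since 1 g/l equals 1 kg/m3:

```python
def suspended_load_kg_s(concentration_g_l, discharge_m3_s):
    """Instantaneous suspended sediment flux. Because 1 g/l = 1 kg/m3,
    concentration times discharge gives the flux directly in kg/s."""
    return concentration_g_l * discharge_m3_s
```

For instance, a (hypothetical) concentration of 55 g/l at the 1880 m3/s peak of November 1986 would give about 103 000 kg/s, the order of the maximum solid flow reported for that flood.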
Analysis of the instantaneous discharges showed that suspended loads were related to discharge by a power law (Figure 3).
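A power law of this kind, Qs = a·Q^b, is conventionally fitted by linear regression in log-log space. A sketch on noise-free synthetic data (the actual Mouilah rating coefficients are not given in the summary):

```python
import numpy as np

def fit_power_law(q, qs):
    """Least-squares fit of Qs = a * Q**b via regression in log-log space."""
    b, log_a = np.polyfit(np.log(q), np.log(qs), 1)
    return np.exp(log_a), b

q = np.array([1.0, 5.0, 20.0, 100.0, 500.0])  # water discharge, m3/s
qs = 2.0 * q ** 1.5                           # synthetic suspended loads
a, b = fit_power_law(q, qs)                   # recovers a = 2, b = 1.5
```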
To study the responses of the basin over the hydrological year, we grouped the results - 16 years' worth of data - according to season, and analysed the relationship between liquid discharge and suspended sediment load (Table II).
Graphical analysis of Figure 4 reveals that autumn and spring are distinguished by strong river discharges leading to substantial transport of solids. The maximum solid flow was about 104 000 kg/s, resulting from a water discharge of 1880 m3/s in November 1986. By contrast, winter and summer discharges were much smaller; values did not exceed 220 m3/s in winter and 83 m3/s in summer.
After the dry season, the first rains of autumn encounter dry, hard and barely erodible soil. The response of the basin in terms of suspended-solids generation is therefore very small. It is the heavy rains of October and November that remove large quantities of solids transportable by streams.
After the very dry and cool winter and a succession of freezes and thaws, spring rains fall on weakened, more erodible soil, leading to relatively high loads, though still lower than those of autumn.
Summer is marked by very dispersed values encompassing the smallest discharges of the year and some relatively high suspended loads associated with low discharges, the latter arising from seasonal storms.
In summary, stream discharge is very variable throughout the hydrological year. Suspended sediment transport in the Mouilah River basin occurs principally during flood periods. We distinguished two periods of active erosion, one in autumn and another, lesser period in spring.
An annual budget of solid and liquid contributions shows that these two parameters vary together as a function of rainfall (Figure 5). Annual liquid contributions from 1977 to 1993 averaged 48.7 million m3, which corresponds to a mean runoff depth of 18.4 mm and a low runoff coefficient of 6 %. Because of highly dispersed and extreme values, the rainfall-discharge relations are irregular. This leads to inter-annual irregularity in runoff depths; consequently, a relationship with annual rainfall (Figure 6) was difficult to establish. However, we noted a tendency of the form LE = 0.0009 P^1.69 (R = 0.74).
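The reported tendency can be applied directly, bearing in mind that with R = 0.74 it is a loose statistical relation, not a predictive model:

```python
def runoff_depth_mm(annual_rainfall_mm):
    """Empirical tendency quoted for the basin: LE = 0.0009 * P**1.69
    (R = 0.74). Returns an indicative annual runoff depth in mm."""
    return 0.0009 * annual_rainfall_mm ** 1.69
```

At the inter-annual mean rainfall of 300 mm this gives roughly 14 mm, the same order as the 18.4 mm mean runoff depth; the gap reflects the scatter of the relation.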
The annual mean contribution of suspended sediment at the mouth of the river was estimated at 335 000 tons, which corresponds to a soil erosion rate of 126 tons/km2/year. This value is moderate compared to other basins of the region, such as the Mazafran (Algiers) and Isser (Lakhdaria) river basins, where erosion rates are about 1610 and 2300 tons/km2/year, respectively (Table III).
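The 126 tons/km2/year specific degradation is simply the mean annual sediment yield divided by the basin area:

```python
annual_sediment_tons = 335_000   # mean suspended sediment at the mouth
basin_area_km2 = 2650            # Mouilah River basin area
erosion_rate = annual_sediment_tons / basin_area_km2
# ~126 tons/km2/year, the specific degradation quoted in the text
```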
The inter-annual solid and liquid contributions contrast markedly. For the liquid contributions, the first and third quartiles are 21.8 and 64.7 million m3, respectively, which shows that the 25 % of annual moduli representing humid years were three times larger than those representing dry years (Figure 7). Furthermore, the annual loads, expressed as stream turbidity, show a clear contrast between humid and dry years.
Figure 8 shows that the highest annual liquid contribution, 117.8 million m3, was recorded during the year 1979-80, and generated a solid contribution of 670 000 tons. This liquid contribution is higher than that of 1986-87, evaluated at 106.4 million m3, which carried 2.69 million tons - an erosion rate of more than 1000 tons/km2/year.
A numerical terrain modelling methodology for two-dimensional hydrodynamic simulation
Y. Secretan, M. Leclerc, S. Duchesne et M. Heniche
The article addresses the problem of constructing the Numerical Terrain Model (NTM) in the context of two-dimensional hydraulic studies, here related to floods. The difficulty stems from the heterogeneity of the data sets, which differ notably in precision, spatial coverage, distribution and density, and georeferencing. In a hydrodynamic modelling exercise, the entire study region must be documented and the information carried on a homogeneous support. The article proposes an efficient strategy, supported by a software tool, MODELEUR, which makes it possible to merge rapidly the various available data sets for each variable, whether scalar like topography or vectorial like wind, to preserve their integrity, and to give efficient access to them at every step of the analysis and modelling process. Thus, whatever the environmental use of the numerical terrain model (development planning, habitat conservation, floods, sedimentology), the method allows the data to be projected onto a homogeneous support of the finite element mesh type while the original is kept intact as the reference. The method is based on a partition of the analysis domain by type of information: topography, substrate, surface roughness, etc. A partition is composed of sub-domains, each of which associates a data set with a portion of the analysis domain by a declarative procedure. This conceptual model constitutes, in our view, the NTM proper. The process of transferring data from the partitions to an analysis mesh is considered a result of the NTM and not the NTM itself; it is carried out with an interpolation technique such as the finite element method. Following the Saguenay floods of 1996, the method was tested and validated to demonstrate its efficiency, and that example serves here as an illustration.
This article addresses the problem of constructing a Numerical Terrain Model (NTM) in the particular context of two-dimensional (2D) hydraulic studies, here related to floods. The main difficulty lies in the heterogeneity of the data sets, which differ in precision, in spatial coverage, distribution and density, and in georeferencing, among other things. Within the framework of hydrodynamic modelling, the entire region under study must be documented and the information carried on a homogeneous grid. We propose here an efficient strategy, entirely supported by a software tool called MODELEUR, which allows the user to import, gather and merge very heterogeneous data sets, whatever their type, scalar like topography or vectorial like wind, to preserve their integrity, and to provide access to them in their original form at every step of the modelling exercise. Thus, whatever the environmental purpose of the modelling exercise (enhancement works, sedimentology, conservation of habitats, flood risk analysis), the method allows the data sets to be projected onto a homogeneous finite element grid while the original sets are conserved integrally as the ultimate reference. This method is based on a partition of the domain under study for each data type: topography, substrates, surface roughness, etc. Each partition is composed of sub-domains, and each sub-domain associates a data set with a portion of the domain in a declarative way. This conceptual model formally represents the NTM. The process of transferring data from the partitions to the final grid is considered a result of the NTM and not the NTM itself; it is performed by interpolation, with a technique such as the finite element method. Following the huge Saguenay flood of 1996, the efficiency of this method was tested and validated successfully, and that example serves here as an illustration.
An accurate description of the characteristics of both the river main channel and the flood plain is essential to any hydrodynamic simulation, especially if extreme discharges are considered and if a two-dimensional approach is used.
The ground elevation and the various flow resistance factors are basic information that the modeller must pass on to the simulator. For too long, this task remained the "poor relation" of the modelling process because, a priori, it does not seem to raise any particular difficulty. In practice, however, it represents a very significant workload when setting up the models, besides hiding many pitfalls that may compromise the quality of the hydraulic results. Just as the velocity and water level fields are results of the hydrodynamic model, the variables describing the terrain and transferred onto the simulation mesh constitute the results of the Numerical Terrain Model (NTM). Because this is, strictly speaking, a modelling exercise, a validation of the results that assesses the quality of the model is necessary.
In this paper, we propose a methodology for integrating heterogeneous data sets into the construction of the NTM, with the aim of simulating the 2D hydrodynamics of natural streams with the finite element method. This methodology is completely supported by a software tool, MODELEUR, developed at INRS-Eau (Secretan and Leclerc, 1998; Secretan et al., 2000). This tool, which can be likened to a Geographical Information System (GIS) dedicated to 2D flow simulation applications, allows the user to carry out all the steps of integrating the raw data sets into a complete NTM. Furthermore, it facilitates the set-up and piloting of hydrodynamic simulations with the simulator HYDROSIM (Heniche et al., 1999).
Scenarios for flow analysis require frequent and substantial changes to the mesh carrying the data. A return to the basic data sets is then required, which makes it necessary to preserve them in their entirety, to access them easily and to transfer them efficiently onto the mesh. That is why the NTM should put the emphasis on the basic data rather than on their transformed, and inevitably degraded, state after transfer to a mesh.
Data integrity should be preserved as far as possible, in the sense that it is imperative to keep the different data sets distinct and accessible separately. Two measuring campaigns should not be mixed; for example, topography resulting from digitised maps will be kept separate from that resulting from echo-sounding campaigns. This approach makes it possible at any time to return to the measurements, to control them, to validate them, to correct them and, possibly, to substitute one data set for another.
Homogeneity of the data support, with respect to the location of the data points, is essential to allow algebraic interaction between the different information layers. The operational objective that ultimately underlies the creation of the NTM in the present context is the ability to transfer efficiently the basic spatial data (measurements, geometry of civil works, etc.), each set carried by its own discretisation, onto a single carrying structure.
With these objectives of integrity, accessibility, efficiency and homogeneity, the proposed method consists of the following steps:
1. Import of the data sets into the database, which possibly implies digitising maps and/or reformatting the raw files to a compatible file format;
2. Construction and assembly of the NTM proper, which consists, for each variable (topography, roughness, etc.), in creating a partition of the domain under study, that is, subdividing it into juxtaposed sub-domains and associating with each sub-domain the data set that describes the variable on it. More precisely, this declarative procedure uses irregular polygons to specify, in the corresponding sub-domains, the data source to be used in the construction of the NTM. As it is also possible to transform regions of the domain with algebraic functions, to represent for example civil works in the river (dikes, levees, etc.), the NTM integrates all the validated data sets and the instructions to transform them locally. From this stage on, the NTM exists as an entity-model and has a conceptual character;
3. Construction of a finite element mesh;
4. Transfer by interpolation and assembly of the data of the different components of the NTM onto the finite element mesh, according to the instructions contained in the various partitions. The result is an instance of the NTM, and its quality depends on the density of the mesh and the variability of the data. It therefore requires validation with respect to the original data;
5. Realisation of the analysis tasks and/or hydrodynamic simulations. If the mesh has to be modified for a project variant or for an analysis scenario, only steps 3 and 4 need to be redone, and step 4 is completely automated in MODELEUR.
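As an illustration of the declarative partition model of step 2 and the transfer of step 4, the idea can be sketched in a few lines of code. Everything here is an assumption for illustration only: the polygon test, the inverse-distance interpolation and all names are hypothetical, not MODELEUR's actual implementation, which relies on finite element interpolation.

```python
import math

def point_in_polygon(x, y, poly):
    """Ray-casting containment test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def idw(x, y, points, power=2.0):
    """Inverse-distance weighting over (xi, yi, zi) samples."""
    num = den = 0.0
    for xi, yi, zi in points:
        d = math.hypot(x - xi, y - yi)
        if d < 1e-12:
            return zi  # node coincides with a sample
        w = d ** -power
        num += w * zi
        den += w
    return num / den

def transfer(partition, mesh_nodes):
    """Assign each mesh node the value interpolated from the data set
    of the sub-domain (polygon) that contains it."""
    values = []
    for x, y in mesh_nodes:
        for polygon, data_set in partition:
            if point_in_polygon(x, y, polygon):
                values.append(idw(x, y, data_set))
                break
        else:
            values.append(None)  # node outside every sub-domain
    return values

# Two sub-domains: digitised-map data on the left, echo-sounder on the right.
partition = [
    ([(0, 0), (5, 0), (5, 10), (0, 10)], [(1, 1, 10.0), (4, 9, 12.0)]),
    ([(5, 0), (10, 0), (10, 10), (5, 10)], [(6, 1, 8.0), (9, 9, 6.0)]),
]
print(transfer(partition, [(2, 2), (8, 8)]))
```

Each mesh node is resolved against the partition first, so two measurement campaigns never mix: the original data sets stay intact and only their projection onto the mesh is computed.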
The heterogeneity of the data sources, which constitutes one of the main difficulties of the exercise, can be classified into three groups: according to the measuring technique used; according to the format or representation model used; and according to the geographic datum and projection system.
For topography, the measuring techniques include satellite techniques (conventional or radar), airborne techniques (photogrammetry or laser scanning), ground techniques (total station or GPS), as well as boat-mounted techniques such as echo-sounding. These data come in the form of paper maps that have to be digitised, of regular or random data points, of altitude isolines, or even of transects. They can be expressed in different datums and projections, and sometimes are not even georeferenced and must first be positioned.
As for the bed roughness that determines the resistance to flow, here also the data sets differ from one another in many respects. Data may again have been collected as regular or random points, as homogeneous zones or as transects. They can represent the average grain size of the materials present, the dimension of the passing fraction (D85, D50 or median), the percentage of the surface corresponding to each fraction of the grain assemblage, etc. In the absence of such basic data, the NTM can only represent the value of the friction parameter, typically Manning's n, which has to be obtained by calibration of the hydrodynamic model. For the vegetation present in the flood plain or for aquatic plants, the source data can be as variable as for the bed roughness. Except where data exist, the vegetation model often reduces to the roughness parameter obtained during the calibration exercise.

The method was successfully applied in numerous contexts, as demonstrated by the application carried out on the Chicoutimi River after the catastrophic flood in the Saguenay region in 1996. The huge heterogeneity of the available data in that case called for a method such as the one proposed here. Thus, elevation data obtained by photogrammetry, by total station or by echo-sounding on transects could be coordinated and investigated simultaneously for the purposes of hydrodynamic simulation or of sediment balance in zones strongly affected by the flood.
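As a hedged side note, not taken from the paper: when grain-size data are available, a first estimate of Manning's n is commonly derived from the median diameter with the classical Strickler relation n = d50^(1/6) / 21.1 (d50 in metres), before any calibration refines it. A minimal sketch:

```python
def manning_n_strickler(d50_m):
    """Strickler estimate of Manning's n from the median grain size.

    d50_m: median bed-material diameter in metres.
    Returns a first-guess roughness value, to be refined by calibration.
    """
    return d50_m ** (1.0 / 6.0) / 21.1

# Gravel bed with d50 = 5 cm:
print(round(manning_n_strickler(0.05), 3))
```

Such a relation only covers grain roughness; form drag and vegetation effects still have to come out of the calibration exercise, as noted above.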
La gestion de l'eau à l'aube du 3ème millénaire : vers un paradigme scientifique nouveau / Water resources management at the turn of the millennium: towards a new scientific paradigm [Tribune libre / Article bilingue]
L'objectif de cette tribune est d'analyser la nouvelle approche de la gestion des ressources en eau qui a été adoptée par la communauté scientifique au seuil de ce nouveau millénaire. Après une revue de cette nouvelle approche, une méthodologie scientifique est proposée, permettant d'exprimer le nouveau concept, plutôt général et descriptif, en termes analytiques et quantitatifs, de façon qu'il puisse être appliqué à des cas pratiques.
Depuis quelques dizaines d'années déjà, il a été bien établi que la nouvelle approche va dans le sens de la gestion durable. Ceci veut dire qu'elle intègre des préoccupations sociales et environnementales aux critères traditionnels de performance technique et d'efficacité économique. La question qui se pose maintenant est comment le concept complexe de la durabilité, qui jusqu'à présent a été exprimé seulement de façon générale et qualitative, pourrait être formulé en termes analytiques et quantitatifs d'une méthodologie scientifique.
Sur le plan méthodologique, la modification des critères dont on doit tenir compte, dans un cadre cohérent d'hypothèses et de raisonnements, suggère une évolution vers un paradigme scientifique nouveau. Le cadre général du paradigme que nous proposons est celui de l'analyse quantitative du risque à plusieurs dimensions.
Traditionnellement, l'objectif général de la gestion de l'eau était la satisfaction de la demande pour divers usages, comme l'agriculture, l'eau potable et l'industrie, en utilisant les ressources en eau disponibles de manière techniquement fiable et économiquement efficace. Dans cette approche, des solutions structurelles, le plus souvent technocratiques, ont été proposées et réalisées dans plusieurs pays du monde. La construction de barrages et de réservoirs, la modification des lits des rivières et la dérivation des cours d'eau ont eu cependant, dans de nombreux cas, de sérieux impacts négatifs sur l'environnement et les conditions sociales. De plus, le gaspillage dans l'utilisation de cette ressource précieuse et la pollution galopante provenant de tous les secteurs d'utilisation de l'eau ont remis en question ce mode de gestion. Le concept de gestion durable des ressources en eau a été évoqué tout d'abord en 1972 à Stockholm, pendant la Conférence mondiale des Nations Unies, puis à Rio, en 1992, avec l'Agenda 21.
La nouvelle philosophie est basée sur la gestion intégrée de l'eau à l'échelle du bassin versant. Elle met l'accent sur la protection de l'environnement, la participation active des collectivités locales, la gestion de la demande, les aspects institutionnels et le rôle de l'éducation, tout au long de la vie, de tous les utilisateurs d'eau.
Sur le plan méthodologique, la gestion intégrée de l'eau reste encore un problème ouvert où plusieurs approches cherchent à définir un paradigme cohérent. Dans cette tribune, nous en proposons un que nous appelons « le paradigme 4E » : Épistémique, Économique, Environnemental, Équitable. Il est basé sur l'analyse quantitative du risque à plusieurs dimensions : scientifique, économique, environnementale et sociale. Ce paradigme utilise soit la théorie des probabilités, soit la logique floue (ou les deux à la fois) afin d'évaluer et d'intégrer les risques technico-économiques et socio-environnementaux dans une perspective de gestion durable des ressources en eau.
The aim of this article is to analyze the new approach to water resources management adopted by the scientific community at the turn of the millennium. After reviewing the basic concept of this approach, a scientific methodology is proposed, in order to express the general and mostly descriptive new concept in analytical and quantitative terms, so that it may be applied in practical cases.
For several decades now the general concept of this new approach has been developing along the lines of sustainable development. This means that social and environmental considerations have been added to the traditional objectives of technical performance and economic effectiveness. The question now being raised is how the complex concept of sustainability, which until now has been expressed in general and descriptive terms only, may be formulated in the analytical and quantitative terms of a scientific methodology.
On the methodological level, the fact that several criteria and objectives within a coherent framework of hypotheses and reasoning are taken into account may suggest a move towards a new scientific paradigm. The general framework of the paradigm proposed in this paper is that of multidimensional quantitative risk analysis.
Traditionally, the general objective of water management has been the satisfaction of demand for various uses, such as agriculture, drinking water or industry, using the available water resources in technically reliable and economically efficient ways. This approach has led to structural and mostly technocratic solutions being suggested and implemented in several countries. However, in many cases, building dams, modifying riverbeds and diverting rivers has had serious negative repercussions on the environment and on social conditions. Moreover, waste in the use of this precious resource and rampant pollution in all areas of water use have raised doubts about this form of management. The concept of sustainable management of water resources was first mentioned in Stockholm in 1972, during the United Nations World Conference, and then at the Rio summit in 1992, with Agenda 21.
The new philosophy is based on the integrated management of water at the watershed scale. Emphasis is placed on environmental protection, the active participation of local communities, demand management, institutional aspects, and the role of continuous, lifelong education of all water users.
On the methodological level, integrated water management remains an open question, and several different approaches are seeking to define a coherent paradigm. One possible paradigm is proposed in this article, which may be called the "4E paradigm": Epistemic, Economic, Environmental, Equitable. It is based on multidimensional quantitative risk analysis, covering scientific, economic, environmental and social dimensions. This paradigm uses probability theory, fuzzy logic, or both, in order to assess and integrate technico-economic and socio-environmental risks from the perspective of sustainable water resources management.
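As a purely illustrative sketch, not taken from the article, of how such a fuzzy multidimensional risk assessment might be set up: each of the four dimensions receives a fuzzy "high risk" membership degree in [0, 1], which are then aggregated. The indicators, the membership function and the aggregation rules below are all assumptions.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b on [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def aggregate_risk(memberships, mode="max"):
    """Fuzzy union (max, conservative) or mean aggregation of dimensions."""
    if mode == "max":
        return max(memberships.values())
    return sum(memberships.values()) / len(memberships)

# Hypothetical raw indicators scaled to [0, 100] for each 4E dimension.
indicators = {"epistemic": 40, "economic": 70, "environmental": 85, "social": 55}
memberships = {k: triangular(v, 30, 80, 130) for k, v in indicators.items()}
print(aggregate_risk(memberships, "max"))   # driven by the worst dimension
print(aggregate_risk(memberships, "mean"))  # balanced across dimensions
```

The choice between the max and mean operators mirrors the policy stance: a conservative regulator lets the most threatened dimension dominate, whereas a trade-off analysis averages across them.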