Documents found

  1. 371.

    Thesis submitted to Université de Sherbrooke

    2015

    This research concerns the study of free-surface water flow in a channel near restoration structures such as groynes. Groynes have been used for several decades to counter bank-erosion problems and to rehabilitate fish habitat. Bank erosion problems are frequent, and groyne construction is a good alternative to structural bank-stabilization techniques such as riprap: it is usually a less costly solution and generally has less impact on the environment. Unfortunately, despite the use of these restoration structures, knowledge is not yet developed enough for all groyne restoration projects to achieve the expected success. A poor choice of geometry, of placement in …

  2. 372.

    Article published in Meta (scholarly, collection Érudit)

    Volume 50, Issue 4, 2005

    Digital publication year: 2009

    Abstract: Automatic language processing requires formalizing the articulation between form and sense. It is therefore necessary to know how the semantics of lexical units operates in discourse, how it evolves, and which sociocultural elements delimit its sense, since we are dealing with a living language. Taking as a basis a corpus of documents selected from the specific fields under study (transport and management), and from a semantic approach in the line of Benveniste, we analyze the semantic changes this terminology has undergone, its present interpretation, and the difficulties it raises in translation. All of this allows us to conclude how new lexemes become registered in the linguistic system analyzed; in effect, the interactive operation of language and speech.

    Keywords: semantic approach, semantic change, interactive language/discourse operation, neological creativity

  3. 373.

    Article published in Bulletins et Mémoires de la Société d'anthropologie de Paris (scholarly, collection Persée)

    Volume 8, Issue 4, 1996

    Digital publication year: 2008

    Summary — Between 1990 and 1993, three consecutive rescue excavations were carried out in the Late Empire/Early Middle Age necropolis of Yverdon-les-Bains (Canton de Vaud, Switzerland). In 1991, it was decided to introduce new excavation and documentation techniques; in particular, replacing on-site drawing with high-definition photographs saved considerable time in this case. During post-excavation analysis, computer-aided data processing rested on two fundamental aspects: processing and archiving the photos and surveys, and immediately producing synthesis data so as to obtain high-quality, publication-ready documents as fast as the data collection proceeded. Moreover, the linkage between texts and maps within the database allowed multi-criteria queries: the selection can be visualized as lists, plans, graphic syntheses, distribution maps, etc.

  4. 374.

    Article published in Les Cahiers de droit (scholarly, collection Érudit)

    Volume 24, Issue 2, 1983

    Digital publication year: 2005

    Mixing computer technology and linguistic savvy as an aid to legal research is no mean undertaking, yet that is the purpose of this article. In it, Wallace Schwab attempts to describe those areas of computer science and applied linguistics that either have much to offer or present formidable obstacles to computerizing legal research. From the simplest aids up through scripts and other artificial-intelligence devices, the main theme focuses on integrating disparate techniques into one finely tuned instrument for linguistically based computer research in law. Ultimately the article leads to the question of what limits can be ascribed to digitizing legal data, and Schwab proposes short-, medium- and long-term limitations.

  5. 375.

    Article published in Revue des sciences de l'eau (scholarly, collection Érudit)

    Volume 13, Issue 4, 2000

    Digital publication year: 2005

    We developed an automated methodology for real-time validation of hydrometric data in a sewer network. Our methodology uses real-time validated data to optimize system management and non-real-time data to evaluate day-to-day performance. Two approaches can be used to validate and correct hydrometric data; the choice depends on the number of level gauges present in a system. In single-gauge systems, univariate filtering is used to smooth data. For example, frequency filtering systematically eliminates values corresponding to frequencies higher than a predetermined threshold frequency. In systems with several gauging stations (duplex, triplex, or multiplex systems), the multivariate filtering method proposed here can be used to validate the data series from each gauge. Material redundancy in duplex or higher-order systems makes it possible to detect a deficient gauge, using a decision rule to set aside erroneous readings before averaging the accepted values. Part of the underlying principle of this methodology is heavier reliance on gauges whose readings are consistent with previous and subsequent validated values in a given series. Thus isolated positive or negative variations within a series are eliminated if the corresponding variations at other gauges are more consistent. To evaluate persistence, a reading is compared to a value predicted by an autoregressive (AR) model calibrated on the previous validated readings. This filtering technique constitutes an intelligent alternative to the frequency filtering method mentioned above. In more practical terms, it compares the deviation of an AR model prediction from a measured value with the deviation of the same AR model prediction from a value estimated by a regressive model at other stations in the network.
    Among the values measured and estimated by the regressive model, the one nearest the AR model prediction is retained. Our methodology also relies on analytical redundancy generated by direct measurement of flow and by hydrological simulation. More precisely, the deviation of the AR model prediction from the measured value is compared with the deviation of the same AR model prediction from a value obtained from a hydrological simulation model. Among the measured and simulated values, the one nearest the AR model prediction is retained. To allow for nonstationary models and to avoid the well-known bias of the least-squares method, a Kalman filter is used to identify the parameters of the AR model. The methodology we propose employs three models: the first generates analytical redundancy using hydrological modelling; an autoregressive model is then used to predict future runoff values; finally, a voting-process model is used to compare measured and simulated values. The proposed methodology was tested on the Verdun sewer system in Quebec with successful results. Two types of artificial disturbance of the measured hydrograph were created: white noise was added to measured values, and disturbances of large amplitude and various forms were introduced. The methodology recovered the initial values, and the performance criteria were conclusive. Thus on-site testing confirms that this approach allows completely automated detection and correction of most anomalies. Flood peaks were neither underestimated nor overestimated, and total runoff volumes were preserved.
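    The voting step described in this abstract can be sketched as follows. This is a minimal illustration under assumed values, not the authors' implementation: the AR(1) coefficient, the gauge readings, and the function names are all hypothetical, and the Kalman-filter identification of the AR parameters is omitted.

```python
def ar1_predict(prev_validated, phi=0.9):
    """One-step AR(1) forecast from the last validated reading.

    phi is a hypothetical, pre-identified AR coefficient (the paper
    identifies AR parameters with a Kalman filter, not shown here).
    """
    return phi * prev_validated

def vote(prev_validated, measured, estimated_from_other_gauges, phi=0.9):
    """Voting rule: keep the candidate value (measured reading vs. the
    value estimated from the other gauges) that deviates least from
    the AR model prediction."""
    pred = ar1_predict(prev_validated, phi)
    candidates = [measured, estimated_from_other_gauges]
    return min(candidates, key=lambda v: abs(v - pred))

# Example: an isolated spike in the measured series (35.0) is set aside
# because the estimate from the other gauges (9.5) is closer to the
# AR forecast based on the previous validated value (10.0).
validated = vote(prev_validated=10.0, measured=35.0,
                 estimated_from_other_gauges=9.5)
```

The same comparison applies to the analytical redundancy described above: replace the inter-gauge estimate with a hydrological-simulation value and the rule is unchanged.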

    Keywords: validation, redundancy, flow, measurement, Kalman filter, autoregressive, real time, sewer

  6. 376.

    Article published in Revue du notariat (scholarly, collection Érudit)

    Volume 106, Issue 3, 2004

    Digital publication year: 2018

  7. 377.

    Article published in Revue de l'Université de Moncton (scholarly, collection Érudit)

    Volume 45, Issue 1-2, 2014

    Digital publication year: 2017

    This article analyzes the functions and motivations of literary code-switching and code-mixing in ten contemporary heterolingual novels. A microanalysis is followed by a more general typology. Thousands of instances of code-switching or code-mixing were coded according to form and function variables using a grid provided by the software program Sphinx-Eurêka. Cross-referencing these variables revealed the main characteristics of each novel, which led to a typology of heterolingual writing at the end of the 20th century. This typology places the ten novels along the levels of a three-part pyramid: at the base is a “realistic” or mimetic approach, which, doubled with irony, becomes parody; at the top is the creative approach.

    Keywords: heterolingualism, multilingual novels, code-switching, code-mixing, Sphinx-Eurêka

  8. 378.

    Article published in Revue de recherches en littératie médiatique multimodale (scholarly, collection Érudit)

    Volume 5, 2017

    Digital publication year: 2018

    Recently, many high schools in Quebec, particularly private schools, have made purchasing an iPad compulsory for all students. In doing so, these institutions have been trying to make tablet technology systematic in the classroom. Nevertheless, empirical research on the use of digital tablets in the classroom remains rather rare. In this article, we present some reflections on the use of iPads for the teaching of reading and writing, based on a case study of a French teacher who currently uses this technology in her classroom.

    Keywords: iPad, teaching reading, teaching writing, teaching practices

  9. 379.

    Article published in Revue des sciences de l'eau (scholarly, collection Érudit)

    Volume 12, Issue 2, 1999

    Digital publication year: 2005

    Most hydraulic and hydrologic software packages offer an increasing choice of models, each with its advantages and disadvantages. Generally, the more sophisticated the model, the more aspects of reality it can represent, but also the more difficult it is to use and the longer the data acquisition and calculation times become. In fact, the real difficulty lies in selecting the appropriate model. To answer this question, two subproblems must be solved: what is the field of validity of each model, with respect to network structure, operating conditions, type of rainfall, nature of the problem, etc.? And, once this information is available, what is the best way to make it usable, even by people who are not experts in hydrology or hydraulics? The first problem can be addressed by analyzing the theoretical field of validity of the different models. Nevertheless, this method raises certain difficulties: it requires a particularly thorough knowledge of the equations, algorithms and computational devices used in the software package, and, for various reasons, software designers generally refuse to supply this information. Until now, the second problem has only been addressed through scientific reports, communications or papers. The published data generally give a global introduction to the field of validity of each model. Experience shows that most software users do not have enough information on this subject or, even when they do, do not use it correctly. To work towards solving these two problems, we propose to introduce into the software packages a decision support system that can help choose the best model for the simulated network. In this paper, only models for flow simulation are taken into account. At present, the most commonly used models are the more or less complete Barré de Saint-Venant equations and simpler conceptual models such as the Muskingum model.
    The major difficulty in solving the first subproblem was mainly the collection and reformulation of pre-existing knowledge. Considerable bibliographical work had to be supplemented by interviews with experts and by complementary studies (Semsar, 1995; Mottie, 1996). The result of these studies was the identification of a set of criteria related to the network (slope, fractal dimension, loop index, etc.) or to the operating conditions depending on the rainfall event (fullness rate, travel rate, etc.). The answer to the second subproblem was to develop an "intelligent" man-machine interface able to analyze the context of the simulation (the values of the criteria) and to advise the user on the model to select. The knowledge required to build this decision support system can vary in both source and quality, so assessing its reliability involves the notions of uncertainty and inaccuracy. The problem of uncertainty has been solved by associating uncertainty degrees with the rules; these degrees define a proposition's level of reliability. The approach to inaccuracy is based on the theory of fuzzy sets, according to which the membership of an element in a given set is not fixed but relative. The validity of a given fact is represented by a value between 0 (false) and 1 (true). This value can be related to a variable by fuzzy rules, represented by trapezoidal intervals. In this way, each of the criteria has been represented by qualitative decision variables that allow the elaboration of qualitative rules, leading to a "probably better" decision. One more decision variable, the kind of study, has been added because the hydrograph at the outlet of the network is not necessarily the only criterion to be taken into account. Each of the decision variables is represented by one, two, or sometimes more possible qualitative characterizations, each associated with a degree of possibility (not probability, because the sum of all the possibilities is not necessarily equal to 1).
    The decision support system uses these variables in a set of rules to determine the degree of adequacy of each model. The development of this system shows that fuzzy sets and qualitative rules seem well suited to representing the knowledge involved. The decision support system will be installed in a software package called CANOE, developed by INSA de Lyon and SOGREAH. In the future we envisage adding an explanatory unit to the decision support system. This study also showed that there are gaps in the knowledge about flow simulation models, so it seems useful to continue studying the field of validity of each model.
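    The trapezoidal fuzzy intervals and rules with uncertainty degrees described in this abstract can be sketched as follows. This is a minimal illustration only: the decision variable ("mild slope"), its thresholds, and the rule's uncertainty degree of 0.8 are hypothetical and are not taken from the CANOE system.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d],
    1 on the plateau [b, c], linear on the two ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical qualitative decision variable: the degree to which
# a network slope (in %) counts as "mild".
def mild_slope(slope_pct):
    return trapezoid(slope_pct, 0.0, 0.1, 1.0, 3.0)

# Hypothetical rule with uncertainty degree 0.8:
# "IF slope is mild THEN a simple conceptual model is adequate (0.8)".
# The rule's conclusion is capped by its uncertainty degree.
def conceptual_model_adequacy(slope_pct):
    return min(mild_slope(slope_pct), 0.8)
```

In a real system, several such rules would be combined to rank the candidate flow models; this sketch only shows how one criterion's fuzzy value and one rule's uncertainty degree interact.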

    Keywords: decision support, propagation models, stormwater drainage, fuzzy subsets, expert system

  10. 380.

    Article published in Intersections (scholarly, collection Érudit)

    Volume 37, Issue 2, 2017

    Digital publication year: 2020

    Primarily intended as mere sonorous depictions of a site, these digital interfaces have steadily shifted from being “maps” to being “counter-maps” through the social, cultural or political commitment of their creators. Besides providing sonic landscapes, these “counter-maps” reveal specific perspectives on the daily environment of an individual, independent of any authoritative control. This article addresses the issues at stake in these artworks by examining some instances of them and by providing a model for analyzing topographic markers in a sound recording.