Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference dates: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume: 2013
- Published: 20 November 2013
Development of a spontaneous large vocabulary speech recognition system for Qatari Arabic
Authors: Mohamed Elmahdy

In this work, we develop a spontaneous large-vocabulary speech recognition system for Qatari Arabic (QA). A major problem with dialectal Arabic speech recognition is the sparsity of speech resources. We therefore propose an Automatic Speech Recognition (ASR) framework that jointly uses Modern Standard Arabic (MSA) data and QA data to improve acoustic and language modeling through orthographic normalization, cross-dialectal phone mapping, data sharing, and acoustic model adaptation. A wide-band speech corpus has been developed for QA. The corpus consists of 15 hours of speech data collected from different TV series and talk-show programs, manually segmented and transcribed. A QA tri-gram language model (LM) was linearly interpolated with a large MSA LM in order to decrease the Out-Of-Vocabulary (OOV) rate and to improve perplexity. The vocabulary consists of 21K words extracted from the QA training set, plus an additional 256K MSA vocabulary. The acoustic model (AM) was trained on a pooled data set of the QA data and an additional 60 hours of MSA data. In order to boost the contribution of the QA data, Maximum A Posteriori (MAP) adaptation was applied to the resulting AM using only the QA data, effectively increasing the weight of dialectal acoustic features in the final cross-lingual model. All training was performed with Maximum Mutual Information Estimation (MMIE), with Speaker Adaptive Training (SAT) applied on top of MMIE. Our proposed approach achieves more than a 16% relative reduction in WER on the QA test set compared to a baseline system trained with only QA data. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP), award number NPRP 09-410-1-069. The reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
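As a minimal illustration of the LM interpolation step described above, the Python sketch below combines a dialectal and an MSA language model with a single weight. The names p_qa, p_msa, and lam are hypothetical, and the weight would be tuned on held-out QA data to minimize perplexity; this is not the paper's code.

```python
# Linear interpolation of two language models (illustrative sketch).
# p_qa / p_msa: functions returning P(word | history) under each model.
# lam: interpolation weight, tuned on held-out QA data to minimize perplexity.

def interpolated_prob(word, history, p_qa, p_msa, lam=0.7):
    """P(word | history) under the combined QA + MSA model."""
    return lam * p_qa(word, history) + (1.0 - lam) * p_msa(word, history)
```

Mixing the two LMs this way lets the large MSA model cover words missing from the small QA corpus, which is what drives down the OOV rate and perplexity reported above.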
On faults and faulty programs
Authors: Ali Jaoua

Abstract. The concept of a fault has been introduced in the context of a comprehensive study of system dependability, and is defined as a feature of the system that causes it to fail with respect to its specification. In this paper, we argue that this definition enables us neither to localize a fault, nor to count faults, nor to define fault density. We argue that rather than defining a fault, we ought to focus on defining faulty programs (or program parts); we also introduce inductive rules that enable us to localize faults to an arbitrary level of precision; finally, we argue that to claim that a program part is faulty one must often make an assumption about other program parts (and we find that the claim is only as valid as the assumption). Keywords: fault, error, failure, specification, correctness, faulty program, refinement. Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund, NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF.
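To make the central point concrete, here is a small hypothetical example (not taken from the paper) in which the localization of the fault depends entirely on what we assume about the other program part:

```python
# Hypothetical illustration of the paper's point that calling a part "faulty"
# rests on an assumption about the other parts.
# Specification: f(x, y) == 2 * (x + y)

def f(x, y):
    t = x - y          # part A
    return 2 * t       # part B

# If we assume part B is correct, part A is faulty: changing it to
#     t = x + y
# makes f satisfy the specification.
# If instead we assume part A is correct (t == x - y by design), part B is
# faulty: changing it to
#     return 2 * (t + 2 * y)
# also makes f satisfy the specification.
# Each localization is only as valid as its assumption about the other part.
```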
Compiler-directed design of memory hierarchy for embedded systems
Authors: Florin Balasa

In embedded real-time communication and multimedia processing applications, the manipulation of large amounts of data has a major effect on both the power consumption and the performance of the system. Due to the significant amount of data transfers between the processing units and the large, energy-consuming off-chip memories, these applications are often called data-dominated or data-intensive. Providing sufficient bandwidth to sustain fast and energy-efficient program execution is a challenge for system designers: due to the growing speed gap between processors and memories, the performance of the whole VLSI system will mainly depend on the memory subsystem whenever memory is unable to provide data and instructions at the pace required by the processor. This effect is sometimes referred to in the literature as the memory wall problem. At the system level, the power cost can be reduced by introducing an optimized custom memory hierarchy that exploits temporal data locality. Hierarchical memory organizations reduce energy consumption by exploiting the non-uniformity of memory accesses: the reduction can be achieved by assigning the frequently accessed data to the low hierarchy levels, the problem being how to optimally assign the data to the memory layers. This hierarchical assignment diminishes the dynamic energy consumption of the memory subsystem, which grows with the number of memory accesses. It also diminishes the static energy consumption, since static energy decreases monotonically with memory size. Moreover, within a given memory hierarchy level, power can be reduced by memory banking, whose principle is to divide the address space into several smaller blocks and to map these blocks to physical memory banks that can be independently enabled and disabled. Memory partitioning is also a performance-oriented optimization strategy, because of the reduced latency of accessing smaller memory blocks. Arbitrarily fine partitioning is prevented, however, since an excessively large number of small banks is area-inefficient and imposes a severe wiring overhead, which increases communication power and decreases performance. This presentation will introduce an electronic design automation (EDA) methodology for the design of hierarchical memory architectures in embedded data-intensive applications, mainly in the area of multidimensional signal processing. The input of this memory management framework is the behavioral specification of the application, which is assumed to be procedural and affine. Figure 1 shows an illustrative example of a behavioral specification with 6 nested loops. This framework employs a formal model operating with integral polyhedra, using techniques specific to the data-dependence analysis employed in modern compilers. Differently from previous works, three optimization problems - the assignment of data to the memory layers (on-chip scratch-pad memory and off-chip DRAM), the mapping of multidimensional signals to the physical memories, and the banking of the on-chip memory (see Figure 2) - are addressed in a consistent way, based on the same formal model. The main design target is the reduction of the static and dynamic energy consumption in the memory subsystem, but the same formal model and algorithmic principles can be applied to the reduction of the overall memory access time, or to combinations of these design goals.
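As a rough sketch of the data-assignment problem described above, the following Python fragment (hypothetical, not the authors' EDA tool) greedily places the arrays with the most accesses per byte into a size-limited on-chip scratch-pad and leaves the rest in off-chip DRAM:

```python
# Greedy scratch-pad assignment (illustrative sketch, not the authors' tool).
# arrays: list of (name, size_bytes, access_count), as would be gathered from
# the polyhedral access analysis of the behavioral specification.

def assign_to_spm(arrays, spm_capacity):
    # Favor arrays with the most accesses per byte: they save the most
    # dynamic energy per unit of scarce on-chip storage.
    ranked = sorted(arrays, key=lambda a: a[2] / a[1], reverse=True)
    spm, dram, used = [], [], 0
    for name, size, accesses in ranked:
        if used + size <= spm_capacity:
            spm.append(name)
            used += size
        else:
            dram.append(name)
    return spm, dram

# Example with a 32 KB scratch-pad: small, hot arrays land on-chip.
spm, dram = assign_to_spm(
    [("coeffs", 4096, 900000), ("frame", 65536, 120000), ("lut", 8192, 500000)],
    32 * 1024,
)
print(spm, dram)   # ['coeffs', 'lut'] ['frame']
```

The real problem is knapsack-like and the paper addresses it within a polyhedral formal model; the greedy rule here only conveys the intuition of assigning frequently accessed data to the low hierarchy levels.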
Dynamic simulation of internal logistics in aluminum production processes
Authors: Anton Winkelmann

The production of aluminum products, like other metallurgical industries, comprises a large variety of material processing steps. These processes require a multitude of material handling operations, buffers, and transports to interconnect single process steps within aluminum smelters or downstream manufacturing plants such as rolling mills or extrusion plants. On the one hand, the electrolysis process, as the core process of primary aluminum production, requires an amount of input material several times the volume of metal produced. On the other hand, downstream manufacturing processes exhibit an enormous variation of mechanical properties and surface qualities and comprise many fabrication steps, including forming, heat treatment, and finishing, that can appear in an arbitrary order. Therefore, the internal logistics comprising the entire internal material flow of a production facility are increasingly regarded as a key success factor for efficient production processes as part of supply-chain management. Dynamic simulations based on discrete-event simulation models can be effective tools to support planning processes along the entire value chain in aluminum production plants. Logistics simulation models also ideally accompany improvement and modernization measures and the design of new production facilities, to quantify the resulting overall equipment effectiveness and the reduction of energy consumption and emissions. Hydro Aluminium has a long history of solving logistic challenges using simulation tools. Limitations of former models have been the starting point for further development of simulation tools based on more flexible models. They address the streamlining of operations and transportation, in particular in aluminum smelters, and material flow problems, as well as new plant concepts in support of investment decisions. This presentation first gives a brief introduction to the main upstream and downstream processes of aluminum production, to explain the different driving forces of material flow. Second, the principles of mapping the specific material flow in aluminum smelters and in typical downstream manufacturing plants are outlined. Examples demonstrate the benefit of a systematic modeling approach.
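The following minimal Python sketch (with invented parameters, not Hydro Aluminium's model) illustrates the kind of question such discrete-event logistics simulations answer: how long ingots wait in the buffer in front of a mill when casting and milling rates differ.

```python
# Minimal single-server queue sketch in the spirit of discrete-event logistics
# simulation: a cast house releases ingots at a fixed interval, a vehicle moves
# them, and a rolling mill processes them first-in-first-out. The buffer in
# front of the mill absorbs the mismatch between casting and milling rates.

def simulate(n_ingots=20, cast_interval=5.0, transport=3.0, mill_time=6.0):
    mill_free_at, total_wait = 0.0, 0.0
    for i in range(n_ingots):
        arrival = i * cast_interval + transport   # ingot reaches the mill buffer
        start = max(arrival, mill_free_at)        # waits if the mill is busy
        total_wait += start - arrival
        mill_free_at = start + mill_time
    return total_wait / n_ingots, mill_free_at    # avg buffer wait, makespan

avg_wait, makespan = simulate()
print(f"average buffer wait: {avg_wait:.1f} min, makespan: {makespan:.1f} min")
```

Because the mill here is slower than the cast house, the buffer wait grows with every ingot; a full logistics model extends this idea to many interconnected process steps, vehicles, and stochastic disturbances.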
A robust method for line and word segmentation in handwritten text
Authors: Abdelaali Hassaine

Line and word segmentation is a key step in any document image analysis system. It can be used, for instance, in handwriting recognition to separate words before their recognition. Line segmentation can also serve as a prior step before extracting the geometric characteristics of lines, which are unique to each writer. Text line and word segmentation is not an easy task because of the following problems: 1) text lines do not all have the same direction in a handwritten text; 2) text lines are not always horizontal, which makes their separation more difficult; 3) characters may overlap between successive text lines; and 4) it is often difficult to distinguish between inter-word and intra-word distances. In our method, line segmentation is done using a smoothed version of the handwritten document, which makes it possible to detect the main line components using a subsequent thresholding algorithm. Each connected component of the resulting image is then assigned a separate label, which represents a line component. Then, each text region that intersects with only one line component is assigned the label of that line component. The Voronoi diagram of the image thus obtained is then computed in order to label the remaining text pixels. Word segmentation is performed by computing a generalized Chamfer distance in which the horizontal distance is slightly favored. This distance is subsequently smoothed in order to reflect the distances between word components and to neglect the distance to dots and diacritics. Word segmentation is then performed by thresholding the distance thus obtained. The threshold depends on the characteristics of the handwriting. We have therefore computed several features in order to predict it, including the sum of maximum distances within each line component, the number of connected components within the document, and the average width and height of lines. The optimal threshold is then obtained by training a linear regression on those features over a training set of about 100 documents. This method achieved the best performance on the ICFHR Handwriting Segmentation Contest dataset, reaching a matching score of 97.4% on line segmentation and 91% on word segmentation. The method has also been tested on the QUWI Arabic dataset, reaching 97.1% on line segmentation and 49.6% on word segmentation. The relatively low performance of word segmentation in Arabic script is due to the fact that words are much closer to each other than in English script. The proposed method tackles most of the problems of line and word segmentation and achieves high segmentation results. It could, however, be improved by combining it with a handwriting recognizer that would eliminate words which are not recognized.
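The first stage of the line-segmentation pipeline described above can be sketched with standard image-processing primitives. The fragment below is a rough approximation using OpenCV (not the authors' exact code or parameters): it smooths the page, thresholds it, and labels connected components as candidate line components.

```python
import cv2

# Smoothing + thresholding + connected-component labeling (illustrative sketch).
# A kernel that is wider than it is tall merges words along a line into one blob.

img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)    # handwritten page
blur = cv2.GaussianBlur(img, (51, 15), 0)             # anisotropic smoothing
_, binary = cv2.threshold(
    blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU
)
n_labels, labels = cv2.connectedComponents(binary)    # each label ~ one line component
print(f"detected {n_labels - 1} candidate line components")
```

In the full method, text regions intersecting exactly one such component inherit its label, and the Voronoi diagram of the labeled image resolves the remaining pixels.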
Optimizing Qatar steel supply chain management system
Authors: Mahmoud Alrefaei

We have developed a linear programming formulation describing the Qatar Steel manufacturing supply chain from suppliers to consumers. The purpose of the model is to provide answers regarding the optimal amount of raw materials to be requested from suppliers, the optimal amount of finished products to be delivered to each customer, and the optimal inventory level of raw materials. The model is validated and solved using the GAMS software. Sensitivity analysis on the proposed model is conducted in order to draw useful conclusions regarding the factors that play the most important role in the efficiency of the supply chain. In the second part of the project, we have built a simulation model to produce a specific set of Key Performance Indicators (KPIs). The KPIs are developed to characterize the supply chain performance in terms of responsiveness, efficiency, and productivity/utilization. The model is programmed using the WITNESS simulation software. The developed QS WITNESS simulation model aims to assess and validate the current status of the supply chain performance in terms of a set of KPIs, taking into consideration the key deterministic and stochastic factors, from suppliers and production plant processes to distributors and consumers. Finally, a simulated annealing algorithm has been developed that will be used to set the model variables so as to achieve a multi-criteria tradeoff among the defined supply chain KPIs.
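A toy version of such a linear program can be written directly in Python. The sketch below uses invented numbers and scipy rather than the GAMS model actually used; it picks raw-material orders from two suppliers to minimize cost while meeting plant demand:

```python
from scipy.optimize import linprog

# Toy raw-material ordering LP (illustrative numbers, not the actual model).
# Minimize 50*x1 + 65*x2   (cost per tonne from two suppliers)
# s.t.     x1 + x2 >= 1000 (tonnes required by the plant)
#          0 <= x1 <= 700  (supplier 1 capacity)
#          0 <= x2 <= 600  (supplier 2 capacity)

res = linprog(
    c=[50, 65],
    A_ub=[[-1, -1]],   # -(x1 + x2) <= -1000 encodes x1 + x2 >= 1000
    b_ub=[-1000],
    bounds=[(0, 700), (0, 600)],
)
print(res.x, res.fun)  # optimal orders [700, 300] and total cost 54500
```

The real formulation adds inventory balance over time, per-customer delivery variables, and many more constraints, but the structure - a linear objective over linear constraints - is the same.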
An ultra-wideband RFIC attenuator for communication and radar systems
Authors: Cam Nguyen

Attenuators are extensively employed as amplitude-control circuits in communication and radar systems. Flatness, attenuation range, and bandwidth are typical specifications for attenuators. Most attenuators in previous studies rely on the basic topologies of the Pi-, T-, and distributed attenuators. The performance of the Pi- and T-attenuators, however, is affected substantially by the switching performance of the transistors, and it is hard to obtain optimum flatness, attenuation range, and bandwidth with these attenuators. The conventional distributed attenuator also demands a large chip area for large attenuation ranges. We report the design of a new microwave/millimeter-wave CMOS attenuator. A design method is proposed and implemented in the attenuator to improve its flatness, attenuation range, and bandwidth. It is recognized that the Pi- and T-attenuators at a certain attenuation state inherently have an insertion-loss slope that increases as the frequency increases. This response is due to the off-capacitance of the series transistors. On the other hand, the distributed attenuators can be designed to have the opposite insertion-loss slope by shortening the transmission lines, because shorter transmission lines shift the center frequency higher. The proposed design method exploits these two opposite insertion-loss slopes and is implemented by connecting Pi-, T-, and distributed attenuators in cascade. Over 10-67 GHz, the measured results exhibit an attenuation flatness of 6.8 dB and an attenuation range of 32-42 dB.
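The slope-cancellation idea can be illustrated numerically. The sketch below uses invented loss slopes (not the measured data) to show how cascading a rising-slope Pi/T stage with a falling-slope distributed stage flattens the total attenuation:

```python
import numpy as np

# Illustrative slope cancellation (hypothetical numbers, not measured data).
f = np.linspace(10, 67, 6)              # frequency points in GHz
loss_pi = 10 + 0.08 * (f - 10)          # Pi/T stage: loss rises with frequency
loss_dist = 10 - 0.08 * (f - 10)        # shortened-line stage: opposite slope
total = loss_pi + loss_dist             # cascade: the slopes cancel
print(total)                            # ~flat 20 dB across the whole band
```

In the actual circuit the two slopes are set by the off-capacitance of the series transistors and by the transmission-line lengths, respectively, rather than by explicit coefficients.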
Multiphase production metering: Benefits from an industrial data validation and reconciliation approach
Authors: Simon Manson, Mohamed Haouche, Pascal Cheneviere, and Philippe Julien (TOTAL Research Center-Qatar at QSTP, Doha, Qatar; contact: [email protected])

Context and objectives: TOTAL E&P QATAR (TEPQ) is the operator of the Al-Khaleej offshore oil field under a Production Sharing Agreement (PSA) with Qatar Petroleum. The Al-Khaleej field is characterised by a high water cut (the ratio of water to total liquid), which classifies the field as mature. Operating this field safely and cost-effectively requires extensive use of cutting-edge technologies, in strict compliance with internal procedures and standards. Metering's main objective is to deliver accurate, close-to-real-time production data from online measurements, allowing the optimization of operations and the mitigation of potential HSE-related risks.

Solution: The solution tested by TEPQ is based on a Data Validation and Reconciliation (DVR) approach. This approach is well known in the hydrocarbon downstream sector and in power plants. Its added value lies mainly in the automatic reconciliation of several data sources, originating not only from the multiphase flow meters but also from other process parameters. The expected result of this approach is an improvement in data accuracy and increased data availability for operational teams. A DVR pilot has been implemented in the Al-Khaleej field for multiphase flow determination. It automatically performs online data acquisition, data processing, and daily reporting, thanks to a user-friendly interface.

Results: This communication presents the latest findings obtained with the DVR approach. A sensitivity analysis has been performed to highlight the impact of potentially biased data on the integrated production system and on the oil and water production rates. These findings are of high importance for trouble-shooting diagnostics, to identify the source (instruments, process models, etc.) of a malfunction, and to define remedial solutions. Oil and water production data, with their relative uncertainties, are presented to illustrate the benefits of the DVR approach in challenging production conditions.

Conclusions: The main benefits of the DVR approach and its user interface lie in the time saved in data post-processing to obtain automatically reconciled data with better accuracy. In addition, thanks to its error-detection capability, the DVR facilitates troubleshooting identification (alarming).
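The core of a DVR computation is a weighted least-squares adjustment of redundant measurements subject to balance constraints. The sketch below is a textbook formulation with invented numbers, not TOTAL's tool: it reconciles oil, water, and total-liquid rates against the constraint oil + water = liquid.

```python
import numpy as np

# Data reconciliation sketch: adjust measurements as little as possible,
# weighted by their uncertainties, so that they satisfy a mass balance.
y = np.array([520.0, 1480.0, 2080.0])   # measured oil, water, liquid (bbl/d)
sigma = np.array([15.0, 40.0, 25.0])    # measurement standard deviations
A = np.array([[1.0, 1.0, -1.0]])        # constraint: oil + water - liquid = 0

# Closed form of: min (y_hat - y)' W (y_hat - y)  s.t.  A y_hat = 0,
# with W = diag(1/sigma^2):  y_hat = y - V A' (A V A')^{-1} A y,  V = diag(sigma^2)
V = np.diag(sigma**2)
correction = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
y_hat = y - correction
print(y_hat, A @ y_hat)                 # reconciled rates; residual ~ 0
```

Measurements with larger uncertainty absorb more of the correction, which is exactly the mechanism that lets a DVR system flag a probably-biased instrument.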
Dynamic and static generation of multimedia tutorials for children with special needs: Using Arabic text processing and ontologies
Authors: Amal Dandashi

We propose a multimedia-based learning system to teach children with intellectual disabilities (ID) basic concepts in science, math, and daily living tasks. The tutorials' pedagogical development is based on Mayer's cognitive learning model combined with Skinner's behaviorist operant conditioning model. Two types of Arabic tutorials are proposed: (1) statically generated tutorials, which are pre-designed by special needs instructors and developed by animation experts; and (2) dynamically generated tutorials, which are developed using natural language processing and ontology building. Dynamic tutorials are generated by processing Arabic text and using machine learning to query the Google search engine and generate multimedia elements, which are automatically added to an ontology system and then used to construct a customized tutorial. Both types of tutorials have shown considerable improvements in the learning process and have allowed children with ID to enhance their cognitive skills and become more motivated and proactive in the classroom.
A tyre safety study in Qatar and application of immersive simulators
Authors: Max Renault

In support of Qatar's National Road Safety Strategy and under the umbrella of the National Traffic Safety Committee, Qatar Petroleum's Research and Technology Department, in cooperation with Williams Advanced Engineering, has undertaken a study of the state of tyre safety in the country. This study reviewed the regulatory and legislative frameworks governing tyre usage in the country, and collected data on how tyres are being used by the populace. To understand the state of tyre usage in Qatar, a survey of 239 vehicles undergoing annual inspection was conducted, and an electronic survey querying respondents' knowledge of tyres received 808 responses. The findings identified deficiencies in four key areas: accident data reporting, tyre certification for regional efficacy, usage of balloon tyres, and the public's knowledge of tyres and tyre care. Following completion of this study, Qatar Petroleum commissioned Williams Advanced Engineering to produce an immersive driving simulator for the dual purposes of research and education. This simulator will provide a platform for research investigations into the effect of tyre performance and failure on vehicle stability; additionally, it will allow road users to experience, in a safe environment, the effects of various tyre conditions such as a failure, and to learn appropriate responses.
Random projections and Haar cascades for accurate real-time vehicle detection and tracking
Authors: Mohamed ElHelw

This paper presents a robust real-time vision framework that detects and tracks vehicles from stationary traffic cameras within given regions of interest. The framework enables intelligent transportation and road safety applications such as road-occupancy characterization, congestion detection, traffic flow computation, and pedestrian tracking. It consists of three main modules: 1) detection, 2) tracking, and 3) data association. Vehicles are first detected using Haar-like features. In the second phase, a light-weight appearance-based model is built using random projections to keep track of the detected vehicles. The data association module fuses new detections and existing targets for accurate tracking. The practical value of the proposed framework is demonstrated through an evaluation involving several real-world experiments and a variety of challenges.
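The detection module can be sketched with OpenCV's standard Haar-cascade API. In the fragment below, "vehicle_cascade.xml" is a placeholder for a trained cascade and the region of interest is invented, so this approximates the described stage rather than reproducing the paper's implementation:

```python
import cv2

# Haar-cascade vehicle detection on a stationary traffic camera (sketch).
cascade = cv2.CascadeClassifier("vehicle_cascade.xml")  # placeholder cascade file
cap = cv2.VideoCapture("traffic.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:720, :]                      # region of interest: the roadway
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    vehicles = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in vehicles:                # these boxes feed the tracker
        cv2.rectangle(roi, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```

In the full framework, each detected box seeds a random-projection appearance model, and the data-association module matches new detections against those tracked targets.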
Conceptual reasoning for consistency insurance in logical deduction and application for critical systems
Authors: Samir Elloumi

Reasoning in propositional logic is a key element of software engineering; it is applied in different domains, e.g., specification validation, code checking, and theorem proving. Since reasoning is a basic component in the analysis and verification of different critical systems, significant efforts have been dedicated to improving its efficiency in terms of time complexity, correctness, and generalization to new problems (e.g., the SAT problem, inference rules, inconsistency detection, etc.). We propose a new conceptual reasoning method for an inference engine, in which such improvements are achieved by combining the semantic interpretations of a logical formula with the mathematical background of formal concept analysis. Each logical formula is mapped into a truth-table formal context, and any logical deduction is obtained by a Galois connection. More particularly, we combine all truth tables into a global one, which has the advantage of containing the complete knowledge of all deducible rules or, possibly, an eventual inconsistency in the whole system. A first version of the new reasoning system was implemented and applied to medical data. Efficiency in conflict resolution, as well as in knowledge expressiveness and reasoning, was demonstrated. Serious challenges related to time complexity have been addressed, and further improvements are under investigation. Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF.
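The truth-table view of deduction described above can be sketched in a few lines. The fragment below is a simplified illustration, not the paper's FCA-based engine: it enumerates the satisfying rows of a knowledge base (the objects of the truth-table context) and checks whether a conclusion holds in all of them, reporting inconsistency when no row satisfies the KB.

```python
from itertools import product

# Truth-table deduction sketch: a conclusion is entailed iff it holds in
# every assignment that satisfies the knowledge base.

def entails(kb, conclusion, variables):
    """kb, conclusion: functions mapping an assignment dict to bool."""
    rows = [dict(zip(variables, vals))
            for vals in product([False, True], repeat=len(variables))]
    models = [r for r in rows if kb(r)]        # satisfying rows of the context
    if not models:
        return "inconsistent knowledge base"   # no row satisfies the KB
    return all(conclusion(r) for r in models)

# KB: (p -> q) and p. Deduce q (modus ponens).
kb = lambda a: ((not a["p"]) or a["q"]) and a["p"]
print(entails(kb, lambda a: a["q"], ["p", "q"]))   # True
```

The paper's contribution is to organize this information as a formal context and extract deductions through the Galois connection, rather than by brute-force enumeration as here.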
Alice in the Middle East (Alice ME)
Authors: Saquib Razak

We will present an overview and in-progress report of a QNRF-sponsored project researching the effects of using program visualization in teaching and learning computational thinking. Computers and computing have made possible incredible leaps of innovation and imagination in solving critical problems such as finding cures for diseases, predicting a hurricane's path, or landing a spaceship on the moon. The modern economic conditions of countries around the world are increasingly related to their ability to adapt to the digital revolution. This, in turn, drives the need for educated individuals who can bring the power of computing-supported problem solving to an increasingly expanded field of career paths. It is no longer sufficient to wait until students are in college to introduce computational thinking. All of today's students will go on to live in a world that is heavily influenced by computing, and many of them will work in fields that involve or are influenced by computing. They must begin to work with algorithmic problem solving and computational methods and tools in K-12. Many courses taught in K-12 (for example, math and science) teach problem solving and logical thinking skills. Computational thinking embodies these skills and brings them to bear on problems in other fields and on problems that lie at the intersection of these fields. In the same way that learning to read opens a gateway to learning a multitude of knowledge, learning to program opens a gateway to learning all things computational. Alice is a programming environment designed to enable novice programmers to create 3-D virtual worlds, including animations and games. In Alice, 3-D models of objects (e.g., people, animals, and vehicles) populate a virtual world, and students use a drag-and-drop editor to manipulate the movement and activities of these objects. Alice for the Middle East ("Alice ME") is a research project funded by the Qatar National Research Fund (QNRF) under its National Priorities Research Program (NPRP) that aims to modify the well-known Alice introductory programming software to be internationally and culturally relevant for the Middle East. In this project, we are developing new 3-D models that provide animation objects encountered in daily life (animals, buildings, clothing, vehicles, etc.) as well as artifacts that reflect and respect the heritage of Qatari culture. The new models will provide characters and images that enable students to create animations that tell the stories of local culture, thereby supporting the QNV 2030 aspiration of maintaining a balance between modernization and the preservation of traditions. We are also working with local schools to develop an ICT curriculum centered on using Alice to help students learn computational thinking, and we are designing workshops to train teachers in using Alice and delivering this curriculum.
Awareness of the farmers about effective delivery of farm information by ICT mediated extension service in Bangladesh
Authors: Abdul Momen Miah

The main focus of the study was to determine farmers' level of awareness of the effectiveness of ICT-mediated extension services in disseminating farm information. The factors influencing the farmers' awareness and the problems they face in getting farm information were also explored. Data were collected from a sample of 100 farmers out of 700. A structured interview schedule and a checklist were used to collect data through face-to-face interviews and focus group discussions (FGD) with the farmers during May to June 2013. Awareness was measured using a 4-point rating scale: appropriate weights were assigned to each of the responses, and the awareness score was calculated by adding the weights. The awareness scores of the respondents ranged from 16 to 50, against a possible range of 0 to 64. The effectiveness of ICT was assessed based on the amount of information supplied, the acceptability of the information, the usage of the information, and the outcome/benefit to the farmers of using the information. About three-fourths (74 percent) of the farmers had moderate awareness, while almost one-fourth (23 percent) had low awareness and only 3 percent had high awareness of effective delivery of farm information by ICT centers. The farmers' level of education, farm size, family size, annual income, training exposure, organizational participation, and extension media contact were significantly correlated with their awareness of effective delivery of farm information. A stepwise multiple regression analysis showed that, out of nine variables, four - organizational participation, annual income, farm size, and family size - together accounted for 47.90 percent of the total variation in awareness of effective delivery of farm information. Inadequate service from field extension agents, frequent power disruptions, lack of skilled manpower (extension agents) at ICT centers, lack of training facilities for the farmers, and poor supervision and monitoring of field extension activities by superior officers were the major problems mentioned by the farmers regarding effective dissemination of farm information by the ICT-mediated extension service. Keywords: awareness, ICT-mediated extension service, effective delivery
Translate or Transliterate? Modeling the Decision for English to Arabic Machine Translation
Authors: Mahmoud Azab

Translation of named entities (NEs) is important for NLP applications such as Machine Translation (MT) and cross-lingual information retrieval. For MT, named entities are a major subset of the out-of-vocabulary terms. Due to their diversity, they cannot always be found in parallel corpora, dictionaries, or gazetteers. Thus, state-of-the-art MT systems need to handle NEs in specific ways: (i) direct translation, which misses many out-of-vocabulary terms, and (ii) blind transliteration of out-of-vocabulary terms, which does not necessarily contribute to translation adequacy and may actually create noisy contexts for the language model and the decoder. For example, in the sentence "Dudley North visits North London", the MT system is expected to transliterate "North" in the former case and translate "North" in the latter. In this work, we present a classification-based framework that enables an MT system to automate the decision of translation vs. transliteration for different categories of NEs. We model the decision as a binary classification at the token level: each token within a named entity gets a decision label, to be translated or transliterated. Training the classifier requires a set of NEs with token-level decision labels. For this purpose, we automatically construct a bilingual lexicon of NEs paired with translation/transliteration decisions from two different domains: we heuristically extract and label parallel NEs from a large word-aligned news parallel corpus, and we use a lexicon of bilingual NEs collected from Arabic and English Wikipedia titles. We then designed a procedure to clean up the noisy Arabic NE spans by part-of-speech verification, heuristically filtering impossible items (e.g., verbs). For training, the data is automatically annotated using a variant of edit distance that measures the similarity between an English word and its Arabic transliteration. For the test set, we manually reviewed the labels and fixed the incorrect ones. As part of our project, this bilingual corpus of named entities has been released to the research community. Using Support Vector Machines, we trained the classifier on a set of token-based, contextual, and semantic features of the NEs. We evaluated our classifier in both the limited news and the diverse Wikipedia domains, achieving a promising accuracy of 89.1%. To study the utility of our classifier for an English to Arabic statistical MT system, we deployed it as a pre-translation component of the MT system. We automatically located the NEs in the source-language sentences and used the classifier to find those which should be transliterated. For such terms, we offer the transliterated form as an option to the decoder. Adding the classifier to the SMT pipeline resulted in a major reduction of out-of-vocabulary terms and a modest improvement in BLEU score. This research is supported by the Qatar National Research Fund (a member of the Qatar Foundation) through grants NPRP-09-1140-1-177 and YSREP-1-018-1-004. The statements made herein are solely the responsibility of the authors.
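The token-level decision can be sketched as a standard supervised-learning pipeline. The fragment below uses invented toy features and examples (not the paper's feature set) to show the shape of such a classifier in scikit-learn:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Token-level translate-vs-transliterate decision as binary classification
# (illustrative sketch; features and examples are invented).

def features(token, ne_type, prev_token):
    return {"lower": token.lower(), "type": ne_type, "prev": prev_token,
            "is_title": token.istitle(), "suffix2": token[-2:].lower()}

X = [features("North", "PER", "Dudley"),   # part of a person name -> transliterate
     features("North", "LOC", "visits"),   # part of "North London" -> translate
     features("Bank", "ORG", "World")]     # "World Bank"           -> translate
y = ["transliterate", "translate", "translate"]

clf = make_pipeline(DictVectorizer(), LinearSVC())
clf.fit(X, y)
print(clf.predict([features("North", "PER", "Dudley")]))  # ['transliterate']
```

In the deployed system, tokens the classifier marks as transliterations are offered to the decoder as additional options, which is what reduces the OOV rate.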
Technology tools for enhancing English literacy skills
Authors: Mary Dias

The goal of this work is to explore the role of technology tools in enhancing the teaching and learning of English as a foreign or second language. Literacy is a crucial skill that is often linked to quality of life; however, access to literacy is not universal. The significance of this research is therefore its potential impact on the global challenge of improving child and adult literacy rates. Today's globalized world often demands strong English literacy skills for success, because the language of instruction and business is frequently English. Even in Qatar's efforts to create a knowledge economy, Education City was established with the majority of instruction in English. Moreover, NGOs such as Reach Out to Asia are partnering with Education City universities to teach English literacy to migrant laborers in Qatar. Many migrant workers reside and work in Qatar for many years and can often improve their job prospects if they speak and understand English. However, Qatar's English literacy problems are not limited to the migrant population. The latest published (2009) PISA (Programme for International Student Assessment) results show that 15-year-olds in Qatar are, for the most part, at Level 1 of six proficiency levels in literacy. Qatar placed among the lowest of the 65 countries that participated in that PISA assessment. Several research groups have developed technology to enhance literacy skills and improve motivation for learning. Educational games are in increasing demand and are now incorporated into formal education programs. Since the effectiveness of technology for language learning depends on how it is used, literacy experts have identified the need for research on appropriate ways and contexts in which to apply technology. Our work shares some goals with the related work, but there are also significant differences. Most educational games and tools are informed by experts on teaching English skills, focused on the students, and act as fixed stand-alone tools used outside the school environment. In contrast, our work is designed to complement classroom activities and to allow for customization while remaining cost-effective. As such, it seeks to engage parents, teachers, game developers, and other experts to benefit and engage learners. Through this work, we engage with different learner populations ranging from children to adults. Expected outcomes of our work include the design, implementation, and testing of accessible and effective computing technology for enhancing English literacy skills among learners across the world. This suite of computer-based and mobile-phone-based tools is designed to take into account user needs, local constraints, cultural factors, available resources, and existing infrastructure. We field-test and evaluate our literacy tools and games in several communities in Qatar and in the United States. Through this work, we are advancing the state of the art in computer-assisted language learning and improving the understanding of educational techniques for improving literacy. Our presentation will provide an overview of the motivation for this work, an introduction to our user groups, a summary of the research outcomes to date, and an outline of future work.
Exploiting space syntax for deployable mobile opportunistic networking
Authors: Khaled Harras

There are many cities where urbanization occurs at a faster rate than communication infrastructure deployment. Mobile users with sophisticated devices are often dissatisfied with this lag in infrastructure deployment; their Internet connection is via opportunistic open Access Points for short durations, or via weak, unreliable, and costly 3G connections. With increased demands on network infrastructure, we believe that opportunistic networking, where user mobility is exploited to increase capacity and augment Internet reachability, can play an active role as a complementary technology to improve user experience, particularly for delay-insensitive data. Opportunistic forwarding solutions were mainly designed using sets of assumptions that have grown in complexity, rendering them unusable outside their intended environments. Figure 1 categorizes sample state-of-the-art opportunistic forwarding solutions based on their assumption complexity. Most of these solutions, however, are not designed for large-scale urban environments. In this work, we believe we are the first to exploit the space syntax paradigm to better guide forwarding decisions in large-scale urban environments. Space syntax, initially proposed in the field of architecture to model natural mobility patterns by analyzing spatial configurations, offers a set of measurable metrics that quantify the effect of road maps and architectural configurations on natural movement. By modeling interaction with the pre-built static environment, space syntax predicts natural movement patterns in a given area. Our goal is to leverage space syntax concepts to create efficient, distributed opportunistic forwarding solutions for large-scale urban environments. We address two communication themes. (1) Mobile-to-Infrastructure: We propose a set of space-syntax-based algorithms that adapt to a spectrum of simplistic assumptions in urban environments. As depicted in Figure 1, our goal is to gain performance improvement across the spectrum, within each assumption category, when compared to other state-of-the-art solutions. We adopt a data-driven approach to evaluate the space-syntax-based forwarding algorithms we propose, within each of three assumption categories, based on large-scale mobility traces capturing vehicle mobility. Overall, our results show that our space-syntax-based algorithms perform more efficiently within each assumption category. (2) Infrastructure-to-Mobile: We propose a new algorithm, Select&Spray, which leverages space syntax and enables data transfers to mobile destinations reached directly through the infrastructure or opportunistically via other nodes. This architecture consists of: (i) a select engine that identifies a subset of directly connected nodes with a higher probability of forwarding messages to destinations, and (ii) a spray engine residing on mobile devices that guides the opportunistic dissemination of messages towards destination devices. We evaluate our algorithm using several mobility traces. Our results show that Select&Spray is more efficient in guiding messages towards their destinations. It helps extend the reach of data dissemination to more than 20% of the interested destinations within very short delays, and successfully reaches almost 90% of the destinations in less than 5 minutes.
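A space-syntax-guided forwarding decision might look like the following sketch (hypothetical, not the actual Select&Spray implementation): among currently encountered nodes, copies are handed to the few whose street segments have higher integration scores than the sender's, since space syntax predicts more natural movement along those segments.

```python
# Space-syntax-guided relay selection (illustrative sketch).
# integration: space syntax "integration" score per street segment, a metric
# that predicts how much natural movement a segment attracts.

def choose_relays(encounters, integration, my_segment, fanout=2):
    """encounters: {node_id: segment_id} for nodes currently in radio range."""
    better = [(integration[seg], node) for node, seg in encounters.items()
              if integration[seg] > integration[my_segment]]
    return [node for _, node in sorted(better, reverse=True)[:fanout]]

integration = {"side_street": 0.3, "collector": 0.7, "main_axis": 0.95}
print(choose_relays({"a": "main_axis", "b": "side_street", "c": "collector"},
                    integration, "side_street"))   # ['a', 'c']
```

The appeal of this style of decision is that integration scores are computed offline from the street map alone, so no mobility history or global state needs to be exchanged between nodes.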
Face Detection Using Minimum Distance with Sequence Procedure Approach
Authors: Sayed Hamdy

In recent years, face recognition has received substantial attention from both research communities and the market, but it remains very challenging in real applications. Many face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms can be categorized into appearance-based and model-based schemes. In this paper, we present a new method for face detection called the Minimum Distance Detection Approach (MDDA). The obtained results clearly confirm the efficiency of the developed model as compared to other methods in terms of classification accuracy. It is also observed that the new method is a powerful feature selection tool, which has identified a subset of the most discriminative features. Additionally, the proposed model has gained a great deal of efficiency in terms of CPU time, owing to its parallel implementation. In this model, we use a direct model for face detection without using unlabelled data. In this research, we try to identify one sample from a group of unknown samples using a sequence of processes. The results show that this method is very effective when we use a large sample of unlabelled data to detect one sample.
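One plausible reading of a minimum-distance detection approach is a nearest-centroid classifier; the sketch below is our illustration of that idea, since the abstract does not spell out the exact procedure:

```python
import numpy as np

# Minimum-distance (nearest-centroid) classification sketch: each known
# identity is the centroid of its training feature vectors, and a probe face
# is assigned to the identity whose centroid is nearest in Euclidean distance.

def train(features_by_person):
    return {person: np.mean(vecs, axis=0)
            for person, vecs in features_by_person.items()}

def classify(centroids, probe):
    return min(centroids, key=lambda p: np.linalg.norm(probe - centroids[p]))

centroids = train({
    "alice": [np.array([0.9, 0.1]), np.array([1.1, 0.0])],
    "bob":   [np.array([0.0, 1.0]), np.array([0.2, 0.8])],
})
print(classify(centroids, np.array([1.0, 0.1])))   # "alice"
```

The per-identity distance computations are independent, which is consistent with the parallel implementation and CPU-time gains mentioned above.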
Analysis of energy consumption fairness in video sensor networks
Authors: Bambang Sarif

The use of more effective processing tools, such as advanced video codecs, in wireless sensor networks (WSNs) has enabled the widespread adoption of video-based WSNs for monitoring and surveillance applications. Considering that video-based WSN applications require large amounts of energy for both the compression and the transmission of video content, optimizing energy consumption is of paramount importance. There is a trade-off between encoding complexity and compression performance, in the sense that high compression efficiency comes at the expense of increased encoding complexity; and there is a direct relationship between coding complexity and energy consumption. Since the nodes in a video sensor network (VSN) share the same wireless medium, there is also an issue of fairness in the bandwidth allocated to each node. The fairness of resource allocation (encoding and transmission energy) for nodes placed at different locations in a VSN thus has a significant effect on energy consumption. In our study, the objective is to maximize the lifetime of the network by reducing the consumption of the node with the maximum energy usage. Our research focuses on VSNs with a linear topology, where the nth node relays its data through the (n-1)th node, and the node closest to the sink relays the information of all the other nodes. In our approach, we analyze the relation between the fairness of the nodes' resource allocation, video quality, and the VSN's energy consumption, and propose an algorithm that adjusts the coding parameters and fairness ratio of each node such that energy consumption is balanced. Our results show that by allocating higher fairness ratios to the nodes closest to the sink, we reduce the maximum energy consumption and achieve a more balanced energy usage. For instance, in the case of a VSN with six nodes, allocating fairness ratios between 0.17 and 0.3 to the nodes closer to the sink reduces the maximum energy consumption by 11.28%, with a standard deviation of the nodes' energy consumption (STDen) of 0.09 W, compared to 0.25 W achieved by the maximum fairness scheme.
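The effect described above can be reproduced with a toy energy model of a linear VSN. In the sketch below, the per-bit energy constants and the assumption that encoding a bit costs less than receiving and forwarding one are invented purely to make the mechanism visible; the paper balances encoding complexity and transmission in a more detailed model.

```python
# Toy energy model of a linear VSN (invented constants, not the paper's model).
# Node 0 sits next to the sink and must receive and forward the traffic of all
# nodes behind it.

E_ENC = 1e-9   # J/bit to encode the node's own video (assumed cheap here)
E_TX = 3e-9    # J/bit to transmit over one hop
E_RX = 2e-9    # J/bit to receive relayed traffic

def node_energy(rates):
    """rates[i]: video bit rate of node i (bit/s); node 0 is closest to the sink."""
    out = []
    for i, r in enumerate(rates):
        relayed = sum(rates[i + 1:])   # traffic arriving from farther nodes
        out.append(r * E_ENC + (r + relayed) * E_TX + relayed * E_RX)
    return out

TOTAL = 6e6                                       # shared bandwidth (bit/s)
equal = node_energy([TOTAL / 6] * 6)              # maximum-fairness scheme
ratios = [0.30, 0.22, 0.17, 0.12, 0.10, 0.09]     # higher ratios nearer the sink
weighted = node_energy([r * TOTAL for r in ratios])
print(f"max power: equal={max(equal):.4f} W, weighted={max(weighted):.4f} W")
```

Giving the bottleneck nodes a larger share of the fixed total bandwidth shifts bits from relayed traffic (which must be both received and forwarded) to locally encoded traffic, which is what lowers the maximum per-node consumption in this toy model.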