Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference date: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2013
- Published: 20 November 2013
The Arabic ontology
We present an overview of the Arabic Ontology, an ongoing project at the Sina Institute, Birzeit University, Palestine. The Arabic Ontology is a linguistic ontology that represents the meanings (i.e., concepts) of Arabic terms using formal semantic relationships, such as SubtypeOf and PartOf. In this way, the ontology becomes a tree (i.e., a classification) of the meanings of Arabic terms. To build this ontology (see Fig. 1), the set of all Arabic terms is collected; then, for each term, the set of its concepts (polysemy) is identified using unique numbers and described using glosses. Terms referring to the same meaning (called synsets) are given the same concept identifier. These concepts are then classified using subsumption and parenthood relationships. The Arabic Ontology follows the same design as WordNet (i.e., a network of synsets), so it can be used as an Arabic WordNet. However, unlike WordNet, the Arabic Ontology is logically and philosophically well-founded, following strict ontological principles. The subsumption relation is a formal subset relation. In WordNet, the ontological correctness of a relation (e.g., whether "PeriodicTable SubtypeOf Table" is true in reality) is based on whether native speakers accept such a claim. In the Arabic Ontology, by contrast, ontological correctness is based on what scientists accept; if it cannot be determined by science, then on what philosophers accept; and if philosophy does not have an answer, then on what linguists accept. Our classification also follows the OntoClean methodology when dealing with instances, concepts, types, roles, and parts. As described in the next section, the top levels of the Arabic Ontology are derived from philosophical notions, which further govern the ontological correctness of its lower levels. Moreover, glosses are formulated using strict ontological rules focusing on intrinsic properties. Figure 1. Illustration of terms' concepts and their conceptual relations.

Why the Arabic Ontology? It can be used in many application scenarios, such as: (1) information search and retrieval, to enrich queries and improve result quality, i.e., meaningful search rather than string-matching search; (2) machine translation and term disambiguation, by finding the exact mapping of concepts across languages, as the Arabic Ontology is also mapped to the English WordNet; (3) data integration and interoperability, in which the Arabic Ontology can be used as a semantic reference for several autonomous information systems; (4) the Semantic Web and Web 3.0, by using the Arabic Ontology as a semantic reference to disambiguate the meanings used in web sites; (5) a conceptual dictionary, allowing people to easily browse and find meanings and the differences between them.

The Arabic Ontology Top Levels. Figure 2 presents the top levels of the Arabic Ontology, which is a classification of the most abstract concepts (i.e., meanings) of Arabic terms. Only three levels are presented, for the sake of brevity. All concepts in the Arabic Ontology are classified under these top levels. We designed these concepts after a deep investigation of the philosophy literature and based on well-recognized upper-level ontologies such as BFO, DOLCE, SUMO, and KYOTO. Figure 2. The top three levels of the Arabic Ontology (alpha version).
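The synset-and-identifier scheme described above can be sketched as a tiny data structure. This is a hypothetical illustration: the class, the sample concepts, and the identifiers are ours, not the project's.

```python
# A minimal sketch of a synset-based ontology: each concept has a unique
# identifier, a gloss, a set of synonymous terms (the synset), and a
# SubtypeOf parent. The sample concepts are illustrative only.

class Concept:
    def __init__(self, cid, gloss, terms, parent=None):
        self.cid = cid            # unique concept identifier
        self.gloss = gloss        # natural-language definition
        self.terms = set(terms)   # synset: terms sharing this meaning
        self.parent = parent      # SubtypeOf link (formal subset relation)

    def ancestors(self):
        """Walk the SubtypeOf chain up to the root."""
        node, chain = self.parent, []
        while node is not None:
            chain.append(node.cid)
            node = node.parent
        return chain

thing = Concept(1, "the most general concept", ["thing"])
animal = Concept(2, "a living organism that can move", ["animal"], parent=thing)
horse = Concept(3, "a large hoofed mammal", ["horse", "steed"], parent=animal)

print(horse.ancestors())  # SubtypeOf chain: [2, 1]
```

Classifying every concept under the shared top levels then amounts to requiring that each `ancestors()` chain terminate in one of the top-level concepts.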
Surface properties of poly(imide-co-siloxane) block copolymers
By Igor Novák
Igor Novák (a), Anton Popelka (a,b), Petr Sysel (c), Igor Krupa (a,d), Ivan Chodák (a), Marian Valentin (a), Jozef Prachár (a), Vladimír Vanko (e). (a) Polymer Institute, Slovak Academy of Sciences, Bratislava, Slovakia; (b) Center for Advanced Materials, Qatar University, Doha, Qatar; (c) Department of Polymers, Institute of Chemical Technology, Faculty of Chemical Technology, Prague, Czech Republic; (d) Center for Advanced Materials, QAPCO Polymer Chair, Qatar University, Doha, Qatar; (e) VIPO, Partizánske, Slovakia.
Abstract: Polyimides represent an important class of polymers, essential in microelectronics, printed-circuit construction, and aerospace research, mainly because of their high thermal stability and good dielectric properties. In recent years, several kinds of polyimide-based block copolymers, namely poly(imide-co-siloxane) (PIS) block copolymers containing siloxane blocks in the polymer backbone, have been investigated. In comparison with pure polyimides, PIS block copolymers offer several improvements, e.g., enhanced solubility and low moisture sorption, and their surface reaches a higher degree of hydrophobicity already at low polysiloxane content. Block copolymers of this kind are used as high-performance adhesives and coatings. The surface as well as adhesive properties of PIS block copolymers depend on the content and length of the siloxane blocks. The surface properties of PIS block copolymers are strongly influenced by enrichment of the surface with siloxane segments. Microphase separation of PIS block copolymers occurs due to the dissimilarity between the chemical structures of the siloxane and imide blocks, even at relatively short block lengths. The surface energy of the PIS block copolymer decreases significantly with siloxane concentration, from 46.0 mJ·m⁻² (pure polyimide) to 34.2 mJ·m⁻² (10 wt.% siloxane) and 30.2 mJ·m⁻² (30 wt.% siloxane). The polar component of the surface energy reached 22.4 mJ·m⁻² for pure polyimide, decreasing with siloxane content to 4.6 mJ·m⁻² (10 wt.% siloxane) and 0.8 mJ·m⁻² (30 wt.% siloxane). The decline of the surface energy and its polar component with rising siloxane content is most pronounced between 0 and 10 wt.% siloxane. Upon further increase of the siloxane concentration (above 20 wt.%), the surface energy of the PIS copolymer and its polar component level off. The dependence of the peel strength of the PIS copolymer-epoxy adhesive joint on the polar fraction of the copolymer shows that the steepest gradient is reached at 15 wt.% siloxane, after which it levels off. This relation allows determination of the non-linear relationship between the adhesion properties of the PIS block copolymer and its polar fraction. Acknowledgement: This contribution was supported by project No. 26220220091 of the "Research & Development Operational Program" funded by the ERDF, by the project "Application of Knowledge-Based Methods in Designing Manufacturing Systems and Materials" (MESRS SR project No. 3933/2010-11), and by project VEGA 2/0185/10.
An Arabic edutainment system: Using multimedia and physical activity to enhance the cognitive experience for children with intellectual disabilities
Increasing attention has recently been drawn in the human-computer interaction community to the design and development of accessible computer applications for children and youth with developmental or cognitive impairments. Due to better healthcare and assistive technology, the quality of life of children with intellectual disability (ID) has evidently improved. Many children with ID have cognitive disabilities, along with overweight problems due to a lack of physical activity. This paper introduces an edutainment system specifically designed to help these children have an enhanced and enjoyable learning process, and addresses the need to integrate physical activity into their daily lives. The games are developed with the following pedagogical model in mind: a combination of Mayer's Cognitive Theory of Multimedia Learning with a mild implementation of Skinner's operant conditioning, incorporating physical activity as part of the learning process. The proposed system consists of a padded floor mat of sixteen square tiles supported by sensors, which are used to interact with a number of software games specifically designed to suit the mental needs of children with ID. The games combine multimedia technology with a tangible user interface. The edutainment system comprises three games, each with three difficulty levels meant to suit the specific needs of different children. The system aims at enhancing the learning, coordination and memorization skills of children with ID while involving them in physical activities, thus offering both mental and physical benefits. The edutainment system was tested on 100 children with different IDs, half of whom have Down syndrome (DS). The children pertain to three disability levels: mildly, moderately and severely disabled. The obtained results show a marked improvement in the learning process, as the children became more proactive in the classroom.
The assessment methodology took into account the following constraints: disability type, disability level, gender, scoring, timing, motivation, coordination, acceptance levels and relative performance. The following groups, when compared with the other groups, achieved the best results in terms of scores and coordination: children with DS, mildly disabled children, and females. In contrast, children with other IDs, moderately and severely disabled children, and males performed with lower scores and coordination levels, but all the above-mentioned groups exhibited high motivation levels. The rejection rate was found to be very low, with 3% of children refusing to participate. When children repeated the games, 92% were noted to achieve significantly higher results. The edutainment system was developed with the following aims: helping children with ID have an enhanced cognitive experience, allowing them a learning environment where they can interact with the game and exert physical activity, ensuring a proactive role for all in the classroom, and boosting motivation, coordination and memory levels. The results showed that the system had very positive effects on the children in terms of cognition and motivation levels. Instructors also expressed willingness to incorporate the edutainment system into the classroom on a daily basis, as a complementary tool to conventional learning.
Enhancement of multispectral face recognition in unconstrained environment using regularized linear discriminant analysis (LDA)
In this paper, face recognition in unconstrained illumination conditions is investigated. A twofold contribution is proposed. (1) First, three state-of-the-art algorithms, namely Multi-block Local Binary Patterns (MBLBP), Histogram of Gabor Phase Patterns (HGPP) and Local Gabor Binary Pattern Histogram Sequence (LGBPHS), are challenged against the IRIS-M3 multispectral face database. (2) Second, the performance of the three algorithms, which decreases drastically due to the non-monotonic illumination variation that distinguishes the IRIS-M3 face database, is enhanced using multispectral images (MI) captured in the visible spectrum. The use of MI, such as near-infrared (NIR) images, short-wave infrared (SWIR) images, or even visible images captured at wavelengths other than the usual RGB spectrum, is increasingly trusted by researchers to solve problems related to uncontrolled imaging conditions that affect real-world applications such as securing areas with valuable assets, controlling high-traffic borders, and law enforcement. However, one weakness of MI is that they may significantly increase system processing time due to the huge quantity of data to mine (in some cases, thousands of MI are captured for each subject). To solve this issue, we propose selecting the optimal spectral bands (channels) for face recognition. The best spectral bands are selected using linear discriminant analysis (LDA) to increase the data variance between images of different subjects (between-class variance) while decreasing the variance between images of the same subject (within-class variance). To avoid the data-overfitting problem that generally characterizes the LDA technique, we propose to include a regularization constraint that reduces the solution space of the chosen best spectral bands. The obtained results further highlight the still-challenging problem of face recognition under high illumination variation, and demonstrate the effectiveness of our multispectral-image-based approach, which increases the accuracy of the studied algorithms (MBLBP, HGPP and LGBPHS) by at least 9% on the proposed database.
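As a rough illustration of the band-selection idea, the sketch below scores each spectral band by a regularized Fisher criterion (between-class scatter over within-class scatter). Scoring bands independently and the `reg` ridge term are our simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np

def band_scores(X, y, reg=1e-3):
    """Score each spectral band by between-class variance over
    (within-class variance + reg). X: (n_samples, n_bands) band
    responses; y: subject labels. Illustrative simplification of
    regularized LDA, scoring bands independently rather than jointly."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    sb = np.zeros(X.shape[1])   # between-class scatter per band
    sw = np.zeros(X.shape[1])   # within-class scatter per band
    for c in classes:
        Xc = X[y == c]
        sb += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        sw += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return sb / (sw + reg)      # regularization keeps scores bounded

rng = np.random.default_rng(0)
# Band 0 separates the two classes (means 0 vs 5); band 1 is pure noise.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal([5, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
scores = band_scores(X, y)
print(scores.argmax())  # band 0 should score highest
```

Bands would then be ranked by score and the top channels retained, shrinking the data volume before running MBLBP, HGPP or LGBPHS.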
Caloric expenditure estimation for health games
By Aiman Erbad
With the decline in physical activity among young people, it is essential to monitor their physical activity and ensure that their calorie expenditure is within the range necessary to lead a healthy lifestyle. For many children and young adults, video gaming is a favorable venue for physical activity. A new flavor of video games on popular platforms such as the Wii and Xbox aims to improve the health of young adults through competition in games which require players to perform various physical activities. These platforms detect the user's movements, and through an avatar in the game, players can take part in game activities such as boxing, playing tennis, dancing, and avoiding obstacles. Early studies used self-administered questionnaires or interviews to collect data about a patient's activities. These self-reporting methods ask participants to report their activities on an hourly, daily, or weekly basis. But self-reporting techniques suffer from a number of limitations, such as inconvenience in entering data, poor compliance, and inaccuracy due to bias or poor memory. Reporting activities is a sensitive task for overweight/obese individuals, with research evidence showing that they tend to overestimate the calories they burn. A tool that helps estimate calorie consumption is therefore becoming essential for managing obesity and overweight issues. We propose a calorie expenditure estimation service. This service would augment the treatment provided by an obesity clinic or a personal trainer for obese children. Our energy expenditure estimation system consists of two main components: activity recognition and calorie estimation. Activity recognition systems have three main components: a low-level sensing module to gather sensor data, a feature selection module to process raw sensor data and select the features necessary to recognize activities, and a classification module to infer the activity from the captured features. Using the activity type, we can estimate calorie consumption using existing models of energy expenditure developed against the gold standard of respiratory gas measurements. We chose Kinect as our test platform. The natural user interface in Kinect is the low-level sensing module, providing skeleton tracking data. The skeleton positions are the raw input to our activity recognition module. From this raw data, we define the features that help the classifier, such as the speed of the hands and legs, body orientation, and the rate of change in vertical and horizontal position. These are features that can be quantified and passed periodically (e.g., every 5 seconds) to the classifier to distinguish between different activities. Other important features might need more processing, such as the standard deviation, the difference between peaks (in periodic activities), and the distribution of skeleton positions. We also plan to build an index of calorie expenditure for game activities using the medical gold standard of respiratory gas measurement. Game activities such as playing tennis, running, and boxing differ from the same real-world activities in terms of energy consumption, and it would be useful to quantify the difference in order to answer the question of whether these "health" games are useful for weight loss.
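One of the windowed features mentioned above, hand speed, can be computed from skeleton frames roughly as follows. This is a minimal sketch: the joint naming and frame format are our assumptions, not the Kinect SDK's actual API.

```python
import math

def hand_speed(frames, fps=30):
    """Average right-hand speed over a window of skeleton frames.
    Each frame is a dict mapping joint name -> (x, y, z) position,
    roughly as a skeleton-tracking stream might provide. Illustrative
    only; a real pipeline would smooth jitter first."""
    dist = 0.0
    for prev, cur in zip(frames, frames[1:]):
        dist += math.dist(prev["hand_right"], cur["hand_right"])
    duration = (len(frames) - 1) / fps
    return dist / duration if duration > 0 else 0.0

# Two synthetic 1-second windows: a fast activity and a slow one.
fast = [{"hand_right": (0.02 * i, 0.0, 0.0)} for i in range(31)]
slow = [{"hand_right": (0.001 * i, 0.0, 0.0)} for i in range(31)]
print(hand_speed(fast) > hand_speed(slow))  # True
```

Features like this, computed every few seconds, would be stacked into a vector and passed to the classifier alongside body orientation and the other statistics described above.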
Towards socially-interactive telepresence robots for the 2022 World Cup
The World Cup is the most widely viewed sporting event; its total attendance is in the hundreds of thousands. The first World Cup in an Arab nation will be hosted by Qatar in 2022. As the country prepares for this event, this paper proposes telepresence robots for the World Cup. Telepresence robots are robots that allow a person to work remotely in another place as if he or she were physically present there. For a big event like the World Cup, we can envision that organizers need to monitor minute-by-minute events as they occur in multiple venues. Likewise, soccer fans who are unable to attend the events can still be present without the need to travel. Telepresence robots can enable both organizers and participants to "be there". This work describes some of the author's findings on the interactions of humans and robots. Specifically, it describes users' perceptions and physiological data when touch and gestures are passed over the internet. The results show that there is much potential for telepresence robots to enhance the utility and the organizers' and participants' overall experience of the 2022 World Cup.
Assistive technology for improved literacy among the deaf and hard of hearing
We describe an assistive technology for improved literacy among the Deaf and Hard of Hearing that is cost-effective and accessible to deaf individuals and their families/service providers (e.g., educators), to businesses which employ them or list them as customers, and to healthcare professionals. The technology functions as: (1) A real-time translation system between Moroccan Sign Language (a visual-gestural language) and standard written Arabic. Moroccan Sign Language (MSL) is a visual/gestural language that is distinct from spoken Moroccan Arabic and Modern Standard Arabic (SA) and has no text representation. In this context, we describe some challenges in SA-to-MSL machine translation. In Arabic, word structure is not built linearly, as is the case in concatenative morphological systems, which results in a large space of morphological variation. The language has a large degree of ambiguity in word senses, and further ambiguity attributable to a writing system that omits diacritics (e.g., short vowels, consonant doubling, inflection marks). The lack of diacritics, coupled with word-order flexibility, causes ambiguity in the syntactic structure of Arabic. The problem is compounded when translating into a visual/gestural language that has far fewer signs than the source language has words. In this presentation, we show how natural language processing tools are integrated into the system, describe the system's architecture, and provide a demo of several input examples with different levels of complexity. Our Moroccan Sign Language database currently contains 2,000 graphic signs and their corresponding video clips. Extending the database is an ongoing task carried out in collaboration with MSL interpreters, deaf signers and educators in Deaf schools in different regions of Morocco. (2) An instructional tool: Deaf school children, in general, have poor reading skills. It is easier for them to understand text represented in sign language than in print. Several works have demonstrated that a combination of sign language and spoken/written language can significantly improve literacy and comprehension (Singleton, Supalla, Litchfield, & Schley, 1998; Prinz, Kuntz, & Strong, 1997; Ramsey & Padden, 1998). While many assistive technologies have been created for the blind, such as hand-held scanners and screen readers, there are only a few products targeting poor readers who are deaf. An example of such technology is the iCommunicator™, which translates in real time: speech to text, speech/typed text to videos of signs, and speech/typed text to computer-generated voice. This tool, however, does not generate text from scans and display it with sign-graphic supports that a teacher can print, edit, and use to support reading; nor does it capture screen text. We show a set of tools aiming at improving literacy among the Deaf and Hard of Hearing. Our tools offer a variety of input and output options, including scanning, screen-text transfer, sign graphics and video clips. The technology we have developed is useful to teachers, educators, healthcare professionals, speech/language pathologists, and others who need to support understanding of Arabic text with Moroccan Sign Language signs for purposes of literacy improvement, curriculum enhancement, or communication in emergency situations.
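The gap between the written vocabulary and the much smaller sign inventory can be illustrated with a toy lookup step. This is a hypothetical sketch: the dictionary structure, the transliterated keys, and the gloss fallback are ours; the real system applies morphological analysis before any lookup.

```python
def text_to_signs(words, sign_db):
    """Map each (normalized) word to a sign entry if the sign database
    has one. Far fewer signs exist than source-language words, so
    unmatched words fall back to a gloss-only placeholder that an
    interpreter or teacher could resolve. Hypothetical sketch."""
    out = []
    for w in words:
        entry = sign_db.get(w)
        out.append(entry if entry else {"gloss": w, "video": None})
    return out

# Toy database keyed by transliterated lemmas (illustrative only):
db = {"kitab": {"gloss": "book", "video": "book.mp4"}}
result = text_to_signs(["kitab", "qalam"], db)
print(result)
```

Even in this toy form, the placeholder entries make visible exactly where the sign inventory needs extending, which mirrors the ongoing database-growth task described above.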
Unsupervised Arabic word segmentation and statistical machine translation
Word segmentation is a necessary step for natural language processing applications such as machine translation and parsing. In this research we focus on Arabic word segmentation and study its impact on Arabic-to-English translation. There are accurate word segmentation systems for Arabic, such as MADA (Habash, 2007). However, such systems usually need manually built data and rules for the Arabic language. In this work, we look at unsupervised word segmentation systems to see how well they perform on Arabic without relying on any linguistic information about the language. The methodology of this research can be applied to many other morphologically complex languages. We focus on three leading unsupervised word segmentation systems proposed in the literature: Morfessor (Creutz and Lagus, 2002), ParaMor (Monson, 2007), and Demberg's system (Demberg, 2007). We also use two different segmentation schemes of the state-of-the-art MADA and compare their precision with the unsupervised systems. After training the three unsupervised segmentation systems, we apply the resulting models to segment the Arabic side of the parallel data for Arabic-to-English statistical machine translation (SMT) and measure the impact on translation quality. We also build segmentation models using the two MADA schemes on SMT to compare against the baseline system. The 10-fold cross-validation results indicate that the unsupervised segmentation systems are usually inaccurate, with a precision below 40%, and hence do not help to improve SMT quality. We also observe that both MADA segmentation schemes have very high precision. Of the two MADA schemes we experimented with, a scheme with a measured segmentation framework improved translation accuracy, while a second scheme, which performs more aggressive segmentation, failed to improve SMT quality. We also provide some rule-based supervision to correct some of the errors in our best unsupervised models. While this framework performs better than the baseline unsupervised systems, it still does not outperform the baseline MT quality. We conclude that in our unsupervised framework, the noise introduced by the unsupervised segmentation offsets the potential gains that segmentation could provide to MT. Overall, measured, supervised word segmentation improves Arabic-to-English translation quality, whereas aggressive and exhaustive segmentation introduces new noise into the MT framework and actually harms its quality. This publication was made possible by the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program provided to Kemal Oflazer. The statements made herein are solely the responsibility of the authors.
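The segmentation precision reported above can be measured boundary-by-boundary against a reference segmentation, along these lines. The helper names and the Latin-transliterated toy examples are illustrative, not actual system output.

```python
def boundaries(segmented):
    """Positions of morpheme boundaries in a '+'-segmented word,
    e.g. 'w+ktb+hm' -> {1, 4} (indices into the unsegmented string)."""
    cuts, pos = set(), 0
    for ch in segmented:
        if ch == "+":
            cuts.add(pos)
        else:
            pos += 1
    return cuts

def precision(predicted, gold):
    """Fraction of predicted boundaries that also appear in the gold
    segmentation, over a list of aligned (predicted, gold) word pairs."""
    correct = total = 0
    for p, g in zip(predicted, gold):
        pb, gb = boundaries(p), boundaries(g)
        correct += len(pb & gb)
        total += len(pb)
    return correct / total if total else 0.0

# Toy transliterated examples (illustrative only):
pred = ["w+ktb+hm", "al+qalam"]   # predicted segmentations
gold = ["w+ktbhm", "al+qalam"]    # reference segmentations
print(precision(pred, gold))      # 2 of 3 predicted boundaries are correct
```

An over-aggressive segmenter inflates the denominator with spurious cuts, which is exactly how precision falls below the 40% level mentioned above.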
A services-oriented infrastructure for e-science
By Syed Abidi
The study of complex, multi-faceted scientific questions demands innovative computing solutions: solutions that go beyond the management of big data to dedicated, semantics-enabled, services-driven infrastructures that can effectively aggregate, filter, process, analyze, visualize and share the cumulative scientific efforts and insights of the research community. From a technical standpoint, E-Science proposes technology-enabled collaborative research platforms to (i) collect, store and share multi-modal data collected from different geographic sites; (ii) perform complex simulations and experiments using sophisticated simulation models; (iii) design complex experiments by integrating data and models and executing them as per the experiment workflow; (iv) visualize high-dimensional simulation results; and (v) aggregate and share the scientific results (Fig. 1). Taking a knowledge management approach, we have developed an innovative E-Science platform, termed the Platform for Ocean Knowledge Management (POKM), built using web-enabled services, a services-oriented architecture, semantic web, workflow management and data visualization technologies. POKM offers a suite of E-Science services that allow oceanographic researchers to (a) handle large volumes of ocean and marine life data; (b) access, share, integrate and operationalize data and simulation models; (c) visualize data and simulation results; (d) collaborate across multiple sites in joint scientific research experiments; and (e) form a broad, virtual community of national and international researchers, marine resource managers, policy makers and climate change specialists (Fig. 2).
The functional objective of our E-Science infrastructure is to establish an online scientific experimentation platform that supports an assortment of data/knowledge access and processing tools to allow a group of scientists to collaborate and conduct complex experiments by sharing data, models, knowledge, computing resources and expertise. Our E-Science approach complements data-driven approaches with domain-specific, knowledge-centric models in order to establish causal, associative and taxonomic relations between (a) raw data and modeled observations; (b) observations and their causes; and (c) causes and theoretical models. This is achieved by taking a unique knowledge management approach, whereby we have exploited semantic web technologies to semantically describe the data, scientific models, knowledge artifacts and web services. The use of semantic web technologies provides a mechanism for the selection and integration of problem-specific data from large repositories. To define the functional aspects of the E-Science services, we have developed a services ontology that provides a semantic description of knowledge-centric E-Science services. POKM is modeled along a services-oriented architecture that exposes a range of task-specific web services accessible through a web portal. The POKM architecture features five layers: the Presentation Layer, Collaboration Layer, Service Composition Layer, Service Layer and Ontology Layer (Fig. 3). POKM is applied to the domain of oceanography to understand our changing ecosystem and its impact on marine life. POKM helps researchers investigate (a) changes in marine animal movement on time scales of days to decades; (b) coastal flooding due to changes in certain ocean parameters; (c) the density of fish colonies and stocks; and (d) time-varying physical characteristics of the oceans (Figs. 4 & 5).
In this paper, we present the technical architecture and functional description of POKM, highlighting the various technical innovations and their applications to E-Science.
Informatics and technology to address common challenges in public health
By Jamie Pina
Health care systems in countries around the world are focused on improving the health of their populations. Many countries face common challenges related to capturing, structuring, sharing and acting upon various sources of information in service of this goal. Information science, in combination with information and communications technologies (ICT) such as online communities and cloud-based services, can be used to address many of the challenges encountered when developing initiatives to improve population health. This presentation by RTI International will focus on the development of the Public Health Quality Improvement Exchange (www.phqix.org), where informatics and ICT have been used to develop new approaches to public health quality improvement, a challenge common to many nations. The presentation will also identify lessons learned from this effort and the implications for Gulf Cooperation Council (GCC) countries. This presentation addresses two of Qatar's cross-cutting research grand challenges: "Managing the Transition to a Diversified, Knowledge-based Society" and "Developed Modernized Integrated Health Management." The first grand challenge is addressed by our research on the use of social networks and their relationship to public health practice environments. The second is addressed through our research on the development of taxonomies that align with the expectations of public health practitioners to facilitate information sharing [1]. Health care systems aim to have the most effective practices for detecting, monitoring, and responding to communicable and chronic conditions. However, national systems may fail to identify and share lessons gained through the practices of local and regional health authorities. Challenges include having appropriate mechanisms for capturing, structuring, and sharing these lessons in uniform, cost-effective ways.
The presentation will explore how a public health quality improvement exchange, where practitioners submit and share best practices through an online portal, helps address these challenges. This work also demonstrates the advantages of a user-centered design process in creating an online resource that can successfully accelerate the learning and application of quality improvement (QI) by governmental public health agencies and their partners. Public health practitioners at the federal, state, local and tribal levels are actively seeking to promote the use of quality improvement to improve efficiency and effectiveness. The Public Health Quality Improvement Exchange (PHQIX) was developed to assist public health agencies and their partners in sharing their experiences with QI and to facilitate the increased use of QI in public health practice. Successful online exchanges must provide compelling incentives for participation, a site design that aligns with user expectations, information that is relevant to the online community, and presentation that encourages use. Target audience members (beneficiaries) include public health practitioners, informatics professionals, and officials within health authorities. This discussion aims to help audience members understand how new approaches and web-based technologies can create highly reliable and widely accessible services for critical public health capabilities, including quality improvement and data sharing. 1. Pina, J., et al. Synonym-based word frequency analysis to support the development and presentation of a public health quality improvement taxonomy in an online exchange. Stud Health Technol Inform, 2013. 192: p. 1128.
Development of a spontaneous large vocabulary speech recognition system for Qatari Arabic
In this work, we develop a spontaneous large-vocabulary speech recognition system for Qatari Arabic (QA). A major problem with dialectal Arabic speech recognition is the sparsity of speech resources. We therefore propose an Automatic Speech Recognition (ASR) framework that jointly uses Modern Standard Arabic (MSA) data and QA data to improve acoustic and language modeling through orthographic normalization, cross-dialectal phone mapping, data sharing, and acoustic model adaptation. A wide-band speech corpus has been developed for QA. The corpus consists of 15 hours of speech data collected from different TV series and talk-show programs; it was manually segmented and transcribed. A QA tri-gram language model (LM) was linearly interpolated with a large MSA LM in order to decrease the Out-Of-Vocabulary (OOV) rate and to improve perplexity. The vocabulary consists of 21K words extracted from the QA training set, with an additional 256K MSA vocabulary. The acoustic model (AM) was trained on a pool of QA data and an additional 60 hours of MSA data. In order to boost the contribution of the QA data, Maximum-A-Posteriori (MAP) adaptation was applied to the resulting AM using only the QA data, effectively increasing the weight of dialectal acoustic features in the final cross-lingual model. All training was performed with Maximum Mutual Information Estimation (MMIE), with Speaker Adaptive Training (SAT) applied on top of MMIE. Our proposed approach achieves more than a 16% relative reduction in WER on the QA test set compared to a baseline system trained with only QA data. This work was funded by a grant from the Qatar National Research Fund under its National Priorities Research Program (NPRP), award number NPRP 09-410-1-069. The reported experimental work was performed at Qatar University in collaboration with the University of Illinois.
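The LM interpolation step described above can be sketched as follows. The interpolation weight and the toy probabilities are illustrative; a real tri-gram LM conditions on full word histories and tunes the weight on held-out data.

```python
import math

def interpolate(p_qa, p_msa, lam=0.7):
    """Linearly interpolate two language-model probabilities:
    p(w | h) = lam * p_QA(w | h) + (1 - lam) * p_MSA(w | h).
    lam is a tuning weight; the value here is an assumption."""
    return lam * p_qa + (1 - lam) * p_msa

def perplexity(probs):
    """Perplexity of a word sequence given its per-word probabilities."""
    logsum = sum(math.log2(p) for p in probs)
    return 2 ** (-logsum / len(probs))

# A word unseen in the small QA model (probability 0 on its own, which
# would be fatal) still receives mass from the large MSA model:
p = interpolate(0.0, 0.01)
print(p)
print(perplexity([0.1, 0.05, p]))
```

This is precisely why interpolation lowers the OOV rate and perplexity: the MSA component backs off every QA zero, so no test word drives the perplexity to infinity.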
-
-
-
On faults and faulty programs
By Ali Jaoua
Abstract. The concept of a fault has been introduced in the context of a comprehensive study of system dependability, and is defined as a feature of the system that causes it to fail with respect to its specification. In this paper, we argue that this definition does not enable us to localize a fault, to count faults, or to define fault density. We argue that rather than defining a fault, we ought to focus on defining faulty programs (or program parts); we introduce inductive rules that enable us to localize faults to an arbitrary level of precision; finally, we argue that to claim that a program part is faulty one must often make an assumption about other program parts (and we find that the claim is only as valid as the assumption). Keywords. Fault, error, failure, specification, correctness, faulty program, refinement. Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF.
-
-
-
Compiler-directed design of memory hierarchy for embedded systems
In embedded real-time communication and multimedia processing applications, the manipulation of large amounts of data has a major effect on both the power consumption and the performance of the system. Due to the significant amount of data transferred between the processing units and the large, energy-consuming off-chip memories, these applications are often called data-dominated or data-intensive. Providing sufficient bandwidth to sustain fast and energy-efficient program execution is a challenge for system designers: due to the growing speed gap between processors and memories, the performance of the whole VLSI system will mainly depend on the memory subsystem whenever memory is unable to provide data and instructions at the pace required by the processor. This effect is sometimes referred to in the literature as the memory wall problem. At the system level, the power cost can be reduced by introducing an optimized custom memory hierarchy that exploits temporal data locality. Hierarchical memory organizations reduce energy consumption by exploiting the non-uniformity of memory accesses: the reduction is achieved by assigning frequently-accessed data to the lower hierarchy levels, a key problem being how to optimally assign the data to the memory layers. This hierarchical assignment diminishes the dynamic energy consumption of the memory subsystem, which grows with the number of memory accesses. It also diminishes the static energy consumption, since static energy decreases monotonically with memory size. Moreover, within a given memory hierarchy level, power can be reduced by memory banking, whose principle is to divide the address space into several smaller blocks and to map these blocks to physical memory banks that can be independently enabled and disabled. Memory partitioning is also a performance-oriented optimization strategy, because of the reduced latency of accessing smaller memory blocks.
Arbitrarily fine partitioning is prevented because an excessively large number of small banks is area-inefficient and imposes a severe wiring overhead, which increases communication power and decreases performance. This presentation will introduce an electronic design automation (EDA) methodology for the design of hierarchical memory architectures in embedded data-intensive applications, mainly in the area of multidimensional signal processing. The input of this memory management framework is the behavioral specification of the application, which is assumed to be procedural and affine. Figure 1 shows an illustrative example of a behavioral specification with 6 nested loops. The framework employs a formal model operating with integral polyhedra, using techniques specific to the data-dependence analysis employed in modern compilers. Differently from previous works, three optimization problems are addressed in a consistent way, based on the same formal model: the assignment of data to the memory layers (on-chip scratch-pad memory and off-chip DRAM), the mapping of multidimensional signals to the physical memories, and the banking of the on-chip memory (see Figure 2). The main design target is the reduction of the static and dynamic energy consumption in the memory subsystem, but the same formal model and algorithmic principles can be applied to the reduction of the overall memory access time, or to combinations of these design goals.
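To make the layer-assignment problem concrete, the sketch below places signals into a scratch-pad memory by a simple greedy heuristic (most accesses per byte first, subject to capacity). All names and numbers are hypothetical, and the heuristic only illustrates the problem; the framework described above solves it with a polyhedral formal model, not this greedy rule.

```python
# Illustrative greedy assignment of signals to an on-chip scratch-pad memory
# (SPM): keep the most frequently accessed data on-chip, subject to capacity.
signals = [                      # (name, size in bytes, access count)
    ("coeffs", 2048, 900_000),
    ("frame",  8192, 500_000),
    ("temp",   1024, 120_000),
    ("table", 16384,  40_000),
]
SPM_CAPACITY = 12 * 1024

def assign_layers(signals, capacity):
    spm, dram, used = [], [], 0
    # Rank by accesses per byte: the benefit of keeping a signal on-chip.
    for name, size, accesses in sorted(signals,
                                       key=lambda s: s[2] / s[1],
                                       reverse=True):
        if used + size <= capacity:
            spm.append(name)
            used += size
        else:
            dram.append(name)
    return spm, dram

spm, dram = assign_layers(signals, SPM_CAPACITY)
```

Dynamic energy falls because the high-access signals now hit the small on-chip memory; the large, rarely touched table stays in off-chip DRAM.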
-
-
-
Dynamic simulation of internal logistics in aluminum production processes
The production of aluminum products, like other metallurgical industries, comprises a large variety of material processing steps. These processes require a multitude of material handling operations, buffers and transports to interconnect single process steps within aluminum smelters or downstream manufacturing plants such as rolling mills or extrusion plants. On the one hand, the electrolysis process, as the core process of primary aluminum production, requires material inputs amounting to more than the volume of metal produced. On the other hand, downstream manufacturing processes show an enormous variation of mechanical properties and surface qualities and comprise many fabrication steps, including forming, heat treatment and finishing, that can appear in an arbitrary order. Therefore the internal logistics composing the entire internal material flow of one production facility are increasingly regarded as a key success factor for efficient production processes as part of supply-chain management. Dynamic simulations based on discrete event simulation models can be effective tools to support planning processes along the entire value chain in aluminum production plants. Logistics simulation models ideally also accompany improvement and modernization measures and the design of new production facilities, to quantify the resulting overall equipment effectiveness and the reduction of energy consumption and emissions. Hydro Aluminium has a long history of solving logistic challenges using simulation tools. Limitations of former models have been the starting point for a further development of simulation tools based on more flexible models. They address the streamlining of operations and transportation, in particular in aluminum smelters, and material flow problems, as well as new plant concepts in support of investment decisions.
This presentation first gives a brief introduction to the main upstream and downstream processes of aluminum production, to explain the different driving forces of material flow. Second, the principles of mapping the specific material flow of aluminum smelters and of typical downstream manufacturing plants are outlined. Examples demonstrate the benefit of a systematic modeling approach.
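The discrete-event approach mentioned above can be sketched in a few lines: events are kept in a time-ordered queue and processed in order. The example below models a single hypothetical transport step (casts queueing for one crane); the times and the one-resource setup are invented for illustration and bear no relation to the actual plant models.

```python
import heapq

# Minimal discrete-event sketch: casts arrive, wait for a single crane,
# and are moved to a buffer. Each move takes a fixed time.
def simulate(arrivals, move_time):
    events = [(t, "arrive") for t in arrivals]
    heapq.heapify(events)                   # time-ordered event queue
    crane_free_at = 0.0
    completions = []
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, crane_free_at)   # queue if the crane is busy
            crane_free_at = start + move_time
            completions.append(crane_free_at)
    return completions

done = simulate(arrivals=[0.0, 1.0, 2.0], move_time=5.0)
```

Even this toy model exposes the queueing effect that full plant simulations quantify: arrivals faster than the crane's cycle time accumulate waiting time.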
-
-
-
A robust method for line and word segmentation in handwritten text
Line and word segmentation is a key step in any document image analysis system. It can be used, for instance, in handwriting recognition to separate words before their recognition. Line segmentation can also serve as a prior step to extracting the geometric characteristics of lines, which are unique to each writer. Text line and word segmentation is not an easy task because of the following problems: 1) text lines do not all have the same direction in a handwritten document; 2) text lines are not always horizontal, which makes their separation more difficult; 3) characters may overlap between successive text lines; 4) it is often difficult to distinguish between inter- and intra-word distances. In our method, line segmentation is performed on a smoothed version of the handwritten document, which makes it possible to detect the main line components using a subsequent thresholding algorithm. Each connected component of the resulting image is then assigned a separate label, which represents a line component. Then, each text region that intersects exactly one line component is assigned the label of that line component. The Voronoi diagram of the image thus obtained is then computed in order to label the remaining text pixels. Word segmentation is performed by computing a generalized Chamfer distance in which the horizontal distance is slightly favored. This distance is subsequently smoothed in order to reflect the distances between word components and neglect the distance to dots and diacritics. Word segmentation is then performed by thresholding the distance thus obtained. The threshold depends on the characteristics of the handwriting. We have therefore computed several features in order to predict it, including the sum of maximum distances within each line component, the number of connected components within the document, and the average width and height of lines.
The optimal threshold is then obtained by training a linear regression on those features over a training set of about 100 documents. The method achieved the best performance on the ICFHR Handwriting Segmentation Contest dataset, reaching a matching score of 97.4% on line segmentation and 91% on word segmentation. The method has also been tested on the QUWI Arabic dataset, reaching 97.1% on line segmentation and 49.6% on word segmentation. The relatively low performance of word segmentation on Arabic script is due to the fact that words are much closer to each other than in English script. The proposed method tackles most of the problems of line and word segmentation and achieves high segmentation results. It could, however, be improved by combining it with a handwriting recognizer that eliminates words which are not recognized.
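The threshold-prediction step can be illustrated with a one-feature regression. This is only a sketch under simplifying assumptions: the real model regresses on several features over about 100 documents, while here a single hypothetical feature (average line height) and invented training values are used.

```python
# Fit threshold = slope * feature + intercept by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx          # (slope, intercept)

avg_line_height = [20.0, 30.0, 40.0]       # hypothetical training feature
best_threshold  = [10.0, 15.0, 20.0]       # manually tuned thresholds

slope, intercept = fit_line(avg_line_height, best_threshold)
predicted = slope * 25.0 + intercept       # threshold for a new document
```

At segmentation time, the predicted value replaces a fixed global threshold, adapting the word/space decision to each writer's handwriting scale.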
-
-
-
Optimizing Qatar steel supply chain management system
We have developed a linear programming formulation to describe the Qatar steel manufacturing supply chain from suppliers to consumers. The purpose of the model is to provide answers regarding the optimal amount of raw materials to be requested from suppliers, the optimal amount of finished products to be delivered to each customer, and the optimal inventory level of raw materials. The model is validated and solved using the GAMS software. Sensitivity analysis on the proposed model is conducted in order to draw useful conclusions regarding the factors that play the most important role in the efficiency of the supply chain. In the second part of the project, we have built a simulation model to produce a specific set of Key Performance Indicators (KPIs). The KPIs are developed to characterize the supply chain performance in terms of responsiveness, efficiency, and productivity/utilization. The model is programmed using the WITNESS simulation software. The developed QS WITNESS simulation model aims to assess and validate the current status of the supply chain performance in terms of a set of KPIs, taking into consideration the key deterministic and stochastic factors, from suppliers and production plant processes to distributors and consumers. Finally, a simulated annealing algorithm has been developed that will be used to set the model variables to achieve a multi-criteria tradeoff among the defined supply chain KPIs.
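The simulated annealing step can be sketched as below. The cost function, variable, and parameters here are all invented stand-ins: a single decision variable with a toy quadratic KPI aggregate, whereas the actual algorithm tunes several model variables against the simulated KPIs.

```python
import math, random

def cost(x):
    # Toy KPI aggregate with a minimum at x = 37 (e.g. a reorder point
    # trading off holding cost against shortages). Purely illustrative.
    return (x - 37.0) ** 2 + 5.0

def anneal(x, temp=100.0, cooling=0.95, steps=2000, seed=1):
    rng = random.Random(seed)
    cur, cur_c = x, cost(x)
    best, best_c = cur, cur_c
    for _ in range(steps):
        cand = cur + rng.uniform(-2.0, 2.0)      # random neighbor
        cand_c = cost(cand)
        # Accept improvements always; worse moves with Boltzmann probability.
        if cand_c < cur_c or rng.random() < math.exp((cur_c - cand_c) / temp):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        temp *= cooling                          # cool the schedule
    return best, best_c

best_x, best_cost = anneal(x=0.0)
```

Early high-temperature iterations explore widely (escaping local optima); as the temperature cools, the search settles into the best region found.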
-
-
-
An ultra-wideband RFIC attenuator for communication and radar systems
By Cam Nguyen
Attenuators are extensively employed as amplitude control circuits in communication and radar systems. Flatness, attenuation range, and bandwidth are typical specifications for attenuators. Most attenuators in previous studies rely on the basic topologies of the Pi-, T-, and distributed attenuators. The performance of the Pi- and T-attenuators, however, is affected substantially by the switching performance of the transistors, and it is hard to obtain optimum flatness, attenuation range, and bandwidth with these attenuators. The conventional distributed attenuator also demands a large chip area for large attenuation ranges. We report the design of a new microwave/millimeter-wave CMOS attenuator. A design method is proposed and implemented in the attenuator to improve its flatness, attenuation range, and bandwidth. It is recognized that the Pi- and T-attenuators at a certain attenuation state inherently have an insertion-loss slope that increases with frequency. This response is due to the off-capacitance of the series transistors. On the other hand, distributed attenuators can be designed to have the opposite insertion-loss slope by shortening the transmission lines, because short transmission lines cause the center frequency to shift higher. The proposed design method exploits the two opposite insertion-loss slopes and is implemented with the Pi-, T-, and distributed attenuators in a cascade connection. Over 10-67 GHz, the measured results exhibit an attenuation flatness of 6.8 dB and an attenuation range of 32-42 dB.
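The slope-cancellation idea can be illustrated numerically: in a cascade, insertion losses in dB add, so a stage whose loss rises with frequency and a stage whose loss falls can sum to a flat total. The linear slope coefficients below are invented for illustration and idealized (real stages are not perfectly linear in frequency).

```python
# Idealized slope cancellation in a cascaded attenuator (losses in dB add).
freqs = [10, 20, 30, 40, 50, 60, 67]               # GHz
pi_t_loss = [3.0 + 0.05 * f for f in freqs]        # positive slope stage
dist_loss = [7.0 - 0.05 * f for f in freqs]        # negative slope stage
total_loss = [a + b for a, b in zip(pi_t_loss, dist_loss)]

# Flatness = peak-to-peak variation of the cascaded loss across the band.
flatness = max(total_loss) - min(total_loss)       # 0 dB in this ideal case
```

In practice the slopes match only approximately, which is why the measured design reports a finite (6.8 dB) flatness rather than this ideal zero.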
-
-
-
Multiphase production metering: Benefits from an Industrial data validation and reconciliation approach
By Simon Manson, Mohamed Haouche, Pascal Cheneviere and Philippe Julien, TOTAL Research Center-Qatar at QSTP, Doha, Qatar. Contact: [email protected]
Context and objectives: TOTAL E&P QATAR (TEPQ) is the Operator of the AL-Khaleej offshore oil field under a Production Sharing Agreement (PSA) with Qatar Petroleum. The AL-Khaleej field is characterised by a high water cut (ratio of water over total liquid), which classifies the field as mature. Operating this field safely and cost-effectively requires substantial use of cutting-edge technologies, with strict compliance with internal procedures and standards. Metering's main objective is to deliver accurate and close to real-time production data from online measurements, allowing the optimization of operations and the mitigation of potential HSE-related risks.
Solution: The solution tested by TEPQ is based on a Data Validation and Reconciliation (DVR) approach. This approach is well known in the hydrocarbon downstream sector and in power plants. Its added value lies mainly in the automatic reconciliation of several data sources, originating not only from the multiphase flow meters but also from other process parameters. The expected result of this approach is an improvement in data accuracy and increased data availability for operational teams. A DVR pilot has been implemented in the AL-Khaleej field for multiphase flow determination. It automatically performs online data acquisition, data processing and daily reporting, through a user-friendly interface.
Results: The communication presents the latest findings obtained from the DVR approach. A sensitivity analysis has been performed to highlight the impact of potentially biased data on the integrated production system and on the oil and water production rates.
These findings are of high importance for trouble-shooting diagnostics, to identify the source (instruments, process models, etc.) of a malfunction and to define remedial solutions. Oil and water production data, with their relative uncertainties, are presented to illustrate the benefits of the DVR approach in challenging production conditions. Conclusions: The main benefits of the DVR approach and its user interface lie in the time saved in data post-processing to obtain automatically reconciled data with better accuracy. In addition, thanks to its error detection capability, the DVR facilitates troubleshooting identification (alarming).
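The core reconciliation idea can be sketched with a toy example: adjust redundant measurements as little as possible (weighted by their variances) so that they satisfy a process constraint, here a simple mass balance oil + water = liquid. The rates and variances below are invented and the single-constraint closed form is a simplification of an industrial DVR system.

```python
# Weighted least-squares reconciliation for one linear constraint
# a.x = 0 with a = [1, 1, -1] (oil + water - liquid = 0):
#   x_hat = x - V a^T (a V a^T)^-1 (a x)
def reconcile(meas, var):
    o, w, L = meas
    vo, vw, vL = var
    r = o + w - L                  # constraint residual of the raw data
    s = vo + vw + vL               # a V a^T for a = [1, 1, -1]
    # Each measurement moves in proportion to its own variance.
    return (o - vo * r / s, w - vw * r / s, L + vL * r / s)

measured = (120.0, 480.0, 610.0)   # m3/h: oil, water, total liquid
variance = (4.0, 16.0, 5.0)        # less trusted meters move more

oil, water, liquid = reconcile(measured, variance)
```

The reconciled rates close the mass balance exactly, and the size of each correction (relative to its uncertainty) is what flags a possibly faulty instrument for trouble-shooting.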
-
-
-
Dynamic and static generation of multimedia tutorials for children with special needs: Using Arabic text processing and ontologies
We propose a multimedia-based learning system to teach children with intellectual disabilities (ID) basic concepts in Science, Math and daily living tasks. The tutorials' pedagogical development is based on Mayer's Cognitive Learning Model combined with Skinner's Behaviorist Operant Conditioning Model. Two types of Arabic tutorials are proposed: (1) Statically generated tutorials, which are pre-designed by special needs instructors, and developed by animation experts. (2) Dynamically generated tutorials, which are developed using natural language processing and ontology building. Dynamic tutorials are generated by processing Arabic text, and using machine learning to query the Google engine and generate multimedia elements, which are automatically updated into an ontology system and are hence used to construct a customized tutorial. Both types of tutorials have shown considerable improvements in the learning process and allowed children with ID to enhance their cognitive skills and become more motivated and proactive in the classroom.
-
-
-
A tyre safety study in Qatar and application of immersive simulators
By Max Renault
In support of Qatar's National Road Safety Strategy and under the umbrella of the National Traffic Safety Committee, Qatar Petroleum's Research and Technology Department, in cooperation with Williams Advanced Engineering, has undertaken a study of the state of tyre safety in the country. This study reviewed the regulatory and legislative frameworks governing tyre usage in the country, and collected data on how tyres are being used by the populace. To understand the state of tyre usage in Qatar, a survey of 239 vehicles undergoing annual inspection was conducted, and an electronic survey querying respondents' knowledge of tyres received 808 responses. The findings identified deficiencies in four key areas: accident data reporting, tyre certification for regional efficacy, usage of balloon tyres, and the public's knowledge of tyres and tyre care. Following completion of this study, Qatar Petroleum has commissioned Williams Advanced Engineering to produce an immersive driving simulator for the dual purposes of research and education. This simulator will provide a platform for research investigations of the effect of tyre performance and failure on vehicle stability; additionally, it will allow road users, in a safe environment, to experience the effects of various tyre conditions, such as a failure, and learn appropriate responses.
-
-
-
Random projections and Haar cascades for accurate real-time vehicle detection and tracking
This paper presents a robust real-time vision framework that detects and tracks vehicles from stationary traffic cameras within certain regions of interest. The framework enables intelligent transportation and road safety applications such as road-occupancy characterization, congestion detection, traffic flow computation, and pedestrian tracking. It consists of three main modules: 1) detection, 2) tracking, and 3) data association. To this end, vehicles are first detected using Haar-like features. In the second phase, a light-weight appearance-based model is built using random projections to keep track of the detected vehicles. The data association module fuses new detections and existing targets for accurate tracking. The practical value of the proposed framework is demonstrated through evaluation on several real-world experiments covering a variety of challenges.
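The random-projection appearance model can be sketched as follows: a high-dimensional patch descriptor is projected through a fixed random Gaussian matrix to a short signature, and signatures are compared by distance. The descriptor dimensions, patch vectors, and sizes below are hypothetical; the paper's actual model details are not reproduced here.

```python
import random

# Fixed random Gaussian projection: 64-dim patch descriptor -> 8-dim signature.
def projection_matrix(rows, cols, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) / cols ** 0.5 for _ in range(cols)]
            for _ in range(rows)]

def project(R, x):
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in R]

def dist2(a, b):                        # squared Euclidean distance
    return sum((u - v) ** 2 for u, v in zip(a, b))

R = projection_matrix(rows=8, cols=64)
patch     = [1.0] * 64                  # tracked vehicle's appearance
same_car  = [1.0] * 60 + [0.9] * 4      # slightly changed view
other_car = [0.0] * 64                  # different object

sig = project(R, patch)                 # compact signature kept per target
```

Because random projections approximately preserve distances, the 8-dim signatures are far cheaper to store and compare per frame while still separating the same vehicle from a different one.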
-
-
-
Conceptual reasoning for consistency assurance in logical deduction and application for critical systems
Reasoning in propositional logic is a key element in software engineering; it is applied in different domains, e.g., specification validation, code checking, and theorem proving. Since reasoning is a basic component in the analysis and verification of different critical systems, significant efforts have been dedicated to improving its efficiency in terms of time complexity, correctness, and generalization to new problems (e.g., the SAT problem, inference rules, inconsistency detection, etc.). We propose a new Conceptual Reasoning Method for an inference engine in which such improvements are achieved by combining the semantic interpretations of a logical formula with the mathematical background of formal concept analysis. In fact, each logical formula is mapped into a truth-table formal context, and any logical deduction is obtained by a Galois connection. More particularly, we combine all truth tables into a global one, which has the advantage of containing the complete knowledge of all deducible rules or, possibly, an eventual inconsistency in the whole system. A first version of the new reasoning system was implemented and applied to medical data. Efficiency in conflict resolution as well as in knowledge expressiveness and reasoning was shown. Serious challenges related to time complexity have been encountered, and further improvements are under investigation. Acknowledgement: This publication was made possible by a grant from the Qatar National Research Fund NPRP04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the QNRF.
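The truth-table view of deduction can be illustrated in a few lines: a formula is deducible from a knowledge base exactly when every row (assignment) satisfying the knowledge base also satisfies it, and the knowledge base is inconsistent when no row satisfies it. This sketch enumerates truth tables directly and omits the formal concept analysis and Galois connection machinery of the actual method; formulas are encoded as plain Python predicates over an assignment dict.

```python
from itertools import product

VARS = ("p", "q", "r")

def rows():
    # Every truth assignment over VARS (the rows of the global truth table).
    for values in product([False, True], repeat=len(VARS)):
        yield dict(zip(VARS, values))

def entails(kb, goal):
    support = [m for m in rows() if kb(m)]
    if not support:
        return "inconsistent"          # no row satisfies the knowledge base
    return all(goal(m) for m in support)

# Knowledge base: (p -> q) and (q -> r), written with material implication.
kb = lambda m: (not m["p"] or m["q"]) and (not m["q"] or m["r"])
derived = entails(kb, lambda m: not m["p"] or m["r"])    # deduce p -> r
```

The same enumeration also detects global inconsistency for free: an empty set of satisfying rows means the combined table encodes a contradiction.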
-
-
-
Alice in the Middle East (Alice ME)
By Saquib Razak
We will present an overview and in-progress report of a QNRF-sponsored project for research on the effects of using program visualization in teaching and learning computational thinking. Computers and computing have made possible incredible leaps of innovation and imagination in solving critical problems such as finding a cure for diseases, predicting a hurricane's path, or landing a spaceship on the moon. The modern economic conditions of countries around the world are increasingly related to their ability to adapt to the digital revolution. This, in turn, drives the need for educated individuals who can bring the power of computing-supported problem solving to an increasingly expanded field of career paths. It is no longer sufficient to wait until students are in college to introduce computational thinking. All of today's students will go on to live in a world that is heavily influenced by computing, and many of these students will work in fields that involve or are influenced by computing. They must begin to work with algorithmic problem solving and computational methods and tools in K-12. Many courses taught in K-12 (for example, math and science) teach problem solving and logical thinking skills. Computational thinking embodies these skills and brings them to bear on problems in other fields and on problems that lie at the intersection of these fields. In the same way that learning to read opens a gateway to learning a multitude of knowledge, learning to program opens a gateway to learning all things computational. Alice is a programming environment designed to enable novice programmers to create 3-D virtual worlds, including animations and games. In Alice, 3-D models of objects (e.g., people, animals and vehicles) populate a virtual world, and students use a drag-and-drop editor to manipulate the movement and activities of these objects.
Alice for the Middle East ("Alice ME") is a research project funded by the Qatar National Research Fund (QNRF) under its National Priorities Research Program (NPRP) that aims to modify the well-known Alice introductory programming software to be internationally and culturally relevant for the Middle East. In this project, we are developing new 3D models that provide animation objects encountered in daily life (animals, buildings, clothing, vehicles, etc.) as well as artifacts that reflect and respect the heritage of Qatari culture. The new models will provide characters and images that enable students to create animations that tell the stories of local culture, thereby supporting the QNV 2030 aspiration of maintaining a balance between modernization and the preservation of traditions. We are also working with local schools to develop an ICT curriculum centered on using Alice to help students learn computational thinking, and designing workshops to train teachers in using Alice and delivering this curriculum.
-
-
-
Awareness of the farmers about effective delivery of farm information by ICT mediated extension service in Bangladesh
The main focus of the study was to find out the level of awareness about the effectiveness of ICT-mediated extension services in disseminating farm information to farmers. The factors influencing farmers' awareness and the problems faced by farmers in getting farm information were also explored. Data were collected from a sample of 100 farmers out of 700. A structured interview schedule and check list were used to collect data through face-to-face interviews and focus group discussions (FGD) with the farmers during May to June 2013. Awareness was measured using a 4-point rating scale: appropriate weights were assigned to each of the responses, and the awareness score was calculated by summing the weights. Thus, the awareness scores of the respondents ranged from 16 to 50 against the possible range of 0 to 64. The effectiveness of ICT was considered based on the amount of information supplied, the acceptability of the information, the usage of the information, and the outcome/benefit obtained by the farmers from using the information. About three-fourths (74 percent) of the farmers had moderate awareness, while almost one-fourth (23 percent) had low and only 3 percent had high awareness about effective delivery of farm information by ICT centers. The level of education, farm size, family size, annual income, training exposure, organizational participation and extension media contact of the farmers were significantly correlated with their awareness of effective delivery of farm information. A stepwise multiple regression analysis showed that, out of 9 variables, four (organizational participation, annual income, farm size and family size) combined accounted for 47.90 percent of the total variation in awareness of effective delivery of farm information.
Inadequate services rendered by field extension agents, frequent power disruption, lack of skilled manpower (extension agents) at ICT centers, lack of training facilities for the farmers, and poor supervision and monitoring of field extension activities by superior officers were the major problems mentioned by the farmers regarding effective dissemination of farm information by the ICT-mediated extension service. Key words: Awareness, ICT mediated extension service, effective delivery
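The scoring procedure can be sketched as below, assuming 16 items each weighted 0 to 4 (consistent with the stated possible range of 0 to 64). The weight mapping and the low/moderate/high cut-offs are assumptions for illustration; the study does not state them.

```python
# Illustrative awareness-score computation: weighted responses are summed.
WEIGHTS = {"not at all": 0, "low": 1, "medium": 2, "high": 4}

def awareness_score(responses):
    return sum(WEIGHTS[r] for r in responses)

def category(score):                    # hypothetical cut-offs
    if score < 22:
        return "low"
    if score <= 43:
        return "moderate"
    return "high"

# One hypothetical respondent's answers to 16 items.
responses = ["medium"] * 10 + ["high"] * 4 + ["low"] * 2
score = awareness_score(responses)      # 10*2 + 4*4 + 2*1 = 38
```

With these assumed cut-offs, a score of 38 falls in the "moderate" band, the category reported for about three-fourths of the respondents.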
-
-
-
Translate or transliterate? Modeling the decision for English to Arabic machine translation
By Mahmoud Azab
Translation of named entities (NEs) is important for NLP applications such as Machine Translation (MT) and Cross-lingual Information Retrieval. For MT, named entities are a major subset of the out-of-vocabulary terms. Due to their diversity, they cannot always be found in parallel corpora, dictionaries or gazetteers. Thus, state-of-the-art MT systems need to handle NEs in specific ways: (i) direct translation, which results in missing many out-of-vocabulary terms, and (ii) blind transliteration of out-of-vocabulary terms, which does not necessarily contribute to translation adequacy and may actually create noisy contexts for the language model and the decoder. For example, in the sentence "Dudley North visits North London", the MT system is expected to transliterate "North" in the former case, and translate "North" in the latter. In this work, we present a classification-based framework that enables an MT system to automate the decision of translation vs. transliteration for different categories of NEs. We model the decision as a binary classification at the token level: each token within a named entity gets a decision label, to be translated or transliterated. Training the classifier requires a set of NEs with token-level decision labels. For this purpose, we automatically construct a bilingual lexicon of NEs paired with translation/transliteration decisions from two different domains: we heuristically extract and label parallel NEs from a large word-aligned parallel news corpus, and we use a lexicon of bilingual NEs collected from Arabic and English Wikipedia titles. We then designed a procedure to clean up the noisy Arabic NE spans by part-of-speech verification, heuristically filtering impossible items (e.g. verbs). For training, the data is automatically annotated using a variant of edit distance measuring the similarity between an English word and its Arabic transliteration.
For the test set, we manually reviewed the labels and fixed the incorrect ones. As part of our project, this bilingual corpus of named entities has been released to the research community. Using Support Vector Machines, we trained the classifier on a set of token-based, contextual and semantic features of the NEs. We evaluated our classifier both in the limited news domain and in the diverse Wikipedia domain, and achieved a promising accuracy of 89.1%. To study the utility of using our classifier in an English to Arabic statistical MT system, we deployed it as a pre-translation component of the MT system. We automatically located the NEs in the source language sentences and used the classifier to find those which should be transliterated. For such terms, we offer the transliterated form as an option to the decoder. Adding the classifier to the SMT pipeline resulted in a major reduction of out-of-vocabulary terms and a modest improvement of the BLEU score. This research is supported by the Qatar National Research Fund (a member of the Qatar Foundation) through grants NPRP-09-1140-1-177 and YSREP-1-018-1-004. The statements made herein are solely the responsibility of the authors.
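The edit-distance-based automatic labeling can be sketched as follows: a token is marked "transliterate" when a romanization of its Arabic counterpart is close to the English token under normalized edit distance. The threshold, the romanizations, and the use of plain Levenshtein distance are assumptions for illustration; the paper uses a variant of edit distance tailored to English/Arabic transliteration.

```python
# Standard Levenshtein distance via dynamic programming.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def label(en_token, ar_romanized, threshold=0.34):
    # Normalized distance below the threshold -> phonetic rendering,
    # i.e. the token pair is a transliteration.
    d = edit_distance(en_token.lower(), ar_romanized.lower())
    ratio = d / max(len(en_token), len(ar_romanized))
    return "transliterate" if ratio <= threshold else "translate"

# "Dudley" is rendered phonetically in Arabic; directional "North" is not.
lab1 = label("Dudley", "dudli")
lab2 = label("North", "shamal")
```

These noisy automatic labels are what the SVM classifier is then trained on, with the manually reviewed test set providing clean evaluation labels.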
-
-
-
Technology tools for enhancing English literacy skills
By Mary Dias
The goal of this work is to explore the role of technology tools in enhancing the teaching and learning processes for English as a foreign or second language. Literacy is a crucial skill that is often linked to quality of life. However, access to literacy is not universal. Therefore, the significance of this research is its potential impact on the global challenge of improving child and adult literacy rates. Today's globalized world often demands strong English literacy skills for success, because the language of instruction and business is frequently English. Even in Qatar's efforts to create a knowledge economy, Education City was established with the majority of instruction in English. Moreover, NGOs such as Reach Out to Asia are partnering with Education City universities to teach English literacy to migrant laborers in Qatar. Many migrant workers reside and work in Qatar for many years and can often improve their job prospects if they speak and understand English. However, Qatar's English literacy problems are not limited to the migrant population. The latest published (2009) results of PISA (the Programme for International Student Assessment) show that 15-year-olds in Qatar are for the most part at Level 1 of six proficiency levels in literacy. Qatar placed among the lowest of the 65 countries that participated in this PISA assessment. Several research groups have developed technology to enhance literacy skills and improve motivation for learning. Educational games are in increasing demand and are now incorporated into formal education programs. Since the effectiveness of technology for language learning depends on how it is used, literacy experts have identified the need for research on appropriate ways and contexts in which to apply technology. Our work shares some goals with the related work, but there are also significant differences.
Most educational games and tools are informed by experts on teaching English skills, focused on the students, and act as fixed stand-alone tools that are used outside the school environment. In contrast, our work is designed to complement classroom activities and to allow for customization while remaining cost effective. As such, it seeks to engage parents, teachers, game developers and other experts to benefit and engage learners. Through this work, we engage with different learner populations ranging from children to adults. Expected outcomes of our work include the design, implementation, and testing of accessible and effective computing technology for enhancing English literacy skills among learners across the world. This suite of computer-based and mobile phone-based tools is designed to take into account user needs, local constraints, cultural factors, available resources, and existing infrastructure. We field-test and evaluate our literacy tools and games in several communities in Qatar and in the United States. Through this work, we are advancing the state-of-the-art in computer-assisted language learning and improving the understanding of educational techniques for improving literacy. Our presentation will provide an overview of the motivation for this work, an introduction to our user groups, a summary of the research outcomes of this work to date, and an outline of future work.
Exploiting space syntax for deployable mobile opportunistic networking
There are many cities where urbanization occurs at a faster rate than communication infrastructure can be deployed. Mobile users with sophisticated devices are often dissatisfied with this lag in infrastructure deployment: their Internet connection is via opportunistic open access points for short durations, or via weak, unreliable, and costly 3G connections. With increased demands on network infrastructure, we believe that opportunistic networking, where user mobility is exploited to increase capacity and augment Internet reachability, can play an active role as a complementary technology to improve user experience, particularly for delay-insensitive data. Opportunistic forwarding solutions have mainly been designed under sets of assumptions that have grown in complexity, rendering them unusable outside their intended environments. Figure 1 categorizes sample state-of-the-art opportunistic forwarding solutions based on their assumption complexity. Most of these solutions, however, are not designed for large-scale urban environments. In this work, we believe we are the first to exploit the space syntax paradigm to better guide forwarding decisions in large-scale urban environments. Space syntax, initially proposed in the field of architecture to model natural mobility patterns by analyzing spatial configurations, offers a set of measurable metrics that quantify the effect of road maps and architectural configurations on natural movement. By modeling interaction with the pre-built static environment, space syntax predicts natural movement patterns in a given area. Our goal is to leverage space syntax concepts to create efficient distributed opportunistic forwarding solutions for large-scale urban environments. We address two communication themes: (1) Mobile-to-Infrastructure: we propose a set of space syntax based algorithms that adapt to a spectrum of simplistic assumptions in urban environments.
As depicted in Figure 1, our goal is to gain performance improvements across the spectrum, within each assumption category, when compared to other state-of-the-art solutions. We adopt a data-driven approach to evaluate the space syntax based forwarding algorithms we propose, within each of three assumption categories, using large-scale mobility traces capturing vehicle mobility. Overall, our results show that our space syntax based algorithms perform more efficiently within each assumption category. (2) Infrastructure-to-Mobile: we propose a new algorithm, Select&Spray, which leverages space syntax and enables data transfers to mobile destinations reached directly through the infrastructure or opportunistically via other nodes. This architecture consists of: (i) a select engine that identifies a subset of directly connected nodes with a higher probability of forwarding messages to destinations, and (ii) a spray engine residing on mobile devices that guides the opportunistic dissemination of messages towards destination devices. We evaluate our algorithm using several mobility traces. Our results show that Select&Spray is more efficient in guiding messages towards their destinations: it helps extend the reach of data dissemination to more than 20% of the interested destinations within very short delays, and successfully reaches almost 90% of the destinations in less than 5 minutes.
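The core forwarding idea can be sketched as a greedy next-hop rule driven by space-syntax scores. This is an illustrative reconstruction, not the authors' actual algorithm; the `Node` fields and integration values are hypothetical stand-ins for a real space syntax analysis of the road map:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    segment: str  # id of the street segment the node currently travels on

def choose_next_hop(carrier, neighbors, integration):
    """Greedy rule: hand the message to the encountered neighbor whose
    street segment has the highest space-syntax integration score
    (a proxy for natural movement toward well-connected areas);
    keep the message if no neighbor improves on the carrier."""
    best, best_score = carrier, integration[carrier.segment]
    for n in neighbors:
        if integration[n.segment] > best_score:
            best, best_score = n, integration[n.segment]
    return best

# Toy encounter: integration scores would come from a space syntax
# analysis (e.g., axial-line integration values) of the urban area.
integration = {"s1": 0.4, "s2": 0.9, "s3": 0.2}
carrier = Node("a", "s1")
next_hop = choose_next_hop(carrier, [Node("b", "s2"), Node("c", "s3")], integration)
```

Here the message moves to the node on the better-integrated segment; with no neighbors in range, the carrier simply keeps it.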
Face Detection Using Minimum Distance with Sequence Procedure Approach
By Sayed Hamdy
In recent years, face recognition has received substantial attention from both research communities and the market, but it remains very challenging in real applications. Many face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, categorized into appearance-based and model-based schemes. In this paper we present a new method for face detection called the Minimum Distance Detection Approach (MDDA). The obtained results clearly confirm the efficiency of the developed model compared to other methods in terms of classification accuracy. It is also observed that the new method is a powerful feature selection tool that has identified a subset of the most discriminative features. Additionally, the proposed model has gained a great deal of efficiency in terms of CPU time owing to its parallel implementation. In this model we use a direct model for face detection without using unlabelled data. In this research we try to identify one sample from a group of unknown samples using a sequence of processes. The results show that this method is very effective when we use a large sample of unlabelled data to detect one sample.
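The abstract does not detail MDDA's internals; as the name suggests, a classical minimum-distance (nearest-centroid) classifier underlies this family of methods, and can be sketched as follows (the two-dimensional feature values are purely illustrative):

```python
import math

def train_centroids(samples):
    """samples: {label: list of feature vectors} -> per-class mean vectors."""
    centroids = {}
    for label, vecs in samples.items():
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return centroids

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Toy training data: in practice the vectors would be face features.
cents = train_centroids({"face": [[1.0, 1.0], [1.2, 0.8]],
                         "non-face": [[5.0, 5.0], [4.8, 5.2]]})
label = classify([1.1, 0.9], cents)
```

A new sample is simply assigned to whichever class mean it lies closest to, which is what makes the approach cheap enough to parallelize.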
Analysis Of Energy Consumption Fairness In Video Sensor Networks
The use of more effective processing tools such as advanced video codecs in wireless sensor networks (WSNs) has enabled widespread adoption of video-based WSNs for monitoring and surveillance applications. Considering that video-based WSN applications require large amounts of energy for both compression and transmission of video content, optimizing energy consumption is of paramount importance. There is a trade-off between encoding complexity and compression performance, in the sense that high compression efficiency comes at the expense of increased encoding complexity. On the other hand, there is a direct relationship between coding complexity and energy consumption. Since the nodes in a video sensor network (VSN) share the same wireless medium, there is also an issue of fairness in the bandwidth allocated to each node. Moreover, the fairness of resource allocation (encoding and transmission energy) for nodes placed at different locations in a VSN has a significant effect on energy consumption. Our objective is to maximize the lifetime of the network by reducing the consumption of the node with the maximum energy usage. Our research focuses on VSNs with a linear topology, where the nth node relays its data through the (n-1)th node, and the node closest to the sink relays information from all the other nodes. In our approach, we analyze the relation between the fairness of nodes' resource allocation, video quality, and the VSN's energy consumption, and propose an algorithm for adjusting the coding parameters and fairness ratio of each node such that energy consumption is balanced. Our results show that by allocating higher fairness ratios to the nodes closest to the sink, we reduce the maximum energy consumption and achieve a more balanced energy usage.
For instance, in the case of a VSN with six nodes, by allocating fairness ratios between 0.17 and 0.3 to the nodes closer to the sink, the maximum energy consumption is reduced by 11.28%, with a standard deviation of nodes' energy consumption (STDen) of 0.09 W, compared to 0.25 W achieved by the maximum-fairness scheme.
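The relaying asymmetry can be illustrated under a simplified, hypothetical cost model (encoding energy proportional to a node's own rate, transmission energy to its own plus relayed traffic; coefficients and the mapping from fairness ratios to rates are assumptions, not the paper's algorithm):

```python
import statistics

def node_energies(rates, e_enc=1.0, e_tx=0.5):
    """Linear VSN: node 0 is closest to the sink and relays the traffic
    of every node behind it. Returns per-node energy under a simplified
    cost model (e_enc, e_tx are illustrative coefficients)."""
    return [e_enc * rates[i] + e_tx * sum(rates[i:]) for i in range(len(rates))]

# Same total rate (6.0) in both allocations, but the second assigns
# lower video rates to the nodes nearest the sink, which carry the
# relaying burden.
equal  = node_energies([1.0] * 6)
skewed = node_energies([0.5, 0.7, 0.9, 1.1, 1.3, 1.5])

max_equal, max_skewed = max(equal), max(skewed)
std_equal, std_skewed = statistics.pstdev(equal), statistics.pstdev(skewed)
```

Even in this toy model, shifting rate away from the sink-adjacent relays lowers both the peak energy draw and the spread of energy use across nodes, which is the balancing effect the abstract reports.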
Quantifying The Cybersecurity Of Cloud Computing As A Mean Failure Cost
Exploration Of Optical Character Recognition Of Historical Arabic Documents
Over 70,000 historical books exist in Qatar's heritage book collection, which forms an invaluable part of the Arabic heritage. The digitization of these books will help to improve accessibility to these texts while ensuring their preservation. The aim of this project is to explore Optical Character Recognition (OCR) techniques for digitizing historical Arabic texts. In this project, techniques for improving the OCR pipeline were explored in three stages. First, an exploration of page layout analysis was conducted. Next, new Arabic recognition models were built and their recognition rates were analyzed. Finally, an analysis of using various language models for OCR was conducted. An important initial step in the OCR pipeline is page layout analysis, which requires the identification and classification of regions of interest on a scanned page. In many historic Arabic texts, scholars have written side notes in the page margins which add significant value to the main content. Thus, techniques were explored to improve the identification of side notes during the digitization of historic Arabic texts. First, an evaluation of text line segmentation was conducted using two notable open source OCR packages: OCRopus and Tesseract. Next, geometric layout analysis techniques were explored using OCRopus and MATLAB to identify text line orientation. After layout analysis, the next step in the OCR pipeline is the recognition of words and characters from the text lines segmented in the page layout step. OCRopus was the main open source OCR software analyzed, which directly extracted the characters from the segmented lines. A number of character recognition models were created for extensive training of the OCRopus system. The historical Arabic text data was then tested on the trained OCRopus models to calculate character recognition rates.
Additionally, another open source tool, IMPACT D-TR4.1, was tested to check the accuracy of clustering within the characters of the historical Arabic text. A later stage in OCR, after the recognition of characters, is word boundary identification. In written Arabic, spaces appear between individual words, and possibly within a word, which makes the word boundary identification problem difficult. This part of the project assumes character-level OCR and proceeds from there. For a given stream of characters, word boundaries are identified using the perplexities of a language model (LM), at the character level and the word level. The character-level language model is explored in two ways: the first approach uses the segmentation program provided by the SRILM toolkit (Stolcke, 2002); the second maps segmentation to an SMT problem and uses MOSES. The word-level language model is also explored in two ways: the first is a naive approach, where all possible prior word boundaries are explored per word and the one with the highest probability is chosen; the second uses dynamic programming to find the overall boundary placement that minimizes cost, i.e., maximizes probability. This work is the result of a project done at QCRI's 2013 summer internship program.
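The dynamic-programming variant of word boundary placement can be sketched with a toy unigram model; the real project used full LMs via SRILM and MOSES, and the vocabulary and probabilities below are purely illustrative:

```python
import math

# Toy unigram word model standing in for a trained LM.
logp = {"the": math.log(0.4), "cat": math.log(0.3),
        "theca": math.log(0.01), "t": math.log(0.05)}
OOV = math.log(1e-6)  # penalty for out-of-vocabulary spans

def segment(chars):
    """best[i] = highest log-probability segmentation of chars[:i];
    backpointers recover the chosen word boundaries."""
    n = len(chars)
    best = [0.0] + [-math.inf] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - 10), i):  # cap candidate word length at 10
            w = chars[j:i]
            score = best[j] + logp.get(w, OOV)
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(chars[back[i]:i])
        i = back[i]
    return words[::-1]
```

With this model, `segment("thecat")` prefers the boundary after "the" because log P(the) + log P(cat) beats every competing split, which is exactly the maximize-probability criterion described above.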
Contextual Spellchecker To Improve Human-Robot Interaction
This work focuses on developing a contextual spellchecker to improve the correctness of input queries to a multi-lingual, cross-cultural robot receptionist system. Queries that have fewer misspellings will improve the robot's ability to answer them and, in turn, the effectiveness of the human-robot interaction. We focus on developing an n-gram-model-based contextual spellchecker to correct misspellings and increase the query hit rate of the robot. Our test bed is a bi-lingual, cross-cultural robot receptionist, Hala, deployed in the reception area of Carnegie Mellon University in Qatar. Hala can accept typed input queries in Arabic and English and speak responses in both languages as she interacts with users. All input queries to Hala are logged. These logs allow the study of multi-lingual aspects, the influence of socio-cultural norms, and the nature of human-robot interaction within a multicultural, yet primarily ethnic Arab, setting. A recent statistical analysis has shown that 26.3% of Hala's queries are missed. The missed queries are due either to Hala not having the required answer in her knowledge base or to misspellings; we have measured that 50% are due to misspellings. We designed, developed, and assessed a custom spellchecker based on an n-gram model, focusing our efforts on the English mode of Hala. We trained our system on our existing language corpus of valid input queries, making the spellchecker more specific to Hala. Finally, we adjusted the n in the n-gram model and evaluated the correctness of the spellchecker in the context of Hala. Our system makes use of Hunspell, an engine that uses algorithms based on n-gram similarity, rule- and dictionary-based pronunciation data, and morphological analysis. Misspelled words are passed through the Hunspell spellchecker and the output is a list of possible words.
Utilizing the list of words, we apply our n-gram model to find which word is best suited to a particular context. The model calculates the conditional probability P(w|s) of a word w given the previous sequence of words s, that is, it predicts the next word based on the preceding n-1 words. To assess the effectiveness of our system, we evaluate it using 5 different cases of misspelled-word location. The table below lists our results: 'correct' indicates that the sentence was correctly spellchecked, and 'incorrect' that the sentence did not change after passing through the spellchecker, included transliterated Arabic, or was incorrectly spellchecked, resulting in a loss of semantics. Refer to table. We observed that context makes the spellchecking of a sentence more sensible, resulting in a higher hit rate in Hala's knowledge base. For case 5, despite having more context than the previous cases, the hit rate is lower, because other sources of error were introduced, such as users making use of SMS language or a mixture of English and Arabic. In future work we would like to tackle the above-mentioned problems and also work on a part-of-speech tagging system that would help in correcting real-word mistakes.
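The contextual reranking step can be sketched as follows; a tiny illustrative corpus stands in for Hala's query logs, and Hunspell's candidate list is mocked rather than generated:

```python
from collections import defaultdict

# Build bigram counts from a (toy) corpus of valid logged queries.
bigram = defaultdict(int)
corpus = ["where is the library", "where is the lab", "who is the dean"]
for q in corpus:
    words = q.split()
    for a, b in zip(words, words[1:]):
        bigram[(a, b)] += 1

def pick_candidate(prev_word, candidates):
    """Choose the Hunspell candidate seen most often after prev_word
    in the training corpus (a bigram, i.e. n=2, context model)."""
    return max(candidates, key=lambda c: bigram[(prev_word, c)])

# Mocked Hunspell output for the misspelling "librari" after "the":
best = pick_candidate("the", ["libel", "library", "liberty"])
```

Because "library" is the only candidate ever observed after "the" in the corpus, context resolves the tie that an isolated edit-distance ranking could not.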
Speaker Recognition Using Multimodal Neural Networks And Wavelet Analysis
Building on strengths of indigenous knowledge to promote sustainable development in crisis conditions from community level: The case of Palestine
This study focused on the use of traditional knowledge in promoting sustainable development in crisis conditions. It posed the question: how have successful community-level sustainable development efforts undertaken under crisis conditions drawn upon indigenous knowledge to achieve positive outcomes? The study is a cross-case analysis. The three cases addressed in this study explain some of the ways that indigenous knowledge has played significant positive roles in promoting sustainable development for communities living under crisis conditions in Palestine. Community-based patterns of indigenous knowledge indicated a significant focus on the strengths of local culture, social cohesion, the integration process, and special advantages for policy implementation from the community level as key components of sustainable development in crisis conditions. This study especially focuses on efforts to implement sustainable development in crisis conditions. As the World Commission on Environment and Development, better known as the Brundtland Commission, explained in its seminal report (1987), the core problem for sustainable development is the need to integrate social development, economic development, and environmental protection to ensure "development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (World Commission on Environment and Development 1987, p. 8). And as that report and subsequent studies indicate, too often the social development dimension of this living triangle has been ignored or dramatically undervalued, as those involved in development have concentrated on economic development and, to some extent, environmental protection (World Commission on Environment and Development 1987).
In addition to the important ongoing problem of insufficient focus on social aspects, sustainable development is particularly important, and especially challenging, in crisis conditions that include war, terrorism, and civil disorder and their aftermath. This research specifically considers the challenges of sustainable development in the Palestinian context, with sensitivity to the need to integrate all three elements of the living triangle and with concern for the special challenges presented by efforts to achieve sustainable development in crisis conditions. The study contributes to theory by analyzing common elements from the case studies and providing a set of testable propositions, grounded in those successful experiences, that can be a starting point for building theory. Practically, the study has generated lessons that sustainable development policy implementers and decision makers can draw on when addressing sustainable development in different crisis contexts, such as the aftermath of the so-called "Arab Spring".
Integrated methodological framework for assessing urban open spaces in Doha from inhabitants' reactions to structured evaluations
Authors: Ashraf M. Salama, Fatma Khalfani, Ahood Al-Maimani and Florian Wiedmann
Fast-track urban growth is an important characteristic of the emerging city of Doha, yet very few studies have addressed important aspects of this growth, including the way in which inhabitants comprehend and react to their built environment and the resulting spatial experience. The availability of attractive open spaces is an essential feature of a liveable urban environment for the inhabitants of cities and urban areas. Such importance is sometimes oversimplified when making decisions about land use or discussing the qualities of the built form. In a city characterized by rapid development, urban open spaces in Doha are scattered from its peripheries to its centre. Varying in form, function, and scale, some spaces are located within enclave developments or larger urban interventions, while others represent portions of space within dense urban districts or open waterfronts. The objective of this paper is to investigate different parameters relevant to the qualities of the most important urban open spaces in the city. It adopts a multi-layered research methodology. First, a photo-interview mechanism was implemented in which 100 inhabitants reacted to imagery and the spatial qualities of twelve urban open spaces. Second, a walking-tour assessment procedure was applied to assess the functional, perceptual and social aspects of these spaces. Results indicate correlations between inhabitants' reactions and assessment outcomes pertaining to both positive qualities and shortcomings. Conclusions are developed to offer recommendations for improving existing spaces while envisioning responsive parameters for the design of future urban open spaces.
The oral historian: An infrastructure to support mobile ethnography
Authors: David Bainbridge, Annika Hinze and Sally Jo Cunningham
Qatar's rich Arabic heritage is captured not only in its buildings, artworks and stories, but also in the living memory of its people. Historians and ethnographers work to capture these stories by interviewing people in "oral history" projects, typically one by one. This is, naturally, a slow process, which can only record selected highlights of a people's rich memory. We introduce a digital infrastructure that enables the crowd-sourcing and distribution of Qatar's oral history. The goal is to inspire people to actively participate in shaping their country's historic record. People in Qatar, as well as those living overseas, will be stimulated to participate, thus weaving a rich heritage tapestry available electronically to Qataris and tourists alike. Each user will have our software installed as an app on their smartphone. While moving through their environs, users are prompted to create audio recordings using our app. As in an interview with an ethnographer, they are guided through a set of questions, one at a time. The person acting as the Oral Historian creates the dynamically configurable script of questions and further defines time limits for each answer. If a response is brief, the app prompts for more detail; if a response is overly lengthy, the app prompts for closure on that question. Our software automatically captures some basic metadata: the time, GPS location, and length of the recording. It can further prompt its user for semi-structured metadata, such as descriptions of their surroundings and the period to which the recording refers. Users are also prompted to upload any photos, documents, or videos that might relate to their audio contribution. At the end of a recording, a user is asked whether they wish to provide personal data, e.g., name and age.
The system does not automatically use registration account details, as a user might be accompanied by another person who contributes to the oral history recording. These captured oral histories are grouped into collections and curated by an Oral Historian to prevent misuse and ensure quality. They may additionally decide to periodically publish a new set of questions to registered users, for example to enrich particular topic areas in the collection. The end-user software is available as a smartphone app, whereas the interface for the historian is server-based for ease of use. The underlying infrastructure uses the open source digital library system Greenstone, which has a pedigree of two decades (www.greenstone.org). It is sponsored by UNESCO as part of its "Information for All" programme, and its user interface has been translated into over 50 languages. Recent developments enable mobile phone operation and multimedia content. Greenstone therefore provides an ideal platform to deliver this integrated, mobile infrastructure for crowd-sourcing oral history information. The captured oral histories and any accompanying multimedia artefacts are sent to a central library where Greenstone's content management tools are available to the curating Oral Historian. The items are thus integrated into an evolving public collection that is available to mobile and web users alike.
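The historian's configurable question script and the app's prompting behaviour might be modelled as below; the field names, time thresholds, and metadata keys are assumptions for illustration, not the deployed implementation:

```python
import time

# A hypothetical question script as the Oral Historian might configure it:
# each item carries the question text and per-answer time limits.
script = [
    {"question": "Describe the oldest building you remember in Doha.",
     "min_seconds": 15, "max_seconds": 120},
    {"question": "Who told you stories about it?",
     "min_seconds": 10, "max_seconds": 90},
]

def review_answer(item, duration_seconds):
    """Mimic the app's prompting: ask for more detail if an answer is
    brief, and prompt for closure if it runs overly long."""
    if duration_seconds < item["min_seconds"]:
        return "prompt: could you tell us a little more?"
    if duration_seconds > item["max_seconds"]:
        return "prompt: please bring this answer to a close"
    return "ok"

def make_metadata(duration_seconds, lat, lon):
    """The basic metadata captured automatically with each recording."""
    return {"timestamp": time.time(), "gps": (lat, lon),
            "length_s": duration_seconds}
```

Separating the script from the review logic is what makes the question set "dynamically configurable": the historian can publish a new `script` list without any change to the app code.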
University roots and branches between ‘glocalization’ and ‘mondialisation’: Qatar's (inter)national universities
As in many parts of the world, the tertiary education sector in Qatar is growing rapidly, viewed as key to national development on the path to the "knowledge society". The states of the Islamic world, with a significant but long-obscured past of scientific achievement, are witnessing a contemporary renaissance. The establishment of international offshore, satellite or branch campuses (IBCs) in the Arabian Gulf region emphasizes the dynamism of higher education development there: more than a third of the estimated hundred such university campuses worldwide are located in the region. Within this context of extraordinary expansion of higher education and science, Qatar presents a valuable case of university development against which to test the diffusion of an emerging global model, not only in quantitative but also in qualitative terms. With an abbreviated history of several decades, Qatar's higher education and science policies join two contrasting strategies prevalent in capacity-building attempts worldwide: (1) to match the strongest global exemplars through massive infrastructure investment and direct importation of existing organizational capacity, faculty, and prestige, and (2) to cultivate native human capital through the development of local competence. Thus, university-related and science policy making on the peninsula has been designed to connect directly with global developments while building local capacity in higher education and scientific productivity. Ultimately, the goal is to establish an "indigenous knowledge economy". To what extent has Qatar's two-pronged strategy succeeded in building such bridges? Does the combination of IBCs and a national institution represent a successful and sustainable path for the future of higher education and science in Qatar, and for its neighbors?
The importance of developing the intercultural and pragmatic competence of learners of colloquial Arabic
Language and culture are bound together. In Arabic, courtesy expressions play an important role; consequently, a learner should be aware of them in order to fully master the Arabic language. The current research studies compliment responses in colloquial Arabic and their use when teaching Arabic as a foreign language. The speech act of complimenting was chosen because of the important role it plays in human communication: compliments strengthen solidarity between speakers and are an explicit reflection of cultural values. The first part of the study was a comparative ethnographic study of compliment responses in peninsular Spanish and in Lebanese Arabic. 72 selected members of a Lebanese and a Spanish social network participated in the research. The independent variables were origin, age and gender. In both social networks, parallel communicative situations were created. The participants were linked by kinship or friendship and were paid a compliment on the same topic. Covert recordings were used to register these communicative interactions and create a corpus of natural conversations. Compliment response sequences were analysed following a taxonomy created by the researcher for the specific study of the Spanish and Lebanese corpora. In the Lebanese corpus, formulaic expressions and invocations against the 'evil eye' were used. In Arab and Islamic societies, it is believed that a compliment could attract the 'evil eye' if it is not accompanied by expressions invoking God's protection. In the Spanish corpus, long and detailed explanations were frequently used. In the second part of the research, a corpus of courtesy expressions in colloquial Arabic is being built. The corpus of the current research could serve future studies in the field of Arabic dialectology and sociolinguistics, as it is the first to include all the different Arabic dialects.
The researcher will study the relationship between language and culture in Arab societies. Participants in the second part of the research are female Arab university students with an advanced proficiency level of English and French. The independent variable is origin. Muted videos are the instrument used to collect the data for this study. Three different videos, for compliments about physical appearance, belongings and skills, were recorded in Beirut and Bahrain. Students are requested to recreate the dialogue between the characters in colloquial Arabic. The compliment response sequence is collected through this instrument because it enhances the students' creative freedom. The objectives of the study are: building a corpus of courtesy expressions in colloquial Arabic and conducting a comparative analysis of formulaic responses to compliments across Arabic dialects; and studying whether courtesy expressions are included in textbooks for teaching Arabic as a foreign language and whether they are currently taught in institutions and universities. The results of the present research have pedagogical implications. Courtesy expressions in spoken Arabic are essential and should therefore be introduced in the language classroom through real language examples. Developing pragmatic competence plays an important role in teaching foreign languages, and it helps learners of Arabic to become intercultural speakers.
Cutting-edge research and technological innovations: The Qatar National Historic Environment Record demonstrates excellence in cultural heritage management
Authors: Richard Cuttler, Tobias Tonner, Faisal Abdulla Al Naimi, Lucie Dingwall and Liam Delaney
Since 2008, the Qatar Museums Authority (QMA) and the University of Birmingham have collaborated on a cutting-edge research programme called the Qatar National Historic Environment Record (QNHER), which has made a significant contribution to our understanding of Qatar's diverse cultural heritage resource. Commencing with the analysis of terrestrial and marine remotely sensed data, the project expanded to undertake detailed terrestrial and marine surveys across large parts of the country, recording archaeological sites and palaeoenvironmental remains ranging from the Palaeolithic to modern times. The project was not simply concerned with the collection of heritage data, but with how those data are stored and accessed. After consultation with Qatar's Centre for GIS, the project team designed and developed a custom geospatial web application which integrates a large variety of heritage-related information, including locations, detailed categorisations, descriptions, photographs and survey reports. The system architecture is based around a set of REST- and OGC-compliant web services which can be consumed by various applications. Day-to-day access for all stakeholders is provided via the QNHER Web App client, a fully bilingual Arabic and English HTML5 web application. The system accesses internet resources such as base mapping provided by Google Maps and Bing Maps, has become an invaluable resource for cultural heritage research, management and mitigation, and currently holds over 6,000 cultural heritage records. Future development will add modules for survey, underwater cultural heritage, translation, and web access for educational institutions.
The QNHER geospatial web application has become pivotal in providing evidence-based development control advice for the QMA in the face of rapid urbanisation, highlighting the importance of research, protection and conservation for Qatar's cultural heritage. However, this application has much wider potential than heritage management within Qatar alone. Many other countries around the globe lack the kind of geospatial database that would enable them to manage their heritage. Clearly, the diversity of cultural heritage, site types and chronologies means that simply attempting to transplant a system directly is inappropriate. However, with the input of regional heritage managers, particularly with regard to language and thesauri, the system could be customised to address the needs of cultural resource managers around the world. Most antiquities departments around the globe do not have country-wide georeferenced base mapping or access to geospatial inventories. Access to internet resources has major cost-saving benefits, while providing improved mapping and data visualisation. More importantly, this offers the opportunity for cultural heritage management tools to be established with minimal outlay and training. The broad approach the project has taken and the technological and methodological innovations it has introduced make the QNHER a leader in this field, not only in the Gulf but also in the wider world.
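A client of such REST services might be sketched as follows; the endpoint URL, query parameters, and JSON field names are hypothetical illustrations, not the actual QNHER API:

```python
import json
from urllib.parse import urlencode

BASE = "https://example.org/qnher/api/records"  # placeholder, not the real service

def build_query(bbox, site_type=None):
    """Build a bounding-box query URL in the style of an OGC/REST client.
    bbox is (min_lon, min_lat, max_lon, max_lat)."""
    params = {"bbox": ",".join(str(v) for v in bbox)}
    if site_type:
        params["type"] = site_type
    return BASE + "?" + urlencode(params)

def parse_records(payload):
    """Extract (name, lat, lon) triples from a JSON response body."""
    return [(r["name"], r["lat"], r["lon"])
            for r in json.loads(payload)["records"]]

# Example: query Palaeolithic sites within a box roughly covering Qatar,
# then parse a mocked response (no network call is made here).
url = build_query((50.75, 24.5, 51.7, 26.2), site_type="Palaeolithic")
recs = parse_records('{"records": [{"name": "Site A", "lat": 25.1, "lon": 51.2}]}')
```

Keeping the heritage records behind plain HTTP/JSON services is what lets the same backend serve the bilingual HTML5 client, GIS tools, and any future educational portals.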
Veiling in the courtroom: Muslim women's credibility and perceptions
Authors: Nawal Ammar
This presentation provides systematic evidence on an emergent debate about Muslim women's dress and their perceived credibility as witnesses in a court context. The objective of this research is to understand how Muslim women's dress impacts their perceived credibility. The issue of testifying in western courts while wearing either a head veil (hijab) or a face veil (niqab) has been strongly contested on the grounds that physically seeing the witness's face helps observers judge her credibility (e.g., R. v. N.S., 2010). Canada's Supreme Court (N.S. v. Her Majesty the Queen, et al., 2012) ruled that judges will disallow the niqab "whenever the effect of wearing it would impede an evaluation of the witness's credibility." In 2010, an Australian judge ruled that a Muslim woman must remove her full veil while giving evidence before a jury. In 2007, UK guidance on victims also indicated that the niqab may affect the quality of evidence given in the courtroom. All of these decisions and opinions run contrary to systematic research: psychological studies suggest that nonverbal cues are not only poor indicators of veracity, but among the least useful indicators of deception (e.g., DePaulo et al., 2003; Vrij, 2008). This presentation discusses the results of a research project that examined Muslim women's credibility. Using a quasi-experimental design, three groups of Muslim women lied or told the truth while testifying about a mock crime: 1) women without any coverings (safirat), 2) women wearing hijab (muhjabat), and 3) women wearing niqab (munaqabat). Videos of these women were then shown to audiences who assessed the credibility of the witnesses. The research further explored the witnesses' general perceptions of being an eyewitness within the court system, and to what extent (if any) their dress impacted their perceptions.
The presentation fits within the Social Sciences, Arts and Humanities thematic pillar of Qatar's National Research Strategy. More particularly, it fits within two of the grand challenges: Managing Transition to a Diversified Knowledge-based Society (build a knowledge-based society by emphasizing a robust research culture) and Holistic and Systematic Assessment of the Rapidly Changing Environment (foster motivation, scholarship, and prosperity among Qatari nationals and expatriates, along with cultural accommodations that are in sync with modern practice).
-
-
-
Culture embodied: An anthropological investigation of pregnancy (and loss) in Qatar
Authors: Susie Kilshaw, Kristina Sole, Halima Al Tamimi and Faten El Taher. This paper explores the emergent themes from the first stage of our cross-cultural research (UK and Qatar) into pregnancy and pregnancy loss. It presents a culturally grounded representation of pregnancy and the experience of pregnant women in Qatar. In order to understand the experience of miscarriage in Qatar, it is necessary first to develop an ethnotheory of pregnancy. This research uses the approach and methods of medical anthropology. The project is particularly exciting because of its commitment to interdisciplinary research: it is informed and led by a collaboration between anthropologists and medical doctors. Ethnographic methods provide an in-depth understanding of the experience of pregnancy and pregnancy loss. Our main method is semi-structured interviews, but, true to our anthropological foundation, we are combining this with other forms of data collection. We are observing clinical encounters (doctor's appointments, sonography sessions) and conducting participant observation, such as accompanying women when they shop in preparation for the arrival of their baby. The research is longitudinal and incorporates 12 months of ethnographic fieldwork in Qatar. Part of this involves following 20 pregnant women throughout their pregnancies to better understand their developing pregnancy, their experience of pregnancy, the medical management of the pregnant body, and the development of fetal personhood. Women are interviewed on several occasions, but we are also in contact for more informal knowledge sharing. Women who have recently miscarried (40 in Qatar and 40 in the UK) will also be interviewed. However, this paper focuses on our first cohort: pregnant Qatari women. After 6 months of fieldwork in Qatar, we have identified a number of emergent themes which help us to better understand the social construction of pregnancy in Qatar.
This will then allow us to better understand what happens when a pregnancy is unsuccessful. Here we develop a culturally specific representation of pregnancy in Qatar, including the importance of the fetal environment to the developing fetus and cultural theories of risk (the evil eye, food avoidance). Issues around risk and blame are explored, as these are likely to be activated when a pregnancy is unsuccessful. We also look at the experience of pregnancy and how it is affected by past experiences of pregnancy loss (both stillbirth and miscarriage). The importance of motherhood in Qatar is considered, as it is a central concern for our participants. By exploring these themes we are developing a better understanding of the experience of pregnancy in Qatar, which will enable us to shed light on the impact of pregnancy loss on the mother and those around her.
-
-
-
Geolocated video in Qatar: A media demonstration research project
Authors: John Pavlik and Robert E. Vance. Geolocated video represents an opportunity for innovation in journalism and media. Reported here are the results of a proof-of-concept research project demonstrating and assessing the process of creating geolocated video in journalism and media. Geolocation refers to tagging video or other media content with geographic location information, usually obtained from GPS data. Geolocation is a growing feature of news and media content. It is being used increasingly in photographs and social media, including Twitter posts. Geolocation in video is a relatively new application. Employing a proof-of-concept method, this project demonstrates how geolocated video (using Kinomap technology) serves several purposes in news and media (see Figure 1). First, geolocation allows the content to be automatically uploaded to Google Earth or other mapping software available online. This enables others anywhere to access that content by location. It is an aspect of Big Data, in that it permits mapping or other analysis of geolocated content. Such analysis can reveal a variety of insights about the production of media content. Second, geolocation, in concert with other digital watermarking, provides a useful tool to authenticate video. Geolocation in a digital watermark is a valuable tool to help establish the veracity of video or other content. Geolocation can help document when and where video produced by users, freelancers, or even professionally employed reporters covering a sensitive story was captured. Reporters (or lay citizens) providing smartphone video of an event can use geolocation to help establish time, date and location. Third, geolocation supports freelance media practitioners in protecting copyright or intellectual property rights by helping provide a strong digital watermark that includes their identity and the precise time, date and location the video was captured.
Fourth, geolocated video represents an opportunity for a new approach to storytelling, including storytelling in digital maps. Geolocated videos have been produced and made available on Google Earth. Viewing can occur immersively with mobile or wearable devices (e.g., augmented reality, Google Glass). This project aligns with three of the Qatar Research Grand Challenges. First, it supports Culture, Arts, Heritage, Media and Language within the Arabic Context, providing a medium to foster investment in the nation's legacy in Arabic arts, design, architecture, and cultural programs. Second, it supports Holistic and Systematic Assessment of the Rapidly Changing Environment, exploring the roles of communication (e.g., education, journalism, traditional and social media channels) in fostering awareness of social issues. Third, it supports Sustainable Urbanization (Doha as a smart city), demonstrating a state-of-the-art communications technology especially effective and efficient in addressing location-based communications challenges and needs.
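As a minimal illustration of the kind of metadata geolocated video carries, the sketch below tags a clip with the location and time of capture as a GeoJSON Feature, a format consumed by Google Earth and most web mapping software. All names, URLs and coordinate values here are illustrative assumptions; the abstract does not specify Kinomap's actual data format.

```python
import json

def geolocate_clip(video_url, lat, lon, timestamp, author):
    """Return a GeoJSON Feature tagging a video clip with the location
    and time of capture (all field names here are illustrative)."""
    return {
        "type": "Feature",
        # GeoJSON stores coordinates as [longitude, latitude]
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "video": video_url,
            "captured": timestamp,  # when the footage was shot
            "author": author,       # supports authentication and copyright claims
        },
    }

# Hypothetical clip shot near Doha (coordinates approximate, URL made up).
feature = geolocate_clip(
    "https://example.com/clip.mp4", 25.32, 51.43, "2013-11-24T10:00:00Z", "J. Doe"
)
print(json.dumps(feature))
```

Once serialized, such a record can be converted to KML for display in Google Earth or indexed by location, which is what enables the map-based access and Big Data analysis described above.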
-
-
-
Society and daily life practices in Qatar before the oil industry: a historical study in light of texts and archaeological evidence
A series of archaeological sites in Qatar attest to multifaceted aspects of society and daily life practices on the eve of the oil industry, particularly in the period from the 17th to the mid-20th century. The archaeological walled town of Al-Zubarah, for instance, has been excavated, first in the early 1980s by a Qatari mission, and since 2010 by the University of Copenhagen in partnership with Qatar Museum Authority. Due to its outstanding cultural importance to the common heritage of humanity, the town of Al-Zubarah (from ca. the 17th century to the mid-20th century) has recently been inscribed on the UNESCO World Heritage List. The geostrategic location of the town on the northwestern coast, along with its environmental landscape and physical remains such as the sea port, the fortified canal leading to the former Murayr, and the rich archaeological discoveries, attests to the town's role as a major pearl and trade center in the Gulf region. In addition, the uncovered quarters, palaces, courtyard houses and huts, along with the town mosque, market and other domestic architecture, are essential components of a major Islamic trade center, planned and built according to Islamic law (Shari'a) and local social traditions. Beyond the uncovered architecture, the revealed material culture, particularly the large assemblages of vessels and tools made of different materials for different purposes and originating from various regions, is considered a primary physical source for reconstructing multifaceted aspects of social history, societal inter-relations, daily life practices and contact with neighboring and distant cultures.
In light of texts, the archaeological record and the author's field observations, particularly during his excavation at Al-Zubarah, this paper endeavors to reconstruct Qatari society and daily life practices in the period from the 17th through the mid-20th century, focusing on the following points:
- Society's and gender's immediate needs in light of the uncovered architecture, in the context of Islamic law (Shari'a) and local traditions.
- Communal identity and daily life practices in light of the uncovered material culture, such as tools and vessels.
- Evidence of contact with surrounding regions and cultures.
-
-
-
Different trajectories to undergraduate literacy development: Student experiences and texts
Authors: Silvia Pessoa, Ryan Miller and Natalia Gatti. This presentation draws on data from a four-year longitudinal study of undergraduate literacy development at an English-medium university in Qatar. While previous studies have documented literacy development at the primary and secondary school levels (Christie & Derewianka, 2010; Coffin, 2006) and much has been researched about the nature of writing genres at the graduate and professional levels (Hyland, 2009; Swales, 1990, 2004), there is a limited body of research on writing at the undergraduate level (Ravelli & Ellis, 2005). The limited work at this level has been either largely qualitative (Leki, 2007; Sternglass, 1997) or primarily text-based (Byrnes, 2010; Colombi, 2002; Nessi, 2009, 2011; North, 2005; Woodward-Kron, 2002, 2005, 2008). Recently, drawing on systemic functional linguistics (SFL) and genre pedagogy, the work of Dreyfus (2013) and Humphrey and Hao (2013) has begun to shed light on the nature of disciplinary writing and writing development at the undergraduate level. However, there is much to learn about the nature of undergraduate writing. This study aims to contribute to this area by examining faculty expectations, student trajectories, and development through a combined text-based and ethnographic approach. This presentation reports on different trajectories of academic literacy development by presenting four case studies of multilingual students at an English-medium university in the Middle East. While several studies have used case studies to examine the developmental literacy trajectories of undergraduate students (Leki, 2007; Sternglass, 1997; Zamel, 2004), they have not closely and systematically examined writing development from a text-based approach. This paper aims to contribute to the growing interest in understanding the nature of undergraduate writing, especially among multilingual students, by using detailed case studies and longitudinal analysis of student writing.
The presenters will describe the college experiences of four students and present a longitudinal analysis of their writing over four years in the disciplines of business administration and information systems. The findings suggest that students enter the university with differing pre-college experiences that shape their college experiences and affect their rate of development. While personal, social, and academic development is documented in all cases, there are differences between those who came in with a strong academic background in English and those who had limited experience with English academic reading and writing. Using the tools of Systemic Functional Linguistics (Halliday, 1984), the text analysis of student writing shows development as their writing progressively becomes more academic (with increasing use of nominalizations and abstractions), more analytical (with increasing use of evaluations), and better organized, with differences among the four case studies. Overall, the findings suggest that while weaker students do improve their literacy skills while in college, many still graduate with inconsistencies and infelicities in their writing. Documenting the literacy development of university students in Qatar is pivotal as Qatar continues to invest in English-medium education to build its human capital. This project aims to generate insights for curricular planning and assessment in Qatar and a basis for research on academic literacy development that will be of interest to scholars internationally.
-
-
-
Life satisfaction among female doctors vs. other female workers in Gaza
Authors: Sulaiman Abuhaiba, Khamis Elessi, Samah Afana, Islam Elsenwar and Arwa Abudan. Background: 50% of those who had ever used an on-line life satisfaction measurement tool were considered optimally satisfied with their lives. Categorization by age, sex, country of origin and religion did not seem to affect the results of the on-line database of life satisfaction scores. In Gaza, Palestine, much of the public believes that being a female doctor kills any form of life enjoyment, and it is a common belief there that female doctors tend to wait longer than other female workers before entering a stable marital relationship. The aims of our study were to quantify life satisfaction among female doctors in Gaza, compare their results with those from other work sectors and, finally, to test whether a medical career adversely affects life satisfaction for Gazan female doctors. Methods: We used random sample tables to choose the workplaces for our sample groups. We interviewed female workers at each facility using a convenience sampling technique. 50 female doctors and 50 other workers were compared using a standardized measurement tool for life satisfaction composed of 14 specific questions, with a possible total score from 14 to 70, where 70 is the most satisfied score and a total score of more than 50 was the cut-off for defining satisfaction. Total average scores and average scores for each question were compared between the two groups using statistical analysis methods. The frequency of use of over-the-counter medications was also compared between the two groups. Results: Average age for female doctors (FD) and other workers (OW) was comparable (FD 30.16 years, OW 30.4 years). The response rate was 90% in both groups.
Average age, number of children per family and matched scores on the 14 questions showed no statistically significant difference between married female doctors and married other workers (p = 0.4, 0.7 and 0.6, respectively). Life satisfaction among married female doctors and married other workers was not statistically significantly different (FD 13/25 vs. OW 9/25; p = 0.4). Average age, matched average scores for each of the 14 questions and life satisfaction proportions were not statistically significantly different between single females of the two groups (p = 0.2, 0.4 and 1.0, respectively). Use of over-the-counter drugs was statistically more commonly reported among single female doctors (p = 0.02). Interpretation: We found no real association between being a female doctor in Gaza and having a low life satisfaction score. We can assure our female doctors that they do not have lower enjoyment of their lives compared to other female workers. The average age of single females did not differ between the two groups, which argues against the widespread belief in our society that female doctors tend to marry later than other workers. Finally, our single female doctors should be discouraged from the non-rational use of over-the-counter drugs.
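The abstract does not name the statistical test used, so as a hedged illustration the sketch below runs a simple pooled two-proportion z-test on the reported satisfaction counts (13/25 married doctors vs. 9/25 married other workers). Its p-value need not match the reported p = 0.4, but it reaches the same conclusion: no significant difference at the 5% level.

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided, pooled two-proportion z-test. Returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Satisfaction among married respondents: 13/25 doctors vs. 9/25 other workers.
z, p = two_proportion_z(13, 25, 9, 25)
print(round(z, 2), round(p, 2))
```

With these counts the test yields p > 0.05, consistent with the study's finding of no significant difference between the two married groups.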
-
-
-
Learning how to survey the Qatari population
Universal surveys—such as the World Values Survey—seek to promote generalizability across contexts. But what if two different cultures interpret and respond to a general question in two different ways (e.g., King et al. 2004)? Using this methodological conundrum as a starting point, over the past year I have led a research team of four faculty and twelve students—drawn from Northwestern University in Qatar, Qatar University, and Georgetown University in Qatar—in creating, translating, and analyzing a context-sensitive survey of the Qatari population, funded through a Qatar National Research Fund UREP (Undergraduate Research Experience Program) grant. The survey was conducted by Qatar University's Social and Economic Survey Research Institute (SESRI) from January 15 to February 3 with a total of 798 Qatari respondents, making it a professional and valuable addition to the literature. Further, we were able to insert many questions that had never previously been asked of this population, including ones spanning the relative importance of tradition versus modern symbolism, specific opinions on the national education reforms, personal versus state priorities, satisfaction with particular welfare benefits offered by the state, and measures of religiosity. Both the process of creating a contextualized survey for the Qatari population—including what could and could not be asked, and how sensitive concepts were translated—and the fascinating results, which have opened up new avenues of research into the sociopolitical transformations of the Qatari people, are well worth presenting to the community for feedback. Even more importantly, the insights gained from how we contextualized the survey can be applied to improve the current state of social science survey research in Qatar.
The explosion of survey research in Qatar—pioneered by the Qatar Statistics Authority and Qatar University's SESRI, and recently joined by the multinational surveys of the World Values Survey, Harris Polling, Zogby's, and the Arab Barometer—demonstrates the need for questions that are contextually and culturally sensitive and ensure full understanding of the Qatari population. Presenting this work at the Qatar Foundation Annual Research Conference will provide valuable feedback and networking opportunities with like-minded professionals, researchers, and community members in my quest to continue this collaborative and important research effort. Citation: King, Gary, Christopher Murray, Joshua Salomon, and Ajay Tandon. 2004. "Enhancing the Validity and Cross-Cultural Comparability of Measurement in Survey Research." American Political Science Review 98 (1): 191-207.
-
-
-
Abuse of volatile substances
Summary of a study on volatile substance abuse, by Professor Layachi Anser et al. The main objective of this study is to determine the prevalence of the use of volatile substances, or "inhalants", among adolescents in the State of Qatar; a set of sub-objectives follows from this main aim. The study set out from a number of key questions, such as: What are the demographic and social characteristics of volatile substance users? What led them to use these substances? Where and when do they use them? How heavily are the substances used, and for how long have adolescents been using them? And how aware are adolescents of the harm these substances do to their physical and psychological health? The study relied on the social survey method, one of the best-known and most widely used methods in descriptive research, perhaps because it is among the oldest approaches in social research, and because of the abundant and accurate data it yields, drawn from people's actual circumstances. The study population consists of male and female students at the preparatory and secondary stages in the independent schools of the Supreme Education Council across the different regions of the country. The sample was a stratified random sample, as this is most appropriate for a study population comprising adolescents of two different school stages and of both sexes. A questionnaire was used to collect the data: 1,223 questionnaires were administered at the start of the study, and after review 25 forms were excluded as invalid, leaving a final total of 1,198 forms, two-thirds of them from males. The study reached a number of important findings, including: the proportion of adolescent users was 15.94% of the total sample of 1,198 adolescents, 55% of them male and 45% female; Qataris represent two-thirds of the users and non-Qataris the remaining third. As for age, the highest rates of use were among adolescents aged 13-18, who accounted for 87% of all users. Secondary-stage adolescents represent about two-thirds of the user group (63.4%), against roughly one-third (36.6%) from the preparatory stage, out of a total of 191 student users.
The results also revealed that rates of use are higher in densely populated areas such as the municipalities of Doha and Al Rayyan, followed, at much lower rates, by less densely populated areas such as Umm Salal, Al Wakrah and Al Khor. The results showed that 50% of users had taken these substances for less than a year, 16.7% for one or two years, 12.5% for three to four years and, finally, 14.6% for five years or more. Among the study's important findings is that industrial gasoline and its derivatives are the volatile substances most widespread among adolescent users, used by 53.4% of them, followed by nail polish at 12% and glue at 8.9%, while other substances were less commonly used. As for the places of use, the results showed that the street comes first, followed by the home, then school. Places of use differ between females and males: females use primarily at home, while the street ranks first among males. Keywords: abuse, volatile substances, inhalants, adolescents, health effects.
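The stratified random sampling used in the study can be sketched as follows. This is an illustrative implementation only, and the stratum sizes are invented rather than taken from the study; each stratum (school stage by sex) contributes to the sample in proportion to its share of the population.

```python
import random

def stratified_sample(population, key, total_n, seed=0):
    """Draw a stratified random sample: each stratum (defined by `key`)
    contributes in proportion to its share of the population."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(key(person), []).append(person)
    sample = []
    for members in strata.values():
        # Proportional allocation, rounded to the nearest whole person.
        k = round(total_n * len(members) / len(population))
        sample.extend(rng.sample(members, k))
    return sample

# Illustrative population of (stage, sex) records; stratum sizes are made up.
population = (
    [("prep", "M")] * 300 + [("prep", "F")] * 300
    + [("sec", "M")] * 500 + [("sec", "F")] * 400
)
sample = stratified_sample(population, key=lambda p: p, total_n=150)
print(len(sample))
```

With these made-up stratum sizes, a sample of 150 contains 30, 30, 50 and 40 respondents from the four strata, mirroring the population's proportions; note that for other sizes the rounding can make the total drift slightly from `total_n`.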
-
-
-
Training model to develop the Qatar workforce using emerging learning technologies
By Mohamed Ally. The Qatar National Vision aims at "transforming Qatar into an advanced country by 2030, capable of sustaining its own development and providing for a high standard of living for all of its people for generations to come". The grand challenge of Human Capacity Development aims to develop sustainable talent for Qatar's knowledge economy in order to meet the need for a high-quality workforce. For Qatar to achieve its 2030 National Vision and become an advanced country by 2030, it has to train its citizens to function in a globalized and competitive world. Important skills for Qataris in the 21st century are communication skills and the use of emerging technologies. This presentation proposes a training model to develop the Qatar workforce for the 21st century using emerging learning technologies. The training model is based on a mobile learning research project funded by the Qatar Foundation through the Qatar National Research Fund. The project is a collaborative research project involving Qatar University, Qatar Petroleum, Qatar Wireless Innovation Centre, and Athabasca University, Canada. The project developed and implemented training lessons on communication skills for the oil and gas industry, using mobile technology to deliver the training. The workers were employed at Qatar Petroleum and completed the training as part of their professional development to improve their English communication skills. Results from the project showed that workers' performance improved after they completed the training, and they reported that delivering the training on mobile technology provides flexibility for learning on the job. They suggested that the training should be more interactive and game-like. This is important since today's young workers are comfortable using mobile technologies and need to be motivated to learn with them.
The proposed Qatar National Training Model (QNTM) (Figure 1) is based on the mobile learning research project funded by the Qatar Foundation through the Qatar National Research Fund. In the QNTM, the learner/trainee/worker is at the center of the learning since the goal of training is to provide the knowledge and skills to improve workers' performance on the job. The design of the training must follow good learning design principles including preparing the learner for the training, providing activities for the learners to complete to improve their knowledge and skills, allowing learners to practice to improve their performance, certifying learners based on their performance, and providing opportunities for learners to transfer what they learn to the job environment. The delivery of the training should be flexible using a blended approach that includes face-to-face, hands-on, E-learning, mobile learning, and online learning. A variety of learning strategies such as practice with feedback, tutorials, simulations, games, and problem solving can be used depending on the learning outcomes to be achieved. The proposed Qatar National Training Model will allow for learner-centered training, lifelong learning, just-in-time learning, learning in context, developing skills required for 21st century learning, and interaction between learners and between learners and the trainer using social media.
-
-
-
Building independent schools' capacity in Qatar through the School Based Support Program (SBSP): perceptions of participating schools' teachers and principals
Over the past years, significant and rapid changes in many aspects of society and the world have led countries such as Qatar and others in the Gulf region to reform their national education systems, focusing on the integration of standards, assessment, and accountability. A key element in most of these reforms is professional development, a central feature of such educational improvement initiatives because of the many contributions it can make. It is reasonable to assume that improving teachers' knowledge, skills, and dispositions is one of the most critical steps to improving student achievement (King & Newmann, 2001). Further, professional development plays a key role in addressing the gap between educators' preparation and standards-based reform. However, proposals from many quarters argue that professional development itself needs to be reformed (King & Newmann, 2001). Much of the professional development offered to teachers and principals simply does not meet the challenges of the reform movement (Birman, Desimone, Porter, & Garet, 2000). Professional development in Qatar is no exception. Professional development has always taken place in Qatar's independent schools. Indeed, "teachers and principals noted a downside to the steep quantity of professional development opportunities: Teachers reported feeling overwhelmed and burned out" (Brewer, et al., 2009, p.50). However, the quality and effectiveness of professional development were highly variable. As evidence, the Supreme Education Council (SEC) in Qatar found that relying primarily on international organizations to deliver staff development has not developed the capacity to prepare current and future educators for the reform schools. Despite a substantial national investment in professional development initiatives, concerns remain about the quality of the educational staff and the subsequent impact on instruction (Brewer, et al., 2009).
Further, teachers and principals at independent schools in Qatar have raised important questions about the effectiveness of traditional professional development programs and their impact on performance. They have attended many professional development programs, yet significant professional development needs remain (Palmer et al., 2010-2011). Another concern is the difficulty teachers and principals face in carving out time during the work day to participate in professional development, given the increased workload that many Qatar independent school teachers report. Most of them have to stay after regular working hours and into the evening to attend workshops, so many of their days become quite long (Brewer, et al., 2009, p.50). In response to these concerns and needs, the School Based Support Program (SBSP) was launched in September 2011 by the National Center for Educator Development (NCED) at Qatar University. Its aim is to conduct high-quality, practical, school-based professional learning activities derived from research-based best practices, in order to significantly improve the performance of the participating independent schools and the professional practices of their principals and teachers. The purpose of this study was therefore to measure the impact of the SBSP program as perceived by participating schools' principals and teachers.
-