Qatar Foundation Annual Research Forum Volume 2013 Issue 1
- Conference date: 24-25 Nov 2013
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2013
- Published: 20 November 2013
Dynamic Non-Local Coherent Potential Approximation (DNLCPA) Model For Spin Wave Dynamics In Ultrathin Magnetic Fe-Gd Films Presenting Interfacial Structural Disorder
It is widely believed in the semiconductor community that the progress of electronics-based information technology is coming to an end (ITRS 2007), owing to fundamental electronic limitations. A promising alternative to electrons is the use of spin wave excitations, which offers a potentially powerful route towards devices that use these excitations to transmit and process information (Khitun and Wang 2006). This new approach to information-processing technology, known as magnonics, is growing rapidly (Kruglyak and Hicken 2006), and key magnonic components such as spin wave guides, emitters, and filters are currently being explored (Choi et al. 2007). The first working spin-wave-based logic device was experimentally demonstrated by Kostylev et al. (2005).

In the present paper we develop and apply a model to analyze the spin dynamics of iron-gadolinium films consisting of a few Gd(0001) atomic planes between two Fe(110) atomic planes. These ultrathin systems may be deposited layer by layer on a nonmagnetic substrate using techniques such as dc sputtering or pulsed laser deposition. They constitute prototypes for iron-gadolinium nanojunctions between iron leads in magnonics. In this system the Fe/Gd interfaces present structural disorder due to the mismatch between the Fe bcc and Gd hcp lattices, which engenders a quasi-infinite ensemble of Fe-Gd cluster configurations at the interface. In the absence of DFT or ab initio results for the magnetic Fe-Gd exchange, we have developed an integration-based analytic approach to determine the spatial dependence of this exchange using available experimental data from the corresponding multilayer systems. A dynamic non-local CPA method is also developed to analyze the spin dynamics of the disordered systems. The DNLCPA introduces the idea of a scattering potential built up from the phase matching of the spin dynamics on structurally disordered Fe-Gd interface clusters with the spin dynamics of the virtual crystal. This method accounts properly for the quasi-infinite ensemble of interfacial structural configurations, and yields the configurationally averaged Green's function for the disordered system. The computations yield the spin wave eigenmodes, their energies, lifetimes, and local densities of states, for any given thickness of the ultrathin magnetic Fe-Gd film.

Fig. 1: DNLCPA-calculated dispersion branches for the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film (7 atomic planes) presenting structural interfacial disorder. The normalized energies are in units of the iron exchange and spin, J(Fe-Fe)S(Fe). The curves are plotted as a function of the y-component of the wave vector (inverse angstroms), with the z-component = 0, in both figures.
Fig. 2: DNLCPA-calculated lifetimes, in picoseconds, of the spin waves propagating in the ultrathin magnetic 1Fe-5Gd-1Fe film.

Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project.

References
- S. Choi, K.S. Lee, K.Y. Guslienko, S.K. Kim, Phys. Rev. Lett. 98, 087205 (2007)
- A. Khitun and K.L. Wang, Proceedings of the Third International Conference on Information Technology: New Generations (ITNG), 747 (2006)
- M.P. Kostylev, A.A. Serga, T. Schneider, B. Leven, B. Hillebrands, Appl. Phys. Lett. 87, 153501 (2005)
- V.V. Kruglyak and R.J. Hicken, J. Magn. Magn. Mater. 306, 191 (2006)
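The abstract does not reproduce the DNLCPA equations; as orientation only, here is a minimal textbook sketch of the CPA-type self-consistency that the method generalizes (my paraphrase, not the authors' exact formulation):

$$
\bar{G}(E) = \left[ E - H_{\mathrm{VCA}} - \Sigma(E) \right]^{-1},
\qquad
\sum_{c} P_c\, t_c(E) = 0 ,
$$

where $H_{\mathrm{VCA}}$ is the virtual-crystal Hamiltonian, $\Sigma(E)$ the self-energy of the effective medium, and $t_c(E)$ the scattering T-matrix of an interface cluster configuration $c$ occurring with probability $P_c$. Requiring the configurationally averaged scattering to vanish fixes $\Sigma(E)$, and hence the averaged Green's function $\bar{G}(E)$ from which the mode energies, lifetimes and local densities of states follow.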
Quantum Imaging: Fundamentals And Promises
Quantum imaging can be defined as an area of quantum optics that investigates the ultimate performance limits of optical imaging allowed by the quantum nature of light. Quantum imaging techniques possess a high potential for improving the performance in recording, storage, and readout of optical images beyond the limits set by the standard quantum level of fluctuations known as the shot noise. This talk aims at giving an overview of the fundamentals of quantum imaging as well as its most important directions.

We shall discuss the generation of spatially multimode squeezed states of light by means of a travelling-wave optical parametric amplifier. We shall demonstrate that this kind of light allows us to reduce the spatial fluctuations of the photocurrent in a properly chosen homodyne detection scheme with a highly efficient CCD camera. It will be shown that, using the amplified quadrature of the light wave in a travelling-wave optical parametric amplifier, one can perform noiseless amplification of optical images. We shall provide recent experimental results demonstrating single-shot noiseless image amplification by a pulsed optical parametric amplifier.

One of the important experimental achievements of quantum imaging, coined in the literature as the quantum laser pointer, is the precise measurement of the position and transverse displacement of a laser beam with resolution beyond the limit imposed by the shot noise. We shall briefly describe the idea of the experiment in which the transverse displacement of a laser beam was measured with a resolution of the order of an angstrom.

The problem of precise measurement of the transverse displacement of a light beam brings us to a more general question about the quantum limits of optical resolution. The classical resolution criterion, derived in the nineteenth century by Abbe and Rayleigh, states that the resolution of an optical instrument is limited by diffraction and is related to the wavelength of the light used in the imaging scheme. However, it has long been recognised in the literature that in some cases, when one has some a priori information about the object, one can improve the resolution beyond the Rayleigh limit using so-called super-resolution techniques. As a final topic in this talk, we shall discuss the quantum limits of optical super-resolution. In particular, we shall formulate the standard quantum limit of super-resolution and demonstrate how one can go beyond this limit using specially designed multimode squeezed light.
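The abstract quotes no formulas; for orientation, the standard shot-noise scaling behind these claims (a textbook relation, my addition) is that the smallest beam displacement $d$ detectable with $N$ detected photons of coherent light, for a beam of waist $w_0$, is of order

$$
d_{\mathrm{SQL}} \sim \frac{w_0}{\sqrt{N}} ,
$$

so that with $w_0$ in the millimetre range and $N \gtrsim 10^{14}$ detected photons, angstrom-scale resolution becomes possible; spatially multimode squeezed light allows one to go below this standard quantum limit.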
Computation Of Magnetizations Of …Co][Co_(1-c)Gd_c]_l[Co]_l[Co_(1-c)Gd_c]_l[Co… Nanojunctions Between Co Leads, And Of Their Spin Wave Transport Properties
Using effective field theory (EFT) and mean field theory (MFT), we investigate the magnetic properties of the nanojunction system …Co][Co_(1-c)Gd_c]_l[Co]_l[Co_(1-c)Gd_c]_l[Co… between Co leads. The amorphous [Co_(1-c)Gd_c]_l composite nanomaterial is modeled as a homogeneous random alloy of concentration c on an hcp crystal lattice, and l is the number of its corresponding hcp (0001) atomic planes. In particular, EFT determines the appropriate exchange constants for Co and Gd by computing their single-atom spin correlations and magnetizations, in good agreement with experimental data in the ordered phase. The EFT results for the Co magnetization in the leads serve to seed the MFT calculations for the nanojunction from the interfaces inward. This combined analysis yields the sublattice magnetizations for Co and Gd sites, and compensation effects, on the individual nanojunction atomic planes, as a function of the alloy concentration, temperature and nanojunction thicknesses. We observe that the magnetic variables differ for the first few atomic planes near the nanojunction interfaces, but tend to limiting solutions in the core planes.

The EFT- and MFT-calculated exchange constants and sublattice magnetizations are necessary elements for the computation of the spin dynamics of this nanojunction system, using a quasi-classical treatment of the spin precession amplitudes at temperatures distant from the critical temperatures of the Co_(1-c)Gd_c alloy. The full analysis in the virtual crystal approximation (VCA) over the magnetic ground state of the system yields both the spin waves (SWs) localized on the nanojunction, and the ballistic scattering and transport across the nanojunction for SWs incident from the leads, by applying the phase field matching theory (PFMT). The model results demonstrate the possibility of resonance-assisted maxima in the SW transmission spectra, owing to interactions between the incident SWs and the localized spin resonances on the nanojunction. The spectral transmission results for low-frequency spin waves are of specific interest for experimentalists, because these lower-frequency SWs correspond to submicroscopic wavelengths which are of present interest in experimental magnonics research, and the VCA is increasingly valid as a model approximation at such frequencies.

Fig. 1: Calculated spin variables sigma_Co and sigma_Gd in the first layer of the [Co_(1-c)Gd_c]_2[Co]_2[Co_(1-c)Gd_c]_2 nanojunction as a function of kT in meV.
Fig. 2: The total reflection R and transmission T cross sections associated with the scattering at the nanojunction for the cobalt lead SW modes 1, 2, for selected choices of incident angle (phi_z, phi_y).

Acknowledgements: The authors acknowledge QNRF financial support for the NPRP 4-184-1-035 project.
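The abstract does not spell out the mean-field closure; as a minimal textbook sketch (my addition, not the authors' exact equations), the coupled reduced sublattice magnetizations on a given atomic plane would solve Brillouin-function equations of the form

$$
\sigma_a = B_{S_a}\!\left( \frac{S_a}{k_B T} \sum_{b} z_{ab}\, J_{ab}\, \sigma_b \right),
\qquad a, b \in \{\mathrm{Co}, \mathrm{Gd}\},
$$

where $B_S$ is the Brillouin function, $z_{ab}$ the number of $b$-type neighbours of an $a$ site (concentration-weighted for the random alloy), and $J_{ab}$ the exchange constants, with $J_{\mathrm{CoGd}} < 0$ producing the ferrimagnetic alignment. A compensation point appears where the oppositely aligned Co and Gd contributions to the plane magnetization cancel.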
An investigation into the optimal usage of social media networks in international collaborative supply chains
Social Media Networks (SMNs) are collaborative tools used at an increasing rate in many business and industrial environments. They are often used in parallel with dedicated Collaborative Technologies (CTs), which are specifically designed to handle dedicated tasks. Within this research project, the specific area of supply chain management is the focus of investigation. Two case studies where CTs are already extensively employed have been conducted to evaluate the scope of SMN usage, to confirm the particular benefits provided, and to identify limitations. The overall purpose is to provide guidelines on joint CT/SMN deployment in developing supply chains.

The application of SMNs to supply chain operations is fundamental to addressing the emerging need for increased P2P (peer-to-peer) communication. This type of communication is between individuals and is typified by increased relationship-type interaction. This is in contrast to traditional B2B (business-to-business) communication, which is typically conducted on a transactional basis, especially where it is confined by the characteristics of dedicated CTs. SMNs can be applied in supply chain networks to deal with unexpected circumstances or problem solving, to capture market demands and customer feedback, and in general to provide a medium for reacting to unplanned events. Crucially, they provide a platform where issues can be addressed on a P2P basis in the absence of confrontational, transactional interactions.

The case studies reported in this paper concern EU-based companies, one being a major national aluminium profile extruder, the second a bottling plant for a global soft drinks manufacturer. In both cases the application of CTs to their supply chains is well established. However, whilst both companies could readily identify the strengths of their CT systems (information and data sharing, data storage and retrieval), they could also identify limitations. These included the lack of real-time interaction at a P2P level and, interestingly, the lack of a common language between different CT systems in B2B communication. Overall, the view of the case study companies was that SMNs provided valuable adjuncts to existing CT systems, but that the SMNs were not integrated with the CT systems. There was a strongly perceived need for a better understanding of the contrasting and complementary capabilities of CTs and SMNs so that fully integrated systems could be implemented in future. Future work in this area will focus on the development of guidelines and procedures for achieving such complementarity in international collaborative supply chains.
Middleware architecture for cloud-based services using software defined networking (SDN)
By Raj Jain
In modern enterprise and Internet-based application environments, a separate middlebox infrastructure is deployed to provide application delivery services such as security (e.g., firewalls, intrusion detection), performance (e.g., SSL offloaders), and scaling (e.g., load balancers). However, there is no explicit support for middleboxes in the original Internet design, forcing datacenter administrators to accommodate middleboxes through ad hoc and error-prone network configuration techniques. Given their importance and yet the ad hoc nature of their deployment, we plan to study application delivery in general in the context of cloud-based application deployment environments. To fully leverage these opportunities, ASPs need to deploy a globally distributed application-level routing infrastructure to intelligently route application traffic to the right instance. But since such an infrastructure would be extremely hard to own and manage, it is best to design a shared solution where application-level routing could be provided as a service by a third-party provider having a globally distributed presence, such as an ISP. Although these requirements seem separate, they can be converged into a single abstraction for supporting application delivery in the cloud context.
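The abstract stops short of mechanism; the following toy Python sketch (all names and the policy are hypothetical, not from the paper) illustrates the kind of application-level routing decision such a shared third-party service could make on behalf of an ASP:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    region: str
    latency_ms: float  # latency measured from the client's vantage point
    healthy: bool      # health-check status reported by the middlebox layer

def route(instances: list, client_region: str):
    """Pick an application instance: healthy first, same region preferred,
    then lowest measured latency."""
    candidates = [i for i in instances if i.healthy]
    candidates.sort(key=lambda i: (i.region != client_region, i.latency_ms))
    return candidates[0] if candidates else None

# usage sketch
pool = [Instance("eu", 120.0, True), Instance("me", 35.0, True),
        Instance("me", 20.0, False)]
print(route(pool, "me"))  # -> the healthy "me" instance at 35 ms
```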
A proposed transportation tracking system for mega construction projects using passive RFID technology
The city of Doha has witnessed a rapid change in its demographics over the past decade. The city has been thoroughly modernized, a massive change in its inhabitants' culture and behavior has occurred, and the need to re-develop its infrastructure has arisen, creating multiple mega construction projects such as the New Doha International Airport, the Doha Metro Network, and Lusail City. A mega-project such as the new airport in Doha requires 30,000 workers on average to be on site every day. This research tested the applicability of radio frequency identification (RFID) technology in tracking and monitoring construction workers during their trip from their housing to the construction site or between construction sites.

The workers' tracking system developed in this research utilizes passive RFID tracking technology due to its efficient tracking capabilities, low cost, and easy maintenance. The system is designed to monitor construction workers' ridership in a safe and non-intrusive way. It uses a combination of RFID, GPS (Global Positioning System), and GPRS (General Packet Radio Service) technologies. It enables the workers to receive instant SMS alerts when the bus is within 10 minutes of the designated pick-up and drop-off points, reducing the time the workers spend on the street. This is critical, especially in a country like Qatar where temperatures can reach 50 degrees Celsius during summer. The system also notifies management via SMS when workers board and alight from the bus or enter/leave the construction site. In addition, the system displays the real-time location of the bus and the workers inside the bus at any point in time.

Each construction worker is issued one or more unique RFID card(s) to carry, embedded in the worker's clothing. As the worker's tag is detected by the reader installed in the bus upon entering or leaving, the time, date and location are logged and transmitted to a secure database. The system requires no action on the part of drivers or workers, other than carrying the card, and delivers the required performance without impeding the normal loading and unloading process. To explore the technical feasibility of the proposed system, a set of tests was performed in the lab. These experiments showed that the RFID tags were effective and stable enough to be used for successfully tracking and monitoring construction workers using the bus.
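As a concrete illustration of the event flow just described, here is a minimal Python sketch; the function and object names (db, sms) are hypothetical stand-ins for the secure database and SMS gateway, not components named in the paper:

```python
import time

def on_tag_read(tag_id: str, bus_gps: tuple, db, sms) -> None:
    """Handle one RFID read: log the boarding/alighting event with time and
    GPS position, push it over GPRS to the secure database, and notify
    management by SMS."""
    event = {"tag": tag_id, "lat": bus_gps[0], "lon": bus_gps[1],
             "ts": time.time()}
    db.insert(event)
    sms.notify_management(f"Worker {tag_id} boarded/alighted at {event['ts']:.0f}")

def maybe_alert_workers(eta_minutes: float, worker_phones: list, sms) -> None:
    """Send the 10-minute arrival alert to workers waiting at a pick-up point."""
    if eta_minutes <= 10:
        for phone in worker_phones:
            sms.send(phone, f"Bus arriving in about {eta_minutes:.0f} minutes")
```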
A routing protocol for smart cities: RPL robustness under study
Smart cities can be defined as developed urban areas that create sustainable economic development and a high quality of life by excelling in multiple key areas such as transportation, environment, economy, living, and government. This excellence can be reached through efficiency based on intelligent management and integrated Information and Communication Technologies (ICT).

Motivations. In the near future (2030), two thirds of the world population will reside in cities, drastically increasing the demands on city infrastructures. As a result, urbanization is becoming a crucial issue. The Internet of Things (IoT) vision foresees billions of devices forming a worldwide network of interconnected objects, including computers, mobile phones, RFID tags and wireless sensors. In this study, we focus on Wireless Sensor Networks (WSNs), a technology particularly suitable for creating smart cities. A distributed network of intelligent sensor nodes can measure numerous parameters and communicate them wirelessly and in real time, making more efficient management of the city possible. For example, the pollution concentration in each street can be monitored, water leaks can be detected, and noise maps of the city obtained. The number of WSN applications available for smart cities is bounded only by imagination: environmental care, sustainable development, healthcare, efficient traffic management, energy supply, water management, green buildings, etc. In short, WSNs could improve the quality of life in a city.

Scope. However, such urban applications often use multi-hop wireless networks with high density to obtain sufficient area coverage. As a result, they need networking stacks and routing protocols that can scale with network size and density while remaining energy-efficient and lightweight. To this end, the IETF ROLL working group has designed the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL). This paper presents experimental results on the RPL protocol. The RPL properties in terms of delivery ratio, control packet overhead, dynamics and robustness are studied. The results are obtained from several experiments conducted on two large WSN testbeds composed of more than 100 sensor nodes each. In this real-life scenario (high density and convergecast traffic), several intrinsic characteristics of RPL are underlined: path length stability, but reduced delivery ratio and significant overhead (Fig. 1). Moreover, the routing metrics, as defined by default, favor the creation of "hubs" aggregating most 2-hop nodes (Fig. 2). To investigate the robustness of RPL, we observe its behavior when facing the sudden death of several sensors and when several sensors are redeployed. RPL shows good ability to maintain the routing process despite such events. However, the paper highlights that this ability can be reduced if only a few critical nodes fail. To the best of our knowledge, this is the first study of RPL on such a large platform.
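For readers unfamiliar with RPL, the sketch below shows, in simplified Python, the rank-based preferred-parent selection at the heart of the protocol (RFC 6550) with an ETX-like link metric; the constants and data layout are simplifications, not the testbed configuration:

```python
MIN_HOP_RANK_INCREASE = 256  # RFC 6550 default rank unit

def rank_through(parent_rank: int, link_etx: float) -> int:
    """Rank a node would advertise if it joined the DODAG via this parent:
    the parent's rank plus a link-cost-dependent increase (MRHOF-style)."""
    return parent_rank + int(link_etx * MIN_HOP_RANK_INCREASE)

def select_parent(neighbors: list) -> tuple:
    """neighbors: (node_id, advertised_rank, link_etx) learned from DIO messages.
    Return the preferred parent and the resulting own rank."""
    best = min(neighbors, key=lambda n: rank_through(n[1], n[2]))
    return best[0], rank_through(best[1], best[2])

# usage sketch: a node hearing three DIOs picks the lowest resulting rank
print(select_parent([("A", 256, 1.2), ("B", 512, 1.0), ("C", 256, 2.5)]))
```

The "hub" effect reported above follows directly from this rule: a node advertising a low rank over good links attracts most of its 2-hop neighbourhood as children.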
Optimization Model For Modern Retail Industry
In a growing market with demanding consumers, the retail industry needs decision support tools that reflect emerging practices and integrate key decisions cutting across several of its departments (e.g., marketing and operations). Current tools may not be adequate to tackle this kind of integration complexity in relation to pricing so as to satisfy customers' retailing experience. Of course, it has to be understood that the retailing experience can differ from one country to another and from one culture to another; therefore, off-the-shelf models may not be able to capture the behavior of customers in a particular region. This research aims at developing novel mixed-integer linear/nonlinear optimization formulations, with extensive analytical and computational studies, based on the experience of a large retailer in Qatar. The model addresses product lines of substitutable items that serve the same customer need but differ in secondary attributes. The research done in this project demonstrates that there is added value in identifying the shopping characteristics of the consumer base and studying consumer behavior closely in order to develop the appropriate retail analytics solution. The research is supported by a grant obtained through the NPRP project of Qatar Foundation.
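The abstract does not state the formulation; purely as an illustration of the model class (a generic sketch, not the authors' model), a price-assignment MILP for a line of substitutable items might read:

$$
\max_{x}\; \sum_{i=1}^{n} \sum_{k=1}^{K} (p_k - c_i)\, d_{ik}\, x_{ik}
\quad \text{s.t.} \quad
\sum_{k=1}^{K} x_{ik} = 1 \;\; \forall i,
\qquad x_{ik} \in \{0,1\},
$$

where $x_{ik}$ assigns price level $p_k$ to item $i$ with unit cost $c_i$ and forecast demand $d_{ik}$. Substitution makes the demand for item $i$ depend on the prices chosen for the other items in the line, which couples the $d_{ik}$ terms and pushes the formulation towards the nonlinear variants mentioned in the abstract.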
My Method In Teaching Chemistry And Its Application Using Modern Technology
By Eman Shams
Since I studied chemical education in a very unique way at Oklahoma State University, Oklahoma, United States of America, teaching chemistry has become a hobby, not a job, for me. This motivated me to apply educational technology to every aspect of chemistry that I teach. I applied my expertise through a smart digital flipped-classroom technique in the chemistry laboratories, blackboard sessions and tutorial classes that I taught for the Chemistry Department, College of Arts and Sciences, Qatar University, by building an educational website with the theme "chemistry flipped". This blended-learning chemistry website provides students with explanatory material augmented by audiovisual simulations. The general idea is that students work at their own pace, receiving lectures at home via online videos or podcasts which I record and post for them, so they come to class prepared. Students are then able to use class time to practice what they have learned with traditional schoolwork, but with my time freed up for additional one-on-one attention. Students can review lessons anytime, anywhere, on their personal computers and smartphones, reserving class time for in-depth discussions or doing the actual experiment; most importantly, they know and understand what they are doing.

The video lecture is often seen as the key ingredient of the flipped approach. My talk will be on the use of educational technology in teaching chemistry: my chemistry demos go behind the magic, using new techniques to help students visualize concepts, which helped them understand the topics more deeply. I converted the written discussion into a conversation in the cloud. The website includes online interactive pre/post-classroom assessment pages, live podcast capture of the experiments done by the students, post-classroom activities, dynamic quizzes and practice exam pages, honor pages recognizing hard-working students, live podcasting of in-class lectures, linked experiments and more. Probabilistic system analysis is used to keep track of the students' progress, their access and their learning, and I used statistics to compare the students' results before and after the use of my blended-learning, audiovisual-simulation flipped classroom.

During the study the students showed an extraordinary passion for chemistry: they studied it on iTunes, YouTube, Facebook and Twitter, and learned it very well; chemistry was with them everywhere, even in their free time. The results demonstrate the advantages of combining web-based learning with flipped chemistry teaching methods to support students in large lecture and laboratory classes. The two biggest changes resulting from the method are the manner in which content is delivered (outside of class) and the way students spend time in the classroom (putting principles into practice). Feedback from students has conveyed that the style is more dynamic and motivational than traditional, passive teaching, which helps keep open courseware going and growing.
High order spectral symplectic methods for solving PDEs on GPU
Efficient and accurate numerical solution of partial differential equations (PDEs) is essential for many problems in science and engineering. In this paper we discuss spectral symplectic methods of different numerical accuracy, using the Nonlinear Schrödinger Equation (NLSE) as an example; the NLSE can be taken as a model for many kinds of conservative systems. First-, second- and fourth-order approximations are reviewed and compared with respect to the execution speed vs. accuracy trade-off. To exploit the capabilities of modern hardware, the numerical algorithms are implemented both on CPU and GPU. Results are compared in terms of execution speed, single/double precision, data transfer and hardware specifications.
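The abstract names the method family without showing it; below is a minimal NumPy sketch of the second-order (Strang) symplectic split-step Fourier scheme for the focusing NLSE $i u_t + \tfrac{1}{2} u_{xx} + |u|^2 u = 0$. The sign convention, grid and parameters are assumptions for illustration, not the paper's setup:

```python
import numpy as np

N, L, dt, steps = 1024, 40.0, 1e-3, 5000
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers
u = 1.0 / np.cosh(x)                          # bright-soliton initial condition
lin = np.exp(-0.5j * k**2 * dt)               # exact linear (dispersive) step

for _ in range(steps):
    u *= np.exp(0.5j * np.abs(u) ** 2 * dt)   # half nonlinear step (|u| conserved by it)
    u = np.fft.ifft(lin * np.fft.fft(u))      # full linear step in Fourier space
    u *= np.exp(0.5j * np.abs(u) ** 2 * dt)   # half nonlinear step

print(np.abs(u).max())  # soliton amplitude should stay close to 1.0
```

Each sub-step is the exact flow of its Hamiltonian piece, so the composition is symplectic and second-order accurate in dt; the fourth-order variant mentioned in the abstract composes such steps with suitable (Yoshida-type) coefficients, and on GPU the FFTs dominate the run time.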
The Arabic ontology
We overview the Arabic Ontology, an ongoing project at Sina Institute, Birzeit University, Palestine. The Arabic Ontology is a linguistic ontology that represents the meanings (i.e., concepts) of Arabic terms using formal semantic relationships, such as SubtypeOf and PartOf. In this way, the ontology becomes a tree (i.e., classification) of the meanings of Arabic terms. To build this ontology (see Fig. 1), the set of all Arabic terms is collected; then, for each term, the set of its concepts (polysemy) is identified using unique numbers and described using glosses. Terms referring to the same meaning (called synsets) are given the same concept identifier. These concepts are then classified using Subsumption and Parenthood relationships. The Arabic Ontology follows the same design as WordNet (i.e., a network of synsets), so it can be used as an Arabic WordNet. However, unlike WordNet, the Arabic Ontology is logically and philosophically well-founded, following strict ontological principles. The Subsumption relation is a formal subset relation. The ontological correctness of a relation in WordNet (e.g., whether "PeriodicTable SubtypeOf Table" is true in reality) is based on whether native speakers accept such a claim. The ontological correctness of the Arabic Ontology, by contrast, is based on what scientists accept; if it cannot be determined by science, then on what philosophers accept; and if philosophy does not have an answer, we refer to what linguists accept. Our classification also follows the OntoClean methodology when dealing with instances, concepts, types, roles, and parts. As described in the next section, the top levels of the Arabic Ontology are derived from philosophical notions, which further govern the ontological correctness of its lower levels. Moreover, glosses are formulated using strict ontological rules focusing on intrinsic properties.

Figure 1: Illustration of terms' concepts and their conceptual relations.

Why the Arabic Ontology? It can be used in many application scenarios, such as: (1) information search and retrieval, to enrich queries and improve result quality, i.e., meaningful search rather than string-matching search; (2) machine translation and term disambiguation, by finding the exact mapping of concepts across languages, as the Arabic Ontology is also mapped to the English WordNet; (3) data integration and interoperability, in which the Arabic Ontology can be used as a semantic reference for several autonomous information systems; (4) the semantic web and web 3.0, by using the Arabic Ontology as a semantic reference to disambiguate the meanings used in web sites; (5) a conceptual dictionary, allowing people to easily browse and find meanings and the differences between meanings.

The Arabic Ontology Top Levels. Figure 2 presents the top levels of the Arabic Ontology, which is a classification of the most abstract concepts (i.e., meanings) of Arabic terms. Only three levels are presented, for the sake of brevity. All concepts in the Arabic Ontology are classified under these top levels. We designed these concepts after a deep investigation of the philosophy literature and based on well-recognized upper-level ontologies like BFO, DOLCE, SUMO, and KYOTO.

Figure 2: The top three levels of the Arabic Ontology (alpha version).
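To make the synset/concept design concrete, here is a small Python sketch of the data structure described above; the identifiers, glosses and terms are invented examples, not entries from the actual ontology:

```python
from dataclasses import dataclass

@dataclass
class Concept:
    """One meaning (synset): all terms expressing it share one identifier."""
    cid: int                 # unique concept identifier
    gloss: str               # definition focusing on intrinsic properties
    terms: set               # Arabic terms (synonyms) expressing this meaning
    subtype_of: int = None   # formal subset (Subsumption) link to a parent
    part_of: int = None      # Parenthood link

ontology = {
    1000: Concept(1000, "an artifact made for human use", {"أداة"}),
    1001: Concept(1001, "a piece of furniture with a flat raised surface",
                  {"طاولة", "منضدة"}, subtype_of=1000),
}

def hypernyms(cid, onto):
    """Walk the SubtypeOf chain from a concept up to the top level."""
    while cid is not None:
        yield cid
        cid = onto[cid].subtype_of

print(list(hypernyms(1001, ontology)))  # [1001, 1000]
```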
Surface properties of poly(imide-co-siloxane) block copolymers
By Igor Novák
Igor Novák (a), Anton Popelka (a,b), Petr Sysel (c), Igor Krupa (a,d), Ivan Chodák (a), Marián Valentin (a), Jozef Prachár (a), Vladimír Vanko (e)
(a) Polymer Institute, Slovak Academy of Sciences, Bratislava, Slovakia; (b) Center for Advanced Materials, Qatar University, Doha, Qatar; (c) Department of Polymers, Faculty of Chemical Technology, Institute of Chemical Technology, Prague, Czech Republic; (d) Center for Advanced Materials, QAPCO Polymer Chair, Qatar University, Doha, Qatar; (e) VIPO, Partizánske, Slovakia

Polyimides present an important class of polymers, indispensable in microelectronics, printed circuit construction, and aerospace research, mainly because of their high thermal stability and good dielectric properties. In recent years, several sorts of polyimide-based block copolymers have been investigated, namely poly(imide-co-siloxane) (PIS) block copolymers containing siloxane blocks in their polymer backbone. In comparison with pure polyimides, PIS block copolymers possess some improvements, e.g. enhanced solubility and low moisture sorption, and their surface reaches a higher degree of hydrophobicity already at low polysiloxane content in the PIS copolymer. This kind of block copolymer is used in high-performance adhesives and coatings. The surface as well as the adhesive properties of PIS block copolymers depend on the content and length of the siloxane blocks. The surface properties of PIS block copolymers are strongly influenced by enrichment of the surface with siloxane segments. Microphase separation of PIS block copolymers occurs due to the dissimilarity between the chemical structures of the siloxane and imide blocks, even at relatively low block lengths.

The surface energy of the PIS block copolymer decreases significantly with siloxane concentration, from 46.0 mJ·m⁻² (pure polyimide) to 34.2 mJ·m⁻² (10 wt.% siloxane) and 30.2 mJ·m⁻² (30 wt.% siloxane). The polar component of the surface energy reached 22.4 mJ·m⁻² (pure polyimide), decreasing with siloxane content in the PIS copolymer to 4.6 mJ·m⁻² (10 wt.% siloxane) and 0.8 mJ·m⁻² (30 wt.% siloxane). The decline of the surface energy and of its polar component with rising siloxane content is most intense between 0 and 10 wt.% siloxane in the copolymer. On further increase of the siloxane concentration (above 20 wt.% siloxane), the surface energy of the PIS copolymer and its polar component level off. The dependence of the peel strength of the PIS copolymer-epoxy adhesive joint on the polar fraction of the copolymer shows that the steepest gradient is reached at 15 wt.% siloxane in the PIS block copolymer, after which it levels off. This relation allows the determination of the non-linear relationship between the adhesion properties of the PIS block copolymer and the polar fraction of the copolymer.

Acknowledgement: This contribution was supported by project No. 26220220091 of the "Research & Development Operational Program" funded by the ERDF, as a part of the project "Application of Knowledge-Based Methods in Designing Manufacturing Systems and Materials" (MESRS SR project No. 3933/2010-11), and by project VEGA 2/0185/10.
An Arabic edutainment system: Using multimedia and physical activity to enhance the cognitive experience for children with intellectual disabilities
Increasing attention has recently been drawn in the human-computer interaction community towards the design and development of accessible computer applications for children and youth with developmental or cognitive impairments. Due to better healthcare and assistive technology, the quality of life of children with intellectual disability (ID) has evidently improved. Many children with ID have cognitive disabilities, along with overweight problems due to a lack of physical activity. This paper introduces an edutainment system specifically designed to help these children have an enhanced and enjoyable learning process, and addresses the need for integrating physical activity into their daily lives. The games are developed with the following pedagogical model in mind: a combination of Mayer's Cognitive Theory of Multimedia Learning with a mild implementation of Skinner's Operant Conditioning, incorporating physical activity as part of the learning process.

The proposed system consists of a padded floor mat of sixteen square tiles supported by sensors, which are used to interact with a number of software games specifically designed to suit the mental needs of children with ID. The games use multimedia technology with a tangible user interface. The edutainment system consists of three games, each with three difficulty levels meant to suit the specific needs of different children. The system aims at enhancing the learning, coordination and memorization skills of children with ID while involving them in physical activities, thus offering both mental and physical benefits.

The edutainment system was tested on 100 children with different IDs, half of whom have Down syndrome (DS). The children pertain to three disability levels: mildly, moderately and severely disabled. The obtained results depict a high increase in the learning process, as the children became more proactive in the classrooms. The assessment methodology took into account the following constraints: disability type, disability level, gender, scoring, timing, motivation, coordination, acceptance levels and relative performance. The following groups, when compared with the other groups, achieved the best results in terms of scores and coordination: children with DS, mildly disabled children, and females. In contrast, children with other IDs, moderately and severely disabled children, and males performed with lower scores and coordination levels, but all of the above-mentioned groups exhibited high motivation levels. The rejection rate was found to be very low, with 3% of children refusing to participate. When children repeated the games, 92% were noted to achieve significantly higher results.

The edutainment system is developed with the following aims: helping children with ID have an enhanced cognitive experience, allowing them a learning environment where they can interact with the game and engage in physical activity, ensuring a proactive role for all in the classroom, and boosting motivation, coordination and memory levels. Results proved that the system had very positive effects on the children in terms of cognition and motivation levels. Instructors also expressed willingness to incorporate the edutainment system into the classroom on a daily basis, as a complementary tool to conventional learning.
Enhancement of multispectral face recognition in unconstrained environment using regularized linear discriminant analysis (LDA)
In this paper, face recognition under unconstrained illumination conditions is investigated. A twofold contribution is proposed. 1) Firstly, three state-of-the-art algorithms, namely Multi-block Local Binary Patterns (MBLBP), Histogram of Gabor Phase Patterns (HGPP) and Local Gabor Binary Pattern Histogram Sequence (LGBPHS), are challenged against the IRIS-M3 multispectral face database. 2) Secondly, the performance of the three mentioned algorithms, which is drastically decreased by the non-monotonic illumination variation that distinguishes the IRIS-M3 face database, is enhanced using multispectral images (MI) captured in the visible spectrum. The use of MI, such as near-infrared (NIR) images, short-wave infrared (SWIR) images, or even visible images captured at different wavelengths rather than the usual RGB spectrum, is increasingly trusted by researchers to solve problems related to uncontrolled imaging conditions that usually affect real-world applications like securing areas with valuable assets, controlling high-traffic borders, or law enforcement. However, one weakness of MI is that they may significantly increase the system's processing time due to the huge quantity of data to mine (in some cases thousands of MI are captured for each subject). To solve this issue, we propose to select the optimal spectral bands (channels) for face recognition. Best spectral band selection is achieved using linear discriminant analysis (LDA) to increase the data variance between images of different subjects (between-class variance) while decreasing the variance between images of the same subject (within-class variance). To avoid the data overfitting problem that generally characterizes the LDA technique, we propose to include a regularization constraint that reduces the solution space of the chosen best spectral bands. The obtained results further highlight the still-challenging problem of face recognition under conditions with high illumination variation, and prove the effectiveness of our multispectral-image-based approach, which increases the accuracy of the studied algorithms, namely MBLBP, HGPP and LGBPHS, by at least 9% on the proposed database.
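The paper's exact regularizer is not given in the abstract; the following NumPy sketch illustrates the underlying idea with a per-band Fisher criterion whose within-class scatter is regularized by a small constant (a simplification for illustration, not the authors' algorithm):

```python
import numpy as np

def fisher_band_scores(X, y, eps=1e-3):
    """Rank spectral bands by a regularized Fisher criterion.
    X: (n_samples, n_bands) per-band features; y: subject labels.
    eps regularizes the within-class variance to curb overfitting
    when each subject has few images."""
    mu = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + eps)  # higher score = more discriminative band

# usage sketch: keep the five most discriminative bands
# scores = fisher_band_scores(X, y); best_bands = np.argsort(scores)[::-1][:5]
```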
Caloric expenditure estimation for health games
By Aiman Erbad
With the decline in physical activity among young people, it is essential to monitor their physical activity and ensure their calorie expenditure is within the range necessary to lead a healthy lifestyle. For many children and young adults, video gaming is one favorable venue for physical activity. A new flavor of video games on popular platforms such as Wii and Xbox aims to improve the health of young adults through competing in games which require players to perform various physical activities. These platforms detect the user's movements and, through an in-game avatar, let players take part in game activities such as boxing, playing tennis, dancing, avoiding obstacles, etc.

Early studies used self-administered questionnaires or interviews to collect data about a patient's activities. These self-reporting methods ask participants to report their activities on an hourly, daily, or weekly basis, but they suffer from a number of limitations, such as inconvenience in entering data, poor compliance, and inaccuracy due to bias or poor memory. Reporting activities is a sensitive task for overweight/obese individuals, with research evidence showing that they tend to overestimate the calories they burn. A tool to help estimate calorie consumption is therefore becoming essential to manage obesity and overweight issues. We propose a calorie expenditure estimation service that would augment the treatment provided by an obesity clinic or a personal trainer for obese children.

Our energy expenditure estimation system consists of two main components: activity recognition and calorie estimation. Activity recognition systems have three main components: a low-level sensing module to gather sensor data, a feature selection module to process raw sensor data and select the features necessary to recognize activities, and a classification module to infer the activity from the captured features. Using the activity type, we can estimate calorie consumption using existing energy expenditure models developed on the gold standard of respiratory gas measurements.

We chose Kinect as our test platform. The natural user interface in Kinect is the low-level sensing module providing skeleton tracking data. The skeleton positions are the raw input to our activity recognition module. From this raw data, we define the features that help the classifier, such as the speed of the hands and legs, body orientation, and the rate of change in the vertical and horizontal positions. These are features that can be quantified and passed periodically (e.g., every 5 seconds) to the classifier to distinguish between different activities. Other important features might need more processing, such as the standard deviation, the difference between peaks (in periodic activities), and the distribution of skeleton positions. We also plan to build an index of the calorie expenditure of game activities using the medical gold standard of respiratory gas (O2/CO2) measurement. Game activities such as playing tennis, running, and boxing are different from the same real-world activities in terms of energy consumption, and it would be useful to quantify the difference in order to answer the question of whether these "health" games are useful for weight loss.
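A minimal Python sketch of the pipeline just outlined (windowed skeleton features, a classifier, then a MET-based calorie estimate); the feature set, MET table and classifier choice are illustrative assumptions, not the paper's final design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Assumed MET values per recognized activity (illustrative, not from the paper).
MET = {"tennis": 5.0, "boxing": 7.0, "dancing": 4.5, "idle": 1.3}

def window_features(skeleton):
    """skeleton: (frames, joints, 3) joint positions over a ~5 s window.
    Features: mean and std of per-joint speed, plus vertical range."""
    speed = np.linalg.norm(np.diff(skeleton, axis=0), axis=2)  # (frames-1, joints)
    return np.hstack([speed.mean(axis=0), speed.std(axis=0),
                      np.ptp(skeleton[:, :, 1], axis=0)])

clf = RandomForestClassifier(n_estimators=100)
# assume clf.fit(F_train, y_train) has been run on labeled feature windows

def kcal_for_window(skeleton, weight_kg, window_s=5.0):
    """Classify the window's activity and convert to kilocalories:
    kcal = MET * body mass (kg) * duration (hours)."""
    activity = clf.predict(window_features(skeleton)[None, :])[0]
    return MET[activity] * weight_kg * (window_s / 3600.0)
```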
Towards socially-interactive telepresence robots for the 2022 World Cup
The World Cup is the most widely viewed sporting event, with total attendance in the hundreds of thousands. The first-ever World Cup in an Arab nation will be hosted by Qatar in 2022. As the country prepares for this event, the current paper proposes telepresence robots for the World Cup. Telepresence robots are robots that allow a person to work remotely in another place as if he or she were physically present there. For a big event like the World Cup, we can envision that organizers need to monitor minute-by-minute events as they occur in multiple venues. Likewise, soccer fans who are unable to attend the events can still be present without the need to travel. Telepresence robots can make the organizers and participants "be there". This work describes some of the author's findings on the interactions of humans and robots. Specifically, it describes users' perceptions and physiological data when touch and gestures are passed over the internet. The results show that there is much potential for telepresence robots to enhance the utility and the organizers' and participants' overall experience of the 2022 World Cup.
Assistive technology for improved literacy among the deaf and hard of hearing
We describe an assistive technology for improved literacy among the Deaf and Hard of Hearing that is cost-effective and accessible to deaf individuals and their families/service providers (e.g., educators), businesses which employ them or list them as customers, and healthcare professionals. The technology functions as:

(1) A real-time translation system between Moroccan Sign Language (a visual/gestural language) and standard written Arabic. Moroccan Sign Language (MSL) is a visual/gestural language that is distinct from spoken Moroccan Arabic and Modern Standard Arabic (SA) and has no text representation. In this context, we describe some challenges in SA-to-MSL machine translation. In Arabic, word structure is not built linearly as is the case in concatenative morphological systems, which results in a large space of morphological variation. The language has a large degree of ambiguity in word senses, and further ambiguity attributable to a writing system that omits diacritics (e.g., short vowels, consonant doubling, inflection marks). The lack of diacritics, coupled with word-order flexibility, causes ambiguity in the syntactic structure of Arabic. The problem is compounded when translating into a visual/gestural language that has far fewer signs than words of the source language. In this presentation, we show how natural language processing tools are integrated into the system, present the system's architecture, and provide a demo of several input examples with different levels of complexity. Our Moroccan Sign Language database currently has 2000 graphic signs and their corresponding video clips. Extending the database is an ongoing task carried out in collaboration with MSL interpreters, deaf signers and educators in Deaf schools in different regions of Morocco.

(2) An instructional tool. Deaf school children, in general, have poor reading skills, and it is easier for them to understand text represented in sign language than in print. Several works have demonstrated that a combination of sign language and spoken/written language can significantly improve literacy and comprehension (Singleton, Supalla, Litchfield, & Schley, 1998; Prinz, Kuntz, & Strong, 1997; Ramsey & Padden, 1998). While many assistive technologies have been created for the blind, such as hand-held scanners and screen readers, there are only a few products targeting poor readers who are deaf. An example of such technology is the iCommunicator™, which translates in real time: speech to text, speech/typed text to videos of signs, and speech/typed text to computer-generated voice. This tool, however, does not generate text from scans and display it with sign-graphic support that a teacher can print, edit, and use to support reading. It also does not capture screen text. We show a set of tools aimed at improving literacy among the Deaf and Hard of Hearing. Our tools offer a variety of input and output options, including scanning, screen text transfer, sign graphics and video clips. The technology we have developed is useful to teachers, educators, healthcare professionals, speech/language pathologists, and others who need to support the understanding of Arabic text with Moroccan Sign Language signs for purposes of literacy improvement, curriculum enhancement, or communication in emergency situations.
Unsupervised Arabic word segmentation and statistical machine translation
Word segmentation is a necessary step for natural language processing applications such as machine translation and parsing. In this research we focus on Arabic word segmentation to study its impact on Arabic-to-English translation. There are accurate word segmentation systems for Arabic, such as MADA (Habash, 2007). However, such systems usually need manually built data and rules for the Arabic language. In this work, we look at unsupervised word segmentation systems to see how well they perform on Arabic without relying on any linguistic information about the language. The methodology of this research can be applied to many other morphologically complex languages. We focus on three leading unsupervised word segmentation systems proposed in the literature: Morfessor (Creutz and Lagus, 2002), ParaMor (Monson, 2007), and Demberg's system (Demberg, 2007). We also use two different segmentation schemes of the state-of-the-art MADA and compare their precision with the unsupervised systems. After training the three unsupervised segmentation systems, we apply the resulting models to segment the Arabic side of the parallel data for Arabic-to-English statistical machine translation (SMT) and measure the impact on translation quality. We also build segmentation models using the two MADA schemes on SMT to compare against the baseline system.

The 10-fold cross-validation results indicate that the unsupervised segmentation systems are usually inaccurate, with a precision below 40%, and hence do not help to improve SMT quality. We also observe that both segmentation schemes of MADA have very high precision. Of the two MADA schemes we experimented with, the scheme with a measured segmentation framework improved translation accuracy, while the scheme which performs more aggressive segmentation failed to improve SMT quality. We also provide some rule-based supervision to correct some of the errors in our best unsupervised models. While this framework performs better than the baseline unsupervised systems, it still does not outperform the baseline MT quality. We conclude that in our unsupervised framework, the noise introduced by the unsupervised segmentation offsets the potential gains that segmentation could provide to MT. We conclude that measured, supervised word segmentation improves Arabic-to-English quality; in contrast, aggressive and exhaustive segmentation introduces new noise into the MT framework and actually harms its quality.

This publication was made possible by the generous support of the Qatar Foundation through Carnegie Mellon University's Seed Research program provided to Kemal Oflazer. The statements made herein are solely the responsibility of the authors.
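As an illustration of the unsupervised route, here is a short sketch using the Morfessor 2.0 Python package (the API calls follow its documentation; the corpus file name and example word are hypothetical):

```python
import morfessor  # pip install morfessor

io = morfessor.MorfessorIO()
# Train an unsupervised segmentation model on raw (untagged) Arabic tokens.
train_data = list(io.read_corpus_file("arabic_corpus.txt"))  # hypothetical file
model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()

# Apply the model to segment the Arabic side of the SMT parallel data.
segments, cost = model.viterbi_segment("وسيكتبونها")  # "and they will write it"
print(segments)
```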
A services-oriented infrastructure for e-science
By Syed Abidi
The study of complex, multi-faceted scientific questions demands innovative computing solutions: solutions that transcend the management of big data and provide dedicated semantics-enabled, services-driven infrastructures that can effectively aggregate, filter, process, analyze, visualize and share the cumulative scientific efforts and insights of the research community. From a technical standpoint, E-Science calls for technology-enabled collaborative research platforms to (i) collect, store and share multi-modal data collected from different geographic sites; (ii) perform complex simulations and experiments using sophisticated simulation models; (iii) design complex experiments by integrating data and models and executing them as per the experiment workflow; (iv) visualize high-dimensional simulation results; and (v) aggregate and share the scientific results (Fig. 1).

Taking a knowledge management approach, we have developed an innovative E-Science platform, termed the Platform for Ocean Knowledge Management (POKM), built using web-enabled services, a services-oriented architecture, semantic web, workflow management and data visualization technologies. POKM offers a suite of E-Science services that allow oceanographic researchers to (a) handle large volumes of ocean and marine life data; (b) access, share, integrate and operationalize the data and simulation models; (c) visualize data and simulation results; (d) collaborate across sites in joint scientific research experiments; and (e) form a broad, virtual community of national and international researchers, marine resource managers, policy makers and climate change specialists (Fig. 2).

The functional objective of our E-Science infrastructure is to establish an online scientific experimentation platform that supports an assortment of data/knowledge access and processing tools to allow a group of scientists to collaborate and conduct complex experiments by sharing data, models, knowledge, computing resources and expertise. Our E-Science approach complements data-driven approaches with domain-specific, knowledge-centric models in order to establish causal, associative and taxonomic relations between (a) raw data and modeled observations; (b) observations and their causes; and (c) causes and theoretical models. This is achieved through a unique knowledge management approach whereby we exploit semantic web technologies to semantically describe the data, scientific models, knowledge artifacts and web services. The use of semantic web technologies provides a mechanism for the selection and integration of problem-specific data from large repositories. To define the functional aspects of the E-Science services, we have developed a services ontology that provides a semantic description of knowledge-centric E-Science services. POKM is modeled along a services-oriented architecture that exposes a range of task-specific web services accessible through a web portal. The POKM architecture features five layers: a Presentation Layer, Collaboration Layer, Service Composition Layer, Service Layer and Ontology Layer (Fig. 3).

POKM is applied to the domain of oceanography to understand our changing ecosystem and its impact on marine life. POKM helps researchers investigate (a) changes in marine animal movement on time scales of days to decades; (b) coastal flooding due to changes in certain ocean parameters; (c) the density of fish colonies and stocks; and (d) time-varying physical characteristics of the oceans (Figs. 4 and 5).
In this paper, we present the technical architecture and functional description of POKM, highlighting the various technical innovations and their applications to E-Science.
Informatics and technology to address common challenges in public health
By Jamie Pina
Health care systems in countries around the world are focused on improving the health of their populations. Many countries face common challenges related to capturing, structuring, sharing and acting upon various sources of information in service of this goal. Information science, in combination with information and communications technologies (ICT) such as online communities and cloud-based services, can be used to address many of the challenges encountered when developing initiatives to improve population health. This presentation by RTI International will focus on the development of the Public Health Quality Improvement Exchange (www.phqix.org), where informatics and ICT have been used to develop new approaches to public health quality improvement, a challenge common to many nations. The presentation will also identify lessons learned from this effort and the implications for Gulf Cooperation Council (GCC) countries.

This presentation addresses two of Qatar's Cross-cutting Research Grand Challenges: "Managing the Transition to a Diversified, Knowledge-based Society" and "Developed modernized Integrated Health Management". The first grand challenge is addressed by our research on the use of social networks and their relationship to public health practice environments. The second is addressed through our research in the development of taxonomies that align with the expectations of public health practitioners to facilitate information sharing [1].

Health care systems aim to have the most effective practices for detecting, monitoring, and responding to communicable and chronic conditions. However, national systems may fail to identify and share lessons gained through the practices of local and regional health authorities. Challenges include having appropriate mechanisms for capturing, structuring, and sharing these lessons in uniform, cost-effective ways. The presentation will explore how a public health quality improvement exchange, where practitioners submit and share best practices through an online portal, helps address these challenges. This work also demonstrates the advantages of a user-centered design process in creating an online resource that can successfully accelerate the learning and application of quality improvement (QI) by governmental public health agencies and their partners.

Public health practitioners at the federal, state, local and tribal levels are actively seeking to promote the use of quality improvement to improve efficiency and effectiveness. The Public Health Quality Improvement Exchange (PHQIX) was developed to assist public health agencies and their partners in sharing their experiences with QI and to facilitate increased use of QI in public health practice. Successful online exchanges must provide compelling incentives for participation, a site design that aligns with user expectations, information that is relevant to the online community, and presentation that encourages use. Target audience members (beneficiaries) include public health practitioners, informatics professionals, and officials within health authorities. This discussion aims to help audience members understand how new approaches and web-based technologies can create highly reliable and widely accessible services for critical public health capabilities, including quality improvement and data sharing.

1. Pina, J., et al. Synonym-based Word Frequency Analysis to Support the Development and Presentation of a Public Health Quality Improvement Taxonomy in an Online Exchange. Stud Health Technol Inform, 2013. 192: p. 1128.