Qatar Foundation Annual Research Forum Volume 2012 Issue 1
- Conference date: 21-23 Oct 2012
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2012
- Published: 01 October 2012
QR Cache: Linking mLearning theory to practice in Qatar
By Robert Power

Background and Objectives: Virtually ubiquitous mobile and wireless network coverage combined with high mobile device penetration creates an opportunity for mobile learning (mLearning) research to focus on linking theory and practice. The QR Cache project at College of the North Atlantic-Qatar (CNA-Q) evolved from the desire of students to use their own mobile devices, and the need for situated mLearning solutions to the training demands of Qatar's technical workforce. QR Cache was developed as a set of exemplars of situated mobile reusable learning objects (RLOs) for students studying introductory computer hardware devices and concepts. The QR Cache project uses a Design-Based Research (DBR) approach to study the development of the RLOs, as well as the link between instructional design and established learning theories. Moore's Transactional Distance Theory and Koole's FRAME model are used to provide theoretical grounding for both design decisions and results interpretation. Methods: Participants used their own mobile devices to scan Quick Response (QR) codes affixed to computer equipment. The QR codes redirected their smartphones to websites with information on the English names and some basic facts about the devices. Participants then completed an online questionnaire about their experiences. Survey responses were analyzed for indicators of transactional distance, as well as the domains of effective mLearning design outlined by the FRAME model. Results: Eight students completed the online questionnaire in the pilot phase. All participants were easily able to access the RLOs using their own mobiles. Responses indicated that they found the situated learning strategy desirable. Students also indicated that they revisited the RLOs several times and that the activities generated interaction in the form of discussions with their peers and instructors. Conclusions: Student experiences with the QR Cache RLOs demonstrate low levels of transactional distance between learners and content, their peers, and instructors. They also show a strong convergence of the learner, social and device usability aspects of the FRAME model required to optimize the mLearning experience. However, the limited number of pilot phase participants makes it difficult to generalize the findings. Expanding the research to include more participants in a subsequent phase would address this limitation.
Multi-modal biometric authentication system using face and online signature fusion
Authors: Youssef Elmir, Somaya Al-Maadeed, Abbes Amira and Abdelaali Hassaine

Background and Objectives: There is a high demand for face- and signature-based multimodal biometric systems in various areas, such as banking, biometric systems and secured mobile phone operating systems. Few studies have been carried out in this area to enhance the performance of identification and authentication based on the fusion of those modalities. In multimodal biometric systems, the most common fusion approach is integration at the matching score level, but it is necessary to compare this fusion strategy to others, such as fusion at the feature level. Our system combines these two biometric traits and provides better recognition performance compared with single biometric systems. Multimodal Authentication Systems: The first monomodal verification system is based on face verification using Gabor filters for feature extraction. The second system is based on online signature verification using Nalwa's method. The classification is carried out using the cosine Mahalanobis distance. Due to its efficiency, we used the max-of-scores strategy to fuse face and online signature scores. The second proposed system is based on fusion at the feature level. Results and Conclusions: The performance of feature-level fusion and max-of-scores fusion techniques using face and online signature modalities was evaluated on the ORL face database and the QU signature database. The lowest equal error rate is obtained by using a fusion strategy based on the maximum of the monomodal systems' scores. Additionally, feature-level fusion methods demonstrate a low equal error rate compared with the monomodal systems, and their verification time is not affected by the increase in feature vector dimension; by contrast, fusion at the score level is clearly affected and takes longer to verify because scores must be obtained from each biometric trait before the fusion step.
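The two fusion strategies compared above can be illustrated with a brief sketch. The code below is a hedged, minimal illustration rather than the authors' implementation: the min-max normalization step and the score/feature array names are assumptions introduced here for clarity.

```python
import numpy as np

def max_score_fusion(face_scores, signature_scores):
    # Score-level fusion: normalize each modality's match scores to [0, 1],
    # then keep the maximum of the two per comparison.
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    return np.maximum(minmax(face_scores), minmax(signature_scores))

def feature_level_fusion(face_features, signature_features):
    # Feature-level fusion: concatenate per-sample feature vectors from both
    # modalities before a single classifier (e.g., a distance-based matcher) is applied.
    return np.hstack([np.asarray(face_features), np.asarray(signature_features)])

# Toy usage: three comparisons per modality.
print(max_score_fusion([0.2, 0.9, 0.4], [0.6, 0.7, 0.1]))
```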
Automated essay scoring using structural and grammatical features
Automated essay scoring is a research field which is continuously gaining popularity. Grading essays by hand is expensive and time consuming; automated scoring systems can yield fast, effective and affordable solutions that would make it possible to include essays and other sophisticated testing tools in large-scale assessments. This study has been conducted on a dataset of thousands of English essay sets belonging to eight different categories provided by the Hewlett Foundation. Each category corresponds to the same question or problem statement. The score of each essay of the training set is provided in this dataset by human raters. Several features have been determined to predict the final grade. First, the number of occurrences of the 100 most frequent words in English is computed in each essay. Then, the list of average scores associated with each compounding word in the training set is determined. From this list several statistical values are considered as separate features, including the minimum, maximum, mean and median values, variance, skewness and kurtosis. These statistical features are also computed for the list of average scores associated with each compounding bigram (sequence of 2 words). Moreover, each word in the essays has been tagged using the NLTK toolkit with its grammatical role (verb, noun, adverb, etc.). The number of occurrences of each grammatical role has also been used as a separate feature. All these features have been combined using different classifiers, with random forests generally preferred. This system participated in the Automated Essay Scoring Contest sponsored by the Hewlett Foundation. The results have been evaluated using the quadratic weighted kappa error metric, which measures the agreement between the human rater and the automatic rater. This metric typically varies from 0 (only random agreement) to 1 (complete agreement). This method scored 0.76519 and ranked 13th out of 156 teams: http://www.kaggle.com/c/asap-aes/leaderboard. The proposed system combines structural and grammatical features to automatically grade essays and achieves promising performance. There is ongoing work on the extension of the developed method for short essay scoring as well as grading an unseen category of essays.
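Since the evaluation metric is central to the reported result, a short sketch of the quadratic weighted kappa computation may help. This is a generic implementation of the standard metric, not the contest's official scoring code, and the example ratings are invented.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b):
    # Agreement between two integer ratings, penalizing disagreements quadratically.
    a, b = np.asarray(rater_a, int), np.asarray(rater_b, int)
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    n = hi - lo + 1
    observed = np.zeros((n, n))
    for i, j in zip(a - lo, b - lo):
        observed[i, j] += 1
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Invented example: human scores vs. predicted scores for six essays.
print(quadratic_weighted_kappa([2, 3, 4, 4, 1, 2], [2, 3, 3, 4, 2, 2]))
```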
Modeling datalog fact assertion and retraction in linear logic
Authors: Edmund Lam and Iliano Cervesato

Practical algorithms have been proposed to efficiently recompute the logical consequences of a Datalog program after a new fact has been asserted or retracted. This is essential in a dynamic setting where facts are frequently added and removed. Yet while assertion is logically well understood as incremental inference, the monotonic nature of traditional first-order logic is ill-suited to model retraction. As such, the traditional logical interpretation of Datalog offers at most an abstract specification of Datalog systems, but has tenuous relations to the algorithms that perform efficient assertions and retractions in practical implementations. This work proposes a logical interpretation of Datalog based on linear logic. It not only captures the meaning of Datalog updates, but also provides an operational model that underlies the dynamic changes of the set of inferable facts, all within the confines of logic. We do this specifically by explicitly representing the removal of facts and enriching our linear logic interpretation of Datalog inference rules with embedded retraction rules. These retraction rules are essentially linear implications designed to exercise the retraction of consequences when base facts are targeted for retraction. As such, we can map Datalog assertion and retraction onto the forward-chaining fragment of linear logic proof search. We formally prove the correctness of this interpretation with respect to its traditional counterpart. In the future, we intend to exploit our work here to develop a rich logic programming language that integrates Datalog-style assertion and retraction with higher-order multiset rewriting.
Probing equation of state parameter fitting in parallel computers
Authors: Marcelo Castier, Ricardo Figueiredo Checoni and Andre Zuber

The accurate design of chemical processes depends on the availability of models to predict the physical properties of the materials being processed. Thermodynamic properties such as enthalpies, entropies, and fugacities are particularly important in this context. Most of the models to evaluate them have adjustable parameters, fitted to give the best possible representation of the experimental data available for a given substance or mixture. Depending on how much information is available, this may entail the use of hundreds or thousands of data points. As several modern thermodynamic models have intricate mathematical expressions, especially equations of state, using so many data points to fit their parameters leads to substantial computational effort. This makes it difficult to run the parameter fitting problem from different initial estimates, which decreases the likelihood of finding the global minimum of the objective function used for parameter fitting. Despite the fact that current desktops and laptops are capable of parallel computations, little has been done to take advantage of their computational power for equation of state parameter fitting. The authors have recently developed procedures to that end, executed on different desktop and laptop computers, which provided speedups compatible with the number of processors available. One of the procedures is based on the conventional, sequential simplex minimization algorithm with a parallel evaluation of the objective function (SSPO approach). The other procedure is based on a modified, parallel version of the simplex minimization algorithm with a sequential evaluation of the objective function (PSSO approach). In this paper, we extend the evaluation of these procedures, executing them on the Suqoor supercomputer of Texas A&M University at Qatar, using single and multiple nodes. Because of the numerical algorithm used, speedups in the PSSO approach are limited by the number of parameters to be fitted, which does not happen in the SSPO approach. On the other hand, the PSSO approach often ends at solutions with smaller objective functions, showing a greater tendency to escape local minima.
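To make the SSPO idea concrete, the sketch below runs a sequential Nelder-Mead simplex while evaluating the per-data-point residuals in parallel. It is a minimal illustration only: the placeholder "equation of state" and the synthetic data are assumptions, not the authors' thermodynamic models.

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

def residual(args):
    # Squared deviation between a placeholder "equation of state" and one data point.
    (a, b), (T, P_exp) = args
    P_model = a * T + b * T ** 2
    return (P_model - P_exp) ** 2

def objective(params, data, pool):
    # Objective evaluated in parallel over the data points (SSPO-style);
    # the simplex search itself remains sequential.
    return sum(pool.map(residual, [(params, point) for point in data]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = [(T, 2.0 * T + 0.01 * T ** 2 + rng.normal(0, 0.5))
            for T in np.linspace(300.0, 500.0, 2000)]
    with Pool() as pool:
        fit = minimize(lambda x: objective(x, data, pool), x0=[1.0, 0.0],
                       method="Nelder-Mead")
    print("fitted parameters:", fit.x)
```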
An efficient, scalable and high performance cloud monitoring framework
Authors: Suhail Rehman and Majd Sakr

Cloud computing has become a very popular platform to deploy data-intensive scientific applications, but this process faces its own set of challenges. Given the complexity of the application execution environment, routine tasks on the cloud such as monitoring, performance analysis, and debugging of applications become tedious and complex. These routine tasks often require close interaction and inspection of multiple layers in the cloud, which traditional performance monitoring tools fail to account for. In addition, many of these tools are designed for real-time analysis and only provide summaries of historical data. This makes it difficult for a user to trace the runtime performance of an application in the past. We present a new monitoring framework called All-Monitor Daemon (Almond). Almond keeps close tabs on cloud inventory by communicating with a cloud resource manager (such as VMware vCenter for a VMware private cloud). Almond then connects to each individual physical host in the inventory and retrieves performance metrics through the hypervisor. Examples of metrics include CPU, memory, disk and network usage. Almond is also designed to collect performance information from the guest OS, allowing the retrieval of metrics from the application platform as well. Almond was designed from the ground up for enhanced scalability and performance. The framework uses a Time Series Database (TSD), and a decentralized monitoring architecture allows for fast performance queries while minimizing overhead on the infrastructure. Almond collects performance data from all the layers of the software stack, and the collected data remains persistent for future analysis. Our preliminary results indicate a 70% improvement in hypervisor query response time as a result of these performance enhancements, compared to our previous monitoring solution, VOTUS. Almond is a work in progress, and will feature an intuitive web-based interface that allows system administrators and cloud users to view and analyze resources on the cloud. Once completed, Almond promises to be a highly scalable, fast-performing and dynamic cloud resource monitor.
Arabic named entity operational recognition system
Authors: Shiekha Ali Karam, Ali Jaoua and Samir Elloumi

Extracting named entities is an important step for information extraction from a text, based on a given ontology. Dealing with the Arabic language involves an additional number of challenges compared to English, French and other languages within similar families. The major difficulties involve complex morphological systems, no capitalization, and no standardization of Arabic writing. The Arabic language has a rich and complex morphological landscape due to its highly inflected nature. Usually, any Arabic lemma can be constructed using different internal structures, prefixes and suffixes. Furthermore, there is no standardization of Arabic writing because of the spelling inconsistency of Arabic words. In this work, we propose an operational hybrid approach combining dictionary-based and rule-based detection for extracting seven categories of named entities: organization name, date, interval, price/value, percentage, currency and unit. The dictionary-based approach performs exact or approximate matching of the words with prepared Arabic organization names. In cases where no exact match with the dictionary words exists, approximate matching is an efficient solution to the morphological difficulties. Specificities of the Arabic language are also processed by rule-based detection, which is based on capturing entity patterns in terms of regular expressions or patterns provided by experts. We evaluated our Arabic named entity recognition system using financial news articles and obtained a recognition rate of around 80%.
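As an illustration of the rule-based side of the approach, the snippet below matches two of the seven categories (percentage and price/value) with regular expressions. The patterns and Arabic keyword lists are invented for this example and are far simpler than the expert-provided rules described in the abstract.

```python
import re

# Hypothetical patterns: a number followed by a percent sign or an Arabic percentage word,
# and a number followed by a currency word.
PERCENTAGE = re.compile(r"\d+(?:[.,]\d+)?\s*(?:%|بالمئة|في المئة|بالمائة)")
PRICE = re.compile(r"\d+(?:[.,]\d+)?\s*(?:ريال|دولار|يورو|درهم)")

def extract_entities(text):
    entities = [("percentage", m.group(0)) for m in PERCENTAGE.finditer(text)]
    entities += [("price/value", m.group(0)) for m in PRICE.finditer(text)]
    return entities

print(extract_entities("ارتفعت الأرباح بنسبة 12.5% لتصل إلى 300 ريال"))
```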
Scalability evaluation of cluster size for MapReduce applications in elastic compute clouds
Authors: Fan Zhang and Majd F Sakr

The MapReduce programming model is a widely accepted solution to address the rapid growth of the so-called big-data processing demands. Various MapReduce applications with a huge volume of input data can run on an elastic compute cloud composed of many computing instances. This elastic compute cloud is best represented by a virtual cluster, such as Amazon EC2. Performance prediction of MapReduce applications would help in understanding their scalability pattern. However, it is challenging due to the complex interaction of the MapReduce framework and the underlying highly-parameterized virtualized resources. Furthermore, MapReduce's high-dimensional space of configuration parameters adds to the prediction complexity. We have evaluated a series of representative MapReduce applications on Amazon EC2, and identified how the cluster size affects the execution times. The scaling curves of all applications are studied to discover the scalability pattern. Our major findings are as follows: (1) The execution times of MapReduce applications follow a power-law distribution, (2) For map-intensive applications, the power-law scalability starts from a small cluster size, and (3) For reduce-intensive applications, the power-law scalability starts from a larger cluster size. We attempted to fit our scalability performance results using three regression methods: polynomial regression, exponential regression and power regression. By measuring the Root Mean Squared Error (RMSE), the power regression performs best at performance prediction compared with the other methods evaluated. This was the case across all the benchmark applications studied. Our performance prediction methods will aid cloud users in choosing appropriate computing resources, both virtual and physical, from small-scale experimental test runs for cost saving.
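A minimal sketch of the power regression step is shown below: runtimes from small test runs are fitted to T(n) = a·n^b and the fit is used to extrapolate to a larger cluster. The cluster sizes and runtimes are invented numbers, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b):
    # Execution time modeled as T(n) = a * n^b for cluster size n.
    return a * np.power(n, b)

def rmse(y, y_hat):
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Invented small-scale measurements: cluster sizes and runtimes in seconds.
sizes = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
times = np.array([820.0, 445.0, 240.0, 140.0, 82.0])

(a, b), _ = curve_fit(power_law, sizes, times, p0=[1500.0, -1.0])
print("T(n) = %.1f * n^%.2f, RMSE = %.1f s" % (a, b, rmse(times, power_law(sizes, a, b))))
print("extrapolated runtime for 64 instances: %.1f s" % power_law(64.0, a, b))
```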
Design Considerations for Content and Personality of a Multi-lingual Cross-Cultural Robot
Authors: Micheline Ziadee, Nawal El Behih, Lakshmi Prakash and Majd Sakr

Our aim is to develop a culturally aware robot capable of communicating with people from different ethnic and cultural backgrounds and performing competently in a multi-lingual, cross-cultural context. Our test bed is a female robot receptionist, named Hala, deployed at the reception area of Carnegie Mellon University in Qatar. Hala answers questions in Arabic and English about people, locations of offices, classrooms and other rooms in the building. She also provides information about the weather, Education City, and her personal life. Our first model, Hala 1.0, was a bilingual robot extending an American model whose personality and utterances conform to American culture. Three years of interaction logs have shown that 89% of Hala 1.0's interactions were in English. We conjecture that this is due to the robot's poor ability to equally portray both Arabic and American cultures and to its limited Arabic content. In order to investigate cultural factors that significantly bear on communication, we developed Hala 2.0, also a bilingual robot, designed to be an Arab-American robot with more Arabic features in appearance, expression and interaction. The robot's personality is constructed taking into account the socio-cultural context in which its interactions will take place. To achieve bilingualism we had to create symmetry between Arabic and English linguistic content. Since the robot's utterances were developed primarily in English, we resorted to translating them into Arabic and adapting them to the constraints of our socio-cultural context. Since Arabic is a highly inflected language, we adopted the plural case in formulating the robot's replies so as to avoid gender bias. To improve query coverage, we added word synonyms, including context-related synonyms (e.g., هل تحبين عملك؟ / هل يعجبك عملك؟) and different formulations for the same question (e.g., do you sleep? / do you go to sleep?, and هل تنامين؟ / أتنامين؟). Furthermore, based on three years of recorded query logs, we expanded the range of topics that the robot is knowledgeable about by adding 3000 question/answer sentences to increase the robot's capacity for engaging users. All content and utterances were developed to align with the robot's designed personality traits.
Performance of spectrum sharing systems with two-way relaying and multiuser diversity
Authors: Liang Yang, Mohamed-Slim Alouini and Khalid Qaraqe

In this paper, we consider a spectrum sharing network with two-way relaying and multiuser diversity. More specifically, the secondary transmitter with the best channel quality is selected and splits part of its power to relay its received signals to the primary users using the amplify-and-forward relaying protocol. We derive a tight approximation for the resulting outage probability. Based on this formula, the performance of the spectrum sharing region and the cell coverage are analyzed. Numerical results are given to verify our analysis and are discussed to illustrate the advantages of our newly proposed scheme.
Identifying stressful and relaxation activities using an ambulatory monitoring device
Authors: Hira Khan, Beena Ahmed, Jongyoon Choi and Ricardo Gutierrez-Osuna

Background and Objective: The Autonomic Nervous System (ANS) regulates physiologic processes autonomously through the sympathetic (SNS) and parasympathetic (PNS) systems, with both working in balance; for example, sympathetic input accelerates the heart rate and prepares the body for emergencies, while parasympathetic input slows the heart rate and relaxes the body. Stress can lead to imbalances in these two systems which can harm the human body. Persistent imbalances caused by chronic stress may trigger diseases such as hypertension, diabetes, asthma and depression, and also lead to social problems. In this paper we discuss the effectiveness of a wearable physiological monitoring device in identifying the response of subjects to stressful and relaxation activities, in order to monitor the long-term impact of stress. Methods: To achieve this objective we developed a body sensor network to wirelessly monitor heart rate, respiratory rate and skin conductance. We collected data while subjects performed mental challenges, chosen to measure a range of stress responses, interleaved with deep breathing activities, which they also assessed. We examined the data using several measures of heart rate variability: spectral power in the low-frequency (HRV-LF) and high-frequency (HRV-HF) ranges, the mean (AVNN) and standard deviation of successive RR intervals, the proportion of successive RR intervals differing by more than 25 msec (pNN25), and the root mean square of successive differences of RR intervals (RMSSD). The respiratory effect was evaluated using normalized respiratory high- and low-frequency components and their ratio. To assess the impact on skin conductance, the mean and standard deviation of the slowly varying tonic skin conductance level (SCL) and the rapidly varying phasic skin conductance response (SCR) were computed. Results: An analysis of the computed features indicated that not all features were able to accurately identify the impact of stress on the subjects. The HRV and skin conductance measures were more highly correlated with stress levels, with the best discrimination obtained using AVNN, RMSSD, pNN25, HRV-HF, SCL mean and SCR standard deviation. Conclusions: This study has shown that it is possible to extract features from physiological signals that can be transformed into meaningful measures of individual stress.
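The time-domain HRV measures named above (AVNN, SDNN, RMSSD, pNN25) are straightforward to compute from a series of RR intervals; the short sketch below shows generic formulas on an invented RR series, not the study's processing pipeline.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    # rr_ms: successive RR intervals in milliseconds.
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "AVNN": rr.mean(),                      # mean of the RR intervals
        "SDNN": rr.std(ddof=1),                 # standard deviation of the RR intervals
        "RMSSD": np.sqrt(np.mean(diff ** 2)),   # root mean square of successive differences
        "pNN25": np.mean(np.abs(diff) > 25.0),  # proportion of successive differences > 25 ms
    }

# Invented RR series (ms) for illustration.
print(hrv_time_domain([812, 790, 805, 830, 795, 788, 810, 842, 808]))
```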
Computational intelligence in power electronics and electric drive control
Authors: Mohammad Jamil, Atif Iqbal and Mohammad Al-Naemi

Power electronic converters are finding growing applications in industrial and household devices. The growing automation in industrial applications needs highly complex power electronics systems to process electric power. The control of power electronic converters still poses challenges due to the precision required. Several control strategies have been developed and reported in the literature, including pulse width modulation (PWM), model predictive control, and sliding mode control. Computational intelligence techniques include artificial intelligence, fuzzy logic, adaptive neuro-fuzzy inference systems, and genetic algorithms. The basic idea is to incorporate human intelligence in the control system so that it makes intelligent decisions. The paper presents an overview of the application of computational intelligence techniques for the control of power electronic converters. Computational Intelligence (CI) has been in focus for quite a long time, and it is well known that CI techniques can help in solving complex multidimensional problems which are difficult to solve by conventional methods. Computational intelligence technology is growing rapidly and its applications in various fields are being tested. Power electronic converters are one of the major application areas where this technology can play a vital and decisive role. The recent development of powerful digital signal processors and field programmable gate arrays is making the implementation of computational intelligence technology economical, with improved performance, compactness and competitiveness. Evidently, the future impact of CI technology on power electronic converter control is very significant and utilizable.
Real-time online stereo camera extrinsic re-calibration
Authors: Peter Hansen, Brett Browning, Peter Rander and Hatem Alismail

Stereo vision is a common sensing technique for mobile robots and is becoming more broadly used in automotive, industrial, entertainment, and consumer products. The quality of range data from a stereo system is highly dependent on the intrinsic and extrinsic calibration of the sensor head. Unfortunately, for deployed systems, drift in extrinsic calibration is nearly unavoidable. Thermal variation and cycling, combined with shock and vibration, can cause transitory or permanent changes in extrinsics that are not modeled accurately by a static calibration. As a result, the quality of the sensor data degrades significantly. We have developed a new approach that provides real-time continuous calibration updates to the extrinsic parameters. Our approach optimizes the extrinsic parameters to reduce epipolar errors over one or multiple frames. A Kalman filter is used to continually refine these parameter estimates and minimize inaccuracies resulting from visual feature noise and spurious feature matches between the left and right images. The extrinsic parameter updates can be used to re-rectify the stereo imagery. Thus, it serves as a pre-processing step for any stereo process, ranging from dense reconstruction to visual odometry. We have validated our system in a range of environments and stereo tasks and demonstrated it at the recent Computer Vision and Pattern Recognition conference. Significant improvements to stereo visual odometry and scene mapping accuracy were achieved for datasets collected using both custom-built and commercial stereo heads.
Concurrency characterization of MapReduce applications for improved performance on the cloud
Authors: Mohammad Hammoud and Majd Sakr

Driven by the increasing and successful prevalence of MapReduce as an analytics engine on the cloud, this work characterizes the Map phase in Hadoop MapReduce to guide its configuration and improve overall performance. MapReduce is one of the most effective realizations of large-scale data-intensive cloud computing platforms. Hadoop is an open source implementation of MapReduce and is currently enjoying wide popularity. Hadoop has a high-dimensional space of configuration parameters (~200 parameters) that poses a burden on practitioners, such as computational scientists, system researchers, and business analysts, to set for efficient and cost-effective execution. In this work we observe that MapReduce application performance is highly influenced by Map concurrency, defined in terms of two configurable parameters: the number of available map slots and the number of map tasks running over the slots. As Map concurrency is varied, we show that some inherent MapReduce characteristics allow systematic and well-informed prediction of the MapReduce performance response (runtime increase or decrease). We propose Map Concurrency Characterization (MC2), a predictor for MapReduce performance response. MC2 allows for optimized configuration of the Map phase and, consequently, enhanced Hadoop performance. Current related schemes require mathematical modeling, simulation, dynamic instrumentation, static analysis of unmodified MapReduce application code, and/or actual performance measurements. In contrast, MC2 simply bases its decisions on MapReduce characteristics that are affected by Map concurrency. We implemented MC2 and conducted comprehensive experiments on a private cloud and on Amazon EC2 using Hadoop 0.20.2. Our results show that MC2 can correctly predict MapReduce performance response and provide up to 2.3X speedup in runtime for the tested benchmarks. This performance improvement allows MC2 to further serve in reducing cost in a cloud setting. We believe that MC2 offers a timely contribution to the data analytics domain on the cloud, especially as Hadoop usage continues to grow beyond companies like Google, Microsoft, Facebook and Yahoo!.
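To illustrate the notion of Map concurrency, the toy sketch below counts Map "waves" (batches of map tasks that fit into the available slots) and predicts the direction of the runtime response when the slot count changes. This is a drastically simplified stand-in for MC2, whose actual prediction logic is not reproduced here.

```python
import math

def map_waves(num_map_tasks, num_map_slots):
    # Map tasks execute in waves: each wave occupies all available map slots.
    return math.ceil(num_map_tasks / num_map_slots)

def predicted_response(tasks, slots_before, slots_after):
    # Coarse prediction of the runtime response: fewer waves -> runtime tends to
    # decrease, more waves -> runtime tends to increase.
    before, after = map_waves(tasks, slots_before), map_waves(tasks, slots_after)
    return "decrease" if after < before else "increase" if after > before else "similar"

# Example: 400 map tasks, moving from 50 to 100 map slots.
print(predicted_response(400, 50, 100))
```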
Performance analysis of distributed beamforming in a spectrum sharing system
Authors: Liang Yang, Mohamed-Slim Alouini and Khalid Qaraqe

In this paper, we consider a distributed beamforming (DBF) scheme in a spectrum sharing system where multiple secondary users share the spectrum with the licensed primary users under an interference temperature constraint. We assume that DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of the outage probability and bit-error-rate performance. Since perfect feedback is difficult to obtain, we then investigate a limited-feedback DBF scheme and develop an outage probability analysis for a random vector quantization (RVQ) design algorithm. Numerical results are provided to illustrate our mathematical formalism and verify our analysis.
Use of emerging mobile computer technology to train the Qatar workforce
Authors: Mohamed Ally, Mohammed Samaka, John Impagliazzo and Adnan Abu-Dayya

Background: According to the Qatar National Vision 2030, Qatar residents are encouraged to implement information and communication technology (ICT) initiatives in government, business, and education in pursuit of a knowledge-based society that embraces innovation, entrepreneurship, and excellence in education. This research project, which is funded by the Qatar National Research Fund (QNRF) under the National Priority Research Program (NPRP), contributes to this vision by investigating the use of innovative training technology to train Qataris so that they are prepared for the 21st-century workforce. Specifically, this research project investigates the use of mobile computer technology, such as mobile phones, tablet computers, and handheld computers, to train Qatar residents on workplace English so that they can become more effective when communicating in the workplace. This presentation will share the results of a preliminary study that was conducted. The project will be expanded using the "Framework for the Rational Analysis of Mobile Education" (FRAME) model (Figure 1), which describes the convergence of mobile technologies, human learning capacities, and social interaction. Objectives: The research evaluates the effectiveness of the mobile computer technology training and its transferability to the Qatar workplace environment. Methods: A total of 27 trainees participated in this study. They were given a pre-test, followed by the mobile learning training and then a post-test. Results: Overall, the learners' performance improved by 16 percent after completing the training with mobile technology. Ninety-four percent of subjects said that the quality of the presentation on the mobile technology was either excellent, good, or fair. One hundred percent of subjects reported that the mobile technology helped them learn. Conclusion: The delivery of training using mobile computer technology was well received by learners. They liked the interactive and innovative nature of the training.
Novel applications of optical flow in measuring particle fields in dense crowded scenes at Hajj 2011
Authors: Khurom Hussain Kiyani, Emanuel Aldea and Maria Petrou

Background & Objectives: The Hajj and the minor Muslim pilgrimage of Umrah present some of the most densely crowded human environments in the world, and thus offer an excellent testbed for the study of dense crowd dynamics. Accurate characterisation of such crowds is crucial to improve simulations that are ubiquitously applied to crowded environments such as train stations, and which require a high degree of detailed parameterisation. Accurate measurements of key crowd parameters can also help to develop better strategies for mitigating disasters such as the tragic stampede of 2006 that killed over 300 pilgrims during the Hajj. With Qatar set to be one of the major cultural centres in the region, e.g. hosting the 2022 FIFA World Cup, the proper control and management of large singular events is paramount for safety and for Qatar's standing on the international stage. We aim to use the unique video data gathered from Hajj 2011 to assess the dynamics of very dense crowded environments, with a particular focus on dangerous crowd instabilities and systemic shocks. Methods: We make use of increasingly complex optical flow algorithms (Horn-Schunck, Lucas-Kanade, TV-L1) to extract the instantaneous velocity field between each pair of frames in the videos. From these velocity vector fields we then construct the pedestrian (Lagrangian) flow field by the use of texture advection techniques that initially seed the flow with particles or random noise. Results: We present results of the above application of optical flow and texture advection methods to the data we collected in a field study during Hajj 2011. In particular, we aim to illustrate the specific flow patterns that arise in such crowded environments. We also aim to present the preliminary results of a pilot multiple-camera stereovision study conducted in the London Central Mosque on a Friday when the mosque was particularly crowded.
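For readers unfamiliar with the velocity-field extraction step, the sketch below computes dense optical flow between consecutive frames of a crowd video. The study used Horn-Schunck, Lucas-Kanade and TV-L1; this example substitutes OpenCV's readily available Farneback routine, and the video filename is hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd.mp4")   # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense per-pixel displacement field between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)          # pixels per frame
    print("mean speed: %.2f px/frame" % speed.mean())
    prev_gray = gray

cap.release()
```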
MetaSimulo: An automated simulation of realistic 1H-NMR spectra
Authors: Zeinab Atieh and Halima Bensmail

Qatar is now accumulating important expertise in biomedical data analytics. At QCRI, we are interested in supporting biomedical researchers according to their computational needs and in developing tools for data analytics in biomedical research. When computers extract patterns and classifiers from a body of data, these can be used to make predictions that help in identifying a threat or disease early. One non-invasive, powerful technique for detecting and quantifying bio-markers linked to diseases (metabolites) is Nuclear Magnetic Resonance (NMR) spectroscopy. 1H NMR spectroscopy is commonly used in the metabolic profiling of biofluids. Metabolites in biofluids are in dynamic equilibrium with those in cells and tissues, so their metabolic profile reflects changes in the state of an organism due to disease or environmental effects. The analysis of signals obtained from patients may be performed via methods which incorporate prior knowledge about the metabolites that contribute to the 1H NMR spectroscopic signals, recorded in a metabolite dataset. This paper presents a novel model and computationally automated approach that allows for the simulation of datasets of NMR spectra in order to test real data analysis techniques, hypotheses and experimental designs. Unlike others, this model generates NMR spectra of biofluids unlimited by the magnetic field or pH. It is simple to implement, requires little storage, and is easy to compute and compare. Moreover, it can treat metabolites with a relatively high number of 1H nuclei due to a special programming technique based on physical properties. This model can open the door to a new technique of metabolite quantification and thus a better determination of metabolite concentrations, which is key to disease identification. The area of NMR is expanding rapidly and holds great promise in terms of the discovery of potential biomarkers of diseases, such as diabetes, an area of increasing concern in Qatar, and cancer, the third leading cause of death in Qatar.
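A simulated 1H-NMR spectrum can, at its simplest, be built as a sum of lineshapes positioned at the chemical shifts of each metabolite's resonances. The sketch below does exactly that with Lorentzian peaks and invented peak positions; it is a didactic simplification and not the MetaSimulo model, which additionally accounts for field strength, pH and coupling effects.

```python
import numpy as np

def lorentzian(ppm, center, width):
    # Normalized Lorentzian lineshape centered at `center` with half-width `width` (ppm).
    return (width / np.pi) / ((ppm - center) ** 2 + width ** 2)

def simulate_spectrum(ppm_axis, peaks):
    # peaks: iterable of (chemical_shift_ppm, relative_intensity, linewidth_ppm).
    spectrum = np.zeros_like(ppm_axis)
    for shift, intensity, width in peaks:
        spectrum += intensity * lorentzian(ppm_axis, shift, width)
    return spectrum

ppm = np.linspace(0.0, 10.0, 20000)
# Invented resonances for two hypothetical metabolites.
peaks = [(1.32, 3.0, 0.005), (1.34, 3.0, 0.005),   # a doublet near 1.33 ppm
         (3.03, 2.0, 0.005), (4.11, 1.0, 0.006)]
spectrum = simulate_spectrum(ppm, peaks)
print("spectrum points:", spectrum.size, "max intensity:", round(float(spectrum.max()), 2))
```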
MegaMorph: Multi-wavelength measurements of nearby galaxy structures
Authors: Marina Vika, Steven P Bamford, Boris Haeussler and Alex Rojas

Fitting an analytic function to the two-dimensional surface brightness profile of a galaxy provides a powerful method of quantifying its internal structure. The resulting parameters reveal the size, shape and luminosity of the galaxy and its separate structural components (e.g., disk and bulge). Current galaxy fitting software packages consider only a single waveband image at a time. However, variations in stellar populations between and within galaxy structures mean that their observed properties depend on wavelength. Correctly studying the physical properties of galaxy components requires that these wavelength variations be accounted for. Multi-wavelength studies are presently possible, but require significant compromises: either the fits to each band must be done independently, or one band must be favored for determining structural parameters, which are then imposed on fits to the other bands. Both of these approaches waste valuable information, and therefore result in suboptimal decompositions. Our project, 'MegaMorph', is developing a next-generation tool for decomposing galaxies, in terms of both their structures and stellar populations. We aim to present a modified version of the two-dimensional galaxy fitting software GALFIT, developed as part of our MegaMorph project. These new additions enable a galaxy to be fit using images at many different wavelengths simultaneously. All the available data are therefore used to constrain a wavelength-dependent model of the galaxy, resulting in more robust, physically meaningful component properties. We verify the advantages of our technique by applying it to a sample of 160 well-studied galaxies at redshifts smaller than 0.025, with ugriz imaging from the Sloan Digital Sky Survey, to demonstrate how the resulting decompositions allow us to study the links between stellar population and galaxy structure in detail. Furthermore, we illustrate the advantages of our new method with regard to galaxy surveys by fitting a sample of ~4000 artificially redshifted images of the galaxies described above. Our technique enables physical parameters of galaxy components to be robustly measured at lower signal-to-noise and resolution than would otherwise be possible. This paves the way for detailed statistical studies of the physical properties of galaxy disks and bulges.
Random subcarrier allocation in OFDM-based cognitive radio networks
Authors: Sabit Ekin, Mohamed M. Abdallah, Khalid A. Qaraqe and Erchin Serpedin

Advances in wireless communications technologies (e.g., 3G, 4G and beyond) entail demands for higher data rates. A well-known and popular solution to fulfill this requirement was to allocate additional bandwidth, which is unfortunately no longer viable due to radio-frequency (RF) spectrum scarcity. Nonetheless, spectrum measurements around the globe have revealed that the available spectrum is under-utilized. One of the most remarkable solutions to cope with this under-utilization is the concept of cognitive radio (CR). In CR systems, the main implementation issues concern spectrum sensing: uncertainties in the propagation channel, the hidden primary user (PU) problem, sensing duration, and security. Hence, the accuracy and reliability of the spectrum sensing information can be inherently questionable. There has been no study to date that investigates the impact of the absence of spectrum sensing information in CR networks. In this work, motivated by the imprecise and unreliable nature of spectrum sensing information, we investigate the performance of an orthogonal frequency-division multiplexing (OFDM)-based (4G and beyond) CR spectrum sharing communication system that assumes random allocation and the absence of the PU's channel occupation information, i.e., no spectrum sensing is employed to acquire information about the availability of unused subcarriers or the PU's activity. The results show that, due to the lack of information about the PUs' activities, the SU randomly allocates the subcarriers of the primary network and collides with the PUs' subcarriers with a certain probability. The number of subcarrier collisions is found to follow a hypergeometric distribution. The SU's capacity with subcarrier collisions is employed as a performance measure to investigate the proposed random allocation scheme for both general and Rayleigh channel fading models. To avoid subcarrier collisions at the SUs due to the random allocation and to obtain the maximum sum rate for SUs based on the available subcarriers, an efficient centralized sequential algorithm is proposed and analyzed. The performance of such a communication set-up can provide various insights into studies in the CR literature, and it can be utilized as a valid candidate for performance comparison benchmarks in CR spectrum sharing systems with the availability of spectrum sensing information.
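The hypergeometric collision model mentioned above is easy to reproduce numerically: if F of N subcarriers are occupied by primary users and the secondary user picks k of them uniformly at random, the number of collisions follows a hypergeometric law. The numbers below are illustrative, not the paper's system parameters.

```python
from scipy.stats import hypergeom

N, F, k = 128, 40, 16          # total, PU-occupied, and randomly chosen subcarriers
collisions = hypergeom(M=N, n=F, N=k)

print("P(no collision)     :", collisions.pmf(0))
print("expected collisions :", collisions.mean())
print("P(collisions > 5)   :", collisions.sf(5))
```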
Analysis of mobility impact on interference for short-range cognitive radio networks
Authors: Ali Riza Ekti, Erchin Serpedin and Khalid A Qaraqe

Background & Objectives: Cognitive radios (CRs) strive to utilize the white holes in the radio frequency (RF) spectrum in an opportunistic manner. Because interference is an inherent and very critical design parameter for all sorts of wireless communication systems, many of the recently emerging wireless technologies prefer smaller coverage areas with reduced transmit power in order to decrease interference. Prominent examples of short-range communication systems trying to achieve low interference power levels are CR relays in CR networks and femtocells in next generation wireless networks (NGWNs). It is clear that a comprehensive interference model including mobility is essential, especially in elaborating the performance of such short-range communication scenarios. This work focuses on analyzing how interference evolves in time under long- and short-term fading. Such an analysis is essential, because once the interference behavior is understood, it can be managed in a better way. Also, NGWNs and CRs can be designed in such a way that the arduous and expensive planning stage is omitted. This way, deployment costs can be reduced drastically. Methods: It is known that the received signal in a general wireless propagation environment includes the effects of both long- and short-term fading. Therefore, a logarithmic transformation reveals the individual impact of each fading phenomenon. The two-dimensional (2D) random walk model is incorporated into the physical layer signal model. Results: The results show that relatively large displacements over short primary-user-receiver (PU-Rx) and secondary-user-transmitter (SU-Tx) separations lead to drastic fluctuations in the observed interference power levels. Path loss is one of the major factors changing the future interference conditions for low-speed mobility scenarios, especially within short communication ranges. Conclusions: It is shown that long-term fading plays a crucial role in the temporal evolution of interference for NGWNs and CRs. By using the interference statistics, the design and deployment of future cellular mobile radio systems in Qatar could be optimized. This is crucial, especially for rapidly changing network topographies, as is the case with the city of Doha. Even for well-established network topographies, the proposed method provides an analytical way of examining and managing the interference.
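As a toy illustration of the method, the sketch below moves a secondary transmitter on a 2D random walk and tracks the interference received at a fixed primary receiver under a simple path-loss plus log-normal shadowing model. The path-loss exponent, shadowing spread and geometry are assumptions made for this example only.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk_2d(steps, step_size):
    # Uniform-direction 2D random walk of the SU transmitter.
    angles = rng.uniform(0.0, 2.0 * np.pi, steps)
    steps_xy = np.column_stack([np.cos(angles), np.sin(angles)]) * step_size
    return np.cumsum(steps_xy, axis=0)

def interference_dbm(positions, pu_rx, tx_dbm=10.0, exponent=3.5, shadow_db=4.0):
    # Received interference: transmit power minus distance path loss plus log-normal shadowing.
    d = np.linalg.norm(positions - pu_rx, axis=1) + 1.0   # +1 m avoids the singularity at d = 0
    return tx_dbm - 10.0 * exponent * np.log10(d) + rng.normal(0.0, shadow_db, len(d))

path = random_walk_2d(steps=500, step_size=0.5) + np.array([20.0, 0.0])  # start 20 m from the PU-Rx
levels = interference_dbm(path, pu_rx=np.array([0.0, 0.0]))
print("5th-95th percentile interference: %.1f to %.1f dBm" % tuple(np.percentile(levels, [5, 95])))
```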
Image processing on the Cloud: Characterizing edge detection on biomedical images
Authors: Majd Sakr, Mohammad Hammoud and Manoj Dareddy Reddy

In order to analyze and deduce valuable information from big image data, we have developed a framework for distributed image processing in Hadoop MapReduce. A vast amount of scientific data is now represented in the form of images from sources like medical tomography. Applying algorithms to these images has been continually limited by the processing capacity of a single machine. MapReduce, created by Google, presents a potential solution. MapReduce efficiently parallelizes computation by distributing tasks and data across multiple machines. Hadoop, an open source implementation of MapReduce, is gaining widespread popularity due to features such as scalability and fault tolerance. Hadoop is primarily used with text-based input data. Its ability to process image data and its performance behavior with image processing have not been fully explored. We propose a framework that efficiently enables image processing on Hadoop and characterizes its behavior using a state-of-the-art image processing algorithm, edge detection. Existing approaches to distributed image processing suffer from two main problems: (1) input images need to be converted to a custom file format and (2) image processing algorithms require adherence to a specific API that might impose some restrictions on applying some algorithms to Hadoop. Our framework avoids these problems by: (1) bundling all small images into one large file that can be seamlessly parsed by Hadoop and (2) relaxing any restriction by allowing a direct porting of any image processing algorithm to Hadoop. A Reduce-less job is then launched in which the code for processing images and a mechanism to write the images back individually to HDFS are included in the Mappers. We have tested the framework using edge detection on a dataset of 3760 biomedical images. In addition, we characterized edge detection along several dimensions, such as degree of parallelism and network traffic patterns. We observed that varying the number of map tasks has a significant impact on Hadoop's performance. The best performance was obtained when the number of map tasks equals the number of available slots, as long as the application resource demand is satisfied. Compared to the default Hadoop configuration, a speedup of 2.1X was achieved.
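For context, the per-image work dispatched to each map task could be as simple as a gradient-based edge detector. The sketch below shows a generic Sobel edge map on a synthetic image; the actual algorithm and parameters used in the framework are not specified in the abstract, so this is purely illustrative.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=0.2):
    # Gradient magnitude via horizontal and vertical Sobel filters,
    # thresholded into a binary edge map.
    img = image.astype(float) / 255.0
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

# Synthetic test image: a bright square on a dark background.
test = np.zeros((64, 64), dtype=np.uint8)
test[16:48, 16:48] = 255
print("edge pixels detected:", int(sobel_edges(test).sum()))
```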
Analysing existing set theoretic and program specification diagrams
Authors: Noora Fetais and Peter Cheng

This research is aimed at evaluating the usability of notational systems that are used for specifying programs. We provide a conceptual analysis of the development of constraint diagrams (CD), a diagrammatic notation which was developed to support program specification. A detailed analysis of multi-case comparisons of formal languages and graphical systems for expressing logic and program constructs was conducted in order to trace how the development of the notations overcame the limitations of earlier generations of notations. By following the evolution of logic diagrams, we consider why and how they have been successively revised to increase their expressivity or their ease of comprehensibility and use. Visualizations of logic have been proposed over the centuries. Leonhard Euler presented Euler diagrams, John Venn generalized Euler diagrams and presented Venn diagrams, and Charles Peirce extended Venn diagrams by increasing their expressiveness and presented Venn-Peirce diagrams. The modifications from Peirce to Shin concentrate on restoring visual clarity, but without loss of expressive power. Based on that, Constraint Diagrams were presented by Stuart Kent for constraint specification, behavioural specification, and relational navigation. We found that the gradual changes in diagrams from Euler Circles through Venn, Peirce, and Shin to Constraint Diagrams share three complementary common themes: (1) to increase the expressiveness, (2) to increase the logical power and formality of the system, and (3) to enhance the visual clarity. Depending on the purpose of a diagram, one theme is given priority over the others. For example, both Venn and Peirce adopted the same kind of solution in order to achieve these improvements: to introduce new syntactic objects, that is, shadings by Venn, and x's, o's, and lines by Peirce. However, on the negative side, these revised systems suffer from a loss of visual clarity, mainly because of the introduction of more arbitrary conventions. The extension from these diagrams resulted in CD, which allows relational navigation (expressions involving two-place predicates), is more expressive than previous diagrams, and has higher visual clarity and logical power.
Empirical evaluation of constraint diagrams notation
Authors: Noora Fetais and Peter Cheng

An empirical evaluation of constraint diagrams (CD) as a program specification language was conducted by comparing them to natural language (NL) with computer science students. The evaluation included two experiments: one on the interpretation of CD, to evaluate comprehension of the notational system, and the other on the construction of program specifications. The first experiment took the form of a web-based competition in which 33 participants were given instructions and training either on CD or on equivalent NL specification expressions. After each example, they responded to three multiple-choice questions requiring the interpretation of expressions in their particular notation. Although the CD group spent more time on the training and had less confidence, they obtained interpretation scores comparable to the NL group and took less time to answer the questions, despite having no prior experience with CD notation. In the second experiment, which focused on the construction of CD, 20 participants were given instructions and training either on CD or on equivalent NL specification expressions. After each example, they responded to three questions requiring the construction of expressions in their particular notation. We built a mini-editor to allow the construction of the two notations, which automatically logged participants' interactions. Although the CD group supplied more accurate answers, they spent more time answering the questions. The NL group supplied answers that were partially correct but with some missing information. Moreover, the CD group spent more time in training, but their returns to the training examples were fewer than those of the NL group.
Minimum-selection maximum ratio transmission schemes in underlay cognitive radio systems
Authors: Zied Bouida, Ali Ghrayeb and Khalid Qaraqe

Under the scenario of an underlay cognitive radio network, we introduce the concept of minimum-selection maximum ratio transmission (MS-MRT). Inspired by the mode of operation of the minimum-selection generalized selection combining (MS-GSC) technique, the main idea behind MS-MRT is to present an adaptive variation of the existing maximum ratio transmission (MRT) technique. While in the MRT scheme all the transmit antennas are used for transmission, in MS-MRT only a subset of antennas verifying the interference constraint to the primary receiver is adaptively selected and optimally beamformed in order to meet a given modulation requirement. The main goal of these schemes is to maximize the capacity of the secondary link while satisfying the bit error rate (BER) requirement and a peak interference constraint to the primary link. The performance of the proposed schemes is analyzed in terms of the average spectral efficiency, the average number of antennas used for transmission, the average delay, and the average BER performance. These results are then compared to the existing bandwidth-efficient and switching-efficient schemes (BES and SES, respectively). The obtained analytical results are then verified with selected numerical examples obtained via Monte-Carlo simulations. We demonstrate through these examples that the proposed schemes improve the spectral and delay performance of the SES and BES schemes and are better suited to delay-sensitive applications. The proposed schemes also offer lower processing-power consumption than the MRT scheme, since a minimum number of antennas is used for communication in the MS-MRT schemes. The MS-MRT techniques represent power- and spectrally-efficient schemes that can be extended to more practical scenarios. As an example, these schemes can be studied in the context of Long Term Evolution (LTE) networks, where adaptive modulation, beamforming, and interference management are among the major enabling techniques.
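A rough numerical sketch of the minimum-selection idea is given below: antennas are activated in order of secondary-channel strength, MRT weights are formed on the active subset, and activation stops once an SNR target is met while the interference caused at the primary receiver stays within the limit. This is a simplified stand-in: the channel model, power scaling and stopping rule are assumptions, not the paper's exact adaptive scheme.

```python
import numpy as np

def ms_mrt(h_su, g_pu, p_max, interference_limit, snr_target, noise=1.0):
    # Activate antennas strongest-first, beamform with MRT weights on the active
    # subset, and stop at the smallest subset meeting the SNR target while the
    # interference at the primary receiver stays below the limit.
    order = np.argsort(-np.abs(h_su))
    for k in range(1, len(h_su) + 1):
        idx = order[:k]
        w = h_su[idx].conj() / np.linalg.norm(h_su[idx])          # MRT weights
        p = min(p_max, interference_limit / (np.abs(g_pu[idx] @ w) ** 2 + 1e-12))
        snr = p * np.abs(h_su[idx] @ w) ** 2 / noise
        if snr >= snr_target:
            return idx, p, snr
    return order, p, snr        # fall back to using all antennas

rng = np.random.default_rng(3)
h = (rng.normal(size=6) + 1j * rng.normal(size=6)) / np.sqrt(2)   # secondary-link channels
g = (rng.normal(size=6) + 1j * rng.normal(size=6)) / np.sqrt(2)   # channels to the primary receiver
print(ms_mrt(h, g, p_max=1.0, interference_limit=0.1, snr_target=4.0))
```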
Wireless smart home monitoring and surveillance system
Wireless technologies such as GSM/GPRS and Wi-Fi have taken on an important role in our daily life. The noticeable improvement of these technologies has led to advancements in home automation, where people can enjoy comfortable and secure living places. This project proposes a ZigBee-based, cost-effective solution to control and monitor home appliances remotely and to enable home security against intrusion in the absence of the homeowner, using the wireless sensor network kit from National Instruments. The wireless sensor network, connected to a GSM modem, can send and receive alarm and control SMS messages, and the system can also be controlled remotely through an internet web page. A security camera is also embedded within the system to send an alarm SMS and snap an image when a visitor or intruder is detected. The system is implemented and evaluated across different sample applications spanning home security, climate control, hazard alarming, and appliance control with remote access and control features to demonstrate the feasibility and effectiveness of the proposed system. Moreover, a small-scale live demo model of the smart house was constructed to prove the concept.
Malware.inc: analyzing the security of popular web/mobile frameworks
Authors: Talal Al Haddad, Manoj Reddy, Fahim Dalvi, Baljit Singh, Ossama Obeid, Rami Al Rihawi, Omar Abou Selo, Aly Elgazar and Thierry Sans

Background and Objectives: A new generation of software emerged with mobile devices, cloud computing and the web. New usages come with new security threats, and a new generation of malware (malicious software) is emerging as well. Recent security reports show that these malware are on the increase. The goal of this project is to evaluate the risk of exposure to malware in popular app ecosystems such as Apple iOS, Google Android, Facebook, Google App Engine, Mozilla Firefox and Google Chrome. Methods: Eight students from Carnegie Mellon Qatar participated in this project. Each looked at a specific technology (either iOS, Android, Facebook, Google App Engine, Mozilla Firefox or Google Chrome). The researchers learned how to develop applications and, as a proof of concept, developed several malware apps that were able to steal users' personal information. One collects logins, passwords and credit card numbers from the user's Gmail. One collects users' private information on Facebook and propagates through the victim's friends. One records the "clicking" passwords that users enter on online banking websites. One records keystrokes made on the computer without being detected by existing antivirus software. One is an Android app that records people's conversations while the phone is in standby mode. Results: Based on these experiments, we were able to assess the risks and analyze the security issues of these popular platforms that we use every day. These preliminary results were presented at the 6th INTERPOL Group Meeting-MENA Region conference in Doha (March 22nd). We plan to publish the scientific results during the Fall of 2012. As future work, the security expertise gained during this project will allow us to design new security tools to protect users against these new kinds of malware. Conclusion: Qatar offers many services related to e-government, e-business, e-education and e-health through web portals and mobile applications. Deploying such a global infrastructure requires strong security assurance. This project contributes to this vision by developing local expertise in cyber security.
Variations in giving directions across Arabic and English native speakers
Authors: Huda Gedawy, Micheline Ziadee and Majd Sakr

This work explores the differences in direction-giving strategies between two groups, native Arabic and native English speakers. This study will help inform design decisions for multi-lingual, cross-cultural human-robot interaction. There are clear cultural influences on modes of communication. Previous research studies found that direction-giving techniques and strategies vary between different cultural groups. Burhanudeen compared Japanese and English native speakers and found that locator remarks are more frequently used by Japanese natives, while the use of directives is more common with English natives. In this work, we examine the discourse for navigation instructions of members of two target groups, Arabic native speakers and English native speakers. We address the following questions: How do the language and strategies used for providing directions vary between these two groups? What are the differences and what are the similarities? Are there any possible gender-related differences in giving directions? We recorded 56 participants giving oral direction instructions for three specific locations at the Carnegie Mellon Qatar campus, and 33 participants giving oral direction instructions for three different locations at the Student Center. We transcribed the audio recordings and annotated our transcriptions. We categorized the spatial perspectives used by participants into route perspective, which involves the use of left/right turns and landmarks, and survey perspective, which involves the use of units (time and distance) and cardinals (north, south, east and west). Our analysis also included the number of pauses, repetitions, error corrections, number of words and intermediate details. Our results showed that the way-finding strategy favored by both English natives and Arab natives is the landmark-based navigation strategy. However, English natives had a higher frequency of using cardinals, pauses and intermediate information, while Arab natives used units of distance, left/right turns and error corrections more frequently than English natives. Male participants from both groups are more likely to rely on the survey perspective than female participants. Based on these results, we conclude that culture, language, and gender influence a speaker's discourse and strategy for giving directions.
-
-
-
Enhanced formulations for the arrival-departure aircraft scheduling problem
Authors: Sara Yousef Al-Haidous, Ameer Al-Salem, Mohamed Kharbeche and Fajr Al-Ansari
Background and Objectives: The target of this work is to construct a mathematical model to help resolve aircraft scheduling over multiple runways. This problem is considered a hard topic in transportation research due to the constantly increasing aviation traffic volume around the world. Surprisingly, although there exists an impressive amount of literature for the landing and arrival cases, there is no proposed exact solution for the combined arrival-departure problem. Therefore, the main contribution of this work is to present exact methods involving exact procedures specifically related to this complex arrival-departure variant. This method solves the problem optimally based on a mixed-integer linear formulation with limited time-window and separation constraints. The project was funded by Qatar Foundation. Our motivation for the investigation stems from its practical relevance to airports, where good scheduling increases airport capacity, maintains a good level of safety and reduces the controllers' workload. Methods: We formulated a basic aircraft sequencing model using a mixed-integer linear formulation. Then, we proposed another model by adding valid inequalities, combining constraints and removing some variables. All these mathematical models are based on a linear ordering structure. The models were written in the mathematical programming language AMPL and solved using the professional solver CPLEX. Results: We investigated the efficacy of reformulating the arrival-departure scheduling problem over multiple runways. The results show that the solver is very effective in obtaining optimal solutions. In fact, the experimental tests reveal that most instances are solved to optimality within a short time. In addition, we observed that adding the valid inequality constraints to the model yields less CPU (Central Processing Unit) time and fewer nodes. Conclusion: We proposed linear ordering formulations for the problem of minimizing total weighted tardiness when sequencing arrival-departure aircraft over multiple runways. We present the results of a computational study carried out on a large variety of random instances, which shows the importance of reformulating the same problem. Interestingly, we observed that the proposed models enable us to optimally solve problems with up to 15 aircraft and four runways.
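A linear-ordering formulation of the kind described can be sketched as follows (an illustrative outline with generic symbols, not the authors' exact model):

    \min \sum_i w_i T_i \quad \text{s.t.} \quad E_i \le t_i \le L_i, \quad \sum_r y_{ir} = 1, \quad x_{ij} + x_{ji} = 1,
    \quad t_j \ge t_i + s_{ij} - M\,(3 - x_{ij} - y_{ir} - y_{jr}) \;\; \forall\, i \ne j,\ \forall\, r, \quad T_i \ge t_i - d_i, \quad T_i \ge 0, \quad x_{ij}, y_{ir} \in \{0,1\},

where t_i is the operation time of aircraft i within its time window [E_i, L_i], y_{ir} assigns aircraft i to runway r, x_{ij} fixes the relative order of each pair, s_{ij} is the required separation, d_i the target time, w_i the tardiness weight and M a large constant. Valid inequalities of the type mentioned above typically tighten the big-M coupling between the ordering and runway-assignment variables.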
-
-
-
Performance analysis of cognitive radio multiple-access channels over dynamic fading environments
Authors: Sabit Ekin, Khalid A. Qaraqe and Erchin Serpedin
Due to the requirement of high data rates and the broad utilization of wireless technologies (e.g., 3G, 4G and beyond), the radio frequency (RF) spectrum has become a very limited resource for modern wireless communication systems. Studies indicate that the spectrum is being under-utilized. As a promising solution, cognitive radio (CR) is an encouraging candidate to achieve more efficient RF spectrum utilization. Previous studies motivate us to utilize a dynamic channel fading model (hyper-fading) to perform a unified analysis of CR multiple-access channel (CR-MAC) networks. Since the nature of CR networks is multiuser communication, considering the CR-MAC is more pertinent than point-to-point communication systems. The objective is to maximize the capacity of the CR-MAC network over hyper-fading channels under both the secondary users' (SUs') transmit power (TP) and interference temperature (IT) constraints. Multiple SUs transmit to the secondary base station under the TP and IT constraints. In order to perform a general analysis, a theoretical dynamic fading model termed the hyper-fading model, which is suitable for the dynamic nature of the cognitive radio channel, is considered. The optimal power allocation method (water-filling) is employed to maximize the capacity of the CR-MAC for the hyper-fading channel with TP and IT constraints. In the results, the capacity of the hyper-fading channels is compared with that of other channel fading models such as Rayleigh, Nakagami-2, and with an AWGN channel. Furthermore, the impact of the number of SUs on capacity is investigated. Numerical results along with relevant discussions for the capacity measure under AWGN, Rayleigh, Nakagami-2 and hyper-fading channel models are provided to compare the behavior of the CR-MAC in these environments. The results reveal that in the case of a very strict IT constraint the water-filling method gives good capacity improvements. This benefit is lost when the IT constraint is relaxed. Through comparison of hyper-fading with other fading environments, it is found that the hyper-fading channel fills the gap in the capacity profiles obtained from the other channel fading types. Further, a study of such a CR-MAC system that undergoes a hyper-fading model can provide unified and comprehensive insights on the performance analysis of CR networks.
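In generic form (a sketch under standard spectrum-sharing assumptions, with illustrative symbols rather than the paper's exact notation), the ergodic sum-capacity problem for K SUs with gains g_k to the secondary base station and g'_k to the primary receiver reads

    C = \max_{p_k \ge 0}\ \mathbb{E}\left[\log_2\left(1 + \frac{\sum_{k=1}^{K} p_k g_k}{\sigma^2}\right)\right] \quad \text{s.t.} \quad \mathbb{E}[p_k] \le P_k^{\max}\ \text{(TP)}, \qquad \mathbb{E}\left[\sum_{k} p_k g'_k\right] \le Q_{\mathrm{IT}}\ \text{(IT)},

and in the single-user special case Lagrangian water-filling against the two constraints gives an allocation of the familiar form p = \left[\frac{1}{\lambda + \mu g'} - \frac{\sigma^2}{g}\right]^+, with the multipliers chosen to satisfy the TP and IT limits (peak-constraint versions are analogous).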
-
-
-
New perspectives, extensions and applications of De Bruijn identity
Authors: Sangwoo Park, Erchin Serpedin and Khalid Qaraqe
Two significant identities, due to de Bruijn and Stein, were independently studied in information theory and statistics. The de Bruijn identity shows a connection between two fundamental concepts in information theory and signal processing: differential entropy and Fisher information. On the other hand, the Stein identity represents a relationship between the expectation of a function and its first-order derivative. Due to their numerous applications in statistics, information theory, probability theory, and economics, the de Bruijn and Stein identities have attracted a lot of interest. In this study, two different extensions of the de Bruijn identity and its relationship with the Stein identity will be established. In addition, a number of applications using the de Bruijn identity and its extensions will be introduced. The main theme of this study is to prove the equivalence between the de Bruijn identity and the Stein identity, in the sense that each identity can be derived from the other one. In a particular case, not only are the de Bruijn and Stein identities equivalent, but they are also equivalent to the heat equation identity, which is another important result in statistics. The second major goal of this study is to extend the de Bruijn identity in two distinctive ways. Given an additive non-Gaussian noise channel, the first-order derivative of the differential entropy of the output signal is expressed as a function of the posterior mean, and the second-order derivative of the differential entropy of the output signal is represented in terms of Fisher information. The third most important result is to introduce practical applications based on the results mentioned above. First, two fundamental bounds in statistical signal processing, the Bayesian Cramér-Rao lower bound (BCRLB) and the Cramér-Rao lower bound (CRLB), are presented together with a novel lower bound that is tighter than the BCRLB. Second, Costa's entropy power inequality is proved in two distinctive ways. Finally, min-max optimal training sequences for channel estimation and synchronization in the presence of unknown noise distribution are designed.
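For reference, the two classical identities can be stated as follows (standard forms; the paper's non-Gaussian extensions are not reproduced here). With Y_t = X + \sqrt{t}\,Z, where Z \sim \mathcal{N}(0,1) is independent of X,

    \frac{d}{dt}\, h(Y_t) = \tfrac{1}{2}\, J(Y_t) \qquad \text{(de Bruijn)},

where h denotes differential entropy and J Fisher information, while for X \sim \mathcal{N}(\mu, \sigma^2) and any sufficiently smooth f,

    \mathbb{E}\left[f(X)\,(X - \mu)\right] = \sigma^2\, \mathbb{E}\left[f'(X)\right] \qquad \text{(Stein)}.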
-
-
-
Performance of digitally modulated RFID energy detector for moisture sensing applications for oil and gas quality monitoring in Qatar
Authors: Adnan Nasir, Ali Riza Ekti, Khalid A Qaraqe and Erchin Serpedin
Background & Objectives: Advances in radio frequency identification (RFID) technology have made it ubiquitously present. Due to recently emerging applications, RFIDs have been used in a plethora of different scenarios. The oil and gas industry is no exception; developments in RFID technology have enabled it to be used in oil and gas quality and pipeline infrastructure monitoring through low-frequency RFID tags. The presence of moisture in liquefied petroleum gas (LPG) / liquefied natural gas (LNG) pipes can create two main problems. It can degrade the heating ability of the fuel, and it can also react with the refrigerant of the liquefied gas to create hydrates, which further lower the quality of the gas. A wireless monitoring system using low-frequency RFIDs as moisture sensors can prevent such hazards by detecting the received energy of the transmitted signal to determine the presence of moisture. Method: A simple energy detector concept was utilized to exploit the well-known behavior of the RFID's signal reception and energy absorption in varying environments. A decision on the presence of moisture was made using the threshold values of the energy detector. Results: The experimental results show that the energy detector approach detects the presence of moisture in the oil and gas system. As another remark, when the distance between the receiver and transmitter antennas of the RFID sensors is decreased, one can easily notice the increase in energy detection performance. Different power levels, modulation schemes and frequency ranges have been analyzed to better understand the energy detection output of the RFID system. An optimized solution can be sought that uses the best frequency and input power to maximize the distance between the two antennas. Conclusion: This study shows that the RFID system can be used to send information regarding the presence of moisture for quality monitoring of liquefied gas pipes in Qatar. By using this information, we can detect and make decisions on the basis of the energy detection output from the RFID antenna system. This study can be used in a wireless cyber-physical moisture detection system targeted at Qatar's needs.
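A minimal sketch of the threshold decision described above is given below in Python; the signals and threshold are synthetic placeholders for illustration only, not the RFID measurements of the study.

    # Toy energy-detector decision rule: declare moisture when received energy drops below a threshold.
    import numpy as np

    def energy_statistic(samples: np.ndarray) -> float:
        """Average received energy over the captured samples."""
        return float(np.mean(np.abs(samples) ** 2))

    def moisture_present(samples: np.ndarray, threshold: float) -> bool:
        """Moisture absorbs signal energy, so a low energy statistic flags its presence."""
        return energy_statistic(samples) < threshold

    rng = np.random.default_rng(0)
    dry = 1.0 * np.exp(1j * rng.uniform(0, 2 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
    wet = 0.4 * np.exp(1j * rng.uniform(0, 2 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)
    print(moisture_present(dry, threshold=0.5), moisture_present(wet, threshold=0.5))  # False True

In practice the threshold would be calibrated against dry-pipe reference captures at each power level, modulation scheme and antenna spacing.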
-
-
-
Outage and SER performance of an opportunistic multi-user underlay cognitive network
Authors: Fahd Ahmed Khan, Kamel Tourki, Mohamed-Slim Alouini and Khalid Qaraqe
Having multiple users gives rise to multi-user diversity, which can be exploited to give good quality-of-service to each user in the network and also increase the overall capacity of the network. In a spectrum-sharing setting, the multi-user diversity can be exploited; however, this is different from the traditional multi-user case because of the interference power constraints imposed on the secondary users. In this work, we consider a multi-user underlay cognitive network, where multiple cognitive users concurrently share the spectrum with a primary network, and a single secondary user is selected for transmission. The channel is assumed to have independent but not identical Nakagami-m fading. Considering an interference power constraint and a maximum transmit power constraint on the secondary user, a power allocation policy is derived based on the peak interference power constraint. For this policy the secondary user transmitter (SU-Tx) requires the instantaneous channel state information (CSI) of the link between the SU-Tx and the primary user receiver (PU-Rx). The advantage of this scheme is that the interference constraint is never violated and there is no loss of performance of the primary network. The user is selected for transmission based on a greedy scheduling scheme where the user with the highest instantaneous signal-to-noise ratio is chosen for transmission. For this user scheduling scheme, we analyze the uplink performance of the multi-user underlay secondary network in terms of outage probability and symbol error rate (SER). Exact closed-form expressions for the outage performance, moment-generating function and SER performance of a multi-user cognitive network are derived. These expressions are obtained for an independent but non-identically distributed (i.n.i.d.) Nakagami-m fading channel, which is a more generic fading model and can cover a variety of fading environments including Rayleigh fading. Numerical results based on Monte-Carlo simulations are presented to verify the derived results. It is shown that the SER reduces as the peak interference power constraint is relaxed. Furthermore, as the number of users increases, the SER reduces. If the interference power constraint is relaxed, the power allocated becomes constant, depending on the peak transmit power, and thus the SER also becomes constant.
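Under these assumptions the power policy and greedy selection described above typically take the form (a sketch consistent with the abstract, written with illustrative symbols):

    P_s = \min\left( P_{\max},\ \frac{Q_{\mathrm{peak}}}{|h_{\mathrm{sp}}|^{2}} \right), \qquad k^{*} = \arg\max_{k}\ \gamma_{k},

where P_max is the SU's maximum transmit power, Q_peak the peak interference power tolerated at the PU-Rx, h_sp the instantaneous SU-Tx to PU-Rx channel gain, and \gamma_k the instantaneous SNR of user k at the secondary receiver; capping the transmit power at Q_peak / |h_sp|^2 is what guarantees the interference constraint is never violated.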
-
-
-
Experimental analysis of energy detection for digitally modulated signals: Indoor measurements
Authors: Ali Riza Ekti, Erchin Serpedin and Khalid A Qaraqe
Background & Objectives: Spectrum sensing is a feature of cognitive radio (CR) systems which is proposed to improve the spectral utilization of wireless signals. One of the sensing methods is the non-coherent energy detector; even though it is computationally more effective than coherent methods, it has critical drawbacks, e.g. the requirement of a certain signal-to-noise ratio and number of samples. Moreover, the type of digital modulation employed also affects the performance of energy detectors. Therefore, the performance of the energy detector is investigated for phase shift keying (PSK) and quadrature amplitude modulated (QAM) signals. Such an analysis is essential because once the performance for the different modulated signals is understood, next generation wireless networks (NGWNs) and CRs can be designed in such a way that the arduous and expensive planning stage is omitted. This way, a higher data rate can be achieved by using the proper modulation type and/or order for indoor CRs and NGWNs. Methods: Instead of false alarm and missed detection analysis, probability mass function vs. energy detection statistics are introduced to better understand the effect of modulation type, order and the wireless channel. A measurement setup was developed to consider line-of-sight and non-line-of-sight conditions. Signals were constructed, transmitted and recorded based on the signal model provided. All experiments took place in the Wireless Research Laboratory of the ECEN department at Texas A&M University at Qatar. Results: The results show that the performance of the energy detector changes drastically with the digital modulation scheme employed at the transmitter side. Another interesting point is the impact of the number of energy detector samples (N) used. As expected, with the increase in N, the impact of the central limit theorem (CLT) can be seen more clearly as well. Conclusions: By using the experimental results, the design and deployment of future cellular mobile radio systems such as femtocells in Qatar could be optimized. This is crucial, since most communications such as voice/internet/text traffic occur inside buildings, especially for rapidly changing network topographies as is the case with the city of Doha. This way, data rate and speed can be optimized for indoor environments.
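For context, the usual energy detection statistic over N samples and its large-N behaviour are (a standard textbook approximation stated as background; the probability mass functions in this study are obtained empirically from measurements):

    T = \frac{1}{N} \sum_{n=1}^{N} |y[n]|^{2}, \qquad T \approx \mathcal{N}\left(\sigma^{2},\ \sigma^{4}/N\right) \ \text{under noise only, for complex Gaussian noise of power } \sigma^{2},

which is why increasing N makes the empirical distribution of T look increasingly Gaussian and concentrates it around the received power, whose spread differs between constant-modulus PSK and amplitude-varying QAM signals.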
-
-
-
Efficient parallel implementation of the SHRiMP sequence alignment tool using MapReduce
Authors: Rawan AlSaad, Qutaibah Malluhi and Mohamed Abouelhoda
With the advent of the ultra high-throughput DNA sequencing technologies used in Next-Generation Sequencing (NGS) machines, we are facing a daunting new era of petabyte-scale bioinformatics data. The enormous amounts of data produced by NGS machines lead to storage, scalability, and performance challenges. At the same time, cloud computing architectures are rapidly emerging as robust and economical solutions to high performance computing of all kinds. To date, these architectures have had limited impact on the sequence alignment problem, whereby sequence reads must be compared to a reference genome. In this research, we present a methodology for efficient transformation of one of the recently developed NGS alignment tools, SHRiMP, into the cloud environment based on the MapReduce programming model. Critical to the function and performance of our methodology is the implementation of several techniques and mechanisms for facilitating the task of porting the SHRiMP sequence alignment tool into the cloud. These techniques and mechanisms allow the "cloudified" SHRiMP to run as a black box within the MapReduce model, without the need for building new parallel algorithms or recoding this tool from scratch. The approach is based on the MapReduce parallel programming model, its open source implementation Hadoop, and its underlying distributed file system (HDFS). The deployment of the developed methodology utilizes the cloud infrastructure installed at Qatar University. Experimental results demonstrate that multiplexing large-scale SHRiMP sequence alignment jobs in parallel using the MapReduce framework dramatically improves performance when the user utilizes the resources provided by the cloud. In conclusion, using cloud computing for NGS data analysis is a viable and efficient alternative to analyzing data on in-house compute clusters. The efficiency and flexibility of cloud computing environments and the MapReduce programming model give this version of the SHRiMP sequence alignment tool a considerable performance boost. Using this methodology, ordinary biologists can perform the computationally demanding sequence alignment tasks without the need to delve deep into server and database management, without the complexities and hassles of running jobs on grids and clusters, and without the need to modify the existing code in order to adapt it for parallel processing.
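The "black box" wrapping can be pictured with a Hadoop Streaming mapper along the following lines; this is an illustrative sketch only, and the aligner executable name, reference path and read-record convention are assumptions rather than details taken from the study.

    #!/usr/bin/env python
    # Illustrative Hadoop Streaming mapper that runs the unmodified aligner on its input split.
    import subprocess
    import sys
    import tempfile

    def main():
        # Reassemble this mapper's share of reads (assumed convention: one read per input line,
        # with FASTA newlines escaped as tabs by an upstream preprocessing step).
        with tempfile.NamedTemporaryFile("w", suffix=".fa", delete=False) as reads:
            for line in sys.stdin:
                reads.write(line.rstrip("\n").replace("\t", "\n") + "\n")
            reads_path = reads.name

        # Call the aligner as a black box against a reference file shipped to every worker node.
        result = subprocess.run(
            ["gmapper", reads_path, "reference.fa"],   # assumed executable and reference names
            capture_output=True, text=True, check=True,
        )

        # Emit alignments as tab-separated key/value pairs for the reducer to merge and sort.
        for aln in result.stdout.splitlines():
            if aln and not aln.startswith("#"):
                sys.stdout.write(aln.split("\t", 1)[0] + "\t" + aln + "\n")

    if __name__ == "__main__":
        main()

With this arrangement the reducer can be a simple pass-through or merge step, so neither the aligner's algorithms nor its source code need to change.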
-
-
-
Incremental pseudo-conceptual organization of information relative to a domain
Authors: Sahar Ismail and Ali Jaoua
Information resources on the World Wide Web are increasingly growing in size, creating immense demand for relevant and context-sensitive information. Thus, information retrieval and knowledge management systems need to exhibit high capacity for the mass of information available and, most importantly, should be able to handle the constant and rapid changes in information in order to produce results in a reasonable time. In this presentation we highlight the research work done to build a new system for incremental information management relative to a domain of knowledge. The new system (named IPS) utilizes new conceptual methods, developed using new formal concept analysis constructs called pseudo concepts, for managing incremental information organization and structuring in a dynamic environment. The research work at hand focuses on managing changes in an information store relevant to a specific domain of knowledge, effected through the addition and deletion of information. The incremental methods developed in this work should support scalability in change-prone information stores and be capable of producing updates to end users in an efficient time. We will also discuss practical aspects related to the macro and micro information organization built using the new system. These include handling incremental corpus organization for a specific domain, performing context-sensitive text summarization of news articles, and extracting features from news articles. In addition, initial evaluation results will also be discussed, showing the improvement in execution time and time complexity while maintaining reasonably comparable quality of the incremental structures produced.
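To make the formal concept analysis background concrete, the classical derivation (closure) operators on a binary object-attribute context are sketched below; the toy context is invented, and the authors' pseudo-concept construct itself is not reproduced here.

    # Toy formal-concept-analysis sketch: standard derivation operators on a small context.
    context = {                        # invented document -> index-term context
        "d1": {"oil", "gas", "qatar"},
        "d2": {"oil", "qatar"},
        "d3": {"water", "qatar"},
    }

    def intent(objects):
        """Attributes shared by every object in the set (the ' operator on object sets)."""
        sets = [context[o] for o in objects]
        return set.intersection(*sets) if sets else set().union(*context.values())

    def extent(attributes):
        """Objects possessing every attribute in the set (the ' operator on attribute sets)."""
        return {o for o, attrs in context.items() if attributes <= attrs}

    # Closing an object set yields a formal concept (extent, intent).
    A = {"d1", "d2"}
    B = intent(A)
    print(sorted(extent(B)), sorted(B))   # ['d1', 'd2'] ['oil', 'qatar']

Incremental approaches such as the one described aim to update structures built from these closures when documents are added or removed, without recomputing everything from scratch.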
-
-
-
Binary consensus in sensor motes
Authors: Noor Al-Nakhala, Abderrazak Abdaoui, Ryan Riley and Tarek El-Fouly
Background and Objectives: In this work, we explore the implementation of the binary consensus algorithm in wireless sensor networks. Binary consensus is used to allow a collection of distributed entities to reach consensus regarding the answer to a binary question. Existing work on the algorithm focuses on simulation of the algorithm under the assumption of a fully connected network topology and unlimited messaging capabilities. In this new work, we adapt the algorithm to function in wireless sensor networks, where the topology might not be fully connected and the number of messages sent should be minimized in order to save power. Methods: We are deploying and testing our implementation on embedded hardware, mainly IRIS motes. Our implementation of the binary consensus algorithm is written in NesC and runs on the Tiny Operating System (TinyOS). The implementation was tested on 11 sensor nodes, with current plans to test it on far more. Results: To support our hardware implementation results, a simulation using the Tiny Operating System SIMulator (TOSSIM) was performed. Our results in hardware implementation and simulation are consistent with the original binary consensus algorithm. Conclusion: In this work, we adapted the binary consensus algorithm for use in wireless sensor networks. Our implementation on a small number of IRIS motes shows correct results consistent with those of simulation. In future work we will evaluate the effectiveness of the algorithm when scaled to hundreds of sensor motes.
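As a point of comparison, a pairwise binary-consensus dynamic can be simulated in a few lines of Python; the sketch below uses the classic three-state "approximate majority" rule on a fully mixed toy network, which is not the specific automaton the authors implemented in NesC.

    # Toy simulation of pairwise binary consensus (approximate-majority style), for illustration only.
    import random

    def simulate(n_zero, n_one, max_steps=100_000, seed=1):
        """Random pairwise interactions until every node agrees on the initial majority vote."""
        random.seed(seed)
        states = ["0"] * n_zero + ["1"] * n_one
        for step in range(max_steps):
            i, j = random.sample(range(len(states)), 2)   # a random interacting pair
            a, b = states[i], states[j]
            if {a, b} == {"0", "1"}:                      # conflicting votes cancel to "undecided"
                states[i] = states[j] = "u"
            elif "u" in (a, b) and a != b:                # a decided node recruits an undecided one
                decided = a if a != "u" else b
                states[i] = states[j] = decided
            if len(set(states)) == 1 and states[0] != "u":
                return states[0], step
        return None, max_steps

    print(simulate(7, 4))   # typically ('0', <a few hundred steps>): the initial majority wins

On real motes each pairwise exchange is carried by a radio message, which is why reducing the number of interactions needed for convergence directly saves power.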
-
-
-
Qatar Carbonates and Carbon Storage Research Centre: Status update after three years of fundamental research
Authors: Iain Macdonald and Geoffrey Maitland
There are still specific areas where our knowledge of carbon storage is in need of improvement, particularly in carbonate reservoirs, since currently we extrapolate data from limited sources and the predictive modelling technologies employed have a level of uncertainty that needs to be addressed. We will highlight our efforts through the Qatar Carbonates and Carbon Storage Research Centre (a $70 million, 10-year research programme with currently 20 PhD students and 10 postdoctoral researchers along with 14 faculty members) to investigate the underlying science and engineering concerning carbonate reservoir characterisation, rock-fluid-CO₂ interactions and multiphase flow experiments under reservoir conditions, linked to complementary simulation and modelling advances, including the rapidly developing field of digital rocks. This has involved developing unique HTHP experimental rigs and pioneering new modelling techniques, enhancing the toolbox available to engineers and geoscientists to select suitable reservoirs and optimally design CO₂ storage processes. These capabilities extend over molecular-pore-core-field scales. We have four focused research laboratories (Qatar Stable Isotope Lab; Qatar Thermophysical Property Lab; Qatar Complex Fluids Lab; Qatar CCS Multiscale Imaging Lab) and will discuss the highlights of major research findings to date in the context of carbon storage in the Middle East.
-
-
-
Artificial ground water recharge using treated wastewater effluents
Authors: Mohamed Hamoda, Mohamed Daerish and Rabi Mohtar
Water-related problems are increasingly recognized as one of the most immediate and serious environmental threats to mankind. In particular, all the GCC countries, being located in an arid region, suffer from a lack of natural freshwater resources. Groundwater is the major source of water for irrigation in these countries. The groundwater aquifers contain either fresh or brackish waters. In countries like Kuwait and Qatar the groundwater available is mostly brackish. Agricultural development has put great pressure on groundwater resources and resulted in varying degrees of depletion and contamination, as the demand for water has been increasing due to population growth and economic development. Over-pumping of groundwater has compounded water quality degradation caused by salts and other pollutants. In addition, saltwater intrusion is caused by over-abstraction of coastal aquifers. Meanwhile, the GCC countries are also facing changes in climatic conditions, such as rainfall patterns, which affect the water cycle and limit natural groundwater recharge. The states of Kuwait and Qatar share almost similar problems and adopt the same approach in the management of their water resources under the severe stress of an absence of natural freshwater resources. In these countries, wastewater collection serves almost all of the population and tertiary wastewater treatment has been the common practice. Treated wastewater reuse is considered with proper attention to sanitation, public health and environmental protection. This paper will present a detailed evaluation of groundwater recharge using tertiary-treated or advanced (reverse osmosis)-treated wastewater. Recent advances, challenges, and future arrangements are discussed. A case study in Kuwait and an ongoing study in Qatar will be presented which includes advanced wastewater treatment comprising ultrafiltration and reverse osmosis systems, followed by the artificial recharge of the treated water into a groundwater lens. A simulation model is developed based on hydrogeological studies in which the augmentation of groundwater resources would provide water storage, and prevent depletion and deterioration of the groundwater. Hence, long-term sustainable groundwater management could be achieved.
-
-
-
Breakthrough coastal research of Qatar as input to geological and reservoir modeling
Authors: Sabla Y Alnouri and Patrick Linke
Maximizing recovery in oil and gas fields relies on geological models that realistically portray the spatial complexity, composition, and properties of reservoir units. Present-day arid-climate coastal systems, like the coastline of Qatar, provide analogues for the depositional and diagenetic processes that control reservoir quality in ancient reservoirs. Many major reservoirs in Qatar were formed under conditions that are remarkably similar to those shaping the coastlines of today. Among the major controls on coastal sedimentation patterns are: 1) wind, wave and tidal energy, 2) coastline orientation, 3) relative sea level, 4) depositional relief and 5) sediment sources. Strong NW prevailing winds (shamal winds) drive shallow marine circulation patterns, creating four very distinct coastal profiles: windward, leeward, oblique, and protected. In addition, winds supply quartz sand to the leeward coast, as the dune fields of Khor Al-Adaid are blown into the sea. Elsewhere, carbonate sands are formed by wave breakdown of skeletal material in the shallow marine environment. These sands are washed ashore to form beaches. The grain size, composition, and dimensions of coastal sands vary due to wave energy. Coastal deposits are equally affected by high-frequency oscillations in sea level. Approximately 8,000 years ago, the sea level was about 3 meters higher than it is currently and the Qatari coastline was up to 15 km inland. Most coastal deposits and sabkhas are relicts of this ancient highstand in sea level. Punctuated sea level drops to present-day levels have led to the formation of seaward-stepping spit systems. Understanding these coastal and near-coastal areas and the processes that form them, and developing geologic models based on this understanding, is a focus of the Qatar Center for Coastal Research (QCCR) within ExxonMobil Research Qatar. The observed spatial complexity and heterogeneity of modern coastal systems are important aspects to be considered for conditioning three-dimensional geological models. The studied modern outcrops along the Qatar coastline are particularly useful as analogs for conditioning subsurface data sets in geologic (static) and reservoir (dynamic) models.
-
-
-
Total and QP's joint acid stimulation research program to improve productivity from Qatar's oil and gas fields
In a first ever joint venture initiative, Qatar Petroleum has joined forces with Total in an effort to improve acid stimulation programs. Acid stimulation in carbonates can greatly increase well productivity. Near-wellbore impairment or formation damage is typically analysed by a term called skin factor. It is this 'skin' that is removed during an acidizing operation in a well. Typically, reducing the skin factor by a factor of 5 can increase a well's productivity by up to 50 percent (Furui et al. 2003). Acid stimulations performed in Qatar on 23 offshore wells in 2008-2009, increased oil production by 100 percent while at the same time reducing the water cut by 10 percent. In this joint venture project conducted by researchers and engineers from Total and Qatar Petroleum, the study is divided into three phases which also includes knowledge transfer and training. Phase 1 consists of core-flooding under reservoir conditions using standard acid recipes on outcrop and field cores. In Phase 2, improved or novel acidizing systems will be tested using a dual core setup, allowing the study of acid diversion from high permeability zones to low permeability zones. The objective here is not only to improve acidizing efficiency but also to mitigate the water production from heavily watered-out zones. Modeling activities will be undertaken to design acid stimulation treatments using results from the laboratory experiments. Phase 3 involves knowledge sharing and training on mud cake removal treatments. Mud cake is the damage caused to the near-wellbore, i.e., the interface between the reservoir matrix and the well, during the drilling of open hole wells. The knowledge gained will be implemented in both onshore and offshore fields as part of acid stimulation field trials.
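To put the skin-factor numbers in context, the steady-state radial inflow relation (a standard textbook form, quoted here for illustration rather than taken from the project itself) gives a productivity index

    J \propto \frac{1}{\ln(r_e / r_w) + s},

where r_e and r_w are the drainage and wellbore radii and s is the skin factor. For a representative \ln(r_e/r_w) \approx 8, reducing the skin by a factor of 5, say from s = 5 to s = 1, raises J by (8+5)/(8+1) \approx 1.44, i.e. on the order of the "up to 50 percent" productivity gain quoted above (illustrative numbers only).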
-
-
-
Multiscale investigations leading to the design of a novel Fischer-Tropsch reactor for gas-to-liquid processes
Authors: Nimir Elbashir, Layal Bani Nassr, Elfatih Elmalik, Jan Blank and Rehan Hussain
Gas-to-liquid (GTL) projects form an important part of Qatar's energy industry due to the country's extensive natural gas reserves. At present, commercial GTL plants in Qatar account for 36% of the total worldwide GTL production, but suffer from high operational costs due to limitations in the existing Fischer-Tropsch synthesis (FTS) reactor technologies, which are at the heart of the GTL process. Of the two FTS reactor types currently in use commercially, fixed-bed reactors (i.e. gas-phase FTS) offer poor temperature control while slurry-bed reactors (i.e. liquid-phase FTS) suffer from difficult catalyst separation and other challenges. The utilization of supercritical fluids (SCF) as solvents in FTS (SCF-FTS) provides several advantages over the existing commercial technologies. SCF-FTS can improve the heat transfer properties relative to fixed-bed reactors, while also offering high diffusivity of the reactants relative to slurry-bed reactors. The results presented here summarize multidisciplinary research activities, led by our research team at Texas A&M University at Qatar, in collaboration with top scientists from institutions around the world and supported by an industrial advisory board. The work was funded by different agencies and combined several projects, which have been undertaken over the past four years. This work was unique in that it focused on understanding both the micro- and macro-scale behaviours of the FTS chemistry and reactor. The micro-scale studies enabled a better understanding of the reaction mechanism and kinetics, FTS thermodynamics and phase behavior (via experimental and modeling studies), and the intra-particle catalyst effectiveness factor. The macro-scale investigations covered: 1) identifying the overall (heat/mass/hydrodynamic) profile inside the reactor, 2) selecting an appropriate supercritical solvent, and 3) building a lab-scale reactor unit. The outcome of these studies is that we were able to identify the most applicable solvent(s) while providing a detailed techno-economic and safety evaluation of this process. Furthermore, the overall structure of the separation process for solvent recovery and recycle has been completed based on energy optimization. Currently, we are at the stage of developing an upgraded design for this technology based on the data to be generated from our demo-scale FTS reactor unit.
-
-
-
Laboratory cultivation of Qatari Acropora: Studying dynamic factors that influence coral growth and photosynthetic efficiency
Authors: Nayla Mohammed Al-Naema, Cecile Richard, Suhur Saeed and Eric Febbo
Background: The coral ecosystem in Qatar is very important as it provides a foundation habitat for many aquatic species. An extensive two-year field study was conducted to evaluate the effectiveness of pulse amplitude modulation (PAM) fluorometry in monitoring the health of sensitive ecosystems such as coral reefs along the coast of Qatar. The study demonstrated that PAM fluorometry can provide reliable and objective information on coral health in advance of visual signs of stress. The scope has now been expanded to include laboratory-based research. Objectives: The objectives of this research are: a) to establish a viable laboratory-based Qatari coral (Acropora sp.) culture system and b) to utilize laboratory-based imaging-PAM fluorometry to compile baseline data and gain an understanding of the environmental parameters that affect the health of Qatari coral. Methods: Laboratory studies were initiated in December 2011; Acropora samples were collected from mother colonies in Umm Al-Arshan (north of Qatar), and the 'nubbins' were cultured in pre-acclimatized laboratory aquaria. An imaging-PAM fluorometer was used to measure photosynthetic processes that were correlated to laboratory culture conditions. A wide range of water quality parameters have been measured, including: temperature, salinity, ammonia, nitrate, nitrite, phosphate, calcium and pH. Results: This research showed that it is possible to successfully culture Acropora coral; the initial colonies have grown to the point that several subsequent colonies have been produced to initiate laboratory assay development. The results of the imaging-PAM also show good correlation with the data obtained using the instrument used in the field. Conclusion: This study demonstrated for the first time the successful culture of Qatari Acropora in a laboratory setting in Qatar. The imaging-PAM fluorometer was also used to obtain detailed visual images of photosynthetic processes. Future studies include Acropora eco-toxicological experiments to study contaminants that could affect the health of corals around the Qatari coastal area.
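For readers unfamiliar with PAM fluorometry, the photosynthetic efficiency values it reports are typically the dark-adapted maximum and light-adapted effective quantum yields of photosystem II (standard definitions given here for context; the abstract does not specify which parameters were compiled):

    F_v / F_m = \frac{F_m - F_0}{F_m}, \qquad \Phi_{\mathrm{PSII}} = \frac{F_m' - F}{F_m'},

where F_0 and F_m are the minimum and maximum fluorescence of a dark-adapted coral and F and F_m' the steady-state and maximum fluorescence under illumination; declining yields flag stress before visual symptoms appear.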
-
-
-
Industrial low grade heat: A useful underused energy source
Authors: Farid Benyahia, Majeda Khraisheh, Samer Adham, Yahia Menawy and Ahmad Fard
The process industry utilizes thermal energy on a massive scale and rejects a significant proportion into the environment as low grade heat. The definition of low grade heat is fuzzy and is somewhat related to the temperature of the stream carrying such thermal energy. Estimates of low grade heat emissions are hard to compile accurately on a global scale, but these are likely to be of the order of thousands of trillions of BTUs. In some cases, up to 50% of the thermal energy consumed is eventually rejected as low grade heat. This waste is not only uneconomical but also environmentally damaging since it carries a carbon footprint. Modern process plants have reduced a great deal of thermal energy losses through heat integration and energy recovery. However, due to process temperature requirements, a vast amount of thermal energy denoted as low grade heat is still rejected. The objectives of this work include evaluating the possibility of utilizing the low grade heat outside the process generating it, in a useful manner that has both economic and environmental benefits. In the Middle East, where the oil and gas industry rejects vast amounts of low grade heat, recovery and utilization for desalination is becoming a serious option. This work proposes the utilization of low grade heat in membrane distillation for desalination and establishes a balance between capital and operating costs as well as carbon footprint reduction. The work is based on a couple of case studies involving well-established processes, namely the vinyl chloride monomer and gas-to-liquids processes. The recovery of low grade heat will be coupled with seawater cooling, thus providing a warm source of salty water feed to the membrane distillation system. The work indicated that quality potable water may be produced for the petrochemical plants and neighboring living quarters at a reasonable cost. This approach may reduce the demand for fresh water from desalination plants in major industrial complexes, making these self-sufficient in fresh water. Benefits are both economic and environmental.
-
-
-
Room temperature ammonia gas sensor based on different acid-doped polyaniline-polyvinyl alcohol blends
Authors: Nabil Kassem Madi, Jolly Bhadra, Noora Al-Thani and Mariam A. Al-Maadeed
In the present work, we report the performance of a gas sensor based on polyaniline-polyvinyl alcohol (PANI-PVA) thin films, with the aim of developing a usable sensor. The PANI-PVA blends were doped with camphorsulphonic acid (CSA), naphthalenesulfonic acid (NSA), dodecyl benzene sulfonic acid (DBSA) and p-toluene sulfonic acid (PTSA). CSA-doped PANI nanocomposite sensors were fabricated on glass substrates by dripping, and their gas sensing characteristics for ammonia (NH₃) were investigated at room temperature. PANI was prepared by the dispersion polymerization method. Appropriate amounts of PANI and acid were mixed in a mortar and pestle. The mixture was dissolved in 100 mL of water, stirring at room temperature for 3 hours. The blend solution was then used to cast films on glass slides. The PANI-PVA blend films were characterized for surface as well as structural morphology using SEM and XRD. The morphological analysis shows nanoparticle formation of different shapes depending upon the dopant type. The XRD patterns show some degree of crystallinity in the blend films. The FTIR spectra show chemical crosslinking between the polymers. The thermal study reveals three steps of degradation of the polymer blends. The electrical property studies were conducted by in-plane I-V characteristics and four-probe conductivity. We used our blend as an ammonia gas sensor. Among all four sensors, the blend film doped with DBSA had good sensitivity and reversibility. This might be because of its enhanced surface morphology, which facilitates good adsorption and desorption of ammonia gas on the surface, and its high conductivity. In this study, ammonia gas sensors based on PANI-PVA composite films were prepared by a solution casting method. The composite films have been characterized by XRD, FTIR and SEM measurements. The SEM images have shown that the PANI-PVA film has a different morphology depending on the type of doping acid. The film exhibits a significant resistivity response upon exposure to ammonia gas at room temperature. It was found that these sensors are sensitive, stable, fast in response and easy to regenerate at room temperature. The advantages of this composite sensor compared to the pure PANI sensor are its fast regeneration associated with improved mechanical properties and chemical stability.
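Resistive responses of this kind are commonly quantified as a relative resistance change (one common convention, stated here for context; the abstract does not give the exact definition used):

    S(\%) = \frac{R_{\mathrm{NH_3}} - R_{\mathrm{air}}}{R_{\mathrm{air}}} \times 100,

where R_air and R_NH3 are the film resistances before and during exposure; sensitivity, response time and regeneration can then be compared across the CSA-, NSA-, DBSA- and PTSA-doped films on a common scale.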
-
-
-
Development of laboratory flow-through system for Arabian Killifish embryo toxicity test
Authors: Suhur Saeed, Nayla Al-Naema and Eric Febbo
Background: The use of fish embryos for toxicity testing (FET) is under consideration as an alternative to traditional acute fish toxicity tests. For the past two years, a marine fish embryo test (mFET) has been under development in our laboratory as a routine ecotoxicological test for risk assessment of potential contaminants around the Qatari coastal area. Objective: The objectives of this study were to: a) develop and use a flow-through system to optimize the mFET test conditions so as to maintain stable concentrations of volatile compounds; b) correlate the flow-through mFET to the conventional acute fish test; c) investigate changes in the sensitivity of Arabian Killifish embryos to the toxicity of chlorine-produced oxidants under flow-through conditions compared to the previous static mFET. Methods: The flow-through system was implemented using custom-designed glass chambers. Peristaltic pumps were used to ensure constant flow conditions. To investigate the effect of the flow-through mFET on the toxicity of chlorine, fertilized eggs were exposed to aqueous concentrations of calcium hypochlorite for up to 240 hours. The investigated endpoints included: coagulated eggs, somite development, heartbeat, tail detachment, hatchability and post-hatch mortality. Results: The present investigation demonstrated that the custom-designed flow-through system enhanced the FET conditions compared to the static FET. The flow-through system stabilized the chlorine concentration and provided a larger volume, which allowed an increase in the number of test embryos and sufficient test media for chemical analysis. Conclusions: Our data showed that the flow-through system improved the mFET assay for conditions like control survivability and for the main goal of bringing the sensitivity of the embryos into alignment with published data on the effects of chlorine-produced oxidants. This dataset, in conjunction with our previous work on static test conditions, provides a wider range of applicability for the assay. In order to further support the mFET as an alternative to acute fish testing, the flow-through FET is currently being extended to other potential compounds of interest.
-
-
-
Kinetic modeling of GTL product distribution over a promoted cobalt catalyst
Authors: Branislav Todic, Wenping Ma, Gary Jacobs, Burtron Davis and Dragomir Bukur
Qatar is the world leader in fuel production from gas-to-liquid (GTL) technology and home of the largest GTL plant in the world (Pearl GTL, a joint development by Qatar Petroleum and Shell). In the GTL process natural gas is converted into liquid fuels and waxes. Fischer-Tropsch synthesis (FTS) is the key part of that process. FTS is a heterogeneously catalyzed reaction in which a mixture of CO and H₂ is converted into a wide range of hydrocarbon products. Advanced design and optimization of large-scale FTS reactors requires detailed knowledge of the reaction chemistry. Kinetic models used for this application need to be robust, physically reasonable and fundamental. This study presents one such model. Experiments were conducted in a 1-L slurry reactor over a 25% Co/0.48% Re/Al₂O₃ catalyst. A broad range of operating conditions was covered (i.e., temperatures of 478, 493 and 503 K, pressures of 1.5 and 2.5 MPa, H₂/CO feed ratios of 1.4 and 2.1 and gas space velocities of 1.0-22.5 NL/g-cat/h). Rate laws for the kinetic model have been derived using the CO-insertion mechanism and the chain-length-dependent 1-olefin desorption concept. The model accounts for the formation of n-paraffins and 1-olefins. CO hydrogenation and insertion of CO into the growing chain are considered to be rate determining, as well as the chain termination steps. Non-isothermal model parameters were estimated by minimization of a multi-response objective function. A global minimum was obtained with a hybrid genetic algorithm, and a total of 696 experimental responses were used in the estimation. The estimated model parameters are meaningful with respect to physicochemical and statistical tests. They are also in good agreement with previously reported values for activation energies. The model fit is in good agreement with the experimental data, and the mean absolute relative residual (MARR) was 24%. The model also provides good predictions of the CO and H₂ rates of consumption, with MARRs of 17.7 and 16.1%, respectively. The main advantage of the proposed model is its ability to explain and predict the main features of the GTL product distribution in a physically meaningful and fundamental way over a wide range of industrially relevant process conditions.
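The fit statistic quoted above is conventionally computed as (standard definition, consistent with the 24%, 17.7% and 16.1% figures reported):

    \mathrm{MARR} = \frac{1}{N_{\mathrm{resp}}} \sum_{i=1}^{N_{\mathrm{resp}}} \left| \frac{y_i^{\mathrm{exp}} - y_i^{\mathrm{calc}}}{y_i^{\mathrm{exp}}} \right| \times 100\%,

where y_i^exp and y_i^calc are the measured and model-predicted responses over the experimental data set used in the estimation.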
-
-
-
Role of aromatics and paraffinic hydrocarbons on synthetic jet fuels properties
Authors: Maria Orillano, Ibrahim Al-Nuaimi, Dhabia Al-Mohandi, Samah Warrag and Nimir Elbashir
With sponsorship from Qatar Science and Technology Park to support Qatar Airways' vision as a world leader in alternative fuels, our research team started work in this field in 2009 as part of a unique academia-industry collaboration model. The undergraduate student researchers are funded by Qatar National Research Fund and play a major role in this project, participating in all its experimental, computational, and theoretical phases. Phase I of this work covers the development of correlations between the Gas-to-Liquid (GTL) synthetic jet fuels' building blocks (paraffinic hydrocarbons) and their physical properties (i.e. density, viscosity, flash point, freezing point, heat content, etc.). The objective of this phase was to identify optimum fuel characteristics and to meet aviation industry standards (e.g. ASTM D1655 & D7566). In Phase II, the experimental data were analyzed using sophisticated statistical techniques (i.e. Artificial Neural Network) to accurately describe the (non-)linear trends for all properties. In Phase III, we investigated the role of aromatics in improving certain properties of GTL jet fuels, such as density and elastomer compatibility (which is essential for fuel tank sealing). Analogous to the investigations conducted in Phase I, visualization models were developed to identify the optimum GTL jet fuel composition formulated by normal-, iso-, cyclic-paraffins and mono-aromatics. Currently, we are working on Phase IV which involves expanding our model to include new additives and component families in order to optimize the blending strategy for Qatar's GTL products and to increase their market value. The success in this direction could provide cheaper and more environmentally friendly synthetic jet fuels derived from natural gas, compared to the current oil-derived Jet A-1 fuels. In addition to the technical results, our fuel characterization lab acts as a training ground for young and talented scientists in order to develop their technical and soft skills. Students get the opportunity to work in a professional environment with strict safety and quality regulations on par with industrial standards, to report scientific data and to draw conclusions from this information in order to make decisions on the next course of research activities.
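The Phase II style of correlation can be pictured with a small neural-network regression; the sketch below uses synthetic placeholder data and an invented blending rule, not the project's measurements or its actual model.

    # Illustrative composition-to-property regression (synthetic data, not measured GTL fuels).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.dirichlet(np.ones(4), size=200)          # blend fractions: n-, iso-, cyclo-paraffins, aromatics
    density = 0.69 + 0.05 * X[:, 2] + 0.17 * X[:, 3] + 0.002 * rng.standard_normal(200)  # toy rule, g/cm3

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    model.fit(X, density)

    blend = np.array([[0.55, 0.25, 0.10, 0.10]])     # a candidate blend to screen against a spec limit
    print(round(float(model.predict(blend)[0]), 3))  # predicted density for the candidate blend

Once trained on measured properties, such a surrogate lets candidate blends be screened against specification limits before any laboratory work is committed.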
-
-
-
An energy integration approach for gas-to-liquid process
Authors: Ibrahim Al-Nuaimi, Ahmed AlNouss and Layal Bani Nasser
Gas-to-liquid (GTL) products have increasingly become a promising energy resource over the past two decades. Qatar possesses the third largest proven reserve of natural gas in the world, with a net capacity approaching 900 tcf (trillion cubic feet). This has motivated Qatar to develop a long-term vision involving the investment of huge expenditures into world-class commercial plants that convert natural gas into value-added liquid hydrocarbon products. This vision was translated into the Oryx GTL plant in late 2006 and the Shell Pearl GTL plant, reported to be the largest in the world, which began operations officially at the end of 2011, leading Qatar to be described as the world capital of GTL. The substantial usage of energy in Fischer-Tropsch (FT) GTL processes and the complexity of energy distribution throughout the process offer opportunities for heat integration and waste heat recovery. The objective of this paper is to carry out an energy integration analysis for a typical GTL process. The approach started with process simulation to develop the base-case data for the process. Next, energy integration tools were used to optimize energy distribution, heat exchange, and waste heat recovery. Finally, simulation and techno-economic analysis were utilized to assess the performance of the proposed design changes and their economic viability. The resulting pinch diagram showed a single pinch with a fixed minimum driving force of 10 °C, in which both external cooling and heating utilities were required to satisfy energy needs. Meanwhile, the Grand Composite Curve (GCC) showed that flue gases cover most of the heating utility while cooling water covers all of the required cooling utility. Moreover, the waste heat recovery study supported by HYSYS software illustrated considerable recoveries of various steam qualities from the flue gases discharged within the FT reactor section. In conclusion, energy integration of a GTL process was found to be promising, as the targets for net energy savings were found to be close to 40%. Additionally, various qualities of steam can be generated in a cost-effective manner. On top of that, most of the recommended projects have attractive payback periods, below six years.
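The pinch targeting itself follows the standard problem-table (heat-cascade) calculation; the short Python sketch below applies it to invented stream data with the 10 °C minimum driving force, purely to illustrate the procedure (the GTL flowsheet values are not reproduced).

    # Minimal problem-table / heat-cascade sketch for pinch analysis with dT_min = 10 C.
    DT_MIN = 10.0
    # (kind, supply T [C], target T [C], CP [kW/C]) -- illustrative streams, not the GTL flowsheet
    streams = [
        ("hot", 250.0, 40.0, 0.20),
        ("hot", 200.0, 80.0, 0.45),
        ("cold", 30.0, 180.0, 0.30),
        ("cold", 140.0, 230.0, 0.35),
    ]

    def shifted(t, kind):
        """Shift hot streams down and cold streams up by dT_min/2."""
        return t - DT_MIN / 2 if kind == "hot" else t + DT_MIN / 2

    # Temperature-interval boundaries in shifted temperatures, hottest first.
    bounds = sorted({shifted(t, k) for k, ts, tt, cp in streams for t in (ts, tt)}, reverse=True)

    # Net surplus/deficit of each interval, cascaded from the top of the temperature scale.
    cascade, heat = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        net_cp = 0.0
        for kind, ts, tt, cp in streams:
            top = max(shifted(ts, kind), shifted(tt, kind))
            bot = min(shifted(ts, kind), shifted(tt, kind))
            if bot <= lo and top >= hi:              # stream spans this whole interval
                net_cp += cp if kind == "hot" else -cp
        heat += net_cp * (hi - lo)
        cascade.append(heat)

    qh_min = -min(min(cascade), 0.0)                 # minimum hot utility (kW)
    qc_min = cascade[-1] + qh_min                    # minimum cold utility (kW)
    print(round(qh_min, 1), round(qc_min, 1))        # 4.0 23.5 for the toy data above

The most negative point of the cascade fixes the minimum hot utility and locates the pinch; utility targets of this kind are what the study compares against the base-case consumption to arrive at its savings figure.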
-