Qatar Foundation Annual Research Forum Volume 2012 Issue 1
- Conference date: 21-23 Oct 2012
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2012
- Published: 01 October 2012
QR Cache: Linking mLearning theory to practice in Qatar
By Robert Power
Background and Objectives: Virtually ubiquitous mobile and wireless network coverage, combined with high mobile device penetration, creates an opportunity for mobile learning (mLearning) research to focus on linking theory and practice. The QR Cache project at College of the North Atlantic-Qatar (CNA-Q) evolved from the desire of students to use their own mobile devices, and from the need for situated mLearning solutions to the training demands of Qatar's technical workforce. QR Cache was developed as a set of exemplars of situated mobile reusable learning objects (RLOs) for students studying introductory computer hardware devices and concepts. The QR Cache project uses a Design-Based Research (DBR) approach to study the development of the RLOs, as well as the link between instructional design and established learning theories. Moore's Transactional Distance Theory and Koole's FRAME model provide theoretical grounding for both design decisions and the interpretation of results. Methods: Participants used their own mobile devices to scan Quick Response (QR) codes affixed to computer equipment. The QR codes redirected their smartphones to websites with the English names of, and some basic facts about, the devices. Participants then completed an online questionnaire about their experiences. Survey responses were analyzed for indicators of transactional distance, as well as for the domains of effective mLearning design outlined by the FRAME model. Results: Eight students completed the online questionnaire in the pilot phase. All participants were easily able to access the RLOs using their own mobiles. Responses indicated that they found the situated learning strategy desirable. Students also indicated that they revisited the RLOs several times and that the activities generated interaction in the form of discussions with their peers and instructors. Conclusions: Student experiences with the QR Cache RLOs demonstrate low levels of transactional distance between learners and content, their peers, and their instructors. They also show the strong convergence of the learner, social and device usability aspects of the FRAME model required to optimize the mLearning experience. However, the limited number of pilot-phase participants makes it difficult to generalize these findings. Expanding the research to include more participants in a subsequent phase would address this limitation.
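As an illustration of the delivery mechanism described in the Methods above, the short sketch below encodes a hypothetical RLO URL as a QR code using the third-party Python qrcode package; the URL and filename are placeholders, not artifacts of the QR Cache project.

```python
# Minimal sketch: encode a (hypothetical) RLO URL as a QR code image.
# Assumes the third-party "qrcode" package (with Pillow) is installed.
import qrcode

rlo_url = "http://example.edu/rlo/cpu-device"  # placeholder, not the project's URL
img = qrcode.make(rlo_url)       # build the QR code symbol
img.save("cpu-device-qr.png")    # print and affix next to the hardware item
```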
Multi-modal biometric authentication system using face and online signature fusion
Authors: Youssef Elmir, Somaya Al-Maadeed, Abbes Amira and Abdelaali Hassaine
Background and Objectives: There is high demand for face- and signature-based multimodal biometric systems in various areas, such as banking and secured mobile phone operating systems. Few studies have been carried out in this area to enhance the performance of identification and authentication based on the fusion of these modalities. In multimodal biometric systems, the most common fusion approach is integration at the matching-score level, but it is necessary to compare this fusion strategy to others, such as fusion at the feature level. Our system combines these two biometric traits and provides better recognition performance than single biometric systems. Multimodal Authentication Systems: The first monomodal verification system is based on face verification using Gabor filters for feature extraction. The second system is based on online signature verification using Nalwa's method. Classification is performed using the cosine Mahalanobis distance. Due to its efficiency, we used the max-of-scores strategy to fuse face and online signature scores. The second proposed system is based on fusion at the feature level. Results and Conclusions: The performance of the feature-level fusion and max-of-scores fusion techniques using face and online signature modalities was evaluated on the ORL face database and the QU signature database. The lowest equal error rate is obtained by using a fusion strategy based on the maximum of the monomodal systems' scores. Additionally, feature-level fusion methods demonstrate a low equal error rate compared with the monomodal systems, and their verification time is not affected by the increase in feature-vector dimension; by contrast, fusion at the score level is clearly affected and takes longer to verify, because scores must be obtained from each biometric trait before the fusion step.
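To make the two fusion strategies concrete, the following toy sketch contrasts max-of-scores fusion with feature-level concatenation; the score and feature arrays are illustrative stand-ins, not the ORL or QU data, and score normalization is assumed to have been done already.

```python
import numpy as np

# Illustrative, already-normalized match scores for one probe against three enrolled users.
face_scores = np.array([0.81, 0.40, 0.22])       # from the Gabor-based face matcher
signature_scores = np.array([0.65, 0.72, 0.18])  # from the online-signature matcher

# Score-level fusion: max-of-scores keeps the stronger evidence per user.
fused_scores = np.maximum(face_scores, signature_scores)

# Feature-level fusion: concatenate feature vectors before a single matcher.
face_features = np.random.rand(64)        # stand-in for Gabor features
signature_features = np.random.rand(32)   # stand-in for signature features
fused_features = np.concatenate([face_features, signature_features])

print(fused_scores, fused_features.shape)
```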
Automated essay scoring using structural and grammatical features
Automated essay scoring is a research field which is continuously gaining popularity. Grading essays by hand is expensive and time-consuming; automated scoring systems can yield fast, effective and affordable solutions that make it practical to grade essays and to use other sophisticated testing tools. This study has been conducted on a dataset of thousands of English essays belonging to eight different categories, provided by the Hewlett Foundation. Each category corresponds to the same question or problem statement. The score of each essay in the training set is provided by human raters. Several features have been determined to predict the final grade. First, the number of occurrences of the 100 most frequent words in English is computed for each essay. Then, the list of average scores associated with each constituent word in the training set is determined. From this list, several statistical values are considered as separate features, including the minimum, maximum, mean and median values, variance, skewness and kurtosis. These statistical features are also computed for the list of average scores associated with each constituent bigram (sequence of two words). Moreover, each word in the essays has been tagged with its grammatical role (verb, noun, adverb, etc.) using the NLTK toolkit. The number of occurrences of each grammatical role has also been used as a separate feature, as sketched below. All these features have been combined using different classifiers, with random forests generally preferred. This system participated in the Automated Essay Scoring Contest sponsored by the Hewlett Foundation. The results have been evaluated using the quadratic weighted kappa error metric, which measures the agreement between the human rater and the automatic rater. This metric typically varies from 0 (only random agreement) to 1 (complete agreement). This method scored 0.76519 and ranked 13th out of 156 teams: http://www.kaggle.com/c/asap-aes/leaderboard. The proposed system combines structural and grammatical features to automatically grade essays and achieves promising performance. There is ongoing work on extending the developed method to short-essay scoring as well as to grading an unseen category of essays.
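A hedged sketch of the feature-extraction step described above: statistical summaries of per-word average scores plus part-of-speech counts via NLTK. The word-score table and essay are made-up placeholders, and the NLTK tokenizer/tagger data ('punkt', 'averaged_perceptron_tagger') must be downloaded separately.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from collections import Counter
import nltk  # requires the 'punkt' and 'averaged_perceptron_tagger' data packages

# Hypothetical table of average training-set scores per word (placeholder values).
word_avg_score = {"the": 2.1, "economy": 3.4, "because": 2.8}

essay = "The economy grew because trade expanded."
tokens = nltk.word_tokenize(essay.lower())

# Statistical features over the per-word average scores found in this essay.
scores = [word_avg_score[w] for w in tokens if w in word_avg_score]
stat_features = [min(scores), max(scores), np.mean(scores), np.median(scores),
                 np.var(scores), skew(scores), kurtosis(scores)]

# Grammatical-role counts: number of occurrences of each POS tag.
pos_counts = Counter(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(essay)))
print(stat_features, pos_counts)
```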
Modeling Datalog fact assertion and retraction in linear logic
Authors: Edmund Lam and Iliano Cervesato
Practical algorithms have been proposed to efficiently recompute the logical consequences of a Datalog program after a new fact has been asserted or retracted. This is essential in a dynamic setting where facts are frequently added and removed. Yet while assertion is logically well understood as incremental inference, the monotonic nature of traditional first-order logic is ill-suited to model retraction. As such, the traditional logical interpretation of Datalog offers at most an abstract specification of Datalog systems, but has tenuous relations to the algorithms that perform efficient assertions and retractions in practical implementations. This work proposes a logical interpretation of Datalog based on linear logic. It not only captures the meaning of Datalog updates, but also provides an operational model that underlies the dynamic changes of the set of inferable facts, all within the confines of logic. We do this by explicitly representing the removal of facts and by enriching our linear-logic interpretation of Datalog inference rules with embedded retraction rules. These retraction rules are essentially linear implications designed to carry out the retraction of consequences when base facts are targeted for retraction. As such, we can map Datalog assertion and retraction onto the forward-chaining fragment of linear-logic proof search. We formally prove the correctness of this interpretation with respect to its traditional counterpart. In the future, we intend to exploit this work to develop a rich logic programming language that integrates Datalog-style assertion and retraction with higher-order multiset rewriting.
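The toy Python sketch below illustrates the operational asymmetry motivating this work under classical (monotonic) semantics: assertion can extend the existing fixpoint incrementally, whereas a naive retraction must recompute the consequences from the remaining base facts. It is not the paper's linear-logic encoding.

```python
# Toy illustration (not the paper's linear-logic encoding): assertion extends the
# existing fixpoint, while naive retraction must recompute it from base facts.
def fixpoint(base, rules):
    facts = set(base)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(facts):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# One rule: transitive closure of an "edge" relation into "path" facts.
def transitivity(facts):
    edges = {f[1:] for f in facts if f[0] == "edge"}
    paths = {f[1:] for f in facts if f[0] == "path"} | edges
    return {("path", a, c) for (a, b) in paths for (b2, c) in paths if b == b2}

base = {("edge", "a", "b"), ("edge", "b", "c")}
db = fixpoint(base, [transitivity])                        # initial consequences

db = fixpoint(db | {("edge", "c", "d")}, [transitivity])   # assertion: monotone, reuses db
base.add(("edge", "c", "d"))

base.discard(("edge", "b", "c"))                           # retraction: start over from base
db = fixpoint(base, [transitivity])
print(("path", "a", "c") in db)                            # False after retraction
```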
Probing equation of state parameter fitting in parallel computers
Authors: Marcelo Castier, Ricardo Figueiredo Checoni and Andre Zuber
The accurate design of chemical processes depends on the availability of models to predict the physical properties of the materials being processed. Thermodynamic properties such as enthalpies, entropies, and fugacities are particularly important in this context. Most of the models used to evaluate them have adjustable parameters, fitted to give the best possible representation of the experimental data available for a given substance or mixture. Depending on how much information is available, this may entail the use of hundreds or thousands of data points. As several modern thermodynamic models, especially equations of state, have intricate mathematical expressions, using so many data points to fit their parameters requires substantial computational effort. This makes it difficult to run the parameter fitting problem from many different initial estimates, which decreases the likelihood of finding the global minimum of the objective function used for parameter fitting. Although current desktops and laptops are capable of parallel computation, little has been done to take advantage of their computational power for equation of state parameter fitting. The authors have recently developed procedures to that end, executed on different desktop and laptop computers, which provided speedups compatible with the number of processors available. One procedure is based on the conventional, sequential simplex minimization algorithm with parallel evaluation of the objective function (SSPO approach). The other is based on a modified, parallel version of the simplex minimization algorithm with sequential evaluation of the objective function (PSSO approach). In this paper, we extend the evaluation of these procedures, executing them on the Suqoor supercomputer of Texas A&M University at Qatar, using single and multiple nodes. Because of the numerical algorithm used, speedups in the PSSO approach are limited by the number of parameters to be fitted, which is not the case for the SSPO approach. On the other hand, the PSSO approach often ends at solutions with smaller objective functions, showing a greater tendency to escape local minima.
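A minimal sketch of the SSPO idea: a sequential Nelder-Mead simplex whose objective evaluates the per-data-point residuals in parallel. The model and data are toy stand-ins for an equation of state and experimental measurements, not the authors' code.

```python
import numpy as np
from multiprocessing import Pool
from scipy.optimize import minimize

# Toy "model" standing in for an equation of state; params are fitted, point is (x, y_exp).
def residual(args):
    params, point = args
    a, b = params
    x, y_exp = point
    return (a * x / (1.0 + b * x) - y_exp) ** 2

def objective(params, data, pool):
    # SSPO-style: sequential simplex, but the per-point residuals are computed in parallel.
    return sum(pool.map(residual, [(params, p) for p in data]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(0.1, 5.0, 2000)
    data = list(zip(xs, 2.0 * xs / (1.0 + 0.5 * xs) + 0.01 * rng.standard_normal(xs.size)))
    with Pool() as pool:
        result = minimize(objective, x0=[1.0, 1.0], args=(data, pool), method="Nelder-Mead")
    print(result.x)  # should be close to the "true" parameters (2.0, 0.5)
```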
An efficient, scalable and high performance cloud monitoring framework
Authors: Suhail Rehman and Majd Sakr
Cloud computing has become a very popular platform for deploying data-intensive scientific applications, but this process faces its own set of challenges. Given the complexity of the application execution environment, routine tasks on the cloud such as monitoring, performance analysis, and debugging of applications become tedious and complex. These routine tasks often require close interaction with, and inspection of, multiple layers in the cloud, which traditional performance monitoring tools fail to account for. In addition, many of these tools are designed for real-time analysis and only provide summaries of historical data. This makes it difficult for a user to trace the past runtime performance of an application. We present a new monitoring framework called All-Monitor Daemon (Almond). Almond keeps close tabs on cloud inventory by communicating with a cloud resource manager (such as VMware vCenter for a VMware private cloud). Almond then connects to each individual physical host in the inventory and retrieves performance metrics through the hypervisor. Examples of metrics include CPU, memory, disk and network usage. Almond is also designed to collect performance information from the guest OS, allowing the retrieval of metrics from the application platform as well. Almond was designed from the ground up for enhanced scalability and performance. The framework uses a Time Series Database (TSD), and a decentralized monitoring architecture allows for fast performance queries while minimizing overhead on the infrastructure. Almond collects performance data from all layers of the software stack, and collected data remains persistent for future analysis. Our preliminary results indicate a 70% improvement in hypervisor query response time from these enhancements, compared to our previous monitoring solution, VOTUS. Almond is a work in progress and will feature an intuitive web-based interface that allows system administrators and cloud users to view and analyze resources on the cloud. Once completed, Almond promises to be a highly scalable, fast and dynamic cloud resource monitor.
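A hedged sketch of the collection loop: poll each host and append the metrics to a per-(host, metric) time series. The query_hypervisor function is a hypothetical stand-in for Almond's hypervisor/vCenter queries, and the small in-memory store stands in for the TSD.

```python
import time
from collections import defaultdict

# Hypothetical stand-in for Almond's hypervisor query (e.g., via a vCenter-style API).
def query_hypervisor(host):
    return {"cpu_pct": 42.0, "mem_pct": 63.5, "disk_kbps": 1200, "net_kbps": 800}

class TimeSeriesStore:
    """Minimal in-memory time-series store: (host, metric) -> [(timestamp, value), ...]."""
    def __init__(self):
        self.series = defaultdict(list)

    def append(self, host, metrics, ts=None):
        ts = ts if ts is not None else time.time()
        for name, value in metrics.items():
            self.series[(host, name)].append((ts, value))

    def query(self, host, metric, since=0.0):
        return [(t, v) for (t, v) in self.series[(host, metric)] if t >= since]

tsd = TimeSeriesStore()
for host in ["host-01", "host-02"]:          # one collection cycle over the inventory
    tsd.append(host, query_hypervisor(host))
print(tsd.query("host-01", "cpu_pct"))
```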
Arabic named entity operational recognition system
Authors: Shiekha Ali Karam, Ali Jaoua and Samir Elloumi
Extracting named entities is an important step in information extraction from text, based on a given ontology. Dealing with the Arabic language involves additional challenges compared to English, French and other languages from similar families. The major difficulties are a complex morphological system, the absence of capitalization, and the lack of standardization of Arabic writing. The Arabic language has a rich and complex morphological landscape due to its highly inflected nature. Usually, any Arabic lemma can be constructed using different internal structures, prefixes and suffixes. Furthermore, there is no standardization of Arabic writing because of the spelling inconsistency of Arabic words. In this work, we propose an operational hybrid approach combining dictionary-based and rule-based detection for extracting seven categories of named entities: organization name, date, interval, price/value, percentage, currency and unit. The dictionary-based approach performs exact or approximate matching of words against prepared Arabic organization names; when no exact match with the dictionary words is found, approximate matching is an efficient solution to the morphological difficulties. Specificities of the Arabic language are also handled by rule-based detection, which captures entity patterns in terms of regular expressions provided by experts. We evaluated our Arabic named entity recognition system on financial news articles and obtained a recognition rate of around 80%.
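A small sketch of the rule-based side for two of the categories (percentage and price/currency) using regular expressions; the patterns and example sentence are illustrative, not the project's actual rules.

```python
import re

# Illustrative rule-based patterns (not the project's actual rules).
PERCENT = re.compile(r"\d+(?:[.,]\d+)?\s*(?:%|بالمئة|في المئة)")
PRICE = re.compile(r"\d+(?:[.,]\d+)?\s*(?:مليون\s*|ألف\s*)?(?:ريال|دولار|دينار|يورو)")

text = "ارتفعت الأرباح بنسبة 12.5% لتبلغ 300 مليون ريال هذا العام."
print(PERCENT.findall(text))   # ['12.5%']
print(PRICE.findall(text))     # ['300 مليون ريال']
```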
Scalability evaluation of cluster size for MapReduce applications in elastic compute clouds
Authors: Fan Zhang and Majd F Sakr
The MapReduce programming model is a widely accepted solution to address the rapid growth of so-called big-data processing demands. Various MapReduce applications with huge volumes of input data can run on an elastic compute cloud composed of many computing instances. Such an elastic compute cloud is best represented by a virtual cluster, such as Amazon EC2. Performance prediction of MapReduce applications would help in understanding their scalability patterns. However, it is challenging due to the complex interaction of the MapReduce framework with the underlying, highly parameterized virtualized resources. Furthermore, MapReduce's high-dimensional space of configuration parameters adds to the prediction complexity. We have evaluated a series of representative MapReduce applications on Amazon EC2 and identified how the cluster size affects the execution times. The scaling curves of all applications are studied to discover the scalability pattern. Our major findings are as follows: (1) the execution times of MapReduce applications follow a power-law distribution; (2) for map-intensive applications, the power-law scalability starts from a small cluster size; and (3) for reduce-intensive applications, the power-law scalability starts from a larger cluster size. We attempted to fit our scalability results using three regression methods: polynomial regression, exponential regression and power regression. Measured by root mean squared error (RMSE), power regression performs best at performance prediction compared with the other methods evaluated. This was the case across all the benchmark applications studied. Our performance prediction methods will aid cloud users in choosing appropriate computing resources, both virtual and physical, from small-scale experimental test runs, for cost saving.
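A minimal sketch of the power-regression step: fit runtime = a·size^b in log-log space and score the fit with RMSE; the (cluster size, runtime) pairs below are made up.

```python
import numpy as np

# Made-up (cluster size, runtime in seconds) measurements for one MapReduce application.
sizes = np.array([4, 8, 16, 32, 64])
runtimes = np.array([1900.0, 1020.0, 560.0, 300.0, 170.0])

# Power regression: runtime = a * size**b  <=>  log(runtime) = log(a) + b*log(size).
b, log_a = np.polyfit(np.log(sizes), np.log(runtimes), deg=1)
predicted = np.exp(log_a) * sizes ** b

rmse = np.sqrt(np.mean((predicted - runtimes) ** 2))
print(f"exponent b = {b:.2f}, RMSE = {rmse:.1f} s")
```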
Design Considerations for Content and Personality of a Multi-lingual Cross-Cultural Robot
Authors: Micheline Ziadee, Nawal El Behih, Lakshmi Prakash and Majd Sakr
Our aim is to develop a culturally aware robot capable of communicating with people from different ethnic and cultural backgrounds and performing competently in a multi-lingual, cross-cultural context. Our test bed is a female robot receptionist, named Hala, deployed at the reception area of Carnegie Mellon University in Qatar. Hala answers questions in Arabic and English about people and the locations of offices, classrooms and other rooms in the building. She also provides information about the weather, Education City, and her personal life. Our first model, Hala 1.0, was a bilingual robot extending an American model whose personality and utterances conform to American culture. Three years of interaction logs have shown that 89% of Hala 1.0's interactions were in English. We conjecture that this is due to the robot's poor ability to portray both Arabic and American cultures equally and to its limited Arabic content. To investigate cultural factors that bear significantly on communication, we developed Hala 2.0, also a bilingual robot, designed to be an Arab-American robot with more Arabic features in appearance, expression and interaction. The robot's personality is constructed taking into account the socio-cultural context in which its interactions will take place. To achieve bilingualism we had to create symmetry between the Arabic and English linguistic content. Since the robot's utterances were developed primarily in English, we resorted to translating them into Arabic and adapting them to the constraints of our socio-cultural context. Since Arabic is a highly inflected language, we adopted the plural case in formulating the robot's replies so as to avoid gender bias. To improve query coverage, we added word synonyms, including context-related synonyms (e.g., هل تحبين عملك؟ / هل يعجبك عملك؟) and different formulations of the same question (e.g., "do you sleep?" / "do you go to sleep?", and هل تنامين؟ / أتنامين؟). Furthermore, based on three years of recorded query logs, we expanded the range of topics that the robot is knowledgeable about by adding 3000 question/answer sentences to increase the robot's capacity for engaging users. All content and utterances were developed to align with the robot's designed personality traits.
Performance of spectrum sharing systems with two-way relaying and multiuser diversity
Authors: Liang Yang, Mohamed-Slim Alouini and Khalid Qaraqe
In this paper, we consider a spectrum sharing network with two-way relaying and multiuser diversity. More specifically, the secondary transmitter with the best channel quality is selected and splits part of its power to relay its received signals to the primary users using the amplify-and-forward relaying protocol. We derive a tight approximation for the resulting outage probability. Based on this formula, the performance of the spectrum sharing region and the cell coverage are analyzed. Numerical results are given to verify our analysis and are discussed to illustrate the advantages of our newly proposed scheme.
Identifying stressful and relaxation activities using an ambulatory monitoring device
Authors: Hira Khan, Beena Ahmed, Jongyoon Choi and Ricardo Gutierrez-Osuna
Background and Objective: The Autonomic Nervous System (ANS) regulates physiological processes autonomously through the sympathetic (SNS) and parasympathetic (PNS) systems, with both working in balance: sympathetic input accelerates the heart rate and prepares the body for emergencies, while parasympathetic input slows the heart rate and relaxes the body. Stress can lead to imbalances between these two systems, which can harm the human body. Persistent imbalances caused by chronic stress may trigger diseases such as hypertension, diabetes, asthma and depression, and may also lead to social problems. In this paper we discuss the effectiveness of a wearable physiological monitoring device in identifying the response of subjects to stressful and relaxation activities, in order to monitor the long-term impact of stress. Methods: To achieve this objective we developed a body sensor network to wirelessly monitor heart rate, respiratory rate and skin conductance. We collected data while subjects performed mental challenges, chosen to elicit a range of stress responses, interleaved with deep-breathing activities, which they also assessed. We examined the data using several measures of heart rate variability: spectral power in the low-frequency (HRV-LF) and high-frequency (HRV-HF) ranges, the mean (AVNN) and standard deviation of successive RR intervals, the proportion of successive RR intervals that differ by more than 25 ms (pNN25), and the root mean square of successive RR differences (RMSSD). The respiratory effect was evaluated using normalized respiratory high- and low-frequency components and their ratio. To assess the impact on skin conductance, the mean and standard deviation of the slowly varying tonic skin conductance level (SCL) and the rapidly varying phasic skin conductance response (SCR) were computed. Results: An analysis of the computed features indicated that not all features were able to accurately identify the impact of stress on the subjects. The HRV and skin conductance measures were more highly correlated with stress levels, with the best discrimination obtained using AVNN, RMSSD, pNN25, HRV-HF, SCL mean and SCR standard deviation. Conclusions: This study has shown that it is possible to extract features from physiological signals that can be transformed into meaningful measures of individual stress.
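A short sketch of the time-domain HRV measures named above (AVNN, SDNN, RMSSD, pNN25), computed from a made-up RR-interval series in milliseconds.

```python
import numpy as np

# Made-up RR intervals (ms); in the study these come from the wearable heart-rate sensor.
rr = np.array([812, 798, 830, 845, 801, 790, 822, 860, 815, 805], dtype=float)

diffs = np.diff(rr)                      # successive RR differences
avnn = rr.mean()                         # AVNN: mean RR interval
sdnn = rr.std(ddof=1)                    # standard deviation of RR intervals
rmssd = np.sqrt(np.mean(diffs ** 2))     # RMSSD: root mean square of successive differences
pnn25 = np.mean(np.abs(diffs) > 25.0)    # pNN25: fraction of |differences| > 25 ms

print(f"AVNN={avnn:.1f} ms  SDNN={sdnn:.1f} ms  RMSSD={rmssd:.1f} ms  pNN25={pnn25:.2f}")
```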
Computational intelligence in power electronics and electric drive control
Authors: Mohammad Jamil, Atif Iqbal and Mohammad Al-Naemi
Power electronic converters are finding growing applications in industry and household devices. The growing automation in industrial applications requires highly complex power electronic systems to process electric power. The control of power electronic converters still poses challenges due to the required precision. Several control strategies have been developed and reported in the literature, including pulse-width modulation (PWM), model predictive control, and sliding-mode control. Computational intelligence techniques include fuzzy logic, adaptive neuro-fuzzy inference systems, genetic algorithms, and related artificial intelligence methods. The basic idea is to incorporate human-like intelligence in the control system so that it makes intelligent decisions. The paper presents an overview of the application of computational intelligence techniques to the control of power electronic converters. Computational Intelligence (CI) has been in focus for quite a long time, and it is well known that CI techniques can help in solving complex multidimensional problems that are difficult to solve by conventional methods. CI technology is growing rapidly and its applications in various fields are being tested. Power electronic converters are one of the major application areas where this technology can play a vital and decisive role. The recent development of powerful digital signal processors and field-programmable gate arrays is making the implementation of CI technology economical, while improving performance and making systems more compact and competitive. Evidently, the future impact of CI technology on power electronic converter control is expected to be very significant.
Real-time online stereo camera extrinsic re-calibration
Authors: Peter Hansen, Brett Browning, Peter Rander and Hatem Alismail
Stereo vision is a common sensing technique for mobile robots and is becoming more broadly used in automotive, industrial, entertainment, and consumer products. The quality of range data from a stereo system is highly dependent on the intrinsic and extrinsic calibration of the sensor head. Unfortunately, for deployed systems, drift in extrinsic calibration is nearly unavoidable. Thermal variation and cycling, combined with shock and vibration, can cause transitory or permanent changes in extrinsics that are not modeled accurately by a static calibration. As a result, the quality of the sensor data degrades significantly. We have developed a new approach that provides real-time, continuous calibration updates to the extrinsic parameters. Our approach optimizes the extrinsic parameters to reduce epipolar errors over one or multiple frames. A Kalman filter is used to continually refine these parameter estimates and minimize inaccuracies resulting from visual feature noise and spurious feature matches between the left and right images. The extrinsic parameter updates can be used to re-rectify the stereo imagery. Thus, the method serves as a pre-processing step for any stereo process, ranging from dense reconstruction to visual odometry. We have validated our system in a range of environments and stereo tasks and demonstrated it at the recent Computer Vision and Pattern Recognition conference. Significant improvements to stereo visual odometry and scene mapping accuracy were achieved for datasets collected using both custom-built and commercial stereo heads.
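A toy sketch of the filtering step, under the simplifying assumption that each frame yields a noisy scalar estimate of one extrinsic parameter (say, relative yaw) from epipolar-error minimization; a scalar Kalman filter then smooths these per-frame estimates. This illustrates the idea, not the authors' full formulation.

```python
import numpy as np

# Per-frame noisy estimates (radians) of one extrinsic parameter, e.g. relative yaw,
# obtained by minimizing epipolar error on matched features (made-up values).
rng = np.random.default_rng(1)
true_yaw = 0.01
measurements = true_yaw + 0.003 * rng.standard_normal(50)

# Scalar Kalman filter with a (near-)constant-parameter motion model.
x, P = 0.0, 1.0          # state estimate and its variance
Q, R = 1e-8, 0.003 ** 2  # small process noise (slow drift), measurement noise

for z in measurements:
    P += Q                       # predict: parameter assumed almost constant
    K = P / (P + R)              # Kalman gain
    x += K * (z - x)             # update with the per-frame estimate
    P *= (1.0 - K)

print(f"filtered yaw = {x:.5f} rad (true {true_yaw})")
```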
Concurrency characterization of MapReduce applications for improved performance on the cloud
Authors: Mohammad Hammoud and Majd Sakr
Driven by the increasing and successful prevalence of MapReduce as an analytics engine on the cloud, this work characterizes the Map phase in Hadoop MapReduce to guide its configuration and improve overall performance. MapReduce is one of the most effective realizations of large-scale data-intensive cloud computing platforms. Hadoop is an open source implementation of MapReduce and is currently enjoying wide popularity. Hadoop has a high-dimensional space of configuration parameters (~200 parameters) that poses a burden on practitioners, such as computational scientists, systems researchers, and business analysts, who must set them for efficient and cost-effective execution. In this work we observe that MapReduce application performance is highly influenced by Map concurrency, defined in terms of two configurable parameters: the number of available map slots and the number of map tasks running over those slots. As Map concurrency is varied, we show that some inherent MapReduce characteristics allow systematic and well-informed prediction of the MapReduce performance response (runtime increase or decrease). We propose Map Concurrency Characterization (MC2), a predictor of MapReduce performance response. MC2 allows for optimized configuration of the Map phase and, consequently, enhanced Hadoop performance. Related schemes require mathematical modeling, simulation, dynamic instrumentation, static analysis of unmodified MapReduce application code, and/or actual performance measurements. In contrast, MC2 simply bases its decisions on MapReduce characteristics that are affected by Map concurrency. We implemented MC2 and conducted comprehensive experiments on a private cloud and on Amazon EC2 using Hadoop 0.20.2. Our results show that MC2 can correctly predict MapReduce performance response and provide up to 2.3X speedup in runtime for the tested benchmarks. This performance improvement allows MC2 to further serve in reducing cost in a cloud setting. We believe that MC2 offers a timely contribution to the data analytics domain on the cloud, especially as Hadoop usage continues to grow beyond companies like Google, Microsoft, Facebook and Yahoo!.
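A simplistic sketch of the Map-concurrency intuition: with a given number of map slots, the map tasks execute in ceil(tasks / slots) waves, so the direction of the runtime response to a slot-count change can be anticipated from the change in wave count. This assumes roughly uniform map-task times and is not the MC2 predictor itself.

```python
import math

def map_waves(map_tasks, map_slots):
    """Number of execution waves in the Map phase."""
    return math.ceil(map_tasks / map_slots)

def predict_response(map_tasks, slots_before, slots_after):
    """Crude directional prediction: fewer waves -> runtime expected to decrease."""
    before, after = map_waves(map_tasks, slots_before), map_waves(map_tasks, slots_after)
    if after < before:
        return "decrease"
    return "increase" if after > before else "roughly unchanged"

# Example: 96 map tasks; growing the slot count from 16 to 24 cuts the waves from 6 to 4.
print(map_waves(96, 16), map_waves(96, 24), predict_response(96, 16, 24))
```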
Performance analysis of distributed beamforming in a spectrum sharing system
Authors: Liang Yang, Mohamed-Slim Alouini and Khalid Qaraqe
In this paper, we consider a distributed beamforming (DBF) scheme in a spectrum sharing system where multiple secondary users share the spectrum with the licensed primary users under an interference temperature constraint. We assume that DBF is applied at the secondary users. We first consider optimal beamforming and compare it with the user selection scheme in terms of outage probability and bit-error-rate performance. Since perfect feedback is difficult to obtain, we then investigate a limited-feedback DBF scheme and develop an outage probability analysis for a random vector quantization (RVQ) design algorithm. Numerical results are provided to illustrate our mathematical formalism and verify our analysis.
Use of emerging mobile computer technology to train the Qatar workforce
Authors: Mohamed Ally, Mohammed Samaka, John Impagliazzo and Adnan Abu-Dayya
Background: According to the Qatar National Vision 2030, Qatar residents are encouraged to implement information and communication technology (ICT) initiatives in government, business, and education in pursuit of a knowledge-based society that embraces innovation, entrepreneurship, and excellence in education. This research project, funded by the Qatar National Research Fund (QNRF) under the National Priority Research Program (NPRP), contributes to this vision by investigating the use of innovative training technology to prepare Qataris for the 21st century workforce. Specifically, this research project investigates the use of mobile computer technology, such as mobile phones, tablet computers, and handheld computers, to train Qatar residents in workplace English so that they can communicate more effectively in the workplace. This presentation shares the results of a preliminary study. The project will be expanded using the "Framework for the Rational Analysis of Mobile Education" (FRAME) model (Figure 1), which describes the convergence of mobile technologies, human learning capacities, and social interaction. Objectives: The research evaluates the effectiveness of the mobile computer technology training and its transferability to the Qatar workplace environment. Methods: A total of 27 trainees participated in this study. They were given a pre-test, followed by the mobile learning training, and then a post-test. Results: Overall, the learners' performance improved by 16 percent after completing the training with mobile technology. Ninety-four percent of subjects said that the quality of the presentation on the mobile technology was either excellent, good, or fair. One hundred percent of subjects reported that the mobile technology helped them learn. Conclusion: The delivery of training using mobile computer technology was well received by learners. They liked the interactive and innovative nature of the training.
Novel applications of optical flow in measuring particle fields in dense crowded scenes at Hajj 2011
Authors: Khurom Hussain Kiyani, Emanuel Aldea and Maria Petrou
Background & Objectives: The Hajj and the minor Muslim pilgrimage of Umrah present some of the most densely crowded human environments in the world, and thus offer an excellent testbed for the study of dense crowd dynamics. Accurate characterisation of such crowds is crucial to improve the simulations that are ubiquitously applied to crowded environments such as train stations, and which require a high degree of detailed parameterisation. Accurate measurements of key crowd parameters can also help to develop better strategies for mitigating disasters such as the tragic stampede of 2006 that killed over 300 pilgrims during the Hajj. With Qatar set to be one of the major cultural centres in the region, e.g., hosting the 2022 FIFA World Cup, the proper control and management of large singular events is paramount for safety and for Qatar's standing on the international stage. We aim to use the unique video data gathered from Hajj 2011 to assess the dynamics of very dense crowded environments, with a particular focus on dangerous crowd instabilities and systemic shocks. Methods: We make use of increasingly complex optical flow algorithms (Horn-Schunck, Lucas-Kanade, TV-L1) to extract the instantaneous velocity field between each pair of frames in the videos. From these velocity vector fields we then construct the pedestrian (Lagrangian) flow field by the use of texture advection techniques that initially seed the flow with particles or random noise. Results: We present results of the above application of optical flow and texture advection methods to the data we collected in a field study during Hajj 2011. In particular, we illustrate the specific flow patterns that arise in such crowded environments. We also present the preliminary results of a pilot multiple-camera stereovision study conducted in the London Central Mosque on a Friday when the mosque was particularly crowded.
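A hedged sketch of extracting the dense instantaneous velocity field between two consecutive frames with OpenCV; Farneback's method is used here as a readily available stand-in for the Horn-Schunck, Lucas-Kanade and TV-L1 variants named above, and the frame paths are placeholders.

```python
import cv2
import numpy as np

# Two consecutive video frames (paths are placeholders).
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense optical flow: flow[y, x] = (dx, dy) pixel displacement between the frames.
# Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

speed = np.linalg.norm(flow, axis=2)   # instantaneous speed field (pixels per frame)
print(speed.mean(), speed.max())
```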
MetaSimulo: An automated simulation of realistic 1H-NMR spectra
Authors: Zeinab Atieh and Halima Bensmail
Qatar is now accumulating important expertise in biomedical data analytics. At QCRI, we are interested in supporting biomedical researchers based on their computational needs and in developing tools for data analytics in biomedical research. When computers extract patterns and classifiers from a body of data, they can be used to make predictions that help in identifying a threat or disease early. One powerful non-invasive technique for detecting and quantifying biomarkers linked to diseases (metabolites) is Nuclear Magnetic Resonance (NMR) spectroscopy. 1H NMR spectroscopy is commonly used in the metabolic profiling of biofluids. Metabolites in biofluids are in dynamic equilibrium with those in cells and tissues, so their metabolic profile reflects changes in the state of an organism due to disease or environmental effects. The analysis of signals obtained from patients may be performed via methods which incorporate prior knowledge about the metabolites that contribute to the 1H NMR spectroscopic signals, recorded in a metabolite dataset. This paper presents a novel model and a computationally automated approach that allows for the simulation of datasets of NMR spectra in order to test real data analysis techniques, hypotheses and experimental designs. Unlike others, this model generates NMR spectra of biofluids without being limited to a particular magnetic field strength or pH. It is simple to implement, requires little storage, and is easy to compute and compare. Moreover, it can treat metabolites with a relatively high number of 1H nuclei thanks to a special programming technique based on physical properties. This model can open the door to a new technique of metabolite quantification and thus a better determination of metabolite concentrations, which is the key to disease identification. The area of NMR is expanding rapidly and holds great promise for the discovery of potential biomarkers of diseases such as diabetes, an area of increasing concern in Qatar, and cancer, the third leading cause of death in Qatar.
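An illustrative sketch of the basic building block of such a simulation: summing Lorentzian lines at given chemical shifts. The peak list is made up, and the field-strength, pH and coupling effects handled by the MetaSimulo model are not modeled here.

```python
import numpy as np

def lorentzian(ppm_axis, center, height, width):
    """Single Lorentzian line centred at `center` ppm with half-width `width`."""
    return height * width**2 / ((ppm_axis - center) ** 2 + width**2)

# Made-up peak list (chemical shift in ppm, relative intensity); a real simulation draws
# these from a metabolite database and adjusts for field strength, pH and J-coupling.
peaks = [(1.33, 1.0), (3.05, 0.6), (4.12, 0.4)]

ppm = np.linspace(0.0, 10.0, 20000)
spectrum = sum(lorentzian(ppm, c, h, width=0.005) for c, h in peaks)
spectrum += 0.001 * np.random.default_rng(0).standard_normal(ppm.size)  # instrument noise
print(spectrum.shape)
```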
MegaMorph: Multi-wavelength measurements of nearby galaxy structures
Authors: Marina Vika, Steven P Bamford, Boris Haeussler and Alex Rojas
Fitting an analytic function to the two-dimensional surface brightness profile of a galaxy provides a powerful method of quantifying its internal structure. The resulting parameters reveal the size, shape and luminosity of the galaxy and of its separate structural components (e.g., disk and bulge). Current galaxy-fitting software packages consider only a single waveband image at a time. However, variations in stellar populations between and within galaxy structures mean that their observed properties depend on wavelength. Correctly studying the physical properties of galaxy components requires that these wavelength variations be accounted for. Multi-wavelength studies are presently possible, but require significant compromises: either the fits to each band must be done independently, or one band must be favored for determining the structural parameters, which are then imposed on fits to the other bands. Both of these approaches waste valuable information and therefore result in suboptimal decompositions. Our project, 'MegaMorph', is developing a next-generation tool for decomposing galaxies in terms of both their structures and their stellar populations. We aim to present a modified version of the two-dimensional galaxy fitting software GALFIT, developed as part of our MegaMorph project. These new additions enable a galaxy to be fit using images at many different wavelengths simultaneously. All the available data are therefore used to constrain a wavelength-dependent model of the galaxy, resulting in more robust, physically meaningful component properties. We verify the advantages of our technique by applying it to a sample of 160 well-studied galaxies at redshifts smaller than 0.025, with ugriz imaging from the Sloan Digital Sky Survey, and demonstrate how the resulting decompositions allow us to study the links between stellar population and galaxy structure in detail. Furthermore, we illustrate the advantages of our new method for galaxy surveys by fitting a sample of ~4000 artificially redshifted images of the galaxies described above. Our technique enables the physical parameters of galaxy components to be robustly measured at lower signal-to-noise and resolution than would otherwise be possible. This paves the way for detailed statistical studies of the physical properties of galaxy disks and bulges.
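A toy sketch of the core idea of a wavelength-dependent model: a 1D Sérsic profile whose effective radius and index vary smoothly (here, linearly) with wavelength, so that all bands constrain a single model. The numbers are illustrative and this is not the MegaMorph/GALFIT implementation.

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """1D Sersic surface-brightness profile; b_n uses the common approximation 2n - 1/3."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Wavelength-dependent structural parameters (illustrative linear variation across ugriz).
wavelengths = np.array([355.0, 468.0, 616.0, 748.0, 893.0])  # approximate band centres, nm
r_e = 3.0 + 0.002 * (wavelengths - 616.0)                    # effective radius (arcsec)
n = 2.5 + 0.001 * (wavelengths - 616.0)                      # Sersic index

r = np.linspace(0.1, 20.0, 200)
profiles = np.array([sersic(r, 100.0, re_i, n_i) for re_i, n_i in zip(r_e, n)])
print(profiles.shape)   # one radial profile per band, all tied to a single smooth model
```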
Random subcarrier allocation in OFDM-based cognitive radio networks
Authors: Sabit Ekin, Mohamed M. Abdallah, Khalid A. Qaraqe and Erchin Serpedin
Advances in wireless communications technologies (e.g., 3G, 4G and beyond) entail demands for higher data rates. A well-known solution to fulfill this requirement was to allocate additional bandwidth, which unfortunately is no longer viable due to radio-frequency (RF) spectrum scarcity. Nonetheless, spectrum measurements around the globe have revealed that the available spectrum is under-utilized. One of the most remarkable solutions to cope with this under-utilization of spectrum is the concept of cognitive radio (CR). In CR systems, the main implementation issues concern spectrum sensing: uncertainties in the propagation channel, the hidden primary user (PU) problem, sensing duration, and security. Hence, the accuracy and reliability of the spectrum sensing information can be inherently questionable. There has been no study to date that investigates the impact of the absence of spectrum sensing information in CR networks. In this work, motivated by imprecise and unreliable spectrum sensing information, we investigate the performance of an orthogonal frequency-division multiplexing (OFDM)-based (4G and beyond) CR spectrum sharing communication system that assumes random allocation and the absence of the PU's channel occupation information, i.e., no spectrum sensing is employed to acquire information about the availability of unused subcarriers or the PU's activity. The results show that, due to the lack of information about the PUs' activities, the SU randomly allocates the subcarriers of the primary network and collides with the PUs' subcarriers with a certain probability. The number of subcarrier collisions is found to follow a hypergeometric distribution. The SU's capacity with subcarrier collisions is employed as a performance measure to investigate the proposed random allocation scheme for both general and Rayleigh channel fading models. To avoid subcarrier collisions at the SUs due to the random allocation, and to obtain the maximum sum rate for the SUs based on the available subcarriers, an efficient centralized sequential algorithm is proposed and analyzed. The performance of such a communication set-up can provide various insights for studies in the CR literature, and it can be utilized as a valid candidate for performance comparison benchmarks in CR spectrum sharing systems where spectrum sensing information is available.
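A small sketch of the collision-count model: if the primary network occupies a subset of the subcarriers and the SU picks its own subset uniformly at random, the number of colliding subcarriers follows a hypergeometric distribution. The subcarrier counts below are illustrative.

```python
from scipy.stats import hypergeom

N_total = 128   # total subcarriers in the primary network's band
N_pu = 48       # subcarriers currently occupied by primary users
N_su = 32       # subcarriers the secondary user picks uniformly at random

# Number of collisions ~ Hypergeometric(M=N_total, n=N_pu, N=N_su).
collisions = hypergeom(N_total, N_pu, N_su)
print("mean collisions:", collisions.mean())
print("P(no collision):", collisions.pmf(0))
print("P(at most 5):   ", collisions.cdf(5))
```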