Qatar Foundation Annual Research Forum Volume 2012 Issue 1
- Conference date: 21-23 Oct 2012
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume: 2012
- Published: 01 October 2012
RFID distributed antenna systems for intelligent airport applications
Authors: Abdelmoula Bekkali and Abdullah Kadri

Recent developments in radio frequency identification (RFID) technology have enabled the aviation industry to benefit from its huge potential to solve issues related to baggage mishandling and to improve passenger journeys. The International Air Transport Association (IATA) estimates that airlines alone can realize savings of more than $733 million once RFID is fully implemented in the top 200 airports. Despite the obvious benefits of RFID technology in airport service applications, potential implementation obstacles and technical deployment challenges have to be overcome before passive RFID-based baggage handling and passenger asset monitoring systems become effective, low-cost and reliable. In existing RFID systems, where the readers are usually placed on the conveyors, the reliable read range is often limited to a few meters, and location is then inferred from the last portal at which a tag was read. In addition, tag-reading accuracy varies from 97-99% in most implemented systems. This work aims at improving airport efficiency and security through real-time locating and tracking of both passengers and baggage within airport facilities. We propose to apply the concept of optical distributed antenna systems (DAS) to RFID in order to develop an intelligent, adaptive and self-organizing passive RFID real-time locating system (RTLS) suitable for deployment in airports. This system can provide reliable coverage over a wide area using only a few RFID antennas. Our system has the following characteristics and advantages. Firstly, the RFID DAS allows users to rapidly identify all tags and collect information with a tag-reading accuracy of 100%, compared to ~90-99% for existing implemented RFID systems; it can thus eliminate manual-operation errors. Secondly, the interrogation area, or reader coverage, is greatly expanded, up to 20 meters compared to the 3-meter range of existing systems. Thirdly, in addition to baggage handling, the system can also track passengers, airport staff and devices. Finally, this system will provide a glimpse of the great potential returns that support the smart airport vision.
Everything happens somewhere: The role of geospatial analyses in government, environmental planning and research
Authors: Robert Arden Ross, Jurgen Foeken, Jeremy Jameson, Christian Strohmenger and Eric Febbo

With expansion in the fields of urbanization, technical development and industrialization, Qatar faces the challenge of advancing urban and industrial development while ensuring effective safety and environmental management. Whether assessing environmental quality, the impact of a metro system, or emergency response planning, decision making relies on the integration of disparate sources of data. Geospatial referencing provides a unifying framework from which civil engineering, public policy and research decisions can be made within a web-based environment. The foundation of a geospatial reference system should be the surface geology of the country and its marine habitats. In addition to bedrock geology, the database should include features such as lineaments, drainages, land movement studies, karst features and soil types. Marine data include biota, sediment types, water quality, hydrodynamic data and ecological sensitivity. In an arid climate, where water resources are key considerations, it is important to have a consistent geological framework for both surface and near-surface geology. Accordingly, the near-surface aquifers should be given a sequence-stratigraphic analysis. Within a web-based infrastructure, geospatial data can be made accessible to many users, with access controls set by data owners, and can be tailored to user needs. Such a geodatabase mapping tool may combine three functions: 1) a database/resource: a query tool for government, industry and the public; 2) an evaluation tool: giving the end user forward-modeling/impact-assessment and cause/effect analysis capability; 3) a communication tool for decision making, where results from a query may be circulated to a linked forum of specialists for consideration. The emergence of web-based, spatially referenced GIS tools shows great promise in many key aspects of public policy and research, including data storage, data analysis and decision making. The vision is the integration of diverse, multi-scale, multi-disciplinary spatial data, from geological rock samples to cutting-edge multi-spectral satellite imagery, for use in solving complex geological, geophysical, geotechnical and environmental problems.
Dynamic Health Level Seven Packetizer for on-the-fly integrated healthcare enterprises in disaster zones
Authors: Uvais Qidwai, Junaid Chaudhry and Malrey Lee

The advent of standards such as IEEE 11073 for device connectivity and Health Level Seven (HL7) provides an assimilating platform for medical devices and seamless dataflow among modern health information systems (HIS). However, to date, these standards are either not widely accepted or lack support for the 'on-the-fly' formation of an HIS in a disaster zone. In a situation where hybrid medical devices (standard-compliant and otherwise) are in operation, incomplete and ambiguous data can lead to fatal errors on the part of the technology. In order to eliminate this problem, we propose an HL7-compliant policy engine in support of the HL7 Reference Information Model (RIM). The policy engine is used for rich policy expression, expressive XML policies for HL7-compliant devices, and performance enhancement. Due to the dynamic nature of an on-the-fly HIS in a disaster zone, it is very costly to manage change and keep track of authentic HIS devices. We use the Java language, rather than scripting languages, to extend the HL7 RIM for creating and modifying policies, in order to overcome complexity and interoperability issues.
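As a rough illustration of the policy-engine idea (not the authors' implementation, which extends the HL7 RIM in Java), the following Python sketch gates device admission with a toy XML policy; the policy schema, field names and helper are invented for illustration:

import xml.etree.ElementTree as ET

# Hypothetical policy: only these device classes may feed the on-the-fly HIS.
POLICY = """
<policy>
  <allow deviceClass="pulse-oximeter" standard="IEEE11073"/>
  <allow deviceClass="ecg" standard="IEEE11073"/>
</policy>
"""

def admits(policy_xml, device_class, standard):
    """Return True if some <allow> rule matches the device class and standard."""
    root = ET.fromstring(policy_xml)
    return any(rule.get("deviceClass") == device_class and
               rule.get("standard") == standard
               for rule in root.findall("allow"))

print(admits(POLICY, "ecg", "IEEE11073"))        # True
print(admits(POLICY, "legacy-monitor", "none"))  # False: ambiguous data stays out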
Distributed rendering of computer-generated images on commodity compute clusters
Authors: Othmane Bouhali and Ali Sheharyar

Rendering of computer-generated imagery (CGI) is a very compute-intensive process. The rendering time of individual frames may vary from a few seconds to several hours depending on scene complexity, output resolution and quality. For example, a short animation project may be about two minutes in length, comprising 3600 frames at 30 frames per second (fps). With an average rendering time of approximately 2 minutes for a fairly simple frame, a simple 2-minute animation takes a total of 120 hours to render. Fortunately, animation rendering is a highly parallelizable activity, as frames can be computed independently of each other. A typical animation studio has a render farm, a sophisticated cluster of special computers (nodes) used to render 3D graphics. By spreading the rendering of individual frames across hundreds of machines, the overall render time is reduced significantly. Researchers and students in a university do not usually have a render farm available. Some universities have general-purpose compute clusters, but these are used mainly for running complex numerical simulations. Although rendering on these clusters is doable, it usually involves a generic queue manager (e.g. Condor, Portable Batch System (PBS)) rather than a specialized render queue manager (e.g. DrQueue, Qube!), which makes the rendering workflow tedious. This paper presents a solution that creates a render-farm-like environment using compute-cluster resources and, most importantly, the existing render queue manager. This way, researchers and students can be presented with a rendering environment similar to that of any animation studio. Several benchmarking results will be presented to demonstrate the potential benefit of this method in terms of execution time, simplicity and portability.
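As a back-of-the-envelope sketch of the frame-level parallelism described above, the Python below splits the 3600-frame example into per-node chunks; submit_chunk() is a hypothetical stand-in for a queue-manager submission call, and the numbers reproduce the abstract's 2-minute-per-frame arithmetic:

# Sketch: split an animation into per-node frame chunks for a render farm.
FRAMES = 3600          # 2 minutes at 30 fps
MIN_PER_FRAME = 2.0    # average render time per frame, in minutes

def chunks(n_frames, n_nodes):
    """Yield (start, end) frame ranges, one per node, as evenly as possible."""
    base, extra = divmod(n_frames, n_nodes)
    start = 0
    for i in range(n_nodes):
        end = start + base + (1 if i < extra else 0)
        yield (start, end)
        start = end

def submit_chunk(node, start, end):
    # Hypothetical submission call; a real system would hand this to the queue manager.
    print(f"node {node:3d}: render frames {start}-{end - 1}")

n_nodes = 100
for node, (start, end) in enumerate(chunks(FRAMES, n_nodes)):
    submit_chunk(node, start, end)

serial_hours = FRAMES * MIN_PER_FRAME / 60   # 120 h on one machine
parallel_hours = serial_hours / n_nodes      # ~1.2 h on 100 nodes
print(f"serial: {serial_hours:.0f} h, on {n_nodes} nodes: {parallel_hours:.1f} h")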
A new generic approach for information extraction
Authors: Samir Elloumi, Ali Jaoua, Fethi Ferjani, Nasredine Semmar, Jihad Al-Jaam and Helmi Hammami

Automatic information extraction (IE) is a challenging task because it involves experts' skills and requires well-developed natural language processing (NLP) algorithms. Moreover, IE is domain-dependent and context-sensitive. In this research, we present a general learning approach that may be applied to different types of events. In fact, we observed that even if a natural-language text containing a target event is apparently unstructured, it may contain a segment that we can map automatically into a structured form. Segments representing the same kind of event have a similar structure or pattern. Each pattern is composed of an ordered sequence of named entities, keywords and articulation words. Generic named entities such as organizations, persons, locations and dates, as well as grammatical annotations, are generated by an automatic part-of-speech identification tool. During the learning step, each relevant segment is manually annotated with respect to the targeted entities (roles) structuring an event of the ontology. IE then proceeds by associating a role with a specific entity. By aligning generic entities to specific entities, some strings of a text are automatically annotated. Alignment between patterns and a new text is not always guaranteed because of the diversity of writing styles found in the news. For that reason, we have proposed soft matching between reduced formats, with the objective of maximal utilization of pattern expressiveness. In several cases, this reduced format successfully allows the assignment of the same role to similar entities cited on the same side of certain keywords or cue words. The experimental results are very promising, as we obtained an average recognition rate of 76.90%.
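A minimal Python sketch of the pattern idea, assuming hypothetical entity tags and a greedy in-order ("soft") matcher; the authors' actual pattern format and matching procedure are richer:

def soft_match(pattern, tagged_tokens):
    """Greedy in-order match of pattern elements (entity tags or keywords)
    against (token, tag) pairs; gaps between elements are allowed."""
    matches, i = [], 0
    for element in pattern:
        while i < len(tagged_tokens):
            token, tag = tagged_tokens[i]
            i += 1
            if element == tag or element == token.lower():
                matches.append((element, token))
                break
        else:
            return None  # an element was never found: no alignment
    return matches

segment = [("Reuters", "ORG"), ("said", "O"), ("Google", "ORG"),
           ("acquired", "O"), ("Motorola", "ORG"), ("in", "O"),
           ("2012", "DATE")]
print(soft_match(["ORG", "acquired", "ORG", "DATE"], segment))

Being greedy, this toy matcher binds the first ORG it sees (Reuters rather than Google); resolving such ambiguities is exactly what the learned patterns and cue words are for.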
A fairness-based preemption algorithm for LTE-Advanced
Authors: Mehdi Khabazian, Osama Kubbar and Hossam Hassanein

Call admission control (CAC), one of the radio resource management (RRM) functionalities in LTE systems, is employed to control the number of LTE bearer requests in order to maintain the quality of service (QoS) of the admitted bearers. However, no quality guarantee can be provided due to the inherently dynamic nature of wireless communication. For example, during congestion periods, when several communications experience poor channel quality or high mobility, it is quite possible that the network cannot maintain its bearers' QoS requirements, and preemption schemes may be employed to alleviate the situation. As a result, the resource preemption mechanism and its fairness are prominent issues, as they may directly affect applications' QoS in the higher layers, as well as other network attributes such as generated revenue. In general, preemption is unavoidable in two circumstances: to manage the resources among bearers when the network is overloaded, as a congestion control mechanism, or to allocate a high-priority bearer request when sufficient resources are not available. In this study, we propose a preemption technique by which an established bearer may be preempted according to its priority as well as the amount of extra allocated resources relative to its basic data rate. We define the contribution of each bearer to the preemption through a contribution metric with tuning parameters, presented in the form of a Cobb-Douglas production function. We compare the fairness of the proposed scheme with that of a traditional preemption scheme by means of two well-known fairness indices, Jain's index and the min-max index. We also discuss its effect on bearers' blocking and dropping probabilities and on the total generated revenue. Through a simulation approach, we demonstrate substantial improvements in preemption fairness when compared to the traditional approach. We show that the proposed scheme does not affect the main performance measurements of the network, i.e. the bearers' blocking and dropping probabilities due to congestion, and that the preemption contribution metric can be effectively used by service providers to vary the total generated revenue.
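A Python sketch of the contribution metric, assuming an illustrative Cobb-Douglas form and exponents (the abstract names the function family but not its inputs or parameters), together with Jain's fairness index:

import numpy as np

def contribution(priority, extra_rate, alpha=0.5, beta=0.5):
    """Illustrative Cobb-Douglas contribution metric: bearers with low priority
    and much extra (above-basic) rate contribute more resources to preemption."""
    return (1.0 / priority) ** alpha * extra_rate ** beta

def jain_index(x):
    """Jain's fairness index: 1 means perfectly fair, 1/n maximally unfair."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

# bearers: (priority level, higher = more important; rate above basic, in Mb/s)
bearers = [(5, 0.5), (3, 2.0), (1, 4.0)]
weights = np.array([contribution(p, e) for p, e in bearers])
shares = weights / weights.sum()   # fraction of the deficit each bearer yields
deficit = 3.0                      # resources needed by the arriving bearer
print("preempted per bearer:", shares * deficit)
print("fairness of the shares:", jain_index(shares))

The tuning exponents alpha and beta play the role of the scheme's knobs: shifting weight between priority and over-allocation changes which bearers absorb the preemption, and hence the fairness indices and revenue discussed above.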
Using information extraction to dynamically create multimedia tutorials
Authors: Amal Dandashi, Jihad Mohamad Aljaam, Massoud Mwinyi, Sami Elzeiny and Ali M. Jaoua

Background and Objectives: Multimedia tutorials are often useful for children with disabilities, as they are better able to understand new concepts through lessons that engage their senses. These tutorials should include images, sounds and videos. We propose a system to dynamically generate multimedia tutorials that can easily be customized by the instructor, using domain-specific information extraction. Methods: Text processing is performed with a stemming algorithm, after which formal concept analysis is used to extract pre-specified keywords. A formal concept is represented as a hierarchical lattice structure, applied in this study to the animal kingdom domain. Ontology-based information extraction is then performed, where multimedia elements are extracted online and mapped by querying Google's image database. Results: The proposed system allows for automated, speedy and efficient dynamic generation of customized multimedia tutorials. Conclusions: This system automates the tutorial generation process and gives disabled children the opportunity to learn with tutorials designed to suit their intellectual needs.
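A Python pipeline sketch under stated assumptions: a crude suffix-stripping stemmer, a toy animal-kingdom ontology, and a hypothetical fetch_images() stand-in for the online image query:

ANIMAL_CONCEPTS = {
    "lion":  {"class": "mammal", "habitat": "savanna"},
    "zebra": {"class": "mammal", "habitat": "savanna"},
    "eagle": {"class": "bird",   "habitat": "mountains"},
}

def crude_stem(word):
    """Toy stemmer: strip a few English suffixes (illustrative only)."""
    for suffix in ("ing", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def extract_concepts(text, ontology):
    """Match stemmed lesson words against stemmed ontology concepts."""
    stems = {crude_stem(w.lower()) for w in text.split()}
    return {c: attrs for c, attrs in ontology.items() if crude_stem(c) in stems}

def fetch_images(concept):
    # Hypothetical stand-in for the online image query used in the paper.
    return [f"https://example.org/images/{concept}.jpg"]

lesson = "Lions and zebras live on the savanna"
for concept, attrs in extract_concepts(lesson, ANIMAL_CONCEPTS).items():
    print(concept, attrs, fetch_images(concept))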
Information aspects of quantum systems
Authors: Mahmoud Abdel-Aty

The information dynamics of different quantum systems under the influence of both a phonon bath in contact with the resonator and irreversible decay of the qubits is considered. The focus of our analysis is on multilevel atoms and the effects arising from the coupling to the reservoir. Even in the presence of the reservoirs, the inherent entanglement is found to be rather robust. Due to this fact, together with control of the system parameters, the system may be especially suited for quantum information processing. Information entropy, entropy squeezing and Wehrl entropy are discussed as indicators of entanglement. Our findings also shed light on the evolution of open quantum many-body systems.
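For reference, the standard definitions behind two of the indicators named above, in the usual conventions (the paper's normalizations may differ): the von Neumann (information) entropy of a state rho, and the Wehrl entropy built from the Husimi function Q:

S(\rho) = -\operatorname{Tr}\left(\rho \ln \rho\right),
\qquad
Q(\alpha) = \frac{1}{\pi}\,\langle \alpha | \rho | \alpha \rangle,
\qquad
S_W = -\int Q(\alpha)\, \ln Q(\alpha)\, d^{2}\alpha .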
Interference-aware random beam selection schemes for spectrum sharing systems
Authors: Mohamed Abdallah, Khalid Qaraqe and Mohamed-Slim Alouini

Spectrum sharing systems have recently been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks, under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network remains below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver, sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select the beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint, for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis of the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
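A simplified Python sketch of the selection rule, assuming i.i.d. Rayleigh channels, unit powers and an illustrative interference cap; the per-beam power optimization and the primary-to-secondary interference term are omitted:

import numpy as np

rng = np.random.default_rng(0)
n_antennas, n_beams = 4, 8
p_tx, noise, i_max = 1.0, 1.0, 1.0   # transmit power, noise power, interference cap

def cn(n):
    """Circularly symmetric complex Gaussian vector (unit variance per entry)."""
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

h_s = cn(n_antennas)   # channel: secondary Tx -> secondary Rx
h_p = cn(n_antennas)   # channel: secondary Tx -> primary Rx

best_sinr, best_beam = -np.inf, None
for _ in range(n_beams):
    w = cn(n_antennas)
    w /= np.linalg.norm(w)                        # random unit-norm beam
    leak = p_tx * np.abs(h_p @ w) ** 2            # power seen by the primary Rx
    if leak > i_max:
        continue                                  # violates the primary constraint
    sinr = p_tx * np.abs(h_s @ w) ** 2 / noise    # primary->secondary term omitted
    if sinr > best_sinr:
        best_sinr, best_beam = sinr, w

print("no feasible beam" if best_beam is None else f"best feasible SINR: {best_sinr:.2f}")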
Implementation and evaluation of binary interval consensus on the TinyOS and TOSSIM simulator
Authors: Abderrazak Abdaoui, Tarek Mohamed El-Fouly and Moez Draief

This work considers the deployment of the binary consensus algorithm in wireless sensor networks (WSN). The algorithm is applied to the evaluation of the average measurement in the presence of a faulty or attacked node. While the algorithm has been studied theoretically, we deploy it in practice, including its distributed and routing features. In this paper, we propose the development, in a simulation environment, of a distributed binary consensus algorithm. We implement the algorithm in nesC, a language derived from C, running on the tiny operating system (TinyOS). The implementation was tested on sensor nodes using the TinyOS simulator (TOSSIM) for a large number of nodes N, and on a testbed with a limited number of nodes. For performance evaluation, we considered the average time for node states to converge to a consensus value. As in the analytical results, in the simulations we applied the distributed algorithm to fully connected, path, cycle, Erdős-Rényi random, and star-shaped graph topologies. Our simulation and hardware implementation results are consistent with theory.
A neural network based lexical stress pattern classifier
Authors: Mostafa Shahin, Beena Ahmed and Kirrie Ballard

Background and Objectives: In dysprosodic speech, the prosody does not match the expected intonation pattern, which can result in robotic-like speech with each syllable produced under equal stress. These errors are manifested through inconsistent lexical stress, as measured by perceptual judgments and/or acoustic variables. Lexical stress is produced through variations in syllable duration, peak intensity and fundamental frequency. The presented technique automatically evaluates the unequal lexical stress patterns Strong-Weak (SW) and Weak-Strong (WS) in American English continuous speech, based on a multi-layer feed-forward neural network with seven acoustic features chosen to target the lexical stress variability between two consecutive syllables. Methods: The speech corpus used in this work is the PTDB-TUG. Five females and three males were chosen to form the training set, and one female and one male for testing. The CMU pronouncing dictionary, with lexical stress levels marked, was used to assign stress levels to each syllable of every word in the speech corpus. Lexical stress is phonetically realized through the manipulation of signal intensity, the fundamental frequency (F0) and its dynamics, and the syllable/vowel duration. The nucleus duration, syllable duration, mean pitch, maximum pitch over the nucleus, peak-to-peak amplitude integral over the syllable nucleus, mean energy and maximum energy over the nucleus were calculated for each syllable in the collected speech. As lexical stress errors are identified by evaluating the variability between consecutive syllables in a word, we computed the pairwise variability index (PVI) for each acoustic measure. The PVI for an acoustic feature f_i is given by PVI_i = (f_i1 - f_i2) / ((f_i1 + f_i2)/2), where f_i1 and f_i2 are the acoustic features of the first and second syllables, respectively. A multi-layer feed-forward neural network consisting of input, hidden and output layers was used to classify the stress patterns of the words in the database. Results: The presented system had an overall accuracy of 87.6%. It correctly classified 92.4% of the SW stress patterns and 76.5% of the WS stress patterns. Conclusions: A feed-forward neural network was used to classify between SW and WS stress patterns in American English continuous speech, with an overall accuracy of around 87 percent.
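The PVI computation in Python, with made-up per-syllable feature values; the resulting vector is what the feed-forward classifier would consume:

def pvi(f1, f2):
    """PVI = (f1 - f2) / ((f1 + f2) / 2): positive when the first syllable is
    stronger (SW-leaning), negative when the second is (WS-leaning)."""
    return (f1 - f2) / ((f1 + f2) / 2.0)

# hypothetical measurements: (first syllable, second syllable)
features = {
    "nucleus_duration_ms": (180.0, 95.0),
    "mean_pitch_hz":       (210.0, 170.0),
    "energy_mean":         (0.62, 0.35),
}
vector = [pvi(a, b) for a, b in features.values()]
print(vector)   # all positive here, consistent with an SW stress pattern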
Accessibility research efforts needed in Qatar
Authors: David Banes and Erik Zetterström

Background and Objectives: Mada, Qatar's Center for Assistive Technology, is a non-profit organization that strives to empower and enable people with disabilities through ICT. During late 2011 and early 2012, Mada conducted research to identify barriers to accessibility in Qatar. Methods: For the survey, the key groups of respondents were identified, including disabled people themselves. A combination of quantitative and qualitative methods was applied. The results were validated through the Assistive Technology Research Forum formed by Mada, which analyzed the survey results and gave additional input. These were further validated in discussions with disabled people's organizations as part of the input to Mada's future strategy. Results: The survey indicated the following major results:
* People with visual impairment had both a high awareness and a high usage rate.
* People with physical and hearing disabilities were very aware of available assistive technology, but the usage rate was still low.
* People with learning disabilities were not aware of available assistive technology, and hence the usage rate was unsurprisingly low.
Based on these findings, the AT Research Forum identified the following priority areas, which could potentially have a very high impact on barriers to accessibility in Qatar:
* An Arabic crowd-sourced free symbol set
* Improved statistical sources on needs, and a registry for easy user communication
* Development of a free Arabic text-to-speech
* Wayfinding technologies for the blind, incorporating enhanced technologies for emergency situations
* Research into enhanced literacy amongst the deaf community when provided with text-based communication solutions
In addition, it was possible to highlight some crucial issues that impeded uptake of technology, including:
* The lack of accessible Arabic digital content
* The limitations of current OCR technologies
* The lack of useful Arabic continuous speech
* Arabic word prediction and the lack of a significant available corpus
Conclusions: The areas identified are fundamental projects with a very high potential impact in Qatar. They would best be addressed through collaboration and funding. Such collaborations bridge the private and academic sectors, with specialist input from an organization supporting disabled people such as Mada.
Energy saving mechanism in multiantenna LTE systems
Authors: Reema Imran, Zorba Nizar, Osama Kubbar and Christos Verikoukis

Long Term Evolution (LTE) supports closed-loop MIMO techniques to improve its performance; however, in order to exploit the capabilities of the multiuser MIMO channel, the design of an efficient MAC scheme that supports MU-MIMO is essential and is still an open issue in the literature. This work proposes a novel, energy-efficient MAC scheme for LTE, which aims to achieve simultaneous downlink transmissions to multiple users through the deployment of a low-complexity beamforming technique at the physical layer. Our proposed scheme benefits from the multiuser gain of the MIMO channel and the multiplexing gain of the Multibeam Opportunistic Beamforming (MOB) technique, not only to improve system throughput but also to provide an energy-efficient wireless network. We show that our proposed scheme can provide good energy-saving performance at the eNB, and we also present the mathematical expressions for performance evaluation in terms of throughput and saved energy.
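A Python sketch of the MOB user-selection step, assuming random orthonormal beams, i.i.d. Rayleigh channels and unit noise power (sizes are illustrative):

import numpy as np

rng = np.random.default_rng(1)
n_tx, n_users = 4, 20
H = (rng.standard_normal((n_users, n_tx)) +
     1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# random orthonormal beams via QR decomposition of a complex Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((n_tx, n_tx)) +
                    1j * rng.standard_normal((n_tx, n_tx)))

gains = np.abs(H @ Q) ** 2                               # |h_k^H q_m|^2 per user/beam
interference = gains.sum(axis=1, keepdims=True) - gains  # power from the other beams
sinr = gains / (1.0 + interference)                      # unit noise power

# serve, on each beam, the user reporting the best SINR (the multiuser gain)
for m in range(n_tx):
    k = int(np.argmax(sinr[:, m]))
    print(f"beam {m}: serve user {k}, SINR = {sinr[k, m]:.2f}")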
RAFNI: Robust analysis of functional neuroimages with non-normal alpha-stable errors
Authors: Mohammed El Anbari, Halima Bensmail, Samreen Anjum and Othmane Bouhali

One of Qatar's most ambitious goals in the next few years is to become a country based on scientific and technical research instead of being dependent on hydrocarbons. To this end, Qatar Foundation has established a number of high-caliber universities and institutes. In particular, Qatar Computing Research Institute (QCRI) is forming a multidisciplinary scientific computing group with a focus on data mining, machine learning, statistical modeling and bioinformatics. We are now able to satisfy the computational statistics needs of a variety of fields, especially those of biomedical researchers in Qatar. Functional magnetic resonance imaging (fMRI) is a noninvasive neuroimaging method widely used in cognitive neuroscience. It relies on the measurement of changes in the blood oxygenation level resulting from neural activity. fMRI is known to be contaminated by artifacts. Artifacts tend to have fat-tailed distributions and are often skewed; therefore, modeling the error with a Gaussian distribution is not sufficient. In this paper we introduce RAFNI, an extension of AFNI, an open-source software package for the analysis of functional neuroimages. We model the error introduced by artifacts using alpha-stable distributions. We demonstrate the applicability and efficiency of stable distributions on real fMRI data, and we show that the alpha-stable estimator gives better results than OLS-based estimators.
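A Python sketch of the modeling point, contrasting a Gaussian (OLS-style) fit with an alpha-stable fit on heavy-tailed, skewed noise, using scipy's levy_stable; the parameter values are illustrative, and the stable fit can take a moment:

from scipy.stats import levy_stable, norm

# heavy-tailed, skewed residuals of the kind fMRI artifacts can produce
residuals = levy_stable.rvs(alpha=1.5, beta=0.8, size=500, random_state=0)

mu, sigma = norm.fit(residuals)                # Gaussian assumption
a, b, loc, scale = levy_stable.fit(residuals)  # alpha-stable assumption
print(f"Gaussian fit: mu={mu:.2f}, sigma={sigma:.2f}")
print(f"stable fit: alpha={a:.2f}, beta={b:.2f}, loc={loc:.2f}, scale={scale:.2f}")

The Gaussian fit inflates sigma to accommodate the tails, while the stable fit recovers a tail index alpha < 2 and the skewness beta, which is the behavior that motivates the alpha-stable error model above.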
Model-free fuzzy intervention in biological phenomena
Authors: Hazem Nounou, Mohamed Numan Nounou, Nader Meskin and Aniruddha Datta

An important objective of modeling biological phenomena is to develop therapeutic intervention strategies that move an undesirable state of a diseased network towards a more desirable one. Such transitions can be achieved by using drugs to act on some genes/metabolites that affect the undesirable behavior. Biological phenomena are complex processes with nonlinear dynamics that cannot be perfectly described by a mathematical model, due to several challenges such as the scarcity of biological data. Therefore, the need often arises for model-free nonlinear intervention strategies that are capable of guiding the target variables to their desired values. Addressing this need is the main focus of this work. In many applications, fuzzy systems have been found to be very useful for parameter estimation, model development and control design for nonlinear processes. In this work, a model-free fuzzy intervention strategy (one that does not require a mathematical model of the biological phenomenon) is proposed to guide the target variables of biological systems to their desired values. The proposed fuzzy intervention strategy is applied to two biological models: a glycolytic-glycogenolytic pathway model and a purine metabolism pathway model. The simulation results of the two case studies show that the fuzzy intervention schemes are able to guide the target variables to their desired values. Moreover, sensitivity analyses are conducted to study the robustness of the fuzzy intervention algorithm to variations in model parameters and to contamination from measurement noise in the two case studies.
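A minimal model-free fuzzy sketch in Python: the controller observes only the error of the target variable (never the pathway's equations) and computes a correction from triangular memberships and an illustrative rule base; memberships, rules and the toy plant are all assumptions, not the authors' scheme:

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_action(error):
    """Mamdani-style rules: negative error -> increase input, positive -> decrease."""
    mf = {"neg": tri(error, -2.0, -1.0, 0.0),
          "zero": tri(error, -1.0, 0.0, 1.0),
          "pos": tri(error, 0.0, 1.0, 2.0)}
    actions = {"neg": +0.5, "zero": 0.0, "pos": -0.5}   # singleton consequents
    num = sum(mf[k] * actions[k] for k in mf)
    den = sum(mf.values()) or 1.0
    return num / den   # centroid defuzzification

def plant(u):
    # toy unknown "pathway": the controller never sees this function's form
    return 0.8 * u + 0.3

target, u = 2.0, 0.0
for _ in range(20):
    u += fuzzy_action(plant(u) - target)
print(f"final output {plant(u):.2f} vs target {target}")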
Multiscale denoising of biological data: A comparative analysis
Authors: Mohamed Nounou, Hazem Nounou, Nader Meskin and Aniruddha Datta

With the advancements in computing and sensing technologies, large amounts of data are collected from various biological systems. These data are a rich source of information about the biological systems they represent. For example, time-series metabolic data can be used to construct dynamic genetic regulatory network models, which can be used not only to better understand the interactions among different genes inside a cell, but also to design intervention strategies that can cure or manage major diseases. Also, copy number data can be used to determine the locations and extent of aberrations in chromosome sequences, which are associated with many diseases such as cancer. Unfortunately, measured biological data are usually contaminated with errors that mask the important features in the data. Therefore, noisy biological measurements need to be filtered to enhance their usefulness in practice. Conventional linear low-pass filtering techniques are widely used because they are computationally efficient and can be implemented online. However, they are not effective because they operate at a single scale, meaning that they define a specific frequency above which all features are considered noise. Real biological data possess multiscale characteristics, i.e., they may contain important features at high frequencies (such as sharp changes) or noise at low frequencies (such as correlated or colored noise). Filtering such data requires multiscale filtering techniques that can account for the multiscale nature of the data. In this work, different batch as well as online multiscale filtering techniques are used to denoise biological data. These techniques include standard multiscale (SMS) filtering, online multiscale (OLMS) filtering, translation-invariant (TI) filtering, and boundary-corrected TI (BCTI) filtering. The performances of these techniques are demonstrated and compared to those of some conventional low-pass filters (such as the mean filter and the exponentially weighted moving average filter) using two case studies. The first case study uses simulated dynamic metabolic data, while the second uses real copy number data. Simulation results show that significant improvement can be achieved using multiscale filtering over conventional filtering techniques.
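One batch multiscale (wavelet) denoising pass as a Python sketch, using PyWavelets with an illustrative wavelet and the universal soft threshold; this is the generic idea, not the paper's exact SMS/OLMS/TI/BCTI settings:

import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 512)
clean = np.sin(6 * np.pi * t) + (t > 0.5)   # smooth trend plus a sharp jump
noisy = clean + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)      # decompose across scales
sigma = np.median(np.abs(coeffs[-1])) / 0.6745    # noise estimate, finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))     # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")

print(f"error std in: {np.std(noisy - clean):.3f}, out: {np.std(denoised - clean):.3f}")

Because thresholding acts per scale, the sharp jump (a high-frequency feature) survives in the coarse coefficients while broadband noise is suppressed, which is exactly the single-scale limitation of low-pass filters described above.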
A hybrid word alignment approach to build bilingual lexicons for English-Arabic machine translation
In this paper we propose a hybrid approach to align single words, compound words and idiomatic expressions from English-Arabic parallel corpora. The objective is to automatically develop, improve and maintain translation lexicons. The approach combines linguistic and statistical information in order to improve word alignment results. The linguistic improvements taken into account refer to the use of an existing bilingual lexicon, named entity recognition, grammatical tag matching, and detection of syntactic dependency relations between words. The statistical information refers to the number of occurrences of repeated words, their positions in the parallel corpus, and their lengths in terms of the number of characters. Single-word alignment uses an existing bilingual lexicon, named entity and cognate detection, and grammatical tag matching. Compound-word alignment consists of establishing correspondences between the compound words of the source sentence and those of the target sentences; a syntactic analysis is applied to the source and target sentences in order to extract dependency relations between words and to recognize compound words. Idiomatic expression alignment starts with a monolingual term extraction for each of the source and target languages, which provides a list of sequences of repeated words and a list of potential translations. These sequences are represented as vectors indicating their number of occurrences and the number of segments in which they appear. Translation relations between source and target expressions are then evaluated with a distance metric. We have evaluated the single-word and multiword-expression aligners using two methods: a manual evaluation of the alignment quality on 1000 pairs of English-Arabic sentences, and an evaluation of the impact of this alignment on the translation quality of a machine translation system. The results show that these aligners, on the one hand, generate a translation lexicon with around 85% precision and, on the other hand, yield a gain in BLEU score of 0.20 in translation quality.
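A toy Python sketch of the occurrence-vector idea for expression alignment; the corpus, the binary occurrence profile and the Euclidean metric are all illustrative stand-ins for the paper's actual representation and distance:

import numpy as np

def profile(sequence, segments):
    """Binary occurrence vector: 1 if the sequence appears in segment i."""
    return np.array([1.0 if sequence in seg else 0.0 for seg in segments])

english = ["kick the bucket means to die", "he will kick the bucket",
           "red tape everywhere"]
arabic = ["yamut", "sawfa yamut", "birouqratiyya"]  # transliterated placeholders

src = profile("kick the bucket", english)
for cand in ["yamut", "birouqratiyya"]:
    dist = np.linalg.norm(src - profile(cand, arabic))  # smaller = better aligned
    print(cand, dist)

Expressions that co-occur in the same parallel segments get near-identical profiles, so "yamut" scores distance 0 against "kick the bucket" while the unrelated candidate does not.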
Feature-based method for offline writer identification
Authors: Somaya Al-Maadeed and Abdelaali Hassaine

Writer identification consists of identifying the writer of a given handwritten document and is of high importance in forensic document examination. Indeed, numerous cases over the years have dealt with evidence provided by handwritten documents such as wills and ransom notes. Automatic methods for writer identification can be divided into codebook-based and feature-based approaches. In codebook-based approaches, the writer is assumed to act as a stochastic generator of graphemes, and the probability distribution of grapheme usage is used to distinguish between writers. Feature-based approaches compare handwriting according to geometrical, structural or textural features. Feature-based approaches have proven efficient and are generally preferred when only a limited amount of handwriting is available; we are therefore more interested in this category of approaches in this study. Writer identification is performed by comparing a query document to a set of known documents and assigning it to the closest document in terms of similarity of handwriting. This is done by extracting characterizing features from the documents, including directions, curvatures, tortuosities (or smoothness), chain code distributions and edge-based directional features. These features correspond to probability distribution functions (PDFs) extracted from the handwriting images to characterize the writer's individuality. When matching a query document against any other document, the χ² distance between their corresponding features is computed. The smaller the distance, the more likely the two documents were written by the same writer; the identified writer is therefore the writer of the document with the smallest distance to the query document. The writer is said to be correctly identified when the identified writer corresponds to the actual writer of the query document. The performance has been evaluated on the IAM handwriting dataset, with chain-code-based features generally outperforming the other features, reaching a 71% correct identification rate. The combination of all the features leads to a 76% correct identification rate. The proposed system also won the music scores writer identification contest, reaching a 77% identification rate. The proposed method automatically extracts features used by forensic experts in order to identify the writers of handwritten documents. The results show that the method is efficient and language-independent.
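The χ² distance used for matching feature PDFs, in a minimal Python form; the bin counts below are invented for illustration:

import numpy as np

def chi2_distance(p, q, eps=1e-12):
    """Chi-squared distance between two normalized histograms (PDFs)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

query = [4, 9, 20, 9, 4]   # e.g. a direction histogram of the query document
known = {"writer_A": [5, 10, 18, 9, 4],
         "writer_B": [12, 6, 6, 10, 12]}
best = min(known, key=lambda w: chi2_distance(query, known[w]))
print("identified writer:", best)   # smallest distance wins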
e-Security: Methodologies and approaches for the United Arab Emirates Online Business
Authors: Fahim Akhter

Background and Objectives: As e-commerce functions in a more complex and interconnected environment than traditional business, a higher degree of trust is required between users and online businesses. Uncertainties inherent in the current e-commerce environment give rise to a lack of trust and reliability in e-commerce partnerships, thereby reducing confidence and creating barriers to trade. The reason most users and businesses in the United Arab Emirates (U.A.E.) are still skeptical about e-commerce involves the perceived security risks associated with conducting business online. Online users consciously or subconsciously analyze the provided level of security based on their experience in order to decide whether to conduct business with a specific company or to move on to the next one. There is a need for a better understanding of hostile environments, fueled by financially motivated, targeted cyber threats, that affect consumers' decision-making behavior. The purpose of the study is to identify the factors that support the implementation and acceptance of security in e-commerce among corporations throughout the United Arab Emirates. The study will explore the common cyber attacks that threaten U.A.E. online businesses and will describe methodologies and approaches that can be developed to respond to those threats. Methods: A descriptive web-based survey will be adopted as an appropriate method to collect initial data from users due to its cost effectiveness, rapid turnaround, high response volume, and ability to cover a large geographical area. A combination of close- and open-ended questions will be used. The URL of the survey will be distributed electronically among participants using mailing lists from the Dubai Chamber of Commerce. Results and Conclusions: Statistics on participants from the seven emirates of the U.A.E. will be assessed and discussed. Complete responses will be selected from among the anonymous responses for further analysis, and the quantitative data will be fed into a statistical framework to help researchers understand and analyze the relationships among the different responses.
Detecting forgeries and disguised signatures in online signature verification
Authors: Abdelaali Hassaine and Somaya Al-Maadeed

Online signatures are acquired using a digital tablet, which provides the full trajectories of the signature as well as the variation in pressure with respect to time. Therefore, online signature verification achieves higher recognition rates than offline signature verification. Nowadays, forensic document examiners distinguish between forgeries, in which an impostor tries to imitate a given signature of another person, and disguised signatures, in which the authentic author deliberately tries to hide his or her identity with the purpose of denial at a later stage. Disguised signatures play an important role in real forensic cases but are not considered in the recent literature. In this study, we used online signatures acquired using a WACOM Intuos4 digitizing tablet with a sampling rate of 200 Hz, a resolution of 2000 lines/cm and a precision of 0.25 mm. The pressure information is available in 1024 levels. Online signatures contain a set of samples; each sample corresponds to the point coordinates on the digitizing tablet along with the corresponding pressure (X_t, Y_t, P_t), where t denotes time. From these three basic signals, four others are extracted: distances, angles, speeds and angular speeds. In order to compare a questioned signature with a reference signature, the differences between their corresponding features are computed at both the signal level and the histogram level. This study has been evaluated on the ICDAR 2009 signature verification competition dataset and on a new dataset of online signatures collected at Qatar University (the QU-dataset). The QU-dataset contains genuine signatures, forgeries and disguised signatures of 194 persons; to the best of our knowledge, it is the only dataset that contains disguised online signatures. The best individual performing feature for the ICDAR 2009 dataset is the pressure histogram difference, which reaches an 8% equal error rate (EER). The pressure signal difference is the best individual performing feature for the QU-dataset (29% EER). The combination of features led to 7% EER on the ICDAR 2009 dataset and 22% EER on the QU-dataset. This online signature verification system deals with both forgeries and disguised signatures. Several features have been proposed and combined using different classifiers, reaching promising performance for disguised signature detection.
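A Python sketch of deriving the four secondary signals from the raw (X_t, Y_t, P_t) samples at the tablet's 200 Hz rate; the pen trajectory here is synthetic, and the histogram binning is an illustrative choice:

import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200) / 200.0                  # 1 s of samples at 200 Hz
x = np.cumsum(rng.standard_normal(t.size))  # synthetic pen trajectory
y = np.cumsum(rng.standard_normal(t.size))
dt = 1.0 / 200.0

dx, dy = np.diff(x), np.diff(y)
distances = np.hypot(dx, dy)                # step lengths between samples
angles = np.arctan2(dy, dx)                 # writing direction
speeds = distances / dt
angular_speed = np.diff(np.unwrap(angles)) / dt

# a histogram-level feature, of the kind compared between questioned and
# reference signatures via their difference
hist, _ = np.histogram(speeds, bins=16, density=True)
print(hist.round(3))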