Qatar Foundation Annual Research Conference Proceedings Volume 2016 Issue 1
- Conference date: 22-23 Mar 2016
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2016
- Published: 21 March 2016
A Distributed and Adaptive Graph Simulation System
Authors: Pooja Nilangekar and Mohammad Hammoud
Large-scale graph processing is becoming central to our modern life. For instance, graph pattern matching (GPM) can be utilized to search and analyze social graphs, biological data and road networks, to mention a few. Conceptually, a GPM algorithm is typically defined in terms of subgraph isomorphism, whereby it seeks to find subgraphs in an input data graph, G, which are similar to a given query graph, Q. Although subgraph isomorphism forms a uniquely important class of graph queries, it is NP-complete and very restrictive in capturing sensible matches for emerging applications like software plagiarism detection, protein interaction networks, and intelligence analysis, among others. Consequently, GPM has been recently relaxed and defined in terms of graph simulation. As opposed to subgraph isomorphism, graph simulation can run in quadratic time, return more intuitive matches, and scale well with modern big graphs (i.e., graphs with billions of vertices and edges). Nonetheless, the current state-of-the-art distributed graph simulation systems still rely on graph partitioning (which is also NP-complete), induce significant communication overhead between worker machines to resolve local matches, and fail to adapt to various complexities of query graphs.
In this work, we observe that big graphs are not big data. That is, the largest big graph that we know of can still fit on a single physical or virtual disk (e.g., 6TB physical disks are cheaply available nowadays and AWS EC2 instances can offer up to 24 × 2048GB virtual disks). However, since graph simulation requires exploring the entire input big graph, G, and naturally lacks data locality, existing memory capacities can get significantly dwarfed by G's size. As such, we propose GraphSim, a novel distributed and adaptive system for efficient and scalable graph simulation. GraphSim precludes graph partitioning altogether, yet still exploits parallel processing across cluster machines. In particular, GraphSim stores G at each machine but only matches an interval of G's vertices at that machine. All machines run in parallel and each machine simulates its interval locally. Nevertheless, if necessary, a machine can inspect remaining dependent vertices in G to fully resolve its local matches without communicating with any other machine. Hence, GraphSim does not shuffle intermediate data whatsoever. In addition, it attempts not to overwhelm the memory of any machine by employing a mathematical model to predict the best number of machines for any given query graph, Q, based on Q's complexity, G's size and the memory capacity of each machine. GraphSim is thereby adaptive as well. We experimentally verified the efficiency and the scalability of GraphSim over private and public clouds using real-life and synthetic big graphs. Results show that GraphSim can outperform the current fastest distributed graph simulation system by several orders of magnitude.
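For illustration, the following is a minimal, single-machine Python sketch of the basic graph-simulation fixpoint that systems such as GraphSim parallelize; the label/adjacency representation and the tiny example graphs are purely illustrative and are not part of GraphSim itself.

```python
# Minimal graph simulation sketch: shrink each query vertex's candidate set
# until a fixpoint is reached (naive form, roughly O(|Q| * |G|^2)).
def graph_simulation(q_labels, q_edges, g_labels, g_edges):
    """q_labels/g_labels: {vertex: label}; q_edges/g_edges: {vertex: set(successors)}."""
    # Initial candidates: data vertices whose label matches the query vertex's label.
    sim = {u: {v for v in g_labels if g_labels[v] == q_labels[u]} for u in q_labels}
    changed = True
    while changed:
        changed = False
        for u, u_succs in q_edges.items():
            for u2 in u_succs:
                # Keep only candidates of u that reach some candidate of u2.
                keep = {v for v in sim[u] if g_edges.get(v, set()) & sim[u2]}
                if keep != sim[u]:
                    sim[u] = keep
                    changed = True
    return sim  # an empty set for some u means there is no match

# Tiny usage example with hypothetical labels and edges:
q_labels = {"a": "person", "b": "page"}
q_edges = {"a": {"b"}, "b": set()}
g_labels = {1: "person", 2: "person", 3: "page"}
g_edges = {1: {3}, 2: set(), 3: set()}
print(graph_simulation(q_labels, q_edges, g_labels, g_edges))  # {'a': {1}, 'b': {3}}
```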
A BCI m-Learning System
Authors: AbdelGhani Karkar and Amr Mohamed
Mobile learning can help improve students' learning strength and comprehension skills. A connection is required to enable such devices to communicate with each other. A Brain-Computer Interface (BCI) can read brain signals and transform them into readable information; for instance, an instructor can use such a device to track the interest, stress level, and engagement of his students. We propose in this paper a mobile learning system that can perform text-to-picture (TTP) transposition to illustrate the content of Arabic stories and synchronize information with connected devices in a private wireless mesh network. The instructor's device can connect to the Internet to download further illustrative information. It shares its Wi-Fi and Bluetooth connection with at least one student, and students can then share the connection among each other to synchronize information on their devices. BCI devices are used to navigate, answer questions, and track students' performance. The aim of our educational system is to establish a private wireless mesh network that can operate in a dynamic environment.
Keywords
Mobile Learning, Arabic Text Processing, Brain Computer Interface, Engineering Education, Wireless Mesh Network.
I. Introduction
Nowadays, mobile devices and collaborative work have opened a new horizon for collaborative learning. As most people own private, portable smartphones, these have become the main means of connectivity and communication between people. Using smartphones for learning is beneficial and more attractive, as learners can access educational resources at any time. The various eLearning systems available provide different options for a collaborative classroom environment. However, they do not address the needs required to adapt to learning performance: they do not provide dynamic communication, automatic feedback, or other required classroom events.
II. Backgrounds
Several collaborative learning applications have been proposed. BSCW [1] enables online sharing of workspaces between distant people. Lotus Sametime Connect [2] provides services for collaborative multiway chat, web conferencing, location awareness, and so on. Saad et al. [3] proposed an intelligent collaborative system that enables a small range of mobile devices to communicate using WLAN, falling back to Bluetooth in case of a power outage; the architecture of that system is centralized, with clients connecting to a server. Saleem [4] proposed a Bluetooth Assessment System (BAS) that uses Bluetooth as an alternative channel to transfer questions and answers between the instructor and students. Of the many systems proposed in the domain of collaborative learning, several support mobile technology while others do not, but BCI has not been considered as part of a mobile educational system that can enrich the learning environment by reading mental signals.
III. The proposed system
Our proposed system provides educational content and synchronizes it over a Wi-Fi and Bluetooth wireless mesh network. It can be used in classrooms independently of the public network. The primary device broadcasts messages to enable users to follow the instructor's explanation on their mobile devices. The proposed system covers: 1) the establishment of a wireless mesh network between mobile devices, 2) reading BCI data, 3) message communication, and 4) performance analysis of Wi-Fi versus Bluetooth device-to-device communication.
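As a purely illustrative sketch (not the authors' implementation), the primary device's broadcast step could look like the following; the UDP port and the JSON message fields are hypothetical.

```python
# Hypothetical sketch: the instructor's device broadcasts a synchronization
# message so student devices can follow the current story page.
import json
import socket

BROADCAST_ADDR = ("255.255.255.255", 5005)  # assumed broadcast address and port

def broadcast_sync(story_id: str, page: int) -> None:
    """Tell connected student devices which story page is being explained."""
    msg = json.dumps({"type": "sync", "story_id": story_id, "page": page}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(msg, BROADCAST_ADDR)

broadcast_sync("arabic-story-01", 3)
```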
References
[1] K. Klöckner, “BSCW: Cooperation support for distributed workgroups,” in Proc. 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing, pp. 277–282, 2002.
[2] Lotus Sametime Connect. (2011, Feb. 17). Available: http://www.lotus.com/sametime.
[3] T. Saad, A. Waqas, K. Mudassir, A. Naeem, M. Aslam, S. Ayesha, A. Martinez-Enriquez, and P.R. Hedz, “Collaborative work in class rooms with handheld devices using bluetooth and WLAN,” IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), 2014, pp. 1–6.
[4] N.H. Saleem, “Applying Bluetooth as Novel Approach for Assessing Student Learning,” Asian Journal of Natural & Applied Sciences, vol. 4, no. 2, 2015.
Qurb: Qatar Urban Analytics
Doha is one of the fastest growing cities in the world, with a population that has increased by nearly 40% in the last five years. Two significant trends are relevant to our proposal. First, the government of Qatar is actively engaged in embracing the use of fine-grained data to “sense” the city, both for maintaining current services and for future planning, to ensure a high standard of living for its residents. In this line, QCRI has initiated several research projects related to urban computing to better understand and predict traffic mobility patterns in the city of Doha [1]. The second trend is the high degree of social media participation of the populace, providing a significant amount of time-oriented social sensing of all types of events unfolding in the city. A key element of our vision is to integrate data from physical and social sensing into what we call socio-physical sensing. Another key element is to develop novel analytics approaches to mine this cross-modal data to make various applications for residents smarter than they could be with a single mode of data. The overall goal is to help citizens in their everyday life in urban spaces, and also to help transportation experts and policy specialists take a real-time, data-driven approach towards urban planning and real-time traffic planning in the city.
Fast growing cities like Doha encounter several problems and challenges that should be addressed in time to ensure a reasonable quality of life for their populations. These challenges encompass good transportation networks, sustainable energy sources, acceptable commute times, etc., and go beyond physical data acquisition and analytics.
In the era of Internet of Things [5], it has become commonplace to deploy static and mobile physical sensors around the city in order to capture indicators about people's behaviour related to driving, polluting, energy consumption, etc. The data collected from physical as well as social sensors has to be processed using advanced exploratory data analysis, cleaned and consolidated to remove inconsistent, outlying and duplicate records before statistical analysis, data mining and predictive modeling can be applied.
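As a hedged illustration of this cleaning step, the sketch below assumes sensor readings have been loaded into a pandas DataFrame with hypothetical columns; it removes duplicate and missing records and flags simple outliers before further analysis.

```python
# Illustrative cleaning of raw sensor records (column names are assumptions).
import pandas as pd

def clean_sensor_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset=["sensor_id", "timestamp"])         # duplicate records
    df = df.dropna(subset=["value"])                                   # inconsistent/missing readings
    z = (df["value"] - df["value"].mean()) / df["value"].std(ddof=0)   # simple outlier score
    return df[z.abs() <= 3.0]                                          # keep points within 3 sigma

# Usage: cleaned = clean_sensor_data(pd.read_csv("traffic_sensors.csv"))
```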
Recent advances in social computing have enabled scientists to study and model different social phenomena using user-generated content shared on social media platforms. Such studies include modeling the spread of diseases from social media [3] and studying food consumption on Twitter [4]. We envision a three-layered setting: the ground, a physical sensing layer, and a social sensing layer. The ground represents the actual world (e.g., a city) with its inherent complexity and set of challenges. We aim at solving some of these problems by combining two data overlays to better model the interactions between the city and its population.
QCRI's vision is twofold:
From a data science perspective: Our goal is to take a holistic cross-modality view of urban data acquired from disparate urban/social sensors in order to (i) design an integrated data pipeline to store, process and consume heterogeneous urban data, and (ii) develop machine learning tools for cross-modality data mining which aids decision making for the smooth functioning of urban services;
From a social informatics perspective: Use social data generated by users and shared via social media platforms to enhance smart city applications. This could be achieved by adding a semantic overlay to data acquired through physical sensors. We believe that combining data from physical sensors with user-generated content can lead to the design of better and smarter lifestyle applications, such as an “evening out experience” recommender that optimizes for the whole experience, including driving, parking and restaurant quality, or a cab finder that takes into account the current traffic status.
Figure 1. Overview of Proposed Approach.
In Fig. 1 we provide a general overview of our cross-modality vision. While most of the effort toward building applications assisting people in their everyday life has focused on only one data overlay, we claim that combining the two overlays of data could generate a significant added value to applications on both sides.
References
[1] Chawla, S., Sarkar, S., Borge-Holthoefer, J., Ahamed, S., Hammady, H., Filali, F., Znaidi, W., “On Inferring the Time-Varying Traffic Connectivity Structures of an Urban Environment”, Proc. of the 4th International Workshop on Urban Computing (UrbComp 2015) in conjunction with KDD 2015, Sydney, Australia.
[2] Sagl, G., Resch, B., Blaschke, T., “Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities”. Sensors 2015, 15, 17013–17035.
[3] Sadilek, A., Kautz, H. A., Silenzio, V. “Modeling Spread of Disease from Social Interactions.” ICWSM. 2012.
[4] Sofiane Abbar, Yelena Mejova, and Ingmar Weber. 2015. You Tweet What You Eat: Studying Food Consumption Through Twitter. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI ‘15). ACM, New York, NY, USA, 3197–3206.
[5] Atzori, L., Iera, A., Morabito, G. “The internet of things: A survey.” Computer networks 54.15 (2010): 2787–2805.
Detecting Chronic Kidney Disease Using Machine Learning
Authors: Manoj Reddy and John Cho
Motivation Chronic kidney disease (CKD) refers to the gradual loss of kidney function, whose primary role is to filter blood. Based on its severity, it can be classified into various stages, with the later ones requiring regular dialysis or a kidney transplant. Chronic kidney disease mostly affects patients suffering from the complications of diabetes or high blood pressure and hinders their ability to carry out day-to-day activities. In Qatar, due to the rapidly changing lifestyle, there has been an increase in the number of patients suffering from CKD. According to Hamad Medical Corporation [2], about 13% of Qatar's population suffers from CKD, whereas the global prevalence is estimated to be around 8–16% [3]. CKD can be detected at an early stage by simple tests that measure blood pressure, serum creatinine and urine albumin, which can help protect at-risk patients from complete kidney failure [1]. Our goal is to use machine learning techniques to build a classification model that can predict whether an individual has CKD based on various parameters that measure health-related metrics such as age, blood pressure, specific gravity, etc. By doing so, we shall be able to understand the different signals that identify whether a patient is at risk of CKD and help by referring them to preventive measures.
Dataset Our dataset was obtained from the UCI Machine Learning Repository and contains about 400 individuals, of which 250 had CKD and 150 did not. The data was collected from a hospital in southern India over a period of two months. In total there are 24 fields, of which 11 are numeric and 13 are nominal, i.e., they take one of several categorical values. Some of the numerical fields include blood pressure, random blood glucose level, serum creatinine level, and sodium and potassium in mEq/L. Examples of nominal fields are answers to yes/no questions, such as whether the patient suffers from hypertension, diabetes mellitus, or coronary artery disease. There were missing values in a few rows, which were addressed by imputing them with the mean value of the respective column feature. This ensures that the information in the entire dataset is leveraged to generate a model that best explains the data.
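The mean-imputation step can be sketched as follows, assuming the UCI data has been loaded into a pandas DataFrame; the file name and missing-value marker are assumptions, and the categorical columns would need their own handling.

```python
# Illustrative mean imputation of the numeric columns (file name is hypothetical).
import pandas as pd

df = pd.read_csv("chronic_kidney_disease.csv", na_values=["?"])
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())  # impute with column means
```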
Approach We use two different machine learning tasks to approach this problem: classification and clustering. In classification, we built a model that can accurately determine whether a patient has CKD based on their health parameters; to understand whether people can be grouped together based on the presence of CKD, we also performed clustering on the dataset. Both approaches provide good insights into the patterns present in the underlying data. Classification This problem can be modeled as a classification task in machine learning with two classes, CKD and not CKD, representing whether or not a person suffers from chronic kidney disease. Each person is represented by the set of features in the dataset described earlier. We also have ground truth on whether a patient has CKD, which can be used to train a model that learns to distinguish between the two classes. Our training set consists of 75% of the data and the remaining 25% is used for testing. The ratio of CKD to non-CKD persons in the test dataset was kept approximately similar to that of the entire dataset to avoid problems of skewness. Various classification algorithms were employed, such as logistic regression, Support Vector Machines (SVM) with various kernels, decision trees and AdaBoost, so as to compare their performance. While training the model, stratified K-fold cross-validation was adopted, which ensures that each fold has the same proportion of labeled classes. Each classifier has a different methodology for learning. Some classifiers assign weights to each input feature along with a threshold that determines the output, and update them based on the training data. In the case of SVM, kernels map input features into a different space in which the classes might be linearly separable. Decision tree classifiers have the advantage that they can be easily visualized, since a tree is analogous to a set of rules applied to an input feature vector. Each classifier has a different generalization capability, and its efficiency depends on the underlying training and test data. Our aim is to discover the performance of each classifier on this type of medical information.
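A minimal scikit-learn sketch of this protocol (75/25 stratified split, stratified k-fold cross-validation, and the classifiers named in this abstract) is given below; X and y are assumed to come from the imputation sketch above, and the label column name is an assumption.

```python
# Illustrative training/evaluation loop for the classifiers compared in the study.
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X = df[numeric_cols].to_numpy()                     # df/numeric_cols from the sketch above
y = (df["class"] == "ckd").astype(int).to_numpy()   # "class" column name is an assumption

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)  # keep class ratio similar to full data

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(),
    "SVM (linear kernel)": SVC(kernel="linear"),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "random forest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    cv_acc = cross_val_score(clf, X_train, y_train, cv=cv).mean()      # stratified k-fold
    test_acc = clf.fit(X_train, y_train).score(X_test, y_test)         # held-out accuracy
    print(f"{name}: cv={cv_acc:.2f}, test={test_acc:.2f}")
```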
Clustering Clustering involves organizing a set of items into groups based on a pre-defined similarity measure. This is an unsupervised learning method that doesn't use the label information. Among the various popular clustering algorithms, we use k-means and hierarchical clustering to analyze our data. K-means requires specifying the number of clusters and the initial cluster means, which are set to random points in the data. We vary the number of groups from 2 to 5 to figure out which maximizes the quality of the clustering. Clustering with more than 2 groups might also allow us to quantify the severity of chronic kidney disease for each patient instead of the binary notion of having CKD or not. In each iteration of k-means, each person is assigned to the nearest group mean based on the distance metric, and the mean of each group is then recalculated based on the updated assignment; once the means converge after a few iterations, k-means is stopped. Hierarchical clustering follows another approach, whereby initially each data point is an individual cluster by itself, and at every step the two closest clusters are merged to form a bigger cluster. The distance metric used in both clustering methods is Euclidean distance. Hierarchical clustering doesn't require any assumption about the number of clusters, since the resulting output is a tree-like structure recording the clusters that were merged at every step; the clusters for a given number of groups can be obtained by slicing the tree at the desired level. We evaluate the quality of the clustering using a well-known criterion called purity, which measures the number of data points that were classified correctly based on the ground truth available to us [5].
Principal Component Analysis Principal Component Analysis (PCA) is a popular tool for dimensionality reduction. It reduces the number of dimensions by projecting the data onto the eigenvectors of the covariance matrix with the largest eigenvalues, i.e., the directions of maximal variance. We carry out PCA before applying k-means and hierarchical clustering so as to reduce their complexity as well as make it easier to visualize the cluster differences using a 2D plot.
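A minimal sketch of the PCA-then-cluster pipeline and the purity measure, reusing the hypothetical X and y from the classification sketch above:

```python
# Illustrative PCA + k-means / hierarchical clustering with a purity score.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def purity(labels_true, labels_pred):
    """Fraction of points falling in the majority true class of their cluster."""
    total = 0
    for c in np.unique(labels_pred):
        members = labels_true[labels_pred == c]
        total += np.bincount(members).max()
    return total / len(labels_true)

X2 = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))  # 2-D view for plotting
for k in range(2, 6):                                    # vary the number of groups from 2 to 5
    km = KMeans(n_clusters=k, n_init=10).fit(X2)
    hc = AgglomerativeClustering(n_clusters=k).fit(X2)   # Euclidean distance by default
    print(k, purity(y, km.labels_), purity(y, hc.labels_))
```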
Results Classification In total, 6 different classification algorithms were used to compare their results: logistic regression, decision tree, SVM with a linear kernel, SVM with an RBF kernel, Random Forest and AdaBoost. The last two classifiers fall under the category of ensemble methods. The benefit of using ensemble methods is that they aggregate multiple learning algorithms to produce one that performs in a more robust manner. The two types of ensemble learning methods used are averaging methods and boosting methods [6].
An averaging method typically outputs the average of several learning algorithms; the random forest classifier is the one we used. On the other hand, a boosting method “combines several weak models to produce a powerful ensemble” [6]; AdaBoost is the boosting method we used.
We found that the SVM with a linear kernel performed best, with 98% accuracy in predicting the labels of the test data. The next best performance was by the two ensemble methods: Random Forest with 96% and AdaBoost with 95% accuracy. The next two classifiers were logistic regression with 91% and decision tree with 90%. The classifier with the least accuracy was the SVM with an RBF kernel, which had about 60% accuracy. We believe the RBF kernel gave lower performance because the input features are already high dimensional and don't need to be mapped into a higher dimensional space by RBF or other non-linear kernels. A Receiver Operating Characteristic (ROC) curve can also be plotted to compare the true positive rate and the false positive rate, and we plan to compute other evaluation metrics such as precision, recall and F-score. The results are promising, as the majority of the classifiers have a classification accuracy above 90%.
After classifying the test dataset, feature analysis was performed to compare the importance of each feature. The most important features across the classifiers were albumin level and serum creatinine. The logistic regression classifier also included the ‘pedal edema’ feature along with the two features mentioned above, while the red blood cell feature was ranked as important by the decision tree and AdaBoost classifiers.
Clustering After performing k-means clustering on the entire dataset, we were able to plot it on a 2D graph, since PCA had reduced the data to two dimensions. The purity score of our clustering is 0.62; a higher purity score (maximum value 1.0) represents better clustering. The hierarchical clustering plot provides the flexibility to view more than 2 clusters, since there might be gradients in the severity of CKD among patients rather than the simple binary representation of having CKD or not; multiple clusters can be obtained by cutting the hierarchical tree at the desired level.
Conclusions We currently live in the big data era. There is an enormous amount of data being generated from various sources across all domains. Some of them include DNA sequence data, ubiquitous sensors, MRI/CAT scans, astronomical images etc. The challenge now is being able to extract useful information and create knowledge using innovative techniques to efficiently process the data. Due to this data deluge phenomenon, machine learning and data mining have gained strong interest among the research community. Statistical analysis on healthcare data has been gaining momentum since it has the potential to provide insights that are not obvious and can foster breakthroughs in this area.
This work aims to combine work in the field of computer science and health by applying techniques from statistical machine learning to health care data. Chronic kidney disease (CKD) affects a sizable percentage of the world's population. If detected early, its adverse effects can be avoided, hence saving precious lives and reducing cost. We have been able to build a model based on labeled data that accurately predicts if a patient suffers from chronic kidney disease based on their personal characteristics.
Our future work would be to include a larger dataset consisting of thousands of patients and a richer set of features that will improve the model by capturing greater variation. We also aim to use topic models such as Latent Dirichlet Allocation to group various medical features into topics so as to understand the interaction between them. There needs to be greater encouragement for such inter-disciplinary work in order to tackle grand challenges and, in this case, realize the vision of evidence-based healthcare and personalized medicine.
References
[1] https://www.kidney.org/kidneydisease/aboutckd
[3] http://www.ncbi.nlm.nih.gov/pubmed/23727169
[4] https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease
[5] http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
Legal Issues in E-Commerce: A Case Study of Malaysia
Electronic commerce is the process of buying, selling, transferring or exchanging products, services and/or information via computer networks, including the Internet. In an e-commerce environment, just as in a traditional paper-based commercial transaction, sellers present their products, prices and terms to potential buyers. The buyers consider their options, negotiate prices and terms if necessary, place orders and make payment. E-commerce is growing at a significant rate all over the world due to the efficiency of its business transactions. Despite this development, there is uncertainty as to whether the traditional principles of contract law are applicable to electronic contracts. In the formation of an e-contract, the parties might disagree about the point at which, and the country in which, the e-contract is formed. Malaysia, like other countries, has enacted legislation on e-commerce in compliance with international bodies, i.e., the United Nations Commission on International Trade Law (UNCITRAL). The aim and objective of this paper is to assess the adequacy of existing legislation in Malaysia on e-commerce. The paper also examines the creation of legally enforceable agreements with regard to e-commerce in Malaysia, digital signatures, and the uncertainty of where and when an e-contract is formed.
Latest Trends in Twitter from Arab Countries and the World
Authors: Wafa Waheeda Syed and Abdelkader Lattab
1. Introduction
Twitter is a micro-blogging social media platform that hosts a wide variety of content. The open access to Twitter data through the Twitter APIs has made it an important area of research. Twitter has a useful feature called “Trends”, which displays hot topics or trending information that differs for every location. This trending information is derived from the tweets being shared on Twitter in a particular location. However, Twitter limits the trending information to current tweets, as the algorithm for finding trends concentrates on generating trends in real time rather than summarizing hot topics on a daily basis. Thus a clear summarization of contemporary trending information is missing and is much needed. Latest Twitter Trends, the application discussed in this paper, is built to aggregate hot topics on Twitter for Arab countries and the world. It is a real-time application that summarizes hot topics over time. It enables users to study the summarization of Twitter trends by location with the help of a word cloud. The tool also enables the user to click on a particular trend, which allows the user to navigate and search through Twitter Search, also in real time. The tool additionally overcomes a drawback of Twitter's trending information and trend algorithm: trends differ for different languages in different locations and are often mixed. For example, if #Eid-ul-Adha is trending in Arab countries, عيد الأضحى# is also trending. This application focuses on consolidating trends in Arabic and English that have the same meaning and displaying only one trending topic instead of two identical topics in different languages. The application also gives an estimate of the different kinds of Twitter users, analyzing the percentage of tweets made by male and female users in each location.
2. Trends data gathering
Twitter APIs give developers access to real-time data comprising tweets and trends. The Twitter REST API is used by the Latest Twitter Trends tool to connect to Twitter and retrieve trending data. The API is used to authenticate and establish a connection with Twitter, and it returns trending data in JSON format. Python is used to write the scripts that gather data from Twitter. A data crawling script connects to the Twitter API by authenticating with the credentials generated by Twitter when an application is created on app.twitter.com; the Consumer Key, Consumer Secret, Access Token and Access Token Secret are the credentials used for authentication. The data returned by Twitter is in JSON (JavaScript Object Notation) format, and the Python data crawling script handles the JSON files and creates a CSV database. This high-level gathering of data comprises the following steps. The Python data crawling script connects and authenticates with the Twitter API and retrieves the trending-places data in JSON format. This data is stored in our tool's database as a CSV file; it lists all trending locations/places along with their WOEID (Where On Earth ID). The WOEID is then used as a key to get the Twitter trending topics location by location, in real time, using the Twitter REST API. The trends for every location are also returned in JSON format, which is again converted to CSV and saved in the tool's database. This trends CSV file is appended every time new trending data is collected from Twitter, and another CSV file holding only the current information for all trending places is maintained in the database for later use. Natural language processing is performed on the trends-by-location CSV data, using a dictionary, to consolidate Arabic and English trending topics and treat them as one. The results are stored in a CSV file and used for hot topic identification.
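A minimal sketch of this gathering step, using the Twitter v1.1 REST endpoints for trend locations and trends-by-WOEID; the OAuth credentials, file names, and CSV layout are placeholders, not the authors' actual script.

```python
# Illustrative trend gathering: fetch trending places, then trends per WOEID,
# and append them to a CSV database.
import csv
import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# 1) All locations Twitter reports trends for, each with its WOEID.
places = requests.get("https://api.twitter.com/1.1/trends/available.json", auth=auth).json()

# 2) For each WOEID, fetch the current trends and append them to the CSV database.
with open("Trends.csv", "a", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for place in places:
        resp = requests.get("https://api.twitter.com/1.1/trends/place.json",
                            params={"id": place["woeid"]}, auth=auth).json()
        for trend in resp[0]["trends"]:
            writer.writerow([place["name"], place["woeid"], trend["name"]])
```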
3. Hot topic identification
After the high-level data gathering, the CSV files containing the data are used as a database for generating a word cloud using D3.js. The trending data is processed by counting the number of occurrences of each topic, giving an estimate of which topics trended for a long time. This frequency is taken as the count value for each trending topic and a word cloud is generated. The frequency calculation is implemented as a Python script written mainly for word cloud data crawling: it takes the trends-by-location data as input and generates a database of trends by city in JSON files, where each entry's key is the trending topic and its value is the frequency of the trend's occurrence.
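A minimal sketch of this frequency computation, with file names following those mentioned in the text; the exact CSV layout is an assumption.

```python
# Illustrative word-cloud preprocessing: count how often each topic appears in
# the accumulated Trends.csv and emit {trend: count} JSON per city for D3.js.
import csv
import json
from collections import Counter, defaultdict

counts = defaultdict(Counter)
with open("Trends.csv", newline="", encoding="utf-8") as f:
    for city, _woeid, trend in csv.reader(f):
        counts[city][trend] += 1          # frequency ~ how long/often the topic trended

for city, topics in counts.items():
    with open(f"{city}.json", "w", encoding="utf-8") as out:
        json.dump(topics, out, ensure_ascii=False)   # e.g. {"#Eid-ul-Adha": 12, ...}
```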
4. Architecture
Figure 1: Latest Trends in Twitter Application Architecture.
The Python scripts for data crawling and word cloud crawling are used to connect to Twitter, gather data, and process and store it in a database. D3.js and the Google Fusion Tables API are used to display the application results. The Google Fusion Tables API is used to create a map containing the current trends by location, geo-tagged on the map; a dedicated Java program connects and authenticates with the Google API, deletes the old fusion table data, and imports the new updated rows into the Google Fusion Table. The Python script Tagcloud.py is used to generate cities.json with trending topics from the Trends.csv file; these files form the database for generating the word cloud with D3.js, individually for every city/location.
5. Results
The data crawling script establishes a connection with Twitter and returns JSON data as in Fig. 2; this data is processed and saved as CSV into our application database for later use. Figure 2: Trends data output from Twitter in JSON format. The word cloud crawling script generates key-value pairs from the processed trending data in the database, where the key is the trending topic and the value is the frequency of the topic's occurrence; Fig. 3 displays the JSON dataset used for generating the word cloud. Figure 3: JSON data of the processed trending data. The word cloud is generated using the D3.js library and is used to display the summarized trending data to the user; Figure 4 shows the word cloud for London. Figure 4: Word cloud for trending data.
Mitigation of Traffic Congestion Using Ramp Metering on Doha Expressway
Authors: Muhammad Asif Khan, Ridha Hamila and Khaled Salah Shaaban
Ramp metering is the most effective and widely implemented strategy for improving traffic flow on freeways by restricting the number of vehicles entering the freeway using ramp meters. A ramp meter is a traffic signal programmed with a much shorter cycle time in order to allow a single vehicle or a very small platoon of vehicles (usually two or three) per green phase. Ramp metering algorithms define the underlying logic that calculates the metering rate. Ramp meters are usually employed to control vehicles entering the freeway (mainline) from an on-ramp, to mitigate the impact of the ramp traffic on the mainline flow; however, they can also be used to control traffic flow from freeway to freeway. The selection of an appropriate ramp metering strategy is based on the needs and goals of the regional transportation agency. Ramp meters can be controlled either locally (isolated) or system-wide (coordinated). Locally controlled or isolated ramp meters control vehicle access based on the local traffic conditions on a single ramp or freeway segment, to reduce congestion near that ramp. System-wide or coordinated ramp meters are used to improve traffic conditions on a freeway segment or the entire freeway corridor. Ramp meters can be programmed to be either fixed-time or traffic-responsive. Fixed-time metering uses pre-set metering rates with a defined schedule based on historical traffic data; it addresses recurring congestion but fails in the case of non-recurring congestion. Traffic-responsive metering adjusts its metering rate using present traffic conditions, collected in real time with loop detectors or other surveillance systems, and can be implemented in both isolated and coordinated ramp meters. Some known traffic-responsive algorithms include Asservissement Linéaire d'Entrée Autoroutière (ALINEA), Heuristic Ramp Metering Coordination (HERO), System Wide Adaptive Ramp Metering (SWARM), fuzzy logic, the Stratified Zone algorithm, the Bottleneck algorithm, the Zone algorithm, the HELPER algorithm and Advanced Real Time Metering (ARM). These algorithms were developed in various regions of the world and some of them have been evaluated over long periods of time. However, differences in traffic parameters, driver behavior, road geometry and other factors can affect the performance of an algorithm when implemented in a new location; hence it is necessary to investigate the performance of a ramp metering strategy prior to physical deployment. In this work, we chose the Doha Expressway to deploy ramp metering for improving traffic conditions. The Doha Expressway is a six-lane highway in Qatar that links the north of Doha to the south. The highway can be accessed through several on-ramps at different locations. The merging of ramp traffic into the freeway often causes congestion on the highway in several ways: it increases traffic density on the highway, reduces vehicle speeds and causes vehicles to change lanes in the merging area. Hence, in this research we first investigated the impact of ramp traffic on the mainline flow and identified the potential bottlenecks. Ramp meters were then installed at some of the on-ramps to evaluate the performance of each in improving traffic flow on the mainline. The outcome of this study is the selection of the optimum metering strategy for each on-ramp, with proposed modifications if required.
Extensive simulations are carried out in the PTV VISSIM traffic micro-simulation software. The simulator is calibrated using real-time traffic data and geometric information.
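For reference, a minimal sketch of the ALINEA local feedback law mentioned above; the parameter values and detector readings are illustrative only, not those used in this study.

```python
# ALINEA feedback law: r(k) = r(k-1) + K_R * (o_target - o_out(k)),
# with the metering rate clamped to the meter's feasible range.
def alinea_rate(prev_rate, measured_occupancy, target_occupancy=22.0, k_r=70.0,
                r_min=200.0, r_max=1800.0):
    """Return the new metering rate in veh/h from the downstream occupancy (in %)."""
    rate = prev_rate + k_r * (target_occupancy - measured_occupancy)
    return max(r_min, min(r_max, rate))

# Each control interval (e.g. every 60 s), update from the loop-detector reading:
rate = 900.0
for occupancy in [18.0, 25.0, 30.0, 24.0]:      # hypothetical detector measurements (%)
    rate = alinea_rate(rate, occupancy)
    print(round(rate, 1))
```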
Distributed Multi-Objective Resource Optimization for Mobile-Health Systems
Authors: Alaa Awad Abdellatif and Amr Mohamed
Mobile-health (m-health) systems leverage wireless and mobile communication technologies to promote new ways to acquire, process, transport, and secure raw and processed medical data. M-health systems provide the scalability needed to cope with the increasing number of elderly and chronic-disease patients requiring constant monitoring. However, the design and operation of such pervasive health monitoring systems with Body Area Sensor Networks (BASNs) is challenging in two respects: first, the limited energy, computational and storage resources of the sensor nodes; second, the need to guarantee application-level Quality of Service (QoS). In this paper, we integrate wireless network components and application-layer characteristics to provide sustainable, energy-efficient and high-quality services for m-health systems. In particular, we propose an Energy-Cost-Distortion (E-C-D) solution, which exploits the benefits of medical data adaptation to optimize transmission energy consumption and the cost of using network services. However, in large-scale networks, and due to the heterogeneity of wireless m-health systems, a centralized approach becomes less efficient. Therefore, we present a distributed cross-layer solution that is suitable for heterogeneous wireless m-health systems and scalable with the network size. Our scheme leverages Lagrangian duality theory and enables us to find an efficient trade-off among energy consumption, network cost, and vital-sign distortion for delay-sensitive transmission of medical data over heterogeneous wireless environments. In this context, we propose a solution that enables energy-efficient, high-quality patient health monitoring to facilitate remote chronic disease management. We formulate a multi-objective optimization problem that targets different QoS metrics, namely signal distortion, delay, and Bit Error Rate (BER), as well as monetary cost and transmission energy. In particular, we aim to achieve the optimal trade-off among the above factors, which exhibit conflicting trends.
The main contributions of our work can be summarized as follows:
- (1) We design a system for EEG health monitoring systems that achieves high performance by properly combining network functionalities and EEG application characteristics.
- (2) We formulate a cross-layer multi-objective optimization model that aims at adapting and minimizing, at each PDA, the encoding distortion and monetary cost at the application layer, as well as the transmission energy at the physical layer, while meeting the delay and BER constraints.
- (3) We use geometric program transformation to convert the aforementioned problem into a convex problem, for which an optimal, centralized solution can be obtained.
- (4) By leveraging Lagrangian duality theory, we then propose a scalable distributed solution. The dual decomposition approach enables us to decouple the problem into a set of sub-problems that can be solved locally, leading to a scalable distributed algorithm that converges to the optimal solution.
- (5) The proposed distributed algorithm for EEG based m-health systems is analyzed and compared to the centralized approach.
Our results show the efficiency of our distributed solution, its ability to converge to the optimal solution, and its ability to adapt to varying network conditions. In particular, simulation results show that the proposed scheme achieves the optimal trade-off between energy efficiency and the QoS requirements of health monitoring systems. Moreover, it offers significant savings in the objective function (i.e., the E-C-D utility function) compared to solutions based on equal bandwidth allocation.
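A generic, minimal sketch of the dual-decomposition idea behind contribution (4): a price (Lagrange multiplier) on the shared resource lets each node solve a small closed-form local problem, while a coordinator updates the price by a projected subgradient step. The per-node cost model and numbers below are illustrative, not the paper's actual formulation.

```python
# Illustrative dual decomposition: nodes trade off energy vs. a distortion proxy
# under a shared bandwidth budget B, coordinated only through a price.
import math

energy = [1.0, 2.0, 0.5]        # per-unit transmission energy of each node (assumed)
distortion = [4.0, 1.0, 2.0]    # distortion weight of each node (assumed)
B = 3.0                         # total shared bandwidth budget (assumed)

def local_solve(i, price):
    """Node i minimizes energy[i]*x + distortion[i]/x + price*x in closed form."""
    return math.sqrt(distortion[i] / (energy[i] + price))

price, step = 0.0, 0.1
for _ in range(200):                                   # subgradient iterations
    x = [local_solve(i, price) for i in range(3)]      # solved in parallel at the nodes
    price = max(0.0, price + step * (sum(x) - B))      # coordinator's projected price update
print([round(v, 3) for v in x], round(price, 3))       # allocations respect the budget at convergence
```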
Multimodal Interface Design for Ultrasound Machines
Authors: Yasmin Halwani, Tim S.E. Salcudean and Sidney S. Fels
Sonographers, radiologists and surgeons use ultrasound machines on a daily basis to acquire images for interventional procedures, scanning and diagnosis. The current interaction with ultrasound machines relies completely on physical keys and touch-screen input. In addition to not providing a sterile interface for interventional procedures and operations, using the ultrasound machine requires the clinician to be physically near the machine to use its keys, which restricts the clinician's free movement and natural posture when applying the probe to the patient and often forces uncomfortable ergonomics for prolonged periods of time. According to surveys conducted continuously over the past decade on the incidence of work-related musculoskeletal disorders (WRMSDs), up to 90% of sonographers across North America experience WRMSDs during routine ultrasonography. Repetitive motions and prolonged static postures are among the risk factors for WRMSDs, and both can be significantly reduced by an improved ultrasound machine interface that does not rely completely on direct physical interaction. Furthermore, the majority of physicians who perform ultrasound-guided interventions hold the probe with one hand while inserting a needle with the other, which makes adjusting ultrasound machine parameters unreachable without external assistance. Similarly, surgeons' hands are typically occupied with sterile surgical tools, and they are unable to control ultrasound machine parameters independently. The need for an assistant is suboptimal, as it is sometimes difficult for the operator or surgeon to communicate a specific intent during a procedure. Introducing a multimodal interface for ultrasound machine parameters that improves the current interface and is capable of hands-free interaction can bring an unprecedented benefit to all clinicians who use ultrasound machines, as it will contribute to reducing the strain-related injuries and cognitive load experienced by sonographers, radiologists and surgeons and introduce a more effective, natural and efficient interface.
Due to the need for sterile, improved and efficient interaction and the availability of low-cost hardware, multimodal interaction with medical imaging tools is an active research area. Numerous studies have explored speech, vision, touch and gesture recognition for interacting with both pre-operative and interventional image parameters during interventional procedures or surgical operations. However, research targeting multimodal interaction with ultrasound machines has not been sufficiently explored and is mostly limited to augmenting one interaction modality at a time, such as the existing commercial software and patents on enabling ultrasound machines with speech recognition. Given the wide range of settings and menu navigation required for ultrasound image acquisition, there is potential to improve the interaction by expanding the existing physical interface with hands-free interaction methods such as voice, gesture, and eye-gaze recognition. Namely, the system's ability to recognize the user's context through the additional interaction modalities will simplify the image-settings menu navigation required to complete a scanning task. In addition, the user will not be restricted by a physical interface and will be able to interact with the ultrasound machine completely hands-free using the added interaction modalities, as explained earlier for sterile environments in interventional procedures.
Field studies and interviews with sonographers and radiologists have been conducted to explore the potential areas of improvement of current ultrasound systems. Typical ultrasound machines used by sonographers for routine ultrasonography tend to have an extensive physical interface with keys and switches all co-located in the same area as the keyboard for all possible ultrasonography contexts. Although the keys are distributed based on their typical frequency of use in common ultrasonography exams, sonographers tend to glance at the keys repeatedly during a routine ultrasound session, which takes away from their uninterrupted focus on the image. Although it varies based on the type of the ultrasound exam, typically an ultrasound exam takes an average of 30 minutes, requiring a capture of multiple images. For time-sensitive tasks, such as imaging anatomical structures in constant motion, the coordination between the image, keys selection, menu navigation and probe positioning can be both time-consuming and distracting. Interviewed sonographers also reported their discomfort with repeated awkward postures and their preference for a hands-free interface in cases where they have to position the ultrasound probe at a faraway distance from where the ultrasound physical control keys are located, as in the case with immobile patients or patients with high BMI.
Currently, there exists commercial software that addresses the issue of repeated physical keystrokes and the need for a hands-free interface. Some machines provide a context-aware solution in the form of customizable software that automates steps in ultrasound exams, which is reported to have decreased keystrokes by 60% and exam time by 54%. Other machines provide voice-enabled interaction to reduce the uncomfortable postures of sonographers trying to position the probe while reaching for the physical keys on the machine. Interviewed sonographers frequently used the context-aware automated interaction software during their ultrasound exams, which shows the potential of the context-aware feature that multimodal interaction systems can offer. On the other hand, sonographers did not prefer using voice commands as a primary interaction modality in addition to the existing physical controls, as an ultrasound exam involves a lot of communication with the patient, and relying on voice input might cause sonographer-patient conversation to be misinterpreted as commands directed at the machine. This leads to the conclusion that voice-enabled systems need to be augmented with other interaction modalities so that they can be used efficiently when needed and not be confused with external voice interaction.
This study aims to explore interfaces for controlling ultrasound machine settings during routine ultrasonography and interventional procedures through multimodal input. The main goal is to design an efficient, time-saving and cost-effective system that minimizes the amount of repetitive physical interaction with the ultrasound machine, in addition to providing a hands-free mode to reduce WRMSDs and allow direct interaction with the machine in sterile conditions. This will be achieved through additional field studies and prototyping, followed by user studies to assess the developed system.
Pokerface: The Word-Emotion Detector
Authors: Alaa Khader, Ashwini Kamath, Harsh Sharma, Irina Temnikova, Ferda Ofli and Francisco Guzmán
Every day, humans interact with text from different sources such as news, literature, education, and even social media. While reading, humans process text word by word, accessing the meaning of a particular word from the lexicon and, when needed, changing its meaning to match the context of the text (Harley, 2014). The process of reading can induce a range of emotions, such as engagement, confusion, frustration, surprise or happiness. For example, when readers come across unfamiliar jargon, this may confuse them as they try to understand the text.
In the past, scientists have addressed emotion in text from the writer's perspective. For example, the field of Sentiment Analysis aims to detect the emotional charge of words to infer the intentions of the writer. Here, however, we propose the reverse approach: detecting the emotions produced in readers while they process text.
Detecting which emotions are induced by reading a piece of text can give us insights about the nature of the text itself. A word-emotion detector can be used to assign specific emotions experienced by readers to specific words or passages of text. This area of research has never been explored before.
There are many potential applications for a word-emotion detector. For example, it can be used to analyze how passages in books, news or social media are perceived by readers, which can guide stylistic choices to cater for a particular audience. In a learning environment, it can be used to detect the affective states and emotions of students so as to infer their level of understanding, which can be used to provide assistance to students over difficult passages. In a commercial environment, it can be used to detect reactions to wording in advertisements. In the remainder of this report, we detail the first steps we followed to build a word-emotion detector. Moreover, we present the details of our system developed during QCRI's 2015 Hot Summer Cool Research internship program, as well as the initial experiments. In particular, we describe our experimental setup, in which viewers watch a foreign language video with modified subtitles containing deliberate emotion-inducing changes. We analyze the results and provide a discussion of future work.
The Pokerface System
A Pokerface is an inscrutable face that reveals no hint of a person's thoughts or feelings. The goal of the ‘Pokerface’ project is to build a word-emotion detector that works even if no facial movements are present. To do so, the Pokerface system uses a unique symbiosis of the latest consumer-level technologies: eye-tracking to detect the words being read; electroencephalography (EEG) to detect the reader's brain activity; and facial-expression recognition (FER) to detect movement in the reader's face. We then classify the detected brain activity and facial movements into emotions using neural networks.
In this report, we present the details of our Pokerface system, as well as the initial experiments done during QCRI's 2015 Hot Summer Cool Research internship program. In particular, we describe the setup in which viewers watch a foreign language video with subtitles containing deliberate emotion inducing changes.
Methodology
To detect emotions experienced by readers as they read text, we used different technologies. FER and EEG are used to detect emotional reactions through changes in facial expressions and brainwaves, while eye-tracking is used to identify the stimulus (text) to the reaction detected. A video interface was created to run the experiments. Below we describe each of them independently, and how we used them in the project.
EEG
EEG is the recording of electrical activity along the scalp (Niedermeyer and Lopes da Silva, 2005). EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. EEG is one of the few non-intrusive techniques available that provides a window on physiological brain activity. EEG averages the response from many neurons as they communicate, measuring the electrical activity by surface electrodes. We can then use the brain activity of a user to detect their emotional status.
Data Gathering
In our experiments, we used the Emotiv EPOC EEG neuroheadset (2013), which has 14 EEG channels plus two references, inertial sensors, and two gyroscopes. The raw data from the neuroheadset was parsed with the timestamps for each sample.
Data Cleaning and Artifact Removal
After retrieving the data from the EEG, we need to remove “artifacts”, which are changes in the signals that do not originate from neurons (Vidal, 1977), such as ocular movements, muscular movements, and technical noise. To do so, we used the open-source toolbox EEGLAB (Delorme & Makeig, 2004) for artifact removal and filtering (band-pass filtering to the 4–45 Hz range, which removes line noise).
ERP Collection
We decided to treat the remaining artifacts as random noise and move forward with extracting Event-Related Potentials (ERPs), since all of the other options we found required some level of manual intervention. ERPs are the sections of our EEG data that are relevant to the stimuli and the subjects' reaction time. To account for random effects from the artifacts, we averaged the ERPs over different users and events. To do so, we used EEGLAB's plugin ERPLAB to add event codes to our continuous EEG data based on stimulus time.
Events
Our events were defined as textual modifications in the subtitles, designed to induce emotions of confusion, frustration or surprise. The time at which the subject looks at a word was marked as the stimulus time (st) for that word, and the reaction time was marked as st+800 ms, because reactions to a stimulus rarely appear later than 800 ms after its onset (Fischler and Bradley, 2006).
The ERPs were obtained as the average of different events corresponding to the same condition (control or experimental).
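A minimal NumPy sketch of this epoching-and-averaging step; the sampling rate, window length, and fake data are assumptions rather than the exact ERPLAB settings used.

```python
# Illustrative ERP extraction: cut a fixed window after each stimulus onset out
# of the continuous EEG and average the epochs to obtain an ERP.
import numpy as np

def average_erp(eeg, stimulus_samples, fs=128, window_ms=800):
    """eeg: (n_channels, n_samples) array; stimulus_samples: onset indices."""
    win = int(fs * window_ms / 1000)
    epochs = [eeg[:, s:s + win] for s in stimulus_samples if s + win <= eeg.shape[1]]
    return np.mean(epochs, axis=0)          # (n_channels, win) average over events

# Usage with fake data: 14 channels, 60 s of EEG, two hypothetical events.
eeg = np.random.randn(14, 128 * 60)
erp = average_erp(eeg, stimulus_samples=[1000, 3000])
print(erp.shape)                            # (14, 102)
```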
Eye-Tracking
An eye-tracker is an instrument to detect the movements of the eye. Based on the nature of the eye and human vision, the eye-tracker identifies where a user is looking by shining a light that will be reflected into the eye, such that the reflection will be captured by image sensors. The eye-tracker will then measure the angle between the cornea and pupil reflections to calculate a vector and identify the direction of the gaze.
In this project, we used the EyeTribe eye-tracker to identify the words a reader looked at while reading. It was set up on a Windows machine. Before an experiment, the user needs to calibrate the eye-tracker, and recalibration is necessary every time the user changes their sitting position. While the eye-tracker is running, Javascript and NodeJS are used to extract and parse the data from the machine and print it into a text file. This data includes the screen's x and y coordinates of the gaze, the timestamp, and an indicator of whether the gaze point is a fixation or not. The data is received at a rate of 60 fps. The gaze points are used to determine which words are being looked at at any specific time.
Video Interface
In our experiments, each user was presented with a video with subtitles. To create the experimental interface, we made several design choices based on previous empirical research: we used the Helvetica font, given its consistency across all platforms (Falconer, 2011), and a font size of 26, given that it improves the readability of subtitles on large desktops (Franz, 2014). We used Javascript to detect the location of each word displayed on the screen.
After gathering the data from the experiment, we used an offline process to detect the “collisions” between the eye-tracker gaze points and the words displayed to the user. To do so, we used both time information and coordinate information. The result was a series of words annotated with the specific time spans in which they were looked at.
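A minimal sketch of this offline collision step; the gaze-sample and word-bounding-box record formats are assumptions about the logged data.

```python
# Illustrative gaze-word collision: match each gaze sample to the subtitle word
# whose on-screen bounding box contains it at that moment.
def collide(gaze_points, word_boxes):
    """gaze_points: [(t, x, y)]; word_boxes: [(t_start, t_end, x0, y0, x1, y1, word)]."""
    looked_at = []
    for t, x, y in gaze_points:
        for t0, t1, x0, y0, x1, y1, word in word_boxes:
            if t0 <= t <= t1 and x0 <= x <= x1 and y0 <= y <= y1:
                looked_at.append((t, word))
                break
    return looked_at

gaze = [(12.35, 410, 620), (12.40, 455, 622)]                     # hypothetical samples
boxes = [(12.0, 14.0, 400, 600, 470, 640, "confusion")]           # hypothetical word box
print(collide(gaze, boxes))   # [(12.35, 'confusion'), (12.4, 'confusion')]
```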
FER
Facial Expression Recognition (FER) is the process of detecting an individual's emotion by assessing their facial expressions in an image or video. In the past, FER has been used for various purposes, including psychological studies, tiredness detection, facial animation, and robotics.
Data Gathering
We used the Microsoft Kinect with the Kinect SDK 2.0 for capturing the individual's face. The data extracted from the Kinect provided us with color and infrared images, as well as depth data; however, for this project we only worked with the color data. The data from the Kinect was saved as a sequence of color images, recorded at a rate of 30 frames per second (fps). The code made use of multithreading to ensure a high frame rate and low memory usage. Each image frame was assigned a timestamp in milliseconds, which was saved in a text file.
Feature Extraction
After extracting the data from the Kinect, the images were processed to locate facial landmarks. We tested the images with Face++, a free API for face detection, recognition and analysis. Using Face++, we were able to locate 83 facial landmarks in each image. The data obtained from the API was the name of each landmark along with its x- and y-coordinates.
The next step involved obtaining Action Units (AUs) from the facial landmarks located through Face++. Action Units are the actions of individual muscles or groups of muscles, such as raising the outer eyebrow or stretching the lips (Cohn et al. 2001). To determine which AUs to use for FER, as well as how to calculate them, Tekalp and Ostermann (2000) was taken as a reference.
Classification
The final step of the process was classifying the image frames into one of eight emotions: happiness, sadness, fear, anger, disgust, surprise, neutral and confused. We used the MATLAB Neural Network Toolbox (MathWorks, Inc., 2015) and designed a simple feed-forward neural network trained with backpropagation.
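For readers who prefer a code-level view, the following Python sketch shows an equivalent feed-forward classifier built with scikit-learn; our own implementation used MATLAB, and the hidden-layer size, training settings and placeholder data below are assumptions for illustration only.

```python
# Python analogue of the classifier stage (the paper used MATLAB's tooling).
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["happiness", "sadness", "fear", "anger",
            "disgust", "surprise", "neutral", "confused"]

def train_classifier(X: np.ndarray, y: np.ndarray) -> MLPClassifier:
    """X: one row of Action-Unit features per frame; y: emotion index per frame."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                        solver="adam", max_iter=2000, random_state=0)
    clf.fit(X, y)
    return clf

# Example usage with random placeholder data (10 AU features per frame):
X_demo = np.random.rand(200, 10)
y_demo = np.random.randint(0, len(EMOTIONS), size=200)
model = train_classifier(X_demo, y_demo)
print(EMOTIONS[model.predict(X_demo[:1])[0]])
```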
Pilot results
EEG
In our pilot classification study, we experimented with the alcoholism data used in the Kuncheva and Rodriguez (2012) paper, from the UC Irvine (UCI) machine learning repository (2010), which contains raw ERP data for 10 alcoholic subjects and 10 sober subjects. We extracted the features using interval feature extraction. A total of 96K features were extracted from each subject's data. We achieved around 98% accuracy on the training data.
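Interval feature extraction can be sketched as follows: each channel's signal is split into fixed-length intervals and summarized per interval, and the summaries are concatenated into one feature vector. The interval length and the choice of summary statistics below are assumptions; the settings used in our pilot may differ.

```python
# Minimal sketch of interval feature extraction over a multi-channel EEG trial.
import numpy as np

def interval_features(eeg: np.ndarray, interval_len: int = 32) -> np.ndarray:
    """
    eeg: array of shape (n_channels, n_samples).
    Returns a 1-D vector of per-interval means and standard deviations.
    """
    n_channels, n_samples = eeg.shape
    n_intervals = n_samples // interval_len
    trimmed = eeg[:, :n_intervals * interval_len]
    intervals = trimmed.reshape(n_channels, n_intervals, interval_len)
    means = intervals.mean(axis=2)
    stds = intervals.std(axis=2)
    return np.concatenate([means.ravel(), stds.ravel()])

# Example: 64 channels, 256 samples per trial -> one feature vector per trial.
features = interval_features(np.random.randn(64, 256))
```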
FER
We experimented on three different individuals as well as several images from the Cohn-Kanade facial expressions database. The application achieved roughly 75–80% accuracy, and this accuracy could be further improved by adding more data to the training set. We also observed that the classifier was more accurate for some emotions than for others. For example, images depicting happiness were classified more accurately than images depicting any other emotion, while the classifier had difficulty distinguishing between fear, anger and sadness.
Conclusion
In this paper, we presented Pokerface, a word-emotion detector that can detect the emotion of users as they read text. We built a video interface that displays subtitled videos and used the EyeTribe eye-tracker to identify the word in the subtitles a user is looking at at any given time. We used the Emotiv Epoc headset to obtain EEG brainwaves from the user and the Microsoft Kinect to obtain their facial expressions, and extracted features from both. We used neural networks to classify both the facial expressions and the EEG brainwaves into emotions.
Future directions of work include improving the accuracy of the FER and EEG emotion classification components. The EEG results can be improved by exploring additional artifact detection and removal techniques. We also want to integrate the whole pipeline into a seamless application that allows effortless experimentation.
Once the setup is streamlined, Pokerface can be used to explore many different applications that optimize users' experiences in education, news, advertising, etc. For example, the word-emotion detector can be utilized in computer-assisted learning to provide students with virtual affective support, such as detecting confusion and providing clarifications.
-
-
-
The Secure Degrees of Freedom of the MIMO BC and MIMO MAC with Multiple Unknown Eavesdroppers
Authors: Mohamed Khalil, Tamer Khattab, Tarek Elfouly and Amr MohamedWe investigate the secure degrees of freedom (SDoF) of a two-transmitter Gaussian multiple access channel with multiple antennas at the transmitters, a legitimate receiver and an unknown number of eavesdroppers, each with a number of antennas less than or equal to a known value NE. The channel matrices between the legitimate transmitters and the receiver are available everywhere, while the legitimate pair have no information about the fading eavesdroppers' channels. We provide the exact sum SDoF for the considered system and show that it is equal to min(M1 + M2 − NE, (1/2)(max(M1, N) + max(M2, N) − NE), N). A new comprehensive upper bound is derived and a new achievable scheme based on jamming is proposed. We prove that cooperative jamming is SDoF optimal even without eavesdropper CSI available at the transmitters.
-
-
-
On Practical Device-2-device Communication in The Internet of Things
Device-to-device (D2D) communication, or the establishment of direct communication between nearby devices, is an exciting and innovative feature of next-generation networks. D2D has especially attracted the attention of the research community because it is a key ingredient in creating the Internet of Things (IoT): a network of physical objects that are embedded with sensors to collect data about the world and with networking interfaces to exchange this data. IoT can serve a wide variety of application domains ranging from health and education to green computing and property management. Fortunately, processors, communication modules and most electronic components are shrinking in size and price, which allows us to integrate them into more objects and systems.
Today, companies such as Intel and Cisco are trying to make general-purpose IoT devices available to everyday developers, thus expediting the rate at which the world is adopting IoT. Many of these devices are becoming more affordable, and the internet is swamped with tutorials on how to build simple systems with them. However, despite the fundamental importance of effective D2D communication in IoT, there is a shortage of work that goes beyond the standards being developed to accurately assess the practical D2D communication performance of such devices.
We address this gap by studying different communication metrics on representative general purpose IoT devices and examining their impact in practical settings. We share our experiences assessing the performance of different communication technologies and protocol stacks on these devices in different environments. Additionally, we utilize this knowledge to improve “UpnAway”, an agile UAV Cyber-Physical-System Testbed developed at CMU Qatar, by enhancing communication between IoT devices attached to the drones for on-board processing and autonomous navigation control.
Measuring D2D Performance in IoT devices
We investigated the performance of Intel Edison (Fig. 1, a) and Raspberry-Pi (Fig. 1, b) devices because they are the two most representative general-purpose IoT devices. The Intel Edison devices are equipped with a dual-core CPU, a single-core microcontroller, integrated Wi-Fi, Bluetooth 4.0 support, 1 GB of DDR RAM, 4 GB of flash memory and 40 multiplexed GPIO interfaces. Being so well equipped, they are being rapidly adopted by a substantial segment of IoT developers worldwide. The Raspberry-Pi devices are currently the most widely used devices, so we used them for comparison and validation of results. We made our measurements on D2D Wi-Fi and Bluetooth links because, on both the Edison and Raspberry-Pi devices, they are the most user-friendly communication interfaces. The Intel Edison devices come with built-in Wi-Fi and Bluetooth interfaces that can be controlled through a friendly GUI, and the Raspberry-Pi devices can support these technologies by simply plugging in the corresponding USB dongles.
Our investigations involved accurately measuring RTT, throughput, signal strength and reliability. The experiments involved:
- Sending time-stamped packets to an echo server, then using the echoes to calculate the round-trip time (RTT), as sketched below;
- Exchanging files multiple times, then calculating the average time delay over different distances (Fig. 2) (throughput);
- Increasing the distance between nodes and reading the signal strength (signal strength);
- Transferring large files repeatedly to test the reliability of different protocols (reliability).
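The RTT experiment in the first bullet can be sketched as follows in Python; the echo-server address, port and packet count are illustrative assumptions, not the exact tools we ran on the devices.

```python
# Hedged sketch of the RTT experiment: send time-stamped packets to an echo
# server and measure the round-trip time from the echoes.
import socket, time, statistics

def measure_rtt(server_ip: str, port: int = 9000, count: int = 50) -> float:
    """Return the mean RTT in milliseconds over `count` echoed packets."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for i in range(count):
        t_send = time.time()
        sock.sendto(f"{i}:{t_send}".encode(), (server_ip, port))
        try:
            sock.recvfrom(1024)                      # wait for the echo
            rtts.append((time.time() - t_send) * 1000.0)
        except socket.timeout:
            pass                                     # lost packet; skip it
    sock.close()
    return statistics.mean(rtts) if rtts else float("nan")

# Example usage against a hypothetical echo server on another device:
# print(measure_rtt("192.168.1.42"))
```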
In addition to the data collected, these investigations helped us make some unsettling observations about the Intel Edison devices. First, we found a bug in configuring BlueZ, the Linux implementation of the Bluetooth stack, over Yocto, the operating system running on the Edison devices: the RFCOMM link, although RFCOMM is a reliable protocol by specification, was unable to catch transmission errors. We suggest using TCP/IP over a BNEP Bluetooth connection as a reliable alternative to RFCOMM when using the Edison devices. Second, we observed that the RTT of a Wi-Fi D2D connection between Edison devices was significantly higher than that between Raspberry-Pi devices. We suspect that this is attributable to the Edison's energy-saving features.
Investigating a Relevant Application: UpnAway
The second part of our research involved using D2D communication between Intel Edison devices to improve UpnAway. We chose this application because of its significance in improving cyber-physical systems, which can contribute to various industrial areas such as aerospace, transportation, robotics and factory automation. UpnAway addresses the cyber-physical systems community's unmet need for a scalable, low-cost and easy-to-deploy testbed.
The original UpnAway testbed, however, was centralized: the UAV Node Modules (Fig. 4) used to run on a central computer while streaming motion instructions to the drones. Since a distributed system offers higher scalability and stability than a centralized one, we helped the UpnAway team leverage D2D communication to upgrade the testbed to a distributed version of itself. We did this by mounting an Edison device on each drone and establishing D2D connections between the Edisons, their respective drones and the central node. Now, instead of having all computation done in the central node, the central node only performs localization and informs each Node Module, now running on the Edison, of its drone's coordinates.
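The coordinate-distribution step can be pictured with the following sketch, in which the central node pushes each drone's localized position to the Node Module on that drone's Edison. The JSON message format, port number and addresses are assumptions made for this illustration, not the UpnAway protocol itself.

```python
# Illustrative sketch (not the UpnAway code) of the central node sending one
# localization update to the Node Module running on a drone's Edison.
import json, socket

NODE_PORT = 5005  # assumed port the on-board Node Module listens on

def send_coordinates(node_ip: str, drone_id: str, x: float, y: float, z: float) -> None:
    """Push one localization update to the Node Module on a drone's Edison."""
    msg = json.dumps({"drone": drone_id, "x": x, "y": y, "z": z}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (node_ip, NODE_PORT))

# Example usage from the central node after localization:
# send_coordinates("192.168.1.101", "drone-1", 1.2, 0.8, 2.5)
```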
Future Work
In the future, we would like to study the energy consumption aspect of data transmission over D2D connections. Further, we are interested in measuring how these devices perform over different technologies in different environments: outdoors, indoors in an open corridor and inside cars, for example. Such measurements will contribute to the design of many developing IoT systems.
-
-
-
Wireless Car Control Over the Cellular System
Nowadays, electric robots play a big role in many fields, as they can replace humans and decrease their workload. Several types of robots are present in daily life: some are fully controlled by humans, others are programmed to be self-controlled, and there are also self-controlled robots with partial human control. Robots can be classified into three major kinds: industrial robots, autonomous robots and mobile robots, the last of which is discussed here. One of the main advantages of mobile robots is to provide safety by replacing humans in dangerous places such as industrial areas, factories, underground rail tunnels, buildings after disasters, etc.
Our objective is to design and develop a mobile robot car that is capable of reaching a desired destination using a camera that provides surface monitoring, while being controlled by a four-direction controller embedded in an Android mobile phone application. It is operated over a cellular communication system, which provides national and even international (through roaming) coverage of its working area, in parallel with autonomous action in the presence of obstacles. This autonomous action is maintained by ultrasonic sensors mounted on the car body. This is of crucial importance, as disaster areas usually lose their Wi-Fi connections.
Its main role is to provide on-site monitoring of disaster areas, damaged factories with hazardous spilled products, and remote anti-terrorist protection. There are many areas that humans cannot enter due to hazardous and fatal conditions or small dimensions, for instance collapsed buildings, areas after disasters and earthquakes, nuclear power plants and so on. For example, the great earthquake that occurred on March 11th, 2011 caused damage to the northern part of Japan, particularly to the Fukushima Daiichi nuclear power plant. The disaster disabled the power supply and heat sinks, which led to the release of radioactivity in the area surrounding the plant. Such environments are very dangerous for human beings to enter; in such cases a robotic car can be sent in order to search, discover and provide live communication.
The project is divided into three main units: the robotic car, the cellular communication system and the Android application. The body of the robotic car is a plastic Magician chassis driven by an Arduino microcontroller in collaboration with sensors, motors and shields to avoid obstacles and enable surface monitoring of the car's surroundings. The cellular communication system builds a communication bridge between the Arduino microcontroller and the user-interface controller, which is the third unit: a user-friendly application designed with Android Studio to control the robot through a smartphone from any place in the world. The project establishes two-way communication: from the robot to the user, carrying the video content and status of the scenario; and from the user to the robot, indicating the direction, distance and/or offloading demands.
We would like to present the project objectives, challenges and results, as well as a comparison to previous state of the art.
-
-
-
Adaptive Network Topology for Data Centers
Authors: Zina Chkirbene, Sebti Foufou and Ridha HamilaData centers have an important role in supporting cloud computing services (such as email, social networking, web search, etc.), enterprise computing needs, and infrastructure-based services. Data center networking is a research topic that aims at improving the overall performance of data centers. It is a topic of high interest and importance for both academia and industry. Several architectures such as FatTree, FiConn, DCell, BCube, and SprintNet have been proposed. However, these topologies try to improve scalability without any concern for the energy that data centers use or for the network infrastructure cost, which are critical parameters that impact data center performance.
In fact, companies suffer from the huge amount of energy their data centers use and from the network infrastructure cost, which operators see as a key driver for maximizing data center profits. According to industry estimates, the United States data center market reached almost US$39 billion in 2009, growing from US$16.2 billion in 2005. Moreover, studies show that the installed base of servers has been increasing by 12 percent a year, from 14 million in 2000 to 35 million in 2008. Yet that growth is not keeping up with the demands placed on data centers for computing power and the amount of data they can handle. Almost 30 percent of respondents to a 2008 survey of data center managers said their centers would reach their capacity limits within three years or sooner.
Infrastructure cost and power consumption are first-order design concerns for data center operators. In fact, they represent an important fraction of the initial capital investment while not contributing directly to future revenues. Thus, the design goals of data center architectures, as seen by operators, are high scalability, low latency, low average path length and, especially, low energy consumption and low infrastructure cost (the number of interface cards, switches, and links).
Motivated by these challenges, we propose a new data center architecture, called VacoNet, that combines the advantages of previous architectures while avoiding their limitations. VacoNet is a reliable, high-performance, and scalable data center topology that improves network performance in terms of average path length (APL), network capacity and network latency. In fact, VacoNet can connect more than 12 times the number of nodes in FlatNet without increasing the APL. It also achieves good network capacity even with a bottleneck effect (greater than 0.3 even for 1000 servers). Furthermore, VacoNet reduces the infrastructure cost by about 50%, and power consumption is decreased by more than 50,000 watts compared to all the previous architectures.
In addition, thanks to the proposed fault-tolerance algorithm, the new architecture shows good performance even when the failure rate equals 0.3: when about one third of the links fail, the connection failure rate is only 15%. By using VacoNet, operators can save up to 2 million US dollars compared to FlatNet, DCell, BCube and FatTree.
Both theoretical analysis and simulation experiments have been conducted to validate and evaluate the overall performance of the proposed architecture.
-
-
-
Narrowband Powerline Channel Emulation Platform for Smart Grid Applications
Authors: Chiheb Rebai, Souha Souissi and Ons BenrhoumaThe characterization of powerline networks and the investigation of communication performance have been the focus of several research works. Despite the complex noise scenarios and variable attenuation of the powerline channel, narrowband powerline communication (NB-PLC) systems are a key element of smart grids, providing applications such as remote meter reading, home automation and energy management. Mainly located at the last mile of communication, NB-PLC systems, in conjunction with wireless technology, participate in monitoring and controlling power consumption at different levels, from local utilities to final customers. To provide a proper communication flow over powerlines, it is essential to have efficiently designed and proven PLC systems that respond to smart grid requirements. After PLC system design and development, a major task is in-field testing, which is laborious, time consuming and does not give enough information about system performance and robustness. On the one hand, test and verification results are as variable as the channel conditions, so it is not easy to obtain relevant information from classical in-field tests. On the other hand, the behavior of existing industrial PLC systems cannot be accurately reproduced in a simulation environment. Therefore, a flexible standalone platform emulating NB-PLC channel phenomena allows the difficulty of the verification process to be overcome. In the literature, few research works have addressed the emulation of narrowband powerline channels.
The idea of emulating PLC channels using a stand-alone device first appeared for broadband PLC, with a hardware platform based on a digital signal processor (DSP) and a field-programmable gate array (FPGA). Algorithms were then developed and optimized to reduce complexity and improve real-time performance. Regarding NB-PLC, an emulator has been proposed for indoor channels in the frequency range between 95 and 148.5 kHz, using analog and digital circuitry for the channel transfer function and the noise scenario. More flexibility and accuracy in generating NB-PLC channel behavior has since been studied and extended to support frequencies up to 500 kHz. The main challenge was the definition of time-varying reference channels and tunable, sophisticated noise scenarios for emulation. Both proposed NB-PLC channel emulators take into consideration only the attenuation, as a predefined function, and noise phenomena. Zero-crossing (ZC) detection is assumed to be ideal, so that it does not affect communication performance.
The objective of this research work is to propose a flexible DSP-based NB-PLC channel emulator encompassing the channel bottlenecks that interfere with communication. The channel attenuation is deduced using a bottom-up approach, and appropriate noise scenarios are defined to match realistic phenomena as closely as possible. The effect of zero-crossing variation is also taken into account. The overall parameters are optimized to be embedded on a DSP platform.
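As a conceptual illustration of the emulation principle (attenuation plus a noise scenario applied to the transmitted signal), the Python sketch below passes a test tone through an assumed attenuation filter and adds background and impulsive noise. The filter coefficients, noise model and sampling rate are assumptions, not the emulator's actual channel definitions, and the real emulator additionally models zero-crossing variation on a DSP.

```python
# Conceptual sketch (Python, not DSP firmware): attenuation filter + noise.
import numpy as np
from scipy.signal import lfilter

def emulate_channel(tx: np.ndarray, b: np.ndarray, a: np.ndarray,
                    background_noise_std: float = 0.01,
                    impulsive_prob: float = 0.001,
                    impulsive_amp: float = 1.0) -> np.ndarray:
    """Apply an attenuation transfer function plus background and impulsive noise."""
    attenuated = lfilter(b, a, tx)                       # channel attenuation
    noise = np.random.normal(0.0, background_noise_std, tx.shape)
    impulses = (impulsive_amp
                * (np.random.rand(*tx.shape) < impulsive_prob)
                * np.random.randn(*tx.shape))            # sparse impulsive bursts
    return attenuated + noise + impulses

# Example: a simple attenuation applied to a test tone in the NB-PLC band.
fs = 400_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
tx = np.sin(2 * np.pi * 75_000 * t)            # 75 kHz test tone
rx = emulate_channel(tx, b=np.array([0.2, 0.2, 0.2]), a=np.array([1.0]))
```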
-
-
-
The impact of the International Convention on the Rights of Persons with Disabilities on the Internal Legislation of Qatar: Analysis and Proposals
Qatar ratified the Convention on the Rights of Persons with Disabilities (CRPD) on May 13, 2008, and has signed its Optional Protocol, pending ratification.
The CRPD aims at promoting the full and equal enjoyment of all human rights and fundamental freedoms by all persons with disabilities, and at promoting respect for their inherent dignity. The Optional Protocol (OP) establishes a complaints mechanism. The CRPD creates no new rights for individuals, but makes possible the full exercise of human rights by all. Complying with the CRPD requires states to introduce numerous adjustments to their internal legislation.
The study is of high significance for Qatar. Qatar has been making significant efforts, and major progress, in catering to the needs of people with disabilities. Already in 1995, Qatar issued Law No. 38/1995, on aspects of the Social Security system, providing governmental assistance to social groups, including organizations of persons with disabilities. In 1998, Qatar created the Supreme Council for Family Affairs (SCFA, Decree No. 53/1998), a high-level national body that, among other things, has the mandate to oversee the implementation of those international conventions related to the rights of children, women, and persons with disabilities which have been ratified by Qatar. Following the SCFA's recommendations, in 2004 Qatar passed Law No. 2/2004, for the protection of people with special needs, which ensures the rights of persons with disabilities in all fields.
According to that law, the people with special needs enjoy special protection in the State of Qatar, by means of:
- Special education, health treatment, disease prevention and vocational training;
- Receiving all the tools and means to facilitate their learning and mobility process;
- Receiving special qualifications and training certificates upon completion of certain training programs, and being appointed in areas that accommodate their acquired skills and training;
- Dedicating around 2% of the jobs in the private sector to people with special needs without any discrimination based on disability.
Nevertheless, the implementation of these efforts is still a work in progress. A United Nations Special Rapporteur on Disability reported after a brief mission in Qatar that there is “a clear commitment from Qatari society to the needs of persons with disabilities”, which is tangible at the Shafallah Centre for Children with Special Needs and at the Al Noor Institute for the Blind. The rapporteur stressed that “it appears that there is a clear commitment from the State and the private sector toward the issues confronting persons with disabilities in Qatar. Anecdotal evidence suggests that the private sector is a big contributor to institutions [for people with disabilities]”. Nevertheless, the rapporteur warned that “it also became clear that much of the caring and development remain almost exclusively disability-specific as opposed to the mainstreaming of the development needs of persons with disabilities. There appears to be a distinct lack of mainstreaming of disability in Qatar” [1].
In 2010, the International Disability Alliance, the global network aimed at promoting the effective and full implementation of the CRPD, recommended that “Qatar adopt a proactive and comprehensive strategy to eliminate de jure and de facto discrimination on any grounds and against all children, paying particular attention to girls, children with disabilities” [2].
The significance of this project has several facets. It is aimed at helping fulfill Qatar's commitments under the CRPD. Beyond that, and above all, it has the direct potential to yield tangible benefits for people with disabilities in Qatar, by helping remove barriers that may prevent their full integration into mainstream society or hinder their personal and professional development. Moreover, this project should help people with disabilities become more visible in Qatar.
The objectives of the proposed study are:
To analyze the impact that the ratification of the CRPD has on the Qatari legal system;
To elaborate a general framework aimed at defining possible ways in which the Qatari legal system could better develop the mandates of the CRPD;
To elaborate recommendations about possible modifications to the internal Qatari legislation in order to specifically incorporate the mandates of the CRPD into the law of the country.
According to the 2010 Census, carried out by the Qatar Statistics Authority, the total number of people with disabilities in Qatar is 7,743, which represents 0.45% of the total population of 1,699,435 inhabitants. Among non-Qataris, the percentage of people with disabilities is 0.28%, while among Qatari nationals the figure is six times higher, 1.71%, with 2,972 persons with disabilities in a population of 174,279. All these numbers appear significantly low by international standards, according to which people with disabilities make up around 10% of the population. Several phenomena could help explain these numbers [3].
The significantly low number of non-Qataris with disabilities can be explained by the fact that this group of the population consists mostly of young, healthy workers who come to the country with a work contract, after having passed medical tests of aptitude for the position. The rate of disability among Qataris would thus better reflect the natural rate of disability in the country. However, this rate is still very low by international standards. The causes of this low number could be multiple. First, it may show that in Qatar there is not yet full awareness about what constitutes a disability, and thus people do not declare themselves, or their family members, as disabled. Second, it is possible that the tightly knit family structure provides for the disabled, and thus little or no support is requested from outside the family, causing considerable under-registration of cases of disability, since the disabled do not reach out for specific social or health services. Third, it is also possible that, despite all efforts, Qatar still has a relatively poor network of services aimed at satisfying the needs of the disabled, who therefore go undetected in the official statistics. At any rate, the numbers show that there is significant room in Qatar for the development of strategies aimed at raising awareness and implementing programs and services for the disabled.
In that sense, this project might produce very valuable information and propose high-impact measures to further the protection that Qatar provides to people with disabilities. This project should thus be of critical significance to people with disabilities in Qatar, and also to their families.
The Qatari population at large would benefit from this project, since it is ultimately aimed at helping integrate a group of people, the disabled, whose contribution to the country's human capital can be of extreme value in the context of an increasingly complex and diverse global society.
Finally, this project can become a valuable tool to help reaffirm Qatar's leadership in the region in matters of human rights and human development.
[1] Report of the Special Rapporteur's Mission to Qatar - Preliminary Observations (9 – 13 March 2010). http://www.un.org/disabilities/documents/specialrapporteur/qatar_2010.doc
[2] IDA CRPD FORUM Suggestions for disability-relevant recommendations 7th UPR Working group session (8 to 19 February 2010). http://www.internationaldisabilityalliance.org/sites/disalliance.e-presentaciones.net/files/public/files/UPR-7th-session-Recommendations-from-IDA.doc
[3] Qatar Statistics Authority. Census 2010. http://www.qix.gov.qa
-
-
-
The Doha Paradox: Disparity between Educated and Working Qatari Women
Authors: Mariam Bengali, Tehreem Asghar and Rumsha ShahzadQatar is considered one of the best places in the world for women to get an education. Research has shown that for every man, there are six women enrolled in tertiary education. This upward trend in the willingness and ability of women to receive higher education is undeniably encouraging. However, though labelled a “vital element within the development process” of Qatar, Qatari women's role in the labor market is, at best, limited. Recent data show that the participation of women in Qatar's labor force was a meagre 35%. Qatar, however, has made the empowerment of women in the labor market a significant part of its development strategy. The designers of the Qatar National Vision have formulated its first National Development Strategy (2011–2016), with human development as one of the four major pillars of this strategy. One of the aims of human development under the NDS (2011–2016) is to increase opportunities for women to “contribute to the economic and cultural world without reducing their role in the family structure.” This research, therefore, intends to analyze a) Qatar's success in carving out a more vital role for its female citizens and b) the obstacles to the realization of its goal of establishing a more gender-inclusive labor force. The reasons for this analysis are, therefore, not solely to augment and scrutinize Qatar's development strategy but also to demonstrate that Qatar's extensive investment in education will not reap benefits if the majority of its educated women do not take advantage of the various avenues their learning opens up. Whether this is due to unwillingness on the part of women to work, to gender-neutral reasons such as the gap between education, training and job placement, or to other motives, this research aims to ascertain the reasons for this disparity.
-
-
-
Measuring Qatari Women's Social Progress through Marriage Contracts
Authors: Mohanalakshmi Rajakumar and Tanya KaneContemporary Qatari women's social progress can in part be measured through an analysis of current marriage practices. The Islamic marriage contract is a legal and religious document wherein Muslim brides indicate their expectations for post-marital life, and it is an essential step in the marriage negotiation process. The conditions they stipulate in their marriage contracts are symbolic of the degree to which they exercise agency in their personal and professional lives as wives. As part of a larger study on marriage practices in Qatar, we collected and analyzed marriage contracts from a broad range of Qatari families. We treated these documents as archival evidence reflecting changing bridal expectations from 1975 to 2013. A content analysis of the contracts in our sample demonstrated an increase in the age at marriage for both Qatari men and women. The contracts also show the major areas in which brides negotiated the terms of their married lives, including educational, professional, and household expectations. We read these stipulated conditions as moves to guarantee autonomy as wives.
-
-
-
Globalization and Socio-Cultural Change in Qatar
Globalization is impacting many aspects of life in Qatar, and Qatari nationals must increasingly cope with forces generated by economic, cultural, political, and social changes in their country. Because of borrowing, large-scale migration, new computer technology, and multinational corporations, many cultural traits and practices in Qatar have been altered. The widespread existence of fast food, cell phones, the internet, Western movies and TV shows, global electronic and print media, giant shopping malls, and the latest fashion designs are excellent examples of the direct and indirect diffusion and exchange of products and cultural features between Qatar and the Western world. Despite the continuing positive social and economic outcomes of modernization and development, the inevitable and powerful ongoing economic and social changes in Qatar have put the country at a crossroads and created formidable cultural challenges for Qatari nationals. On the one hand, the social and economic consequences of Qatar's development and modernization by 2030 will increase the mutual dependence between Qatar and the expanding global economy and strengthen the continuing cultural contact and interconnection between Qatar and the global culture. On the other hand, in conjunction with the rapid economic and social changes, the country must also commit to making its future path of development compatible with the cultural and religious traditions of an Arab and Islamic nation. The crucial tension between continuing the move toward socio-economic development and preserving the Arab-Islamic tradition in Qatar is stated in the Qatar National Vision 2030: “Qatar's very rapid economic and population growth have created intense strains between the old and new in almost every aspect of life. Modern work patterns and pressures of competitiveness sometimes clash with traditional relationships based on trust and personal ties, and create strains for family life. Moreover, the greater freedoms and wider choices that accompany economic and social progress pose a challenge to deep-rooted social values highly cherished by society.” (Qatar National Vision 2030, 2008:4). To minimize the anticipated “intense strain” between the old and new aspects of life, and to avoid a clash between traditional cultural values and the emerging modern patterns of social life in Qatar, it is crucial for government officials, especially policy makers in Qatar, to understand the perceptions of Qatari men and women about the ongoing changes, their outcomes, and their impact on culture, religion, family, and social life. Therefore, as an anthropologist, I propose to explore the following critical issues related to the social and cultural consequences of the modernization of Qatar by 2030:
How will the new generation of Qatari nationals internalize the economic, environmental, human, and social developments envisioned by QNV 2030?
Do Qatari nationals find the current economic, social, and cultural changes consistent with core values of their culture or do they feel threatened by these changes?
What strategies, if any, have Qatari nationals devised to deal with threats to their cultural and religious identity?
How do Qataris perceive and feel about the ongoing large-scale social and economic changes and interpret their manifestations and outcomes?
Do they contest or embrace these changes?
Will the socio-economic changes in Qatar sculpt and drive cultural norms and impact cultural practices, or will the traditional family structure, religious beliefs, and cultural assumptions direct the process of social change in Qatar?
What will be the impact of the QNV 2030 changes on the national psyche and cultural identity of Qatari citizens?
Will Qatari citizens form a new identity and transform their old religious and cultural identity, maintain the traditional religio-cultural identity, or find a balance between the old and a new one?
This project will offer several significant applied and practical outcomes and benefits for government leaders and policy makers in Qatar. In addition to exploring the perceptions of Qatari nationals toward social and economic change and their consequences for Qatari culture, this project provides an excellent opportunity to identify and assess:
- the intended and unintended changes in attitudes and behaviors of Qatari men and women regarding marriage, family, work, education, and related social patterns, as well as values, ideas, symbols, and judgments;
- the new cultural adaptive kits that are likely to emerge;
- the degree of consistency between the newly emerging cultural patterns of behavior, and changes in attitudes and behaviors, and the cultural traditions and social values of Qatari men and women;
- the different mechanisms through which Qatari people combine modern life with local traditions and cultural values; and
- the different meanings that different individuals and groups (e.g., ethnic, gender, class, and age) attach to the ongoing cultural and social changes in Qatar.
Furthermore, this project on the impact of social and economic change on Qatari culture and society will provide government officials in Qatar with a new perspective, so they can understand the link between the private lives of their citizens and larger social and cultural issues, and the impact of social change on communities and social institutions in Qatar. This new perspective will enable government and business leaders in Qatar to chart economic and social progress more effectively and with a clearer vision, assess the life chances of their citizens in a newly globalized Qatar, face future problems successfully, and build a stronger bridge between the present and the future through the Qatar National Vision 2030. Finally, this project will enhance our understanding of the social and cultural consequences of globalization for Islamic societies in general and the GCC in particular. It will help social scientists understand the unique socio-cultural characteristics of Islamic countries in the Gulf region in their confrontation with the West and global forces. Moreover, it will add to our knowledge about the perceptions of Muslim Arabs in the Gulf region regarding the powerful technological, political, and social changes that are taking place in this region. The findings of this project will help social scientists explain whether Qatari nationals find these changes incompatible with their cultural traditions and resist them, or find them compatible with the cultural fabric of their society and embrace them.
-
-
-
Water Resources and Use in Qatar Prior to the Discovery of Oil
Qatar is known as one of the most arid countries in the world, with all of the land characterised as desert, which in geographical terms is defined as territory with no surface water. Despite this unpromising situation, people have lived on the Qatar peninsula for thousands of years, carefully conserving, harvesting and using the scarce water resources. This paper will argue that a better understanding of the conservation and utilisation of fresh water in the past may have implications for the future development of agriculture and water management in Qatar.
All the water resources in the past were fed either directly by scarce and sporadic rainfall events or through sub-surface water (fossil water), which formed a freshwater lens floating above a predominantly saline aquifer. The presence of shallow wadis throughout the country, particularly near the coast, is indicative of the occasionally heavy rainfall. This paper will investigate the methods used to provide water to the traditional settlements and more ancient archaeological sites throughout the country. The paper will begin by reviewing the hydrological structure of the Qatar peninsula and its condition prior to the modern over-exploitation of water resources from the 1950s onwards. The paper will pay particular attention to the location of settlements around the northern coast, where particularly favourable conditions exist as a result of the north Qatar arch. The settlements of the northern coast employed a variety of water catchment methods, including modifying shallow natural depressions (rawdhas) to provide either grazing land or, in some cases, agricultural zones. In some cases, such as at Jifara, extensive sunken field systems enclosed within mud-brick walls were created, fed by a series of shallow wells taking advantage of both rainfall catchment and fossil water. In other cases, such as at Ruwayda on the north Qatar coast, an existing rawdah appears to have been modified to create a garden with trees enclosed by a fortified masonry wall. A variety of well forms were created, including shaft wells, where water was accessed by buckets or other receptacles lowered by rope (two examples of traditional leather buckets with goat-hair rope have been found in archaeological excavations in Qatar), and wide, shallow stepped wells, which allowed direct access to the water for either humans or animals.
In the centre of the country, access to water has always been more difficult because the freshwater aquifer is closest to the surface near the coast. As a consequence, the majority of human settlement in Qatar has always been close to the coast. Where inland settlements do exist, they are usually located either within or next to some geographical feature, such as a wadi or a large depression, which acts as a catchment area for rainfall. Such settlements are nearly always supplemented with wells, which tend to be larger and much deeper than those on the coast. Examples of inland wells include those associated with the gardens at Umm Salal Muhammad. The introduction of motorised pumps to inland wells from the 1950s onwards was one of the causes of the depletion and subsequent salinization of the freshwater aquifer in Qatar. In addition to wells and modified rawdahs, a number of other forms of water catchment exist.
One of the most surprising water sources is the perennial spring (naba‘a), which, following periods of heavy or sustained rainfall, may result in water literally springing out of the ground. Another unusual source is the freshwater springs located off the coast, which were traditionally exploited by fishermen and pearl divers. A rare form of water catchment exists in the Jabal Jassasiya rock formations on the north-east coast of Qatar, where the natural contours of the rock were modified to form a cistern blocked on one side by a masonry dam. Masonry dams, possibly of medieval date, have also been documented near Umm Salal Muhammad.
-