Qatar Foundation Annual Research Conference Proceedings Volume 2014 Issue 1
- Conference date: 18-19 Nov 2014
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2014
- Published: 18 November 2014
Heme Oxygenase Isoform 1 Regresses Cardiomyocyte Hypertrophy Through Regressing Sodium Proton Exchanger Isoform 1 Activity
Authors: Ahmed Sobh, Nabeel Abdulrahman, Soumaya Bouchoucha and Fatima Mraiche
Background: Pathological cardiac hypertrophy is a worldwide problem and an independent risk factor that predisposes the heart to failure. Enhanced activity or expression of the sodium proton exchanger isoform 1 (NHE1) has been implicated in conditions of cardiac hypertrophy. Induction of cGMP has previously been demonstrated to reduce NHE1 activity and expression, which could occur through the expression of heme oxygenase isoform 1 (HO-1), a stress-induced enzyme with cardioprotective properties. In our study, we aimed to investigate the role of inducing HO-1 in a cardiac hypertrophy model that expresses active NHE1, to determine whether HO-1 could protect against NHE1-induced cardiomyocyte hypertrophy. Methods: H9c2 cardiomyocytes were infected with the active form of the NHE1 adenovirus in the presence and absence of cobalt protoporphyrin (CoPP), which was used to induce HO-1. Protein and mRNA expression of HO-1 were investigated in H9c2 cardiomyocytes in the presence and absence of the expression of the active form of the NHE1 adenovirus. The effects of HO-1 induction on NHE1 protein expression and cardiomyocyte hypertrophic markers were measured by western blotting and by analyzing the cell surface area of H9c2 cells, respectively. Results: Our results showed a significant decrease in HO-1 mRNA expression in cardiomyocytes expressing active NHE1 (74.84 ± 9.19 % vs. 100 % normal NHE1 expression, p<0.05). However, we did not see any changes in NHE1 protein expression following HO-1 induction. A trend towards decreased cardiomyocyte hypertrophy was observed in H9c2 cardiomyoblasts infected with the active form of NHE1 following induction of HO-1 with CoPP (NHE1, 154.93 ± 14.87 % vs. NHE1 + CoPP, 109 ± 16.44 %). Conclusion: In our model, HO-1 may be a useful means to reduce NHE1-induced cardiomyocyte hypertrophy, although the mechanism by which it does so requires further investigation.
Qatar's Youth Is Putting On Weight: The Increase In Obesity Between 2003 And 2009
The profound economic growth in the Arabian Gulf states over the past few decades has had a great impact on the lifestyle of the Qatari population. There has been a rapid appearance of fast food restaurants and other hallmarks of western society. These influences have been accompanied by a higher energy intake and decreased levels of physical activity. The potential impact on the younger population is particularly alarming. Over the recent past, the prevalence of type 2 diabetes (T2DM) among children and adolescents has increased significantly due to the high prevalence of obesity. Obese children have a higher risk of premature mortality due to consequent cardiometabolic morbidity. According to the WHO, obesity-induced medical conditions have now led to excess mortality surpassing that associated with tobacco. However, data on the body weight status of Qatari children are lacking for the past ten years. This study estimates the magnitude of the increase in BMI among Qatari adolescents (aged 12-18 years) by comparing our data (obtained during 2009) with published results from 2003 [Bener et al, JHPN 2005;23(3):250-8]. The present data originate from a pilot study on lung function conducted in Qatar in 2009. The subjects were chosen by random sampling of Qatari students attending government schools (grades 7-12). For our study, only students aged 12-18 years are included, resulting in a total of 705 participants (400 girls and 305 boys). Although a large variety of data was collected, our study focused only on height, weight and BMI. The results reveal a substantial increase in BMI during this 7-year period for both Qatari boys and girls. For boys aged 12 years, mean BMI increased by 2 kg/m², rising to a 5 kg/m² increase at the age of 17 years, and possibly as much as 8 kg/m² by 18 years. By contrast, the increase in mean BMI for girls remained more or less constant between the ages of 12 and 17 years, fluctuating between 2 kg/m² and 4 kg/m², before reaching almost 7 kg/m² at the age of 18 years. Using International Obesity Task Force (IOTF) criteria, the overall prevalence of Qatari children who were overweight or obese was 26.5% (boys) and 23.1% (girls) in 2003 [Bener, Food Nutr Bull 2006;27(1):39-45], and 47.2% (boys) and 40.8% (girls) in 2009. For boys, this represents an increase of 21 percentage points, with a corresponding increase of 18 percentage points for girls. In relative terms, the prevalence of overweight and obesity among both boys and girls has therefore increased by more than 75% during this 7-year period. Based on these figures, the prevalence of childhood obesity is alarmingly high and points to an acute need for intervention, and a need for local research into the most appropriate and effective actions. In addition, there is also a need to systematically collect regular and ongoing observational data regarding the body weight status of adolescent Qataris in order to continue to monitor this situation.
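As a quick plausibility check of the figures quoted above, the short Python sketch below uses only the prevalences reported in the abstract to show how the roughly 21 and 18 percentage-point rises correspond to relative increases of more than 75%; the script itself is purely illustrative.

```python
# Relative vs. absolute increase in overweight/obesity prevalence,
# computed from the IOTF figures quoted in the abstract above.
prev_2003 = {"boys": 26.5, "girls": 23.1}   # prevalence in 2003, percent
prev_2009 = {"boys": 47.2, "girls": 40.8}   # prevalence in 2009, percent

for group in prev_2003:
    absolute = prev_2009[group] - prev_2003[group]      # percentage points
    relative = 100 * absolute / prev_2003[group]        # relative change, percent
    print(f"{group}: +{absolute:.1f} percentage points, "
          f"a relative increase of about {relative:.0f}%")
# boys:  +20.7 percentage points, roughly a 78% relative increase
# girls: +17.7 percentage points, roughly a 77% relative increase
```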
Effect Of Diabetes On Gastric Stem Cell Lineage In Rat Models
Authors: Ali Abdullah Al Jabri and Sheriff Karam
In 2013, it was estimated that over 382 million people throughout the world suffered from diabetes. Despite the numerous treatment approaches to manage this condition, diabetic patients continue to suffer from various symptoms and complications. These include, but are not limited to, retinopathy, nephropathy, peripheral and autonomic neuropathy, and gastrointestinal, genitourinary, cardiovascular, and cerebral symptoms. In this study, we investigate variations in the gastric stem cell lineage in order to further understand the gastrointestinal symptoms experienced by diabetic patients and to reveal possible avenues for treatment. Using rats as an animal model, we divided them into age groups of 3, 6, 9, and 12 months. Each age group comprised twelve animals, six of which were kept as controls. The other six were injected with streptozotocin to destroy the beta cells of the islets of Langerhans in the pancreas. The diabetic rats' plasma glucose was closely monitored; those that naturally recovered from diabetes were studied separately. Antibodies against Ki-67 and Oct3/4 were used to examine cellular proliferation, ghrelin for cells secreting this hormone, H,K-ATPase for parietal cells, and UEA and GSII lectins for the surface and neck mucus cells, respectively. For more quantitative results, qRT-PCR using primers specific for genes of gastric stem cell differentiation pathways was used. Statistical calculations, including a one-tailed t-test, were used to determine whether the changes between the control and diabetic groups were significant. Results suggest an increase in the proliferative activity of stem cells, an increase in the number of some cell types such as surface mucus cells, and a decrease in the number of ghrelin-secreting cells. Future tests to support these results include antibodies against gastrin, CCK, and LGR5.
Electronic Library Institute-SeerQ (ELISQ)
An electronic library is a computer-managed set of collections with services tailored for its user communities. The project team—a collaboration of four universities (Qatar University - QU, Virginia Tech, Pennsylvania State University, Texas A&M University), the Qatar National Library - QNL, and consultants—focused on the two project aims for Qatar: building community and building infrastructure (i.e., collections and information services). Thus we fit with Qatar's Thematic Pillar of Research on Computing and Information Technology, and overlap with a number of Research Grand Challenges (e.g., Cyber-security; Managing the Transition to a Diversified, Knowledge-based Society; and Culture, Arts, Heritage, Media and Language within the Arabic Context). With regard to our aim of building an electronic library community in Qatar, we have: 1. Participated in the Special Library Association Gulf Chapter, hosted in Qatar, to create awareness about electronic libraries; 2. Launched a consulting center at QU Library—with more than 30 new reference works, online educational resources, and specialized databases—and are sharing knowledge with librarians and information professionals to support those interested in collections and services; 3. Established a collaboration with Gulf Studies at QU, so we can identify and host content on this topic, and assist QU researchers and students; and 4. Collected citation-based and non-citation-based metrics (altmetrics) for Qatar and 35 nations that compete with Qatar in annual scholarly production. We published a new approach for comparing the metrics and evaluating country-level scholarly impact. 5. Studied the evolving scholarly activities and needs of researchers in Qatar, and compared them with our findings from the USA, informing ELISQ about requirements and solutions appropriate for international electronic libraries. With regard to our aim of building electronic library infrastructure in Qatar, we have built collections and provided related services: 1. Penn State's SeerSuite software is running at QU, allowing users to search the metadata and full text of collections of PDF files from scholarly articles, e.g., QScience papers. SeerSuite gathers scholarly documents and automatically extracts metadata (authors, venues, etc.) from crawled WWW content, allowing QNL and other libraries to harvest that metadata using OAI-PMH. SeerSuite is being improved for searching on the content of the figures and tables in scholarly documents. 2. A historical collection of old Arabic documents has been assembled, indexed, and made accessible, as well as data/text mined. 3. Using our QU server running Heritrix, we gathered our first Arabic collection (8 GB from 2,200 PDF files) from Qatari newspapers (Al-Rayah, Al-Watan, Qatar News Agency, Al-Arab, and Al-Sharq). This news collection was indexed with Apache Solr and is available for searching. Building upon the IPTC system, we created a categorization system (taxonomy) for news stories, and then applied it through machine learning to train classifiers to aid browsing. 4. Both QNL and QU are building Web archives of portions of the WWW in Qatar, adapting Heritrix and the Wayback Machine, thus preserving history, culture, and Arabic content (including news, sports, government information, and university webpages) for future use and scholarly study.
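Since the abstract notes that QNL and other libraries can harvest SeerSuite's extracted metadata over OAI-PMH, a minimal harvesting sketch may help readers unfamiliar with that protocol; the base URL below is a placeholder, not the actual ELISQ endpoint.

```python
# Minimal OAI-PMH (Dublin Core) harvesting sketch; endpoint URL is a placeholder.
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield (title, creators) pairs, following resumption tokens page by page."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        root = ET.fromstring(requests.get(base_url, params=params).content)
        for record in root.iter(OAI + "record"):
            title = record.find(".//" + DC + "title")
            creators = [c.text for c in record.findall(".//" + DC + "creator")]
            yield (title.text if title is not None else None, creators)
        token = root.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break  # no more pages to harvest
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# for title, authors in harvest("http://example.org/oai2"):
#     print(title, authors)
```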
OpenADN: Middleware Architecture For Cloud Based Services
Authors: Mohammed Samaka, Deval Bhamare, Aiman Erbad, Subharthi Paul and Raj Jain
Any global enterprise, such as Qatar National Bank, with branches in many countries is an example of an Application Service Provider (ASP) that uses multiple cloud data centers to serve its customers. Depending upon the time of day, the number of users at different locations changes, and ASPs need to rescale their operation at each data center to meet the demand at that location. ASPs face a great challenge in leveraging the benefits provided by such multi-cloud distributed environments without a service-centric Internet Service Provider (ISP) infrastructure. In addition, each ASP's requirements are different, and since these ASPs are large customers of ISPs, they want the network traffic handling to be tailored to their requirements. While the ASP wants to control the forwarding of its traffic on the ISP's network, the ISP does not want to relinquish control of its resources to the ASPs. In this work we present an innovative architecture that enables ASPs to automate the deployment and operation of their applications over multiple clouds. We have developed a middleware architecture for cloud-based applications using Software Defined Networking (SDN) concepts. In particular, we discuss how the interface between the ASP and ISP control planes, as well as a generic packet header abstraction, is implemented. Using our system, ASPs may specify policies in the control plane, and the control plane is responsible for enforcing these policies in the data plane. In the OpenADN architecture, each application consists of multiple workflows, which are dynamically created, and the required virtual servers and middleboxes are automatically created at the appropriate clouds. OpenADN supports both new applications that are designed specifically for it and legacy applications. It implements a "Proxy-Switch Port" (pPort) to provide an interface between OpenADN-aware and OpenADN-unaware services. Depending on the available resources in the host, the controller launches a pPort with a pre-configured number of workflows that it can support. The pPort automatically starts a proxy server. The proxy service acts as the interface between OpenADN-aware services and OpenADN-unaware applications. We support both packet-level middleboxes (such as intrusion detection systems) and message-level middleboxes (such as firewalls). A cross-layer design is proposed in the current architecture that allows application-layer flow information to be placed in the form of labels at layer 3.5 (packet level) and at layer 4.5 (message level). Layer 3.5 is called the "Application Label Switching" (APLS) layer. APLS is used by the path policy (routing/switching) component, while layer 4.5 information is used to initiate and terminate application sessions. In addition to traditional applications, OpenADN can also be used for other multi-cloud applications such as the Internet of Things, virtual worlds, online games, and smart wide area network services.
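The following toy sketch illustrates the general idea behind layer-3.5 application labels: a control-plane policy is compiled into an ordered workflow, and the data plane forwards packets by popping labels rather than inspecting application state. All names, fields and cloud locations here are illustrative assumptions; this is not the OpenADN implementation itself.

```python
# Conceptual sketch only: an APLS-style label stack encoding an ASP workflow.
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    labels: list = field(default_factory=list)   # layer-3.5 label stack

# Control plane: the ASP's policy compiled into an ordered workflow of stages,
# and the controller's placement of each stage on a cloud (made-up values).
WORKFLOW = ["ids", "load_balancer", "app_server"]
STAGE_LOCATION = {"ids": "cloud-A", "load_balancer": "cloud-A",
                  "app_server": "cloud-B"}

def classify(pkt: Packet) -> Packet:
    """Ingress: push the whole workflow as labels (top of stack = next stage)."""
    pkt.labels = list(reversed(WORKFLOW))
    return pkt

def forward(pkt: Packet) -> str:
    """Data plane: pop the top label and forward to wherever that stage runs."""
    stage = pkt.labels.pop()
    return STAGE_LOCATION[stage]

pkt = classify(Packet(b"GET /"))
while pkt.labels:
    print("send to", forward(pkt))   # cloud-A, cloud-A, then cloud-B
```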
Load Follows Generation: The New Paradigm For Future Power System Control In Presence Of High Penetration Of Variable Renewable Resources
In the 130 years since the invention of the legacy electric power system concept, electrical generation has been adjusted to match electrical consumption (i.e., the "load") as it varies throughout the time of day and the seasons. This "generation follows load" paradigm is a major roadblock to the large-scale incorporation of renewable energy into the national power grid, since energy sources such as wind and solar provide inconsistent, variable power that cannot easily be controlled to follow consumption. As a result, today's centrally planned and controlled power system design is no longer adequate. This paper introduces a new control approach to enable a "load follows generation" paradigm in which a flexible engineered system with distributed control at the users' sites will revolutionize the power industry. Customers will be able to generate power on-site, purchase power from a variety of sources (including each other), sell power back to the grid, select the level of supply reliability they wish to purchase, and choose how to manage their electricity use. The resulting solution will make the electricity grid significantly more efficient and robust by facilitating extensive use of renewable energy sources and reducing end-use losses in the system. With renewable energy sources widely distributed (e.g., roof-top solar panels and wind farms), the proposed approach will allow the exchange of power among utilities, market service providers, consumers, and aggregators (services representing many loads), and also allow power exchange within a customer site. By incorporating this flexibility into their operations, utilities will be able to adjust the overall load they must serve to match available power. This "load follows generation" approach is essential to allow the unhindered inclusion of low-emission renewables in the electricity grid. At the same time it makes consumers active agents in the energy exchange. Earlier research addressed discrete aspects of the scientific challenges, but the new approach, called Flexible Load Energy eXchange (FLEX), will provide the system-level, holistic approach needed to achieve this vision. The overarching goal is to develop the science, technology, and control system design required to enable energy exchange between the customer and other parties in the electricity energy exchange ecosystem, to unlock the huge potential of renewable generation. A central impediment is the stochastic variability of renewable energy sources such as wind and solar, prompting some to question whether dependence on renewable sources will ever be viable. Even when wind (which tends to produce at night) and solar (produced during the day) are combined, severe variations in generation require huge adjustments (termed "ramping" within the industry) and spinning (on-line) reserves using conventional generation. This paper explores the missing critical component of smart grid development: smart and flexible loads. In addition, power interruptions, which are often caused by generation/load imbalances or faults on end-user radial connections, will be greatly reduced in a FLEX-enabled power grid by on-site or alternate customer generation. FLEX will also enable customer participation to create new market arrangements, such as wholesale and retail options, and incentives to increase energy efficiency not previously available.
PLATE: Problem-based Learning Authoring And Transformation Environment
Authors: Mohammed Samaka, Yongwu Miao, John Imagliazzo, Disi Wang, Ziad Ali, Khulood Aldus and Mohamed Ally
The project entitled Problem-based Learning Authoring and Transformation Environment (PLATE) is housed at Qatar University. It is under the auspices of the Qatar National Priority Research Program (NPRP). The PLATE project seeks to improve student learning using innovative approaches to problem-based learning (PBL) in a cost-effective, flexible, interoperable, and reusable manner. Traditional subject-based learning that focuses on passively learning facts and reciting them out of context is no longer sufficient to prepare potential engineers and all students to be effective. Within the last two decades, the problem-based learning approach to education has started to make inroads into engineering and science education. This PBL educational approach comprises an authentic, ill-structured problem with multiple possible routes to multiple possible solutions. A systematic approach to supporting online PBL is the use of a pedagogy-generic e-learning platform such as IMS Learning Design (IMS-LD 2003), an e-learning technical standard useful for scripting a wide range of pedagogical strategies as formal models. The project seeks to research and develop a process modeling approach together with software tools to support the development and delivery of face-to-face, online, and hybrid PBL courses or lessons in a cost-effective, flexible, interoperable, and reusable manner. The research team seeks to prove that the PLATE authoring system optimizes learning and that the PLATE system improves learning in PBL activities. For this poster presentation, the research team will demonstrate the progress it has made within the second year of research. This includes the development of a PBL scripting language to represent a wide range of PBL models, the creation of transformation functions to map PBL models represented in the PBL scripting language into executable models represented in IMS-LD, and the architecture of the PLATE authoring tool. In addition, the project team designed the run-time environment and developed an initial version of a run-time engine and a run-time user agent. A teacher can instantiate a PBL script and execute a script instance as a course. The user can manipulate the diagram-based script instance in the user agent, and the engine will respond to the user's actions. Because of this, the system supports the user in executing a course module according to the definition of the PBL script. The research team plans to illustrate that the research and development of a PBL scripting language and the associated authoring and execution environment can provide a significant thrust toward further research of PBL by using meta-analysis, designing effective PBL models, and extending or improving a PBL scripting language. The PLATE project can enable PBL practitioners to develop, understand, customize, and reuse PBL models at a high level by relieving them of the burden of handling the complex details of implementing a PBL course. The research team believes that the project will stimulate the application and use of PBL in curricula with online learning practice by incorporating PBL support into popularly used e-learning platforms and by providing a repository of PBL models and courses.
World Cybersecurity Indicator Using Computational Intelligence
Authors: Ahmad Al Shami and Simeon Coleman
The aim of this research is to investigate the use of Computational Intelligence (CI) methods for constructing a World Cybersecurity Indicator (WCI) to enable consistent and transparent assessments of the cybersecurity capabilities of nations, using the concept of Synthetic Composite Indicators (SCIs) to rank their readiness and progress. SCIs are assessment tools usually constructed to evaluate and contrast the performance of entities by aggregating intangible measures in areas such as technology and innovation. The key value of an SCI lies in its capacity to aggregate complex and multi-dimensional variables into a single meaningful value. As a result, SCIs have been considered one of the most important tools for macro-level and strategic decision making. Considering the shortcomings of existing SCIs, this study proposes a CI approach to develop a new WCI. The suggested approach uses a Fuzzy Proximity Knowledge Mining technique to build the qualitative taxonomy initially, and fuzzy c-means is employed to form a macro-level cybersecurity indicator. To illustrate the method of construction, a fully worked application is presented. The application employs real variables of possible threats to Information and Communication Technology (ICT). The weighting and aggregation results obtained were compared against classical approaches for weighting and aggregating SCIs, namely Principal Component Analysis, Factor Analysis and the Geometric Mean. The proposed model has the capability of weighting and aggregating major cybersecurity indicators into a single value that ranks nations even with limited data points. The validity and robustness of the WCI are evaluated using Monte Carlo simulation. In order to show the value added by the new cybersecurity index, the WCI is applied to the Middle East and North Africa (MENA) region as a special case study and then generalised. In total, seventy-three countries were included, representative of developed, developing and underdeveloped nations. The final and overall ranking results obtained suggest a novel and unbiased way of building the WCI compared to traditional or statistical methods.
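A minimal NumPy sketch of the fuzzy c-means step is shown below, assuming countries are described by a matrix of normalized indicator values; it omits the fuzzy proximity knowledge mining stage and is not the authors' exact WCI construction.

```python
# Minimal fuzzy c-means sketch (illustrative only).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_countries, n_indicators); returns (memberships, cluster centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                    # each row sums to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted cluster means
        # distance from every country to every cluster centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                   # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Toy example: 6 "countries" described by 4 normalized cybersecurity indicators.
X = np.random.default_rng(1).random((6, 4))
U, centers = fuzzy_c_means(X, c=2)
print(np.round(U, 2))   # fuzzy membership of each country in each readiness level
```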
Residential Load Management System For Future Smart Energy Environment
Authors: Shady Samir Khalil and Haitham Abu-rub
Electricity consumption has increased substantially over the last decade. According to the Gulf Research Center (2013), the residential sector represents the largest portion (about 50%) of electricity consumption in the GCC region, due to substantial growth in electrical residential appliances. Therefore, we present a novel online smart residential load management system that monitors and controls the power consumption of loads in order to minimize energy consumption, balance electric power supply, reduce peak demand, and minimize the energy bill while considering residential customer preferences and comfort level. The presented online algorithm manages power consumption by assigning the residential load according to utility power supply events. The input data to the management algorithm are set based on loads categorized according to: importance (vital, essential, and non-essential electrical loads), electrical power consumption, electricity bill limitation, utility power limitation, and load priority. The data are processed and fed to the presented algorithm, which manages the power of dwelling loads using externally controlled disconnectors. The proposed online algorithm improves the overall grid efficiency and reliability, especially during demand response periods. Simulation results demonstrate the validity of the proposed algorithm.
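A highly simplified sketch of the kind of rule such a system might apply is given below: loads are connected in priority order (vital, then essential, then non-essential) until a utility supply limit is reached. The load list, power ratings and limit are invented for illustration and do not reproduce the authors' algorithm.

```python
# Toy priority-based residential load scheduling under a supply limit.
loads = [  # (name, category, power in kW, priority within category: lower = first)
    ("medical equipment", "vital", 0.3, 1),
    ("refrigerator", "vital", 0.2, 2),
    ("lighting", "essential", 0.4, 1),
    ("air conditioner", "essential", 2.0, 2),
    ("water heater", "non-essential", 1.5, 1),
    ("dishwasher", "non-essential", 1.2, 2),
]
CATEGORY_RANK = {"vital": 0, "essential": 1, "non-essential": 2}

def schedule(loads, supply_limit_kw):
    """Connect loads in priority order until the utility supply limit is reached."""
    connected, used = [], 0.0
    ordered = sorted(loads, key=lambda l: (CATEGORY_RANK[l[1]], l[3]))
    for name, category, power, priority in ordered:
        if used + power <= supply_limit_kw:
            connected.append(name)
            used += power
    return connected, used

# With a 3 kW limit, the vital and essential loads fit; non-essential ones are shed.
print(schedule(loads, supply_limit_kw=3.0))
```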
Using Social Computing For Knowledge Translation: Exploiting Social Network And Semantic Content Analyses To Facilitate Online Knowledge Translation Within An Online Social Community Of Medical Practitioners
Authors: Samuel Stewart and Syed Sibte Raza Abidi
Social computing has led to new approaches for Knowledge Translation (KT) by overcoming the temporal and geographical barriers experienced in face-to-face KT settings. Social computing based discussion forums allow the formulation of communities of practice whereby a group of professionals disseminate their knowledge and experiences through online discussions on specialized topics. In order to successfully build an online community of practice, it is important to improve the connectivity between like-minded community members and between like-topic discussions. In this paper we present a Medical Online Discussion Analysis and Linkages (MODAL) method to identify affinities between members of an online social community by applying: (a) social network analysis to understand their social communication patterns during KT; and (b) semantic content analysis to establish affinities between different discussions and professionals based on their communicated content. Our approach is to establish linkages between users and discussions at the semantic and contextual levels—i.e. we do not just link discussions that share exact medical terms, rather we link practitioners and discussions that share semantically and contextually similar medical terms, thus accounting for vocabulary variations, concept hierarchies and specialized clinical scenarios. MODAL incorporates two novel semantic similarity methods to analyze online discussions using: (i) the Generalized Vector Space Model (GVSM), which leverages semantic and contextual similarity to find similarities between discussion threads and between practitioners; and (ii) an extension of the Balanced Genealogy Model (BGM), so that we are able to address non-leaf mappings, issues of homonymity noted in medical terminologies, and further contextualization of the similarity measures using information content measures. We have implemented a similarity metric that captures the concept of "interest" between users or threads, i.e., a numeric measure of how interested user A is in user B, or how much of the information contained in thread A is related to thread B. MODAL measures the "interest" one professional has in another professional within the online community, and then uses this metric to identify those professionals that are sought by other professionals for expert advice—the content experts. Furthermore, by incorporating the interest measures with SNA, MODAL is able to identify the content experts within the community, and analyze the content of their conversations to determine their areas of expertise. Given the short and unstructured nature of online communications, we use the MeSH medical lexicon and the medical text analysis tool MetaMap to map the unstructured narrative of online discussions to formal medical keywords based on the MeSH lexicon. MODAL is tested on two online professional communities of healthcare practitioners: (a) the Pediatric Pain Mailing List, a community of 460 clinicians from around the world; over a four-year period, 2,505 messages were shared on 783 different discussion threads; and (b) SURGINET, a community of 865 clinicians from around the world that use the forum to discuss general surgical issues; it contains over 17,000 messages on 2,111 threads by 231 users. MODAL is able to identify content experts and link like-minded practitioners based on the content of their conversations rather than on direct ties between them.
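The asymmetric flavour of the "interest" metric can be illustrated with a toy sketch over MeSH-style concept counts; the real MODAL measures additionally use the GVSM/BGM machinery, the MeSH hierarchy and information content, so the function and the profiles below are only a simplified stand-in.

```python
# Toy asymmetric "interest" measure between two users' concept profiles.
from collections import Counter

def interest(profile_a: Counter, profile_b: Counter) -> float:
    """Fraction of A's concept mass that also appears in B's posts (0..1)."""
    total_a = sum(profile_a.values())
    shared = sum(min(profile_a[c], profile_b[c]) for c in profile_a)
    return shared / total_a if total_a else 0.0

user_a = Counter({"Pain": 5, "Analgesics": 3, "Pediatrics": 2})
user_b = Counter({"Pain": 4, "Pediatrics": 6, "Anesthesia": 1})

print(interest(user_a, user_b))   # how interested A is in B (0.6 here)
print(interest(user_b, user_a))   # generally different: the measure is asymmetric
```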
Innovative Data Collection And Dissemination In V2X Networks
Authors: Wassim Drira and Fethi Filali
The emergence of V2X (Vehicle-to-Vehicle and Vehicle-to-Infrastructure) networks lends significant support to Intelligent Transportation Systems by improving many applications for different purposes such as safety, traffic efficiency and added-value services. Typically, such an environment is distinguished by its mobility and topology dynamics over space and time. Moreover, these applications need to collect and disseminate data reactively or proactively from vehicles or the traffic management center (TMC) to run efficiently. Thus, in this abstract paper, we provide an extended framework to collect and disseminate data in V2X networks based on the newly emerging network architecture NDN (Named Data Networking), which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. The communication paradigm in this network is based on the Request/Response pattern, where request messages are Interest packets and responses are Data packets. In order to provide an efficient reactive data collection mechanism, we propose an NDN Query mechanism (NDN-Q) to allow any node to submit a query in the network to collect data, which is built on the fly. NDN-Q then uses a reduce mechanism to aggregate data hop by hop towards the query source. Thus, in NDN-Q, data collection is performed in two steps: the first is the query dissemination towards data sources, while the second concerns the collection and aggregation of the data sources' responses. We then extended NDN with a Publish/Subscribe (Pub/Sub) capability in order to provide an efficient data collection and dissemination mechanism in V2X networks. Therefore, a node, a vehicle or the TMC can subscribe to a content topic through a rendezvous node to receive zero or many messages as and when they are published, without re-expressing its interest. Thus, many nodes may subscribe to the same topic, which allows the use of NDN's multicast capabilities to reduce the communication load in the network. The framework has been designed to meet V2X communication requirements in terms of mobility and dynamics. It has been implemented based on the NS-3 module ndnSIM, and SUMO has been used to generate the vehicle mobility scenario. The evaluation shows that this extension reduces the number of packets disseminated to subscribers and efficiently handles the mobility of vehicles. Moreover, the data collection fulfills the delay requirement of traffic safety applications. The evaluation of NDN-Q shows that it reduces the number of packets collected from data sources, efficiently handles the mobility of vehicles, and delivers query results in a reasonable time, i.e. 280 ms when a reduce process is performed in intermediate nodes, or less than 50 ms without any reduce process. In summary, we presented an innovative data collection and dissemination framework for V2X networks based on NDN. The framework provides a fully distributed query mechanism that does not require knowledge of the network organization and vehicles, and a Pub/Sub mechanism that takes into consideration V2X network characteristics and needs in terms of mobility, periodic disconnectivity, publish versioning, tagging with location, etc.
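A toy sketch of the hop-by-hop reduce idea in NDN-Q follows: each intermediate node aggregates its children's responses before returning a single Data packet towards the query source. The tree, the leaf readings and the averaging reduce function are illustrative assumptions, not the evaluated implementation.

```python
# Toy hop-by-hop "reduce" over a dissemination tree rooted at the query source.
TREE = {                 # node -> children
    "TMC": ["RSU1", "RSU2"],
    "RSU1": ["car1", "car2"],
    "RSU2": ["car3"],
    "car1": [], "car2": [], "car3": [],
}
SPEED = {"car1": 60, "car2": 80, "car3": 100}   # local data at the leaves (km/h)

def collect(node, reduce_fn=lambda values: sum(values) / len(values)):
    """Answer the query at a leaf, or reduce the children's answers in place."""
    children = TREE[node]
    if not children:
        return SPEED[node]
    return reduce_fn([collect(child, reduce_fn) for child in children])

print(collect("TMC"))   # a single aggregated value reaches the query source
```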
Maximum Power Transfer Of PV-fed Inverter-based Distributed Generation With Improved Voltage Regulation Using Flywheel Energy Storage Systems
Authors: Hisham El Deeb, Ahmed Massoud, Ahmed Abbas, Shehab Ahmed, Ayman Abdel-khalik and Mohamed Daoud
One of the main issues associated with the high penetration of PV distributed generation (DG) systems in low voltage (LV) networks is the overvoltage challenge. The amount of power injected into the grid is directly related to the voltage at the point of common coupling (PCC), which necessitates limiting the injected power to conservative values compared to the available capacity from the PV panels, particularly at light loading. In order to mitigate the tradeoff between injecting the maximum amount of electrical power and the voltage rise phenomenon, many control schemes have been suggested to optimize the operation of PV DG energy sources while maintaining safe voltage levels. Unlike these conventional methods, this paper proposes a combined PV inverter-based distributed generation and flywheel energy storage system to ensure improved voltage regulation as well as making use of the maximum available power from the PV source at any instant, decoupling its relation to the terminal voltage. The proposed scheme was simulated in Matlab/Simulink and verified experimentally.
Enhancing RPL Resilience Against Packet Dropping Insider Attacks
Authors: Karel Heurtefeux, Ochirkhand Erdene-ochir, Nasreen Mohsin and Hamid Menouar
To gather and transmit data, low-cost wireless devices are often deployed in open, unattended and possibly hostile environments, making them particularly vulnerable to physical attacks. Resilience is needed to mitigate such inherent vulnerabilities and risks related to security and reliability. In this work, the Routing Protocol for Low-Power and Lossy Networks (RPL) is studied in the presence of packet-dropping malicious compromised nodes. Random behavior and data replication have been introduced into RPL to enhance its resilience against such insider attacks. The classical RPL and its resilient variants have been analyzed through simulations. The resilient techniques introduced into RPL significantly enhance its resilience against attacks by providing route diversification to exploit the redundant topology created by wireless communications. In particular, the proposed resilient RPL exhibits better performance in terms of delivery ratio (up to 40%), fairness and connectivity while remaining energy efficient.
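A simplified sketch of the two resilience ingredients (probabilistic deviation from the preferred parent, plus replication of packets over a second parent) is given below; the probability, parent set and replication policy are illustrative assumptions, not the parameters evaluated in the paper.

```python
# Toy illustration of random parent selection and data replication in RPL.
import random

def choose_parent(parent_set, preferred, p_random=0.3):
    """Occasionally deviate from the RPL preferred parent to diversify routes."""
    if len(parent_set) > 1 and random.random() < p_random:
        return random.choice([p for p in parent_set if p != preferred])
    return preferred

def forward(packet, parent_set, preferred, replicate=True):
    """Send the packet via one parent; optionally replicate via a second one."""
    first = choose_parent(parent_set, preferred)
    copies = [(packet, first)]
    if replicate and len(parent_set) > 1:
        second = random.choice([p for p in parent_set if p != first])
        copies.append((packet, second))
    return copies

print(forward("sensor-reading-42", parent_set=["A", "B", "C"], preferred="A"))
```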
Transformation Of Online Social Networks And Communication Tools: A Technological Point Of View
Online social networks and online real-time communication tools have gained great popularity in recent years. In the Arab spring they emerged as one of the main tools to exchange ideas and to organize activities. Many users were unaware that their communication and their interaction patterns were traced. After the Arab spring and after the information revealed by Edward Snowden, users are becoming security aware and look for alternative technologies to communicate securely. Our contribution in this presentation is twofold. First, we present the current technological developments in computer science and networking technologies that aim to provide tools to users so that their communication is secured and impossible to track. Based on peer-to-peer technologies, known from file-sharing applications, several solutions for secure, reliable and untraceable communication already exist. In addition, opportunistic and delay-tolerant networks provide asynchronous "online" communication which does not even use the Internet infrastructure, thus making it even harder to identify and to trace. As a second contribution, we present a technological framework based on peer-to-peer networking, which allows platforms for online social networks to be built. In depth, we show how this framework is capable of providing the common functionality of online social networks while at the same time being privacy-aware and impossible to trace or to shut off. The framework uses fully decentralized data storage and sophisticated cryptography for user management, secure communication and access control to the users' data. The quality of the presented framework, in terms of performance and communication costs, has been evaluated in simulations with up to 10,000 network nodes as well as in tests with up to 30 participants. Our findings are that, up to now, the usability, economic pressure and security of (centralized) communication tools have been incompatible. This observation changes with decentralized solutions, which do not come with economic pressure. Both secure and usable solutions are about to emerge, fully distributed and secure, which will be impossible to shut down or to trace. For the future it will be interesting to see whether the presented decentralized technology will be ignored or whether it will disrupt the economics and the market of social networks, as was the case with Napster and the music industry, a case in which decentralized peer-to-peer networking unfolded its potential.
Data Science at QCRI
Authors: Divy Agrawal, Laure Berti, Hossam Hammady, Prasenjit Mitra, Mourad Ouzzani, Paolo Papotti, Jorge Quiane Ruiz, Nan Tang, Yin Ye, Si Yin and Mohamed Zaki
The Data Analytics group at QCRI has embarked on an ambitious endeavor to become a premier world-class research group in Data Science by tackling diverse research topics related to information extraction, data quality, data profiling, data integration, and data mining. We will present our ongoing projects to overcome different challenges encountered in Big Data Curation, Big Data Fusion, and Big Data Analytics. (1) Big Data Curation: Due to complex processing and transformation layers, data errors proliferate rapidly and sometimes in an uncontrolled manner, thus compromising the value of information and impacting data analysis and decision making. While data quality problems can have crippling effects and no end-to-end off-the-shelf solutions to (semi-)automate error detection and correction existed, we built a commodity platform, NADEEF, that can be easily customized and deployed to solve application-specific data quality problems. This project implements techniques that exploit several facets of data curation, including: assisting users in the semi-automatic discovery of data quality rules; involving users (and crowds) in the cleaning process with simple and effective questions; and unifying logic-based methods, such as declarative data quality rules, with quantitative statistical cleaning methods. Moreover, the implementation of error detection and cleaning algorithms has been revisited to work on top of distributed processing platforms such as Hadoop and Spark. (2) Big Data Fusion: When data is combined from multiple sources, it is difficult to assure its veracity and it is common to find inconsistencies. We have developed tools and systems that tackle this problem from two perspectives: (a) In order to find the true value among two or more conflicting ones, we automatically compute the reliability (accuracy) of the sources and the dependencies among them, such as who is copying from whom. Such information allows much higher precision than simple majority voting and ultimately leads to values that are closer to the truth. (b) Given an observed problem over the integrated view of the data, we compute explanations for it over the sources. For example, given erroneous values in the integrated data, we can explain which source is making mistakes. (3) Big Data Analytics: Data analysis tasks typically employ complex algorithmic computations that are hard and tedious to express in current data processing platforms. To cope with this problem, we are developing Rheem, a data processing framework that provides an abstraction on top of current data processing platforms. This abstraction allows users to focus only on the logic of their applications and developers to provide ad-hoc implementations (optimizations) over existing data processing platforms. We have already created two different applications using Rheem, namely data repair and data mining. Both have shown benefits in terms of the expressivity of the Rheem abstraction as well as in terms of query performance through ad-hoc optimizations. Additionally, we have developed a set of scalable data profiling techniques to understand relevant properties of big datasets in order to be able to improve data quality and query performance.
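As a small illustration of the declarative-rule style of cleaning that NADEEF supports, the sketch below detects violations of a functional dependency; the rule and the records are invented for the example and are not taken from the project.

```python
# Detect violations of a functional dependency "zipcode -> city".
from collections import defaultdict

records = [
    {"id": 1, "zipcode": "00001", "city": "Doha"},
    {"id": 2, "zipcode": "00001", "city": "Al Rayyan"},   # violates the FD
    {"id": 3, "zipcode": "00002", "city": "Al Khor"},
]

def fd_violations(rows, lhs, rhs):
    """Group rows by the FD's left-hand side and flag groups with >1 rhs value."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[lhs]].append(row)
    return [g for g in groups.values() if len({r[rhs] for r in g}) > 1]

for group in fd_violations(records, "zipcode", "city"):
    print("conflicting records:", [r["id"] for r in group])
```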
Up & Away: A Visually-controlled Easy-to-deploy UAV Cyber Physical Testbed
Authors: Ahmed Saeed, Azin Neishaboori, Khaled Harras and Amr Mohamed
Cyber-Physical Systems (CPS) rely on advancements in fields such as robotics, mobile computing, sensor networks, controls, and communications to advance complex real-world applications including aerospace, transportation, factory automation, and intelligent systems. The multidisciplinary nature of effective CPS research diverts specialized researchers' efforts towards building expensive and complex testbeds for realistic experimentation, thus delaying or taking the focus away from the core potential contributions to be made. We present Up and Away (UnA), a cheap and generic testbed composed of multiple autonomously controlled Unmanned Aerial Vehicle (UAV) quadcopters. Our choice of UAVs is due to their deployment flexibility and maneuverability, enabling a wide range of CPS research evaluations in areas such as 3D localization, camera sensor networks, target surveillance, and traffic monitoring. Furthermore, we provide a vision-based localization solution that uses color tags to identify different objects in varying light intensity environments, and we use that system to control the UAVs within a specific area of interest. However, UnA's architecture is modular so that the localization system can be replaced by any other system (e.g. GPS) as deployment conditions change. UnA enables interaction with real-world objects, treating them as CPS input, and uses the UAVs to carry out CPS-specific tasks while providing sensory information from the UAV's array of sensors as output. We also provide an API that allows the integration of simulation code that obtains input from the physical world (e.g. targets to track) and then provides control parameters (i.e. the number of quadcopters and destination coordinates) to the UAVs. The UnA architecture is depicted in Figure 1a. To demonstrate the promise of UnA, we use it to evaluate another research contribution we make in the area of smart surveillance, particularly target coverage using mobile cameras. Optimal camera placement to maximize coverage has been shown to be NP-complete for both area and target coverage. Motivated by the need for practical, computationally efficient algorithms to autonomously control mobile visual sensors, we propose efficient near-optimal algorithms for finding the minimum number of cameras to cover a high ratio of targets. First, we develop a basic method, called cover-set coverage, to find the location/direction of a single camera for a group of targets. This method is based on finding candidate points for each possible camera direction and spanning the direction space by discretizing camera pans. We then propose two algorithms, (1) Smart Start K-Camera Clustering (SSKCAM) and (2) Fuzzy Coverage (FC), which divide targets into multiple clusters and then use the cover-set coverage method to find the camera location/direction for each cluster. Overall, we were able to integrate the implementation of these algorithms with the UnA testbed to witness real-time assessment of our algorithms, as shown in Figure 1b.
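A greatly simplified stand-in for the clustering step is sketched below: targets are grouped with plain k-means and one camera is assigned per cluster centroid. The real cover-set coverage method additionally discretizes camera pans and selects candidate points, so this only conveys the overall flow, not the authors' algorithms.

```python
# Toy "cluster targets, one camera per cluster" sketch.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

targets = np.random.default_rng(2).uniform(0, 100, size=(20, 2))  # toy positions
labels, centers = kmeans(targets, k=3)
for j, c in enumerate(centers):
    print(f"camera {j}: position near {np.round(c, 1)}, covers {np.sum(labels == j)} targets")
```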
Measuring the Effectiveness of Building and Developing a Computerized Program for Managing and Analyzing Arabic Citations
Authors: Saleh Alzeheimi and Akram Zeki
Several studies on the reasons for the weakness of Arabic intellectual production and of Arabic digital content in international databases have pointed to the absence of computerized programs dedicated to analyzing the (Arabic) references and citations contained in this production, and to the lack of automated tools that provide statistical (bibliometric) indicators highlighting Arab research trends, helping decision makers in research and academic institutions, as well as researchers, to steer Arab scientific research, identifying strengths and weaknesses in the Arabic disciplines under study, and presenting the systematic map of Arabic science and its branches. In contrast, advanced tools and international systems, such as Scopus and the Journal Citation Report (JCR), offer solutions for analyzing foreign intellectual production (published in English); however, these systems do not support works published in Arabic, are not free of charge, and do not allow free and open use by institutions and individuals. This study therefore aims to measure the effectiveness of building and developing a computerized program for managing and analyzing Arabic bibliometric studies and citations. The study adopts two approaches to achieve its objectives: (1) building an open-source computerized program using PHP and MySQL, so that developers can continue to improve it and so that it is freely available to specialists in bibliometric studies, making it the first Arabic open-source program for automated bibliometric analysis; and (2) testing the program on peer-reviewed Arabic journals and measuring the validity of its bibliometric reports and statistics, their conformity with bibliometric laws such as Lotka's law and Bradford's law, and the extent to which the program provides the features and indicators offered by international systems that support foreign intellectual production, such as Scopus, Google Scholar, and Science Direct. Accordingly, the study follows a constructive and experimental methodology to measure the effectiveness of an open-source computerized program for Arabic bibliometric studies (the Arabic Citation Engine). The program will be evaluated by entering data from a sample of Arabic articles published in the journals issued by the International Islamic University Malaysia (MUII), and by examining the reports and scientific indicators produced by the program and the accuracy of their output according to the bibliometric laws.
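One of the bibliometric checks mentioned above, conformity with Lotka's law, can be sketched as follows; the author productivity counts are invented purely to show the computation and are not data from the study.

```python
# Compare observed author productivity against Lotka's law (authors with n
# papers are expected to number roughly C / n**2, with C the count for n = 1).
from collections import Counter

papers_per_author = [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 4]   # made-up sample
observed = Counter(papers_per_author)                     # n_papers -> n_authors

c = observed[1]                       # Lotka's constant estimated from n = 1
for n in sorted(observed):
    expected = c / n ** 2
    print(f"{n} paper(s): observed {observed[n]} authors, "
          f"Lotka expects about {expected:.1f}")
```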
Energy Storage As An Enabling Technology For The Smart Grid
Authors: Omar Ellabban and Haitham Abu-rub
In today's world, the need for more energy seems to be ever increasing. The high cost and limited sources of fossil fuels, in addition to the need to reduce greenhouse gases, have made renewable energy sources (RES) attractive in today's world economy. However, the fluctuating and intermittent nature of these RES causes variations of power flow that can significantly affect the stability and operation of the electrical grid. In addition, the power output of these RES is not as easy to adjust to changing demand cycles as the output from traditional power sources. To overcome these problems, energy from these RES must be stored when excess is produced and then released when production levels are less than the required demand. Therefore, in order for RES to become completely reliable as primary sources of energy, energy storage systems (ESS) are a crucial factor. The impact of ESS on the future grid is receiving more attention than ever from system designers, grid operators and regulators. Energy storage technologies have the potential to support our energy system's evolution; they can be used for multiple applications, such as energy management, backup power, load leveling, frequency regulation, voltage support, and grid stabilization. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Furthermore, an assessment of the dynamic performance of energy storage technologies for the stabilization and control of the power flow of the emerging smart grid is presented. ESS can enhance operational security and ensure the continuity of energy supply in future smart grids.
A Novel Solution For Addressing Network Firewall Issues In Remote Laboratory Development
Authors: Ning Wang, Xuemin Chen, Michael Ho, Hamid Parsaei and Gangbing Song
The increased use of remote laboratories for online education has made network security issues ever more critical. To defend against numerous new types of potential attacks, the complexity of network firewalls has been significantly increased. Consequently, the network firewall will inevitably limit real-time remote experimental data transmission. To solve the issue of traversing network firewalls, we designed and implemented a novel real-time experimental data transmission solution for remote laboratory development, which includes two parts: real-time experiment control data transmission and real-time experiment video streaming transmission. To implement real-time experiment control data transmission, a new web server software architecture was designed and developed to allow the traversing of network firewalls. With this new software architecture, the public network port 80 can be shared between Node.js, a stable server-side software engine that supports real-time communication web applications, and the Apache web server software system. With this approach, the Apache web server application still listens on the public network port 80, but any client requests for the Node.js web server application arriving through that port are forwarded to a special network port on which the Node.js web server application is listening. Accordingly, a new solution in which both the Apache and Node.js web server applications work together via an HTTP proxy built with the Node-HTTP-Proxy software package is implemented on the server side. With this new real-time experiment control and data transmission solution, the end user can control remote experiments and view experimental data in the web browser without firewall issues and without the need for third-party plug-ins. It also provides a new approach for remote experiment control and real-time data transmission based on the pure HTTP protocol. To implement the real-time experiment video transmission part, we developed a complete novel solution using the HTTP Live Streaming (HLS) protocol and FFMPEG, a powerful cross-platform command-line video transcoding/encoding software package, on the server side. In this paper, a novel real-time video streaming transmission approach based on HLS for remote laboratory development is presented. With this new solution, terminal users can view the real-time experiment live video stream on any portable device without any firewall issues or the need for a third-party plug-in. We have successfully implemented this novel real-time experiment data transmission solution in the development of the remote SVP experiment and the remote SMA experiment. End users can now conduct the SVP and SMA remote experiments and view the experiment data and video in real time through web browsers anywhere with an internet connection, without any third-party plug-in. Consequently, this novel real-time experiment data transmission solution gives the unified framework significant improvements.
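The port-sharing idea can be conveyed with a small sketch; note that the sketch below is written in Python purely for illustration, whereas the paper's actual solution uses Apache, Node.js and Node-HTTP-Proxy. The path prefix and backend port are assumptions, not values from the paper.

```python
# Conceptual sketch: one listener serves normal pages and forwards real-time
# requests to a second local server on another port (a reverse-proxy pattern).
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

REALTIME_BACKEND = "http://127.0.0.1:8000"   # assumed internal real-time server

class FrontDoor(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/realtime/"):
            # proxy the request to the internal real-time data server
            with urllib.request.urlopen(REALTIME_BACKEND + self.path) as resp:
                body = resp.read()
        else:
            body = b"<html><body>ordinary lab page</body></html>"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 80), FrontDoor).serve_forever()   # port 80 needs privileges
```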
Power System Stabilizer Design Based On Honey-bee Mating Optimization Algorithm
Authors: Abolfazl Halvaei Niasar, Dariush Zamani and Hassan Moghbeli
Power system stability is one of the main factors in the performance of an electrical system. A control system must keep the frequency and voltage magnitude at constant levels under any disturbance, such as a sudden increase in load, the loss of a generator, or the tripping of a transmission line. In this paper, the Honey-Bee Mating Optimization (HBMO) algorithm has been used to design a power system stabilizer. The algorithm is based on the mating between the queen and the bees, and this meta-heuristic honey-bee mating optimization algorithm is considered an intelligent algorithm. Simulation results show that the HBMO algorithm provides a simple means of solving optimization problems. It adapts to changes in the system and offers more flexible parameters in comparison with other methods. Furthermore, in terms of performance, it shows a suitable standard deviation and convergence of the approximation.
Sensorless Direct Power Control Of Induction Motor Drive Using Artificial Neural Network
Authors: Abolfazl Halvaei Niasar, Hassan Moghbeli and Hossein Rahimi Khoei
This paper proposes the design of a sensorless induction motor drive based on the direct power control (DPC) technique. The principle of the DPC technique is presented, and the possibilities of direct power control for induction motors (IMs) fed by a space vector pulse-width modulation inverter (SV-PWM) are studied. The simulation results show that the DPC technique enjoys all the advantages of previous methods, such as fast dynamics and ease of implementation, without their drawbacks. Simulations are carried out for the closed-loop speed control system under various load conditions to verify the proposed methods. Results show that DPC of IMs works well with output power and flux control. Moreover, to reduce the cost of the drive and enhance reliability, an effective sensorless strategy based on an artificial neural network (ANN) is developed to estimate the rotor position and motor speed. The developed sensorless scheme is a new model reference adaptive system (MRAS) speed observer for direct power control of induction motor drives. The proposed MRAS speed observer uses the current model as an adaptive model. The neural network has been designed and trained online by employing a back-propagation network (BPN) algorithm. The estimator is simulated in Matlab/Simulink. Simulation results confirm the performance of the ANN-based sensorless DPC induction motor drive under various conditions.
Sensor Fault Detection And Isolation System
Authors: Cheng-ken Yang, Alireza Alemi and Reza Langari
The purpose of this paper is to provide an energy security strategy for petroleum production and processing in the grand challenges. Fault detection and diagnosis is the central component of abnormal event management (AEM) [1]. According to the International Federation of Automatic Control (IFAC), a fault is defined as an unpermitted deviation of at least one characteristic property or parameter of the system from the acceptable/usual/standard condition [2-4]. Generally, there are three parts in a fault diagnosis system: detection, isolation, and identification [5, 6, 7]. Depending on their capabilities, such systems are called FD (fault detection), FDI (fault detection and isolation) or FDIA (fault detection, isolation and analysis) systems [5]. As the need for energy grows rapidly, energy security becomes an important issue, especially in petroleum production and processing. Its importance can be considered mainly from the following perspectives: higher system performance, product quality, human safety, and cost efficiency [5, 8]. With this in mind, the purpose of this research is to develop a Fault Detection and Isolation (FDI) system that is capable of diagnosing multiple sensor faults in nonlinear cases. In order to bring this study closer to real-world applications in the oil industry, the system parameters of the applied system are assumed to be unknown. In the first step of the proposed method, phase space reconstruction techniques are used to reconstruct the phase space of the applied system. This step aims to infer the system properties from the collected sensor measurements. The second step is to use the reconstructed phase space to predict future sensor measurements, and residual signals are generated by comparing the actual measurements to the predicted measurements. Since, in practice, residual signals will not be exactly zero in the fault-free situation, the Multiple Hypothesis Shiryayev Sequential Probability Test (MHSSPT) is introduced to further process these residual signals, and the diagnostic results are presented in terms of probability. In addition, the proposed method is extended to the non-stationary case by using the conservation/dissipation property in phase space. The proposed method is examined with both simulated data and real process data. A three-tank model, built according to the nonlinear laboratory setup DTS200, is used for generating simulated data. In addition, real process data collected from a sugar factory actuator system are used to examine the proposed method. According to our results obtained from simulations and experiments, the proposed method is capable of indicating both healthy and faulty situations. Finally, we emphasize that the proposed approach is not limited to applications in petroleum production and processing. For example, it can also be applied to enhance water quality and avoid discharges, such as leakage, in water resource management. Therefore, the proposed approach benefits not only energy security but also other grand challenge issues.
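A simplified sketch of the first two steps (delay embedding of the sensor signal and a nearest-neighbour prediction whose error forms the residual) is given below; the embedding parameters, the synthetic signal and the injected fault are illustrative, not the DTS200 or sugar-factory settings, and the MHSSPT stage is omitted.

```python
# Delay embedding + nearest-neighbour prediction residuals (illustrative).
import numpy as np

def embed(x, dim=3, tau=2):
    """Takens-style delay embedding of a 1-D sensor signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def residuals(train, test, dim=3, tau=2):
    """Predict each test sample from its nearest neighbour in the training
    phase space; the prediction errors form the residual signal."""
    E_train, E_test = embed(train, dim, tau), embed(test, dim, tau)
    res = []
    for k in range(len(E_test) - 1):
        j = np.argmin(np.linalg.norm(E_train[:-1] - E_test[k], axis=1))
        predicted = E_train[j + 1, -1]          # where the neighbour went next
        res.append(E_test[k + 1, -1] - predicted)
    return np.array(res)

t = np.arange(0, 60, 0.1)
healthy = np.sin(t)                              # fault-free reference signal
faulty = np.sin(t) + (t > 40) * 0.5              # sensor bias appears at t = 40 s
r = residuals(healthy, faulty)
print("max |residual| before fault:", np.abs(r[:350]).max())
print("max |residual| after fault: ", np.abs(r[400:]).max())
```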
Accurate Characterization Of BiCMOS BJT Across DC-67 GHz With On-wafer Measurement And EM De-embedding
Authors: Juseok Bae, Scott Jordan and Nguyen Cam
The complementary metal oxide semiconductor (CMOS) and bipolar-complementary metal oxide semiconductor (BiCMOS) technologies offer low power dissipation, good noise performance, high packing density, et cetera, in analog and digital circuit design. They have contributed significantly to the advancement of wireless communication and sensing systems and are presently indispensable devices in these systems. As the technologies and device performance have advanced into the millimeter-wave regime over the last two decades, accurate S-parameters of CMOS and BiCMOS devices at millimeter-wave frequencies are in high demand for millimeter-wave radio-frequency integrated-circuit (RFIC) design. The accuracy of these S-parameters is absolutely essential for accurately extracting the device parameters and small- and large-signal models. Conventional extraction techniques using both an impedance standard substrate and a de-embedding technique have been replaced by on-wafer calibration techniques implementing calibration standards fabricated on the same wafer together with the device under test (DUT), by virtue of accurate characterization over a wide frequency range and at high frequencies. However, some challenges for on-wafer calibration remain when the calibration is conducted over a wide frequency range covering millimeter-wave frequencies with a DUT such as a bipolar junction transistor (BJT). The ends of the interconnects for the open and load standards are inherently very close to each other, since the spacing depends on the gap between the base (or collector) and the emitter of the BJT (about 0.25 μm). This not only causes significant gap and open-end fringing capacitances, which lead to substantial undesired effects for device characterization at millimeter-wave frequencies, but also makes it impossible to place resistors within such narrow gaps for the load standard design. In order to resolve the structural issue of the conventional on-wafer calibration standards, a new method implementing both on-wafer calibration and electromagnetic (EM)-based de-embedding has been developed. In the newly developed technique, an appropriate spacing in the on-wafer calibration standards, which minimizes the parasitic capacitance between the close open ends and leaves enough space to place the load standard's resistors, is determined based on EM simulations, and the non-calibrated part within the spacing, consisting of interconnects and vias, is extracted by the EM-based de-embedding. The developed procedure with on-wafer calibration and EM-based de-embedding characterizes the S-parameters of BJTs in 0.18-µm SiGe BiCMOS technology from DC to 67 GHz. The measured results show sizable differences in insertion loss and phase between the on-wafer characterizations with and without the EM-based de-embedding, demonstrating that the developed on-wafer characterization with EM-based de-embedding is needed for accurate characterization of devices at millimeter-wave frequencies, which is essential for the design of millimeter-wave wireless communication and sensing systems.
-
-
-
Visual Scale-adaptive Tracking For Smart Traffic Monitoring
Authors: Mohamed Elhelw, Sara Maher and Ahmed SalaheldinThis paper presents a novel real-time scale-adaptive visual tracking framework and its use in smart traffic monitoring, where the framework robustly detects and tracks vehicles from a stationary camera. Existing visual tracking methods often employ semi-supervised appearance modeling, where a set of samples is continuously extracted around the vehicle to train a discriminative classifier separating the vehicle from the background. While these methods have proven advantageous, several issues remain. One is the tradeoff between high adaptability (prone to drift) and preserving the original vehicle appearance (susceptible to tracking loss under significant appearance variations). Another is vehicle scale changes due to the perspective camera effect, which increase the potential for inaccurate updates and, subsequently, visual drift. Still, scale adaptability has received little attention in vision-based discriminative trackers. In this paper we propose a three-step Scale Adaptive Object Tracking (SAOT) framework that adapts to scale and appearance changes. The framework is divided into three phases: (1) vehicle localization using a diverse ensemble, (2) scale estimation, and (3) data association, where detected and tracked vehicles are correlated. The first step computes the vehicle position using an ensemble built on compressed low-dimensional feature subsets projected from a high-dimensional feature space by random projections. This provides the diversity needed to accommodate individual classifier errors and different adaptation rates. The scale estimation step, applied after vehicle localization, is computed from matched points between a pre-stored template and the localized vehicle. This not only estimates the new scale of the vehicle but also serves as a correction step that prevents drifting by estimating the displacements between correspondences. The data association step is subsequently applied to link vehicles detected in the current frame with the tracked vehicles. Data association must consider factors such as the absence of a detected target, false detections and ambiguity. Figure 1 illustrates the framework in operation. While the vehicle detection phase is executed per frame, the continuous tracking procedure ensures that all vehicles in the scene, no matter how complex the scene is, are correctly accounted for. The performance of the proposed Scale Adaptive Object Tracking (SAOT) algorithm is further evaluated on a different set of sequences with scale and appearance changes, blurring, a moving camera and illumination changes. SAOT was compared to three established trackers in the literature: Compressive Tracking, Incremental Learning for Visual Tracking and the Random Projection with Diverse Ensemble Tracker, using standard visual tracking evaluation datasets [4]. The initial target position for all sequences was initialized using manual ground truth. Centre Location Error (CLE) and Recall are calculated to evaluate the methods. Table 1 presents the CLE errors, with recall in parentheses, measured on a set of 2 sequences with different challenges. It clearly demonstrates that SAOT performs better than the other trackers.
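To make the scale-estimation step concrete, here is a minimal sketch, illustrative only and not the authors' code: given keypoint correspondences between the stored template and the currently localized vehicle, the scale change can be estimated robustly as the median ratio of pairwise point distances.

```python
# Minimal sketch (illustrative only): scale change from keypoint matches
# between a stored template and the currently localized vehicle.
import numpy as np

def estimate_scale(template_pts, current_pts):
    # template_pts, current_pts: (N, 2) arrays of matched keypoint coordinates.
    ratios = []
    n = len(template_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_t = np.linalg.norm(template_pts[i] - template_pts[j])
            d_c = np.linalg.norm(current_pts[i] - current_pts[j])
            if d_t > 1e-6:
                ratios.append(d_c / d_t)
    # The median is robust to a few bad correspondences.
    return np.median(ratios) if ratios else 1.0
```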
-
-
-
"ElectroEncephaloGram (EEG) Mental Task Discrimination", Digital Signal Processing -- Master Of Science, Cairo University
More Less"ElectroEncephaloGram "EEG" Mental Task Discrimination", Master of Science dissertation, Cairo University, 2010. Recent advances in computer hardware and signal processing have made possible the use of EEG signals or "brain waves" for communication between humans and computers. Locked-in patients have now a way to communicate with the outside world, but even with the last modern techniques, such systems still suffer communication rates on the order of 2-3 tasks/minute. In addition, existing systems are not likely to be designed with flexibility in mind, leading to slow systems that are difficult to improve. This Thesis is classifying different mental tasks through the use of the electroencephalogram (EEG). EEG signals from several subjects through channels (electrodes) have been studied during the performance of five mental tasks: Baseline task for which the subjects were asked to relax as much as possible, Multiplication task for which the subjects were given nontrivial multiplication problem without vocalizing or making any other movements, Letter composing task for which the subject were instructed to mentally compose a letter without vocalizing (imagine writing a letter to a friend in their head),Rotation task for which the subjects were asked to visualize a particular three-dimensional block figure being rotated about its axis, and Counting task for which the subjects were asked to imagine a blackboard and to visualize numbers being written on the board sequentially. The work presented here maybe a part of a larger project, whose goal is to classify EEG signals belonging to a varied set of mental activities in a real time Brain Computer Interface, in order to investigate the feasibility of using different mental tasks as a wide communication channel between people and computers.
-
-
-
Designing A Cryptosystem Based On Invariants Of Supergroups
Authors: Martin Juras, Frantisek Marko and Alexander ZubkovThe work of our group falls within the area of Cyber Security, which is one of Qatar's Research Grand Challenges. We are working on designing a new public key cryptosystem that can improve the security of communication networks. The most widely used cryptosystems at present (such as RSA) are based on the difficulty of factoring numbers that are constructed as a product of two large primes. The security of such systems has been put in doubt since they can be attacked with the help of quantum computers. We are working on a new cryptosystem that is based on different (noncommutative) structures, namely algebraic groups and supergroups. Our system is based on the difficulty of computing invariants of actions of such groups. We have designed a system that uses invariants of (super)tori of general linear (super)groups. Effectively, we are building a "trapdoor function" that enables us to find a suitable invariant of high degree and encode the message quickly and efficiently, while leaving an attacker with the computationally very expensive and difficult task of finding an invariant of that high degree. As with every cryptosystem, the possibility of breaking it has to be scrutinized very carefully, and the system has to be investigated independently by other researchers. We have established theoretical results about minimal degrees of invariants of a torus that inform the possible selection of parameters of our system. We continue to obtain more general theoretical results and are working towards an implementation and testing of this new cryptosystem. A second part of our work is an extension from the classical case of algebraic groups to the case of algebraic supergroups. We are concentrating especially on general linear supergroups. We have described the center of the distribution superalgebras of the general linear supergroups GL(m|n) using the concept of an integral in the sense of Haboush, and we have computed explicitly all generators of invariants of the adjoint action of the group GL(1|1) on its distribution algebra. The center of the distribution algebra is related via the Harish-Chandra map to infinitesimal characters. Understanding these characters and blocks would lead us to a description of the linkage principle, that is, of the composition factors of induced modules. Finding and proving a linkage principle for supergroups over fields of positive characteristic is one of our main interests. This extends classical results from representation theory that give scientists, mathematicians and physicists a tool to find a theoretical model in which the fundamental rules of symmetries of the space continuum are realized. A better theoretical background could lead to a better understanding of experimental data and to predictions confirming or contradicting our current understanding of the universe. As has happened many times in the past, finding the right point of view and developing a new language can often lead to a different level of understanding. Therefore we value the theoretical part of our work as much as the practical work related to the cryptosystem.
-
-
-
Experimental Results On The Performance Of Visible Light Communication Systems
Authors: Mohamed Kashef, Mohamed Abdallah and Khalid QaraqeEnergy-efficient wireless communication networks have become essential due to the associated environmental and financial benefits. Visible light communication (VLC) is a promising candidate for achieving energy-efficient communication. Light emitting diodes (LEDs) have been introduced as energy-efficient light sources, and their light intensity can be modulated to transfer data wirelessly. As a result, VLC using LEDs can be considered an energy-efficient solution that exploits illumination energy, which is already being consumed, for data transmission. We have set up a fully operational VLC testbed composed of both optical transmitters and receivers, and we use this system to test the performance of VLC systems. In this work, we discuss the results obtained by running the experiment with different system parameters. We apply different signaling schemes at the LED transmitter, including optical orthogonal frequency division multiplexing (O-OFDM). We also validate our previously obtained analytical results on applying power control in VLC cooperative networks.
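One widely used O-OFDM variant is DC-biased optical OFDM (DCO-OFDM). The sketch below is illustrative only and not the testbed code: it shows how Hermitian-symmetric subcarriers yield a real-valued waveform that, after DC biasing and clipping, can drive an LED; the FFT size and bias are placeholder values.

```python
# Minimal illustration (not the testbed code) of DCO-OFDM: Hermitian-symmetric
# subcarriers give a real IFFT output, and a DC bias keeps the LED drive
# signal non-negative.
import numpy as np

def dco_ofdm_symbol(data_syms, n_fft=64, dc_bias=2.0):
    # data_syms: complex QAM symbols for subcarriers 1 .. n_fft/2 - 1.
    assert len(data_syms) == n_fft // 2 - 1
    X = np.zeros(n_fft, dtype=complex)
    X[1: n_fft // 2] = data_syms
    X[n_fft // 2 + 1:] = np.conj(data_syms[::-1])   # Hermitian symmetry
    x = np.fft.ifft(X).real                          # real-valued time signal
    return np.clip(x + dc_bias, 0.0, None)           # non-negative LED drive
```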
-
-
-
Cerebral Blood Vessel Segmentation Using Gauss-Hermite Quadrature Filtering And Automatic Seed Selection
Background & Objective: Blood vessel segmentation has various applications such as proper diagnosis, surgical planning, and simulation. However, the common challenges faced in blood vessel segmentation are mainly vessels of varying width and contrast changes. In this paper, we propose a segmentation algorithm in which: (1) a histogram-based approach is proposed to determine the initial patch (seeds), and (2) on this patch, a Gauss-Hermite quadrature filter is applied across different scales to handle vessels of varying width with high precision. Subsequently, a level set method is used to perform segmentation on the filter output. Methods: In the spatial domain, a Gauss-Hermite quadrature filter is basically a complex filter pair, where the real component is a line filter that detects linear structures and the imaginary component is an edge filter that detects edge structures; the filter pair is used for combined line-edge detection. The local phase is the argument of the complex filter response and determines the type of structure (line/edge), while the magnitude of the response determines the strength of the structure. Since the filter is applied in different directions, the filter responses are combined, by summing over all directions, to produce an orientation-invariant phase map. We use six filters with center frequency π/2. To handle vessels of varying width, a multi-scale integration approach is implemented: vessels of different width appear both as lines and as edges across different scales, and these scales are combined to produce a global phase map that is used for segmentation. The resulting global phase map contains detailed information about line and edge structures. For blood vessel segmentation, a local phase of 90 degrees indicates edge structures. Therefore, it suffices to consider only the real part of the quadrature filter response: edges are found at its zero crossings, whereas positive and negative values are obtained inside and outside line structures, respectively. A level set (LS) approach is therefore utilized, using the real part of the phase map as a speed function to drive the deforming contour towards the vessel boundary. In this way, the blood vessel boundary is extracted. An initial patch on the desired image object is required by this algorithm to start calculating the local phase map. It is obtained by first selecting a few possible partitions using peaks (local maxima) in the intensity histogram; the optimal number of seeds is then obtained by iteratively clustering these peaks using their histogram heights and grey-scale differences. The seeds around the object form the patch. Results & Conclusion: The proposed method has been tested on six head MR angiography subjects with resolution 416×512×112. We use six filters of size 7×7×7 and four scales in this experiment. The average time required by MATLAB R14 to perform the segmentation is 3 minutes per subject on a Core 2 Duo processor with 2 GB RAM (without optimization). The resulting segmentation is promising and robust in terms of boundary leakage, as can be observed in the figure.
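A 1-D toy version of the quadrature-filter idea, illustrative only (the paper uses directional 3-D Gauss-Hermite filters): an even, line-sensitive kernel and an odd, edge-sensitive kernel form a complex response whose argument is the local phase and whose magnitude is the structure strength. The kernel shape and parameters below are assumptions for illustration.

```python
# Minimal 1-D illustration (not the paper's 3-D filters): an even/odd
# quadrature pair yields local phase (structure type) and magnitude (strength).
import numpy as np

def local_phase(signal, sigma=2.0, radius=8):
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    even = g * np.cos(np.pi / 2 * t / sigma)   # line-sensitive component
    odd = g * np.sin(np.pi / 2 * t / sigma)    # edge-sensitive component
    q = (np.convolve(signal, even, mode="same")
         + 1j * np.convolve(signal, odd, mode="same"))
    return np.abs(q), np.angle(q)              # strength, local phase
```

A phase near ±90 degrees (purely imaginary response, real part crossing zero) marks an edge, consistent with the zero-crossing rule used above.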
-
-
-
Self-learning Control Of Thermostatically Controlled Loads: Practical Implementation Of State Of The Art In Machine Learning
Optimal control of thermostatically controlled loads such as air conditioning and hot water storage plays a pivotal role in the development of demand response, an enabling technology for a society with increasing electrification and growing production from intrinsically stochastic renewable energy sources. Optimal control, however, often relies on the availability of a system model in combination with an optimizer, a popular approach being model predictive control. Building such a controller is considered a cumbersome endeavor requiring custom expert knowledge, making large-scale deployment of similar solutions challenging. To this end we propose to leverage recent developments in machine learning, enabling a practical implementation of a model-free controller. This model-free controller interacts with the system within safety and comfort constraints and learns from this interaction to make near-optimal decisions, all within a limited convergence time on the order of 20-40 days. When successful, self-learning control allows for large-scale, cost-effective deployment of demand response applications supporting a future with increased uncertainty in energy production. More precisely, recent results in the field of batch reinforcement learning and regression algorithms such as extremely randomized trees open the door to practical implementations. To support this, in this work we present our most recent experimental results on implementing generic self-learning controllers for thermostatically controlled loads, showing that near-optimal policies can indeed be obtained within a limited time.
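A common batch reinforcement learning recipe in this setting is fitted Q-iteration with extremely randomized trees. The sketch below is a minimal, generic version: the states, actions, reward and hyperparameters are placeholders, not the experimental setup described above.

```python
# Minimal sketch of fitted Q-iteration with extremely randomized trees
# (illustrative only; states, actions and reward are placeholders).
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iter=50, gamma=0.95):
    # transitions: list of (state, action, reward, next_state) tuples gathered
    # from past interaction with the load, within comfort/safety constraints.
    X = np.array([np.append(s, a) for s, a, r, s2 in transitions])
    R = np.array([r for _, _, r, _ in transitions])
    S2 = [s2 for *_, s2 in transitions]
    q = None
    for _ in range(n_iter):
        if q is None:
            targets = R                      # first iteration: immediate reward
        else:
            Xnext = np.array([np.append(s2, a) for s2 in S2 for a in actions])
            next_q = q.predict(Xnext).reshape(len(S2), len(actions))
            targets = R + gamma * next_q.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q  # greedy policy: pick the action maximizing q([state, action])
```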
-
-
-
Self-powered Sensor Systems Based On Small Temperature Differences: Potential For Building Automation And Structural Health Monitoring
Authors: Jana Heuer, Hans-fridtjof Pernau, Martin Jägle, Jan D. König and Kilian BartholoméSensors are the eyes and ears in the service of people - especially in inaccessible areas where regular maintenance or battery replacement is extremely difficult. By using thermoelectric generators, which are capable of directly converting heat flux into electrical energy, self-powered sensor systems can be established wherever temperature differences of a few Kelvin exist. Once installed, the sensors collect and transmit their data without any need for further maintenance such as battery replacement. Intelligent building automation, for instance, is key to significant energy reduction in buildings. Through precise control of sun blinds and of the set temperatures for thermostats and air conditioning, radio-signal sensors help to massively increase a building's efficiency. Thermoelectric self-powered sensors have the additional potential to introduce more flexibility in building technology, as complex wiring is avoided; buildings can be adapted more easily to changing utilization. Structural health monitoring is another field where energy-autarkic sensors could be of vital use. In-situ measurements of, e.g., temperature, humidity, strain and cracks are essential in order to determine the condition of construction materials. The respective sensors are hardly accessible, and wiring or battery replacement is costly or even impossible. Sensors driven by thermoelectric generators are maintenance-free and can help enhance the longevity of buildings as well as reduce maintenance costs. Furthermore, leakages in water transport systems can be reduced through in-situ monitoring by self-powered sensors, thus reducing unnecessary water losses. The great progress in the development of low-power sensors, power management and radio communication, combined with the availability of high-efficiency thermoelectric generators, opens the possibility of running a self-powered sensor node with temperature gradients as low as 0.8 K. This potential will be presented with respect to selected fields of application.
-
-
-
Smart Consumers, Customers And Citizens: Engaging End-users In Smart Grid Projects
Authors: Pieter Valkering and Erik LaesThere is no smart grid without a smart end-user! Smart grids will be essential in future energy systems to allow for major improvements in energy efficiency and for the integration of solar energy and other renewables into the grid, thereby contributing to climate change mitigation and environmental sustainability at large. End-users will play a crucial role in these smart grids, which aim to link end-users and energy providers in a better balanced and more efficient electricity system. The success of smart grids depends on effective active load and demand-side management facilitated by appropriate technologies and financial incentives, requiring end-user, market and political acceptance. However, current smart grid pilot projects typically focus on technological learning and not so much on learning to understand consumer needs and behaviour in a connected living environment. The key question thus remains: how to engage end-users in smart grid projects so as to satisfy end-user needs and stimulate active end-user participation, thereby realizing as much as possible of the potential of energy demand reduction, energy demand flexibility, and local energy generation? The aim of the European S3C project (www.s3c-project.eu) is to further the understanding of engaging end-users (households and SMEs) in smart grid projects and of the ways this may contribute to the formation of new 'smart' behaviours. This research is based upon three key pillars: 1) the analysis of a suite of (recently finished or well-advanced) European smart-grid projects to assess success factors and pitfalls; 2) the translation of lessons learned into the development of concrete engagement guidelines and tools; and 3) the further testing of the guidelines and tools in a collection of ongoing smart grid projects, leading to a finalized 'toolbox' for end-user engagement. Crucially, it differentiates findings for three key potential end-user roles: 'Consumer' (a rather passive role primarily involving energy saving), 'Customer' (a more active role offering demand flexibility and micro-scale energy production), and 'Citizen' (the most pro-active role involving community-based smart grid initiatives). Within this context, this paper aims to deliver a coherent view on current good practice in end-user engagement in smart grid projects. Starting from a brief theoretical introduction, it highlights the success factors - like underscoring the local character of a smart energy project - and barriers - like the lack of viable business cases - that the S3C case study analysis has revealed. It furthermore describes how these insights are translated into a collection of guidelines and tools on topics such as understanding target groups, designing adequate incentives, implementing energy monitoring systems, and setting up project communication. An outlook towards future testing of those guidelines and tools within ongoing smart grid projects is also given. Consequently, we argue, for each of the three typical end-user roles, which principles of end-user engagement should be considered good (or bad) practice. We conclude by highlighting promising approaches for end-user engagement that require further testing, as input for a research agenda on end-user engagement in smart grids.
-
-
-
Illustration Generation Based On Arabic Ontology For Children With Intellectual Challenges
Authors: Abdelghani Karkar, Amal Dandashi and Jihad Al Ja'amDigital devices and computer software have the potential to help children with intellectual challenges (IC) in learning, professional growth, and independent living. However, most tools and existing software applications that these children use are designed without consideration of their particular impairments. We have built an Arabic ontology-based learning system that automatically presents illustrations to characterize the content of stories for children with IC in the State of Qatar. To produce these illustrations we utilize several mechanisms: Arabic natural language processing, an animal domain-based ontology, word-to-word relationship extraction, and automatic online search-engine querying. The principal purpose of our proposed system is to improve the education, comprehension, perception, and reasoning of children with IC through the generated illustrations.
-
-
-
Application Of Design For Verification To Smart Sensory Systems
Authors: Mohammed Gh Al Zamil and Samer SamarahWireless Sensor Networks (WSNs) have enabled researchers and developers to propose a series of smart systems that serve the needs of societies and enhance the quality of services. WSNs consist of a set of sensors that sense environmental variables, such as temperature, humidity, and the speed of objects, and report them back to a central node. Although such an architecture seems simple, it suffers from many limitations that might affect its scalability, the modularity of coded programs, and correctness in terms of synchronization problems such as nested monitor lockouts, missed or forgotten notifications, or slipped conditions. This research investigated the application of the design-for-verification approach in order to arrive at a design-for-verification framework that takes into account the specialized features of smart sensory systems. Such a contribution facilitates 1) verifying coded programs to detect temporal and concurrency problems, 2) automating the verification process of such complex and critical systems, and 3) modularizing the coding of these systems to enhance their scalability. Our proposed framework relies on separating the design of the system's interfaces from the coded body (separation of concerns). For instance, we are not aiming to recompile the coded programs; instead, we aim to discover design errors resulting from the concurrent temporal interactions among different programming objects. For this reason, our proposed framework adapts the concurrency controller design pattern to model the interaction modules. As a result, we were able to study the interactions among different actions and automatically recognize the transitions among them. This recognition guarantees building a finite-state automaton that formulates the input description for a model checker to verify given temporal properties. To evaluate our proposed methodology, we have verified a real-time smart irrigation system that consists of a set of different sensors controlled by a single controller unit. The system has already been installed at our research field to control the irrigation process for the purpose of saving water. Further, we designed a set of temporal specifications to check whether or not the system conforms to them during the interactions among heterogeneous sensors. If not, the model checker returns a counterexample: a sequence of states that violates the given specification. The counterexample is thus a great help in fixing the design error, which minimizes the risk of facing such an error at run-time. The results showed that applying the proposed framework facilitates producing scalable, modular, and error-free sensory systems. The framework allowed us to detect critical design errors and fix them before deploying the smart sensory system. Finally, we were also able to check the power consumption model of our installed sensors and the effect of data aggregation on saving more power during future operations.
-
-
-
Negative Four Corner Magic Squares Of Order Six With a Between 1 And 5
In this paper we introduce and study special types of magic squares of order six. We list some enumerations of these squares and present a parallelizable code based on the principles of genetic algorithms. A magic square is a square matrix in which the sum of the entries in each row, each column, and both main diagonals yields the same number, called the magic constant. A natural magic square of order n is an n×n matrix whose entries consist of all integers from 1 to n². We define a new class of magic squares and present some listings from the enumeration, carried out over two years.
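As a small illustration of the definitions above (not the paper's parallel genetic-algorithm code), the following sketch checks the magic-square property of an integer matrix; for a natural magic square of order n the magic constant is n(n²+1)/2, i.e. 111 for order six.

```python
# Minimal sketch: verify the magic-square property of an n x n integer matrix.
import numpy as np

def is_magic(square):
    m = np.asarray(square)
    n = m.shape[0]
    const = m[0].sum()                                  # candidate magic constant
    rows = all(m[i].sum() == const for i in range(n))
    cols = all(m[:, j].sum() == const for j in range(n))
    diags = m.trace() == const and np.fliplr(m).trace() == const
    return rows and cols and diags
```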
-
-
-
Arabic Natural Language Processing: Framework For Translative Technology For Children With Hearing Impairments
Authors: Amal Dandashi, Abdelghani Karkar and Jihad AljaamChildren with hearing impairments (HI) often face many educational, communicational and societal challenges. Arabic Natural Language Processing can be used to develop several key technologies that may alleviate the cognitive and language-learning difficulties children with HI face in the Arab world. In this study, we propose a system design that provides the following component functionalities: (1) multimedia translation elements that can be dynamically generated based on Arabic text; (2) 3D avatar-based text-to-video translation (from Arabic text to Qatari Sign Language), involving manual and non-manual signals; (3) an emergency phone-based system that translates Arabic text to Qatari Sign Language video and vice versa; and (4) a multi-component system designed to be mobilized and used on various platforms. This system involves the use of Arabic Natural Language Processing, Arabic word and video ontologies, and customized engine querying. The objective of the presented system framework is to provide translational and cognitive assistive technology to individuals with HI and to empower their autonomous capacities.
-
-
-
Optimized Search Of Corresponding Patches In Multi-scale Stereo Matching: Application To Robotic Surgery
Authors: Amna Alzeyara, Jean-marc Peyrat, Julien Abinahed and Abdulla Al-ansariINTRODUCTION: Minimally-invasive robotic surgery benefits the surgeon with increased dexterity and precision, more comfortable seating, and depth perception. Indeed, the stereo-endoscopic camera of the daVinci robot provides the surgeon with a high-resolution 3D view of the surgical scene inside the patient's body. To leverage this depth information using advanced computational tools (such as augmented reality or collision detection), we need a fast and accurate stereo matching algorithm, which computes the disparity (pixel shift) map between the left and right images. To improve this trade-off between speed and accuracy, we propose an efficient multi-scale approach that overcomes standard multi-scale limitations due to interpolation artifacts when upsampling intermediate disparity results from coarser to finer scales. METHODS: Standard stereo matching algorithms perform an exhaustive search for the most similar patch between the reference and target images (along the same horizontal line when the images are rectified). This requires a wide search range in the target image to ensure finding the pixel corresponding to the one in the reference image (Figure 1). To optimize this search, we propose a multi-scale approach that uses the disparity map resulting from the previous iteration at a lower resolution. Instead of directly using the pixel position in the reference image to place the search region in the target image, we shift it by the corresponding disparity value from the previous iteration and reduce the width of the search region, as it is expected to be closer to the optimal solution. We also add two additional search regions shifted by the disparity values at the left and right adjacent pixel positions (Figure 2) to avoid errors typically related to interpolation artifacts when resizing the disparity map. To avoid large overlaps between the different search regions, we only add them where the disparity map has strong gradients. MATERIAL: We used stereo images from the Middlebury dataset (http://vision.middlebury.edu/stereo/data/) and stereo-endoscopic images captured at full HD 1080i resolution using a daVinci S/Si HD Surgical System. Experiments were performed with a GPU implementation on a workstation with 128 GB RAM, an Intel Xeon Processor E5-2690, and an NVIDIA Tesla C2075. RESULTS: We compared the accuracy and speed of the standard and proposed methods using ten images from the Middlebury dataset, which has the advantage of providing ground-truth disparity maps. We used the sum of squared differences (SSD) as the similarity metric between patches of size 3x3 in the left and right rectified images, resized to half their original size (665x555). For the standard method, we set the search range offset and width to -25 and 64 pixels, respectively. For the proposed method, we initialize the disparity to 0, followed by five iterations with a search range width of 16. Results in Table 1 show that we managed to improve the average accuracy by 27% without affecting the average computation time of 120 ms. CONCLUSION: We proposed an efficient multi-scale stereo matching algorithm that significantly improves accuracy without compromising speed. In future work, we will investigate the benefits of a similar approach using temporal consistency between successive frames and its use in more advanced computational tools for image-guided surgery.
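The core matching step can be sketched as follows, as a minimal CPU illustration rather than the GPU implementation: SSD patch matching along a rectified scanline, with the search window centred on the disparity carried over from the previous, coarser iteration instead of on a fixed wide range. Patch size and search width are placeholder values.

```python
# Minimal sketch: SSD patch matching with the search window centred on a
# disparity prior from the previous (coarser) iteration.
import numpy as np

def match_pixel(left, right, y, x, prior_disp, half=1, search_width=16):
    # left, right: rectified grayscale images; assumes the patch around (y, x)
    # lies inside the image. prior_disp: disparity from the previous iteration.
    ref = left[y - half: y + half + 1, x - half: x + half + 1].astype(float)
    best_d, best_cost = prior_disp, np.inf
    for d in range(prior_disp - search_width // 2, prior_disp + search_width // 2):
        xr = x - d
        if xr - half < 0 or xr + half + 1 > right.shape[1]:
            continue
        cand = right[y - half: y + half + 1, xr - half: xr + half + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)          # sum of squared differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```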
-
-
-
On The Use Of Pre-equalization To Enhance The Passive UHF RFID Communication Under Multipath Channel Fading
Authors: Taoufik Ben Jabeur and Abdullah KadriBackground: We consider a monostatic passive UHF RFID system composed of one RFID reader with a single antenna for transmission and reception, and RF tags. The energy of the continuous-wave signal transmitted by the RFID reader is used to power up the internal circuitry of the RF tags and to backscatter their information to the reader. In passive UHF RFID there is no energy source other than the continuous wave. Experiments show that multipath channel fading can dramatically reduce the received power at the tag, so that the received energy is not sufficient to power up the RF tag. To remedy this problem, we propose a pre-equalizer applied at the reader transmitter in order to maintain a received power sufficient to power up the tag. Objectives: This work aims to design a pre-equalizer for passive UHF RFID systems that combats the effect of multipath channel fading and thus maintains a high received power at the tag. Methods: (a) In the first stage, we assume knowledge of the multipath channel fading and of the continuous wave; the pre-equalizer is then designed for a fixed Rayleigh multipath channel in order to maximize the energy of the received signal at the tag. (b) Properties are extracted from the pre-equalizer designed for this fixed channel. (c) These properties are then used to design a more general pre-equalizer that can be applied to any unknown multipath Rayleigh channel. Simulation results: Simulation results show that the proposed pre-equalizers combat the effect of multipath channel fading and thus increase the received power at the RF tag. The energy consumption of the tag remains the same, and all operations are performed on the RFID reader side.
-
-
-
Determination Of Magnetizing Characteristic Of A Single-phase Self Excited Induction Generator
Authors: Mohd Faisal Khan, Mohd Rizwan Khan, Atif Iqbal and Moidu ThavotThe magnetizing characteristic of a Self-Excited Induction Generator (SEIG) defines the relationship between its magnetizing reactance and the air-gap voltage. The characteristic is essential for steady-state, dynamic or transient analysis of SEIGs, as the magnetizing inductance is the main factor responsible for voltage build-up and its stabilization in these machines. In order to obtain the data needed for this characteristic, the induction machine is subjected to a synchronous speed test. The data yielded by this test can be used to extract the complete magnetizing behaviour of the test machine. In this paper a detailed study is carried out on a single-phase induction machine to obtain its magnetizing characteristic. The procedure for performing the synchronous speed test and recording the necessary data is explained in detail, along with the relevant expressions for calculating the different parameters. The magnetizing characteristic of the investigated machine is reported in the paper.
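For illustration only, a sketch of the usual equivalent-circuit reasoning rather than the expressions reported in the paper: at synchronous speed the slip is close to zero, so the rotor branch is effectively open and the input impedance is approximately R1 + j(X1 + Xm). Given readings of voltage V, current I and power P, plus the stator parameters R1 and X1 obtained from separate tests, each test point then yields one (air-gap voltage, magnetizing reactance) pair of the characteristic.

```python
# Illustrative sketch (simplified equivalent-circuit assumption, not the
# paper's procedure): one (Xm, Vg) point from a synchronous-speed test reading.
import numpy as np

def magnetizing_point(V, I, P, R1, X1):
    # V, I, P: RMS voltage, RMS current and input power for one test point.
    Z = V / I
    R = P / I**2
    X_total = np.sqrt(max(Z**2 - R**2, 0.0))          # = X1 + Xm
    Xm = X_total - X1
    # Air-gap voltage: supply voltage minus the stator impedance drop.
    phi = np.arccos(np.clip(P / (V * I), -1.0, 1.0))
    I_ph = I * np.exp(-1j * phi)                      # current lags voltage by phi
    Vg = abs(V - I_ph * (R1 + 1j * X1))
    return Xm, Vg
```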
-
-
-
Control Of Packed U-cell Multilevel Five-phase Voltage Source Inverter
Authors: Atif Iqbal, Mohd Tariq, Khaliqur Rahman and Abdulhadi Al-qahtaniA seven-level five-phase voltage source inverter with the packed U-cell topology is presented in this paper. The topology is called packed U-cell because each unit of the inverter is U-shaped. Fig. 1 presents the power circuit configuration of a five-phase seven-level inverter using packed U-cells. Depending on the number of capacitors in the investigated topology, different numbers of voltage levels can be achieved. In the presented topology, two capacitors are used to obtain seven levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc/3, -2Vdc/3, -Vdc). The voltage across the second capacitor (C) must be maintained at one-third of the DC-link voltage.
-
-
-
An Ultra-low-power Processor Architecture For High-performance Computing And Other Compute-intensive Applications
Authors: Toshikazu Ebisuzaki and Junichiro MakinoThe GRAPE-X processor is an experimental processor chip designed to achieve extremely high performance per watt. It was made using TSMC's 28 nm technology and has achieved 30 Gflops/W. This number is three times higher than the performance of the best GPGPU cards announced so far using the same 28 nm technology. Power consumption has been the main factor limiting the performance improvement of HPC systems. This is because of the breakdown of the so-called CMOS scaling law. Until the early 2000s, when the design rule of silicon devices was larger than 130 nm, shrinking the transistor size by a factor of two resulted in four times more transistors, twice the clock frequency, half the supply voltage, and the same power consumption. Thus, one could achieve an 8x performance improvement. However, with transistors smaller than 130 nm design rules, it has become difficult to reduce the supply voltage, resulting in only a factor-of-two performance improvement for the same power consumption. As a result, reducing the power consumption of the processor when it is fully in operation has become the most important issue. In addition, it has also become important to achieve high parallel efficiency on relatively small-sized problems. With large parallel machines, high peak performance is realized, but that peak performance is in many cases not so useful, since it is achieved only for unrealistically large problems. For problems of practical interest, the efficiencies of large-scale parallel machines are sometimes surprisingly low. In order to achieve high performance per watt and high parallel efficiency on small problems, we developed a highly streamlined processor architecture. To reduce the communication overhead and improve parallel efficiency, we adopted an SIMD architecture. To reduce the power consumption, we adopted a distributed-memory-on-chip architecture, in which each SIMD processor core has its own main memory. Based on the GRAPE-X architecture, an exaflops (10^18 flops) system with a power consumption of less than 50 MW will be possible in the 2018-2019 time frame. For many real applications, including those in the cyber security area, which require 10 TB or less of memory, a parallel system based on our GRAPE-X architecture will provide the highest parallel efficiency and the shortest time to solution at the same time.
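As a rough consistency check on these figures (an illustrative back-of-the-envelope calculation, not taken from the abstract): at the measured 30 Gflops/W, an exaflops machine would draw about 10^18 / (30 × 10^9) ≈ 3.3 × 10^7 W, i.e. roughly 33 MW, which is indeed below the projected 50 MW budget, leaving margin for memory, network and cooling overheads.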
-
-
-
Energy Storage System Sizing For Peak Hour Utility Applications In Smart Grid
Authors: Islam Safak Bayram, Mohamed Abdallah and Khalid QaraqeEnergy Storage Systems (ESS) are expected to play a critical role in future energy grids. ESS technologies are primarily employed to reduce the stress on the grid and the use of hydrocarbons for electricity generation. However, in order for the ESS option to become economically viable, proper sizing is highly desirable to recover the high capital cost. In this paper we propose a system architecture that enables us to optimally size the ESS according to the number of users. We model the demand of each customer by a two-state Markovian fluid, and the aggregate demand of all users is multiplexed at the ESS. The proposed system also draws a constant power from the grid, which is used to accommodate customer demand and to charge the storage unit when required. Then, given the population of customers, their stochastic demands, and the power drawn from the grid, we provide an analytical solution for ESS sizing using the underflow probability as the main performance metric, defined as the percentage of time that the system resources fall short of the demand. Such insights are very important in the system planning phases of future energy grid infrastructures.
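To make the underflow-probability metric concrete, here is a minimal Monte Carlo sketch; the paper derives an analytical solution, and the on/off demand model and numbers below are placeholders, not the paper's parameters.

```python
# Minimal Monte Carlo sketch of the underflow probability: on/off Markov
# customers, constant grid power, and a storage unit buffering the difference.
import numpy as np

def underflow_probability(n_users, grid_power, capacity, p_on=0.1, p_off=0.3,
                          demand_on=1.0, steps=100_000, seed=0):
    rng = np.random.default_rng(seed)
    on = np.zeros(n_users, dtype=bool)
    soc, underflow = capacity / 2, 0
    for _ in range(steps):
        # Two-state Markov transition for every customer.
        u = rng.random(n_users)
        on = np.where(on, u > p_off, u < p_on)
        net = grid_power - on.sum() * demand_on       # surplus (+) or deficit (-)
        soc = min(max(soc + net, 0.0), capacity)
        if soc == 0.0 and net < 0:
            underflow += 1                            # demand not fully served
    return underflow / steps
```

Sweeping the capacity argument until the returned probability drops below a target level gives the kind of sizing decision the analytical model answers directly.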
-
-
-
An Enhanced Dynamic-programming Technique For Finding Approximate Overlaps
Authors: Maan Haj Rachid and Qutaibah MalluhiNext-generation sequencing technology creates a huge number of sequences (reads), which constitute the input for genome assemblers. After prefiltering the sequences, it is necessary to detect exact overlaps between the reads to prepare the ingredients needed to assemble the genome. The standard method is to find the maximum exact suffix-prefix match between each pair of reads after executing an error-detection technique. This is the approach applied in most assemblers; however, a few studies have worked on finding approximate overlaps. This direction can be useful when error-detection and prefiltering techniques are very time consuming and not very reliable. However, there is a huge difference in terms of complexity between exact and approximate matching. Therefore, any improvement in time can be valuable when approximate overlap is the target. The naive technique for finding approximate overlaps applies a modified version of dynamic programming (DP) to every pair of reads, which consumes O(n²) time, where n is the total size of all reads. In this work, we take advantage of the fact that many reads share prefixes, so that some work is continuously repeated. For example, consider the sequences in Figure 1. If dynamic programming is applied to S1 and S2, and S2 and S3 share a prefix of length 4, then the calculation of a portion of the DP table of size |S1| × 5 can be avoided when applying the algorithm to S1 and S3 (the shaded area in Figure 1). Figure 1: DP table for the S1,S2 alignment, assuming gap = 1, match = 0 and mismatch = 1; no calculation of the shaded area is required when computing the S1,S3 table, since S2 and S3 share the prefix AGCC. The modification is based on the above observation: first, the reads are sorted in lexicographical order and the longest common prefix (LCP) between every two consecutive reads is found. Let G denote the group of reads after sorting. For every string S, we compute the DP table for S against every other string in G. Since the reads are sorted, a portion of the DP table can be skipped for every string, depending on the size of the LCP, which was calculated in the previous step. We implemented the traditional technique for finding approximate overlaps with and without the proposed modification. The results show an improvement of 10-61% in time. The explanation for this wide range is that the gain in performance depends on the number of strings: the larger the number of strings, the better the gain in performance, since the sizes of the LCPs are typically larger.
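The column-reuse idea can be sketched as follows; this is a minimal edit-distance-style illustration (the full method uses a suffix-prefix overlap formulation and the scoring from Figure 1), where the DP table is built column by column against a fixed read and the first lcp+1 columns are copied from the previous read.

```python
# Minimal sketch of DP column reuse between reads sharing a prefix
# (illustrative only; plain edit-distance recurrence, gap=1, match=0, mismatch=1).
GAP, MATCH, MISMATCH = 1, 0, 1

def dp_columns(S, T, prev_cols=None, shared=0):
    cols = [[i * GAP for i in range(len(S) + 1)]]          # column for empty prefix of T
    if prev_cols is not None and shared > 0:
        cols = [c[:] for c in prev_cols[: shared + 1]]     # reuse shared-prefix columns
    for j in range(len(cols), len(T) + 1):
        prev, cur = cols[j - 1], [j * GAP]
        for i in range(1, len(S) + 1):
            cost = MATCH if S[i - 1] == T[j - 1] else MISMATCH
            cur.append(min(prev[i - 1] + cost, prev[i] + GAP, cur[i - 1] + GAP))
        cols.append(cur)
    return cols

def lcp(a, b):
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

# Usage: sort the reads, then reuse columns between consecutive reads.
reads = sorted(["AGCCAT", "AGCCTA", "AGGT"])
ref, cols, prev = "ACGTAG", None, None
for r in reads:
    shared = lcp(prev, r) if prev else 0
    cols = dp_columns(ref, r, cols, shared)
    prev = r
```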
-
-
-
Practical Quantum Secure Communication Using Multi-photon Tolerant Protocols
This paper presents an investigation of practical quantum secure communication using multi-photon tolerant protocols. Multi-photon tolerant protocols loosen the limit on the number of photons imposed by currently used quantum key distribution protocols. The multi-photon tolerant protocols investigated in this paper are multi-stage protocols that do not require any prior agreement between a sender Alice and a receiver Bob. The security of such protocols stems from the fact that the optimal detection strategies of the legitimate users and of the eavesdropper are asymmetrical, allowing Bob to obtain measurement results deterministically while imposing unavoidable quantum noise on the eavesdropper Eve's measurement. Multi-photon tolerant protocols are based on the use of transformations known only to the communicating party applying them, i.e., either Alice or Bob. In this paper, multi-photon tolerant protocols are used to share a key or a message between a sender Alice and a receiver Bob; thus, such protocols can be used either as quantum key distribution (QKD) protocols or as quantum communication protocols. In addition, multi-stage protocols can be used to share a key between Alice and Bob, with the shared key then used as a seed key for a single-stage protocol, a scheme called the braiding concept. This paper presents a practical study of multi-photon tolerant multi-stage protocols. Security aspects as well as challenges to practical implementation are discussed. In addition, secret raw key generation rates are calculated with respect to both losses and distances over a fiber-optic channel. It is well known that raw key generation rates decrease with increasing channel losses and distances. In this paper, coherent non-decoying quantum states are used to transfer the encoded bits from Alice to Bob. Raw key generation rates are calculated for different average photon numbers µ and compared with the case of µ = 0.1, which is the average number of photons used in most single-photon-based QKD protocols. Furthermore, an optimal average number of photons to be used within the secure region of the multi-photon tolerant protocols is calculated. It is worth noting that, with the increased key generation rates and communication distances offered by multi-photon tolerant protocols, quantum secure communication need not be restricted to quantum key distribution; it can be elevated to attain direct quantum secure communication.
-
-
-
Power Grid Protection
Authors: Enrico Colaiacovo and Ulrich OttenburgerDue to its inherent short-term dynamics, the power grid is a critical component of the energy system. When a dangerous event occurs in a section of the grid (e.g. a power line or a plant fails), the overall system is subject to the risk of a blackout. The time available to counteract the risk is very short (only a few milliseconds), and there are no tools to ensure power to a number of selected critical facilities. A way to tackle the blackout risk and to implement smart management of the remaining part of the grid is a distributed control system with preemptive commands. It is based on the observation that, in case of a dangerous event, there will definitely be no time to inform the control center, make a decision and send the commands to the active components of the power grid where, finally, they would be executed. The idea consists in implementing an intelligent distributed control system that continuously controls the critical components of the power grid. It monitors the operational conditions and evaluates the ability of individual components to work well and their probability of an outage. In parallel, the control system continuously issues preemptive commands to counteract the outages expected on a probabilistic basis. The preemptive commands can be defined taking into account the sensitivity of different network elements to specific outages and, of course, on the basis of a priority rule that preserves power for the strategic sites. In case of a dangerous event, the monitoring device directly sends messages to all the actuator devices, where the action will be performed only if a preemptive command was previously delivered. This means that the latency of the traditional control chain is reduced to the latency of communication between the monitoring and actuator devices. The first consequence of this policy is that an event which could potentially cause a complete blackout will affect only a limited portion of the grid. The second consequence is that the control system will choose the network elements involved in the emergency procedure, preserving the strategic plants. The third consequence is that, with this kind of control, the power grid goes from an N-1 stable state to another N-1 stable state. The system loses contributions of generation and load, but it keeps its stability and its standard operations.
-
-
-
A Conceptual Model For Tool Handling In The Operating Room
Authors: Juan Wachs and Dov DoriBackground & Objectives: There are 98,000 deaths in the US annually due to errors in the delivery of healthcare that cause inpatient mortality and morbidity. Among these errors, ineffective team interaction in the operating room (OR) is one of the main causes. Recently, it has been suggested that developing a conceptual model of verbal and non-verbal exchanges in the OR could lead to a better understanding of the dynamics among the surgical team, and this, in turn, could result in a reduction of miscommunication in the OR. In this work, we describe the main principles characterizing the Object-Process Methodology (OPM). This methodology enables describing the complex interactions between surgeons and the surgical staff while surgical instruments are delivered during a procedure. The main objective of such a conceptual model is to assess when and how errors occur during the request and delivery of instruments, and how to avoid them. Methods: The conceptual model was constructed from direct observations of surgical procedures and of occasional miscommunication cases in the OR. While the interactions in the OR are rather complex, the compact ontology of OPM allows stateful objects and processes to interact mutually and generate measurable outcomes. The instances modeled are related to verbal and non-verbal communication (e.g. gestures, proxemics), and the potential mistakes are modeled as processes that deviate from the "blue ocean" scenario. The OPM model was constructed through an iterative process of data collection through observation, modeling, brainstorming, and synthesis. This conceptual model provides the basis for new theories and frameworks needed to characterize OR communication. Results: The adopted model can accurately express the intricate interactions that take place in the OR during a surgical procedure. A key feature of the conceptual model is the ability to specify the features at various levels of detail, with each level represented through a different diagram. Nevertheless, each diagram is contextually linked to all the others. The resulting model thus provides a powerful and expressive ontology of verbal and non-verbal communication exchanges in the OR. Concretely, the model is validated through structured questionnaires, which allow assessing the level of consensus on criteria such as flexibility, accuracy, and generality. Conclusion: A conceptual model was presented describing the tool-handling processes during operations conducted in the OR. The focus is placed on communication exchanges between the main surgeon and the surgical technician. The objective is to create a tool to "debug" and identify the exact circumstances in which surgical delivery errors can happen. Our next step is the implementation of a robotic assistant for the OR, which can deliver and retrieve surgical instruments. A necessary requirement for the introduction of such a cybernetic solution is the development of a concise specification of these interactions in the OR. The development of this conceptual model can have a significant impact both on reducing tool-handling-related errors and on the formal design of robots that could complement surgical technicians in their routine tool-handling activities during surgery.
-
-
-
Efficient Multiple Users Combining And Scheduling In Wireless Networks
Authors: Mohammad Obaidah Shaqfeh and Hussein AlnuweiriWireless networking plays a vital role in our daily lifestyle and has tremendous applications in almost all fields of the economy. The wireless medium is a shared medium and, hence, user scheduling is needed to allow multiple users to access the channel jointly. Furthermore, the wireless channel is characterized by time-based and location-based variations due to physical phenomena such as multi-path propagation and fading. Therefore, unlike the traditional persistent round-robin scheduling schemes, the current standards of telecommunication systems support channel-aware opportunistic scheduling in order to exploit the varying channels of the users when they are at their peak conditions. The advantages of these schemes in enhancing the prospective throughput of the networks are evident and well demonstrated. However, these schemes are basically based on selecting a single user to access a certain frequency sub-channel at a given time, in order to avoid creating interference if more than one user accesses the same channel. Nevertheless, allowing multiple users to access the same channel is feasible by using special coding techniques such as superposition coding with successive interference cancellation at the receivers. The main advantage of this is to improve the spectral efficiency of the precious wireless spectrum and to enhance the overall throughput of the network while maintaining the quality-of-service requirements of all users. Despite their advantages, multiple-user scheduling schemes require proper resource allocation algorithms to process the channel condition measurements in order to decide which users should be served in a given time slot and frequency sub-channel, as well as the data rate and power allocated to each link, in order to maximize the transmission efficiency. Failure to use a suitable resource allocation and scheduling scheme can degrade the performance significantly. We design and analyze the performance of efficient multiple-user scheduling schemes for wireless networks. One scheme is proven theoretically to be the most efficient; however, its computational load is significant. The other scheme is a sub-optimal scheme with a low computational load that achieves very good performance, comparable to the optimal scheme. Furthermore, we evaluate the performance gains of multiple-user scheduling over conventional single-user scheduling under different constraints, such as hard fairness and proportional fairness among the users, and for fixed merit weights of the users based on their service class. In all of these cases, our proposed schemes can achieve a gain that may exceed 10% in terms of the data rate (bits/sec). This gain is significant considering that we use the same air-link and power resources as the conventional single-user scheduling schemes.
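As a toy illustration of why serving two users on the same channel can pay off (not the paper's schemes or parameters): with superposition coding and successive interference cancellation (SIC), the strong user removes the weak user's signal before decoding its own, and the resulting rate pair can dominate what orthogonal time sharing achieves. All gains, noise values and power splits below are placeholders.

```python
# Illustrative two-user downlink rate comparison: superposition coding + SIC
# versus orthogonal time sharing (not the paper's optimal algorithm).
import numpy as np

def sc_sic_rates(P, g_strong, g_weak, noise, a):
    # a: fraction of power given to the weak user (0 < a < 1).
    r_weak = np.log2(1 + a * P * g_weak / ((1 - a) * P * g_weak + noise))
    # The strong user decodes and removes the weak user's signal first (SIC).
    r_strong = np.log2(1 + (1 - a) * P * g_strong / noise)
    return r_strong, r_weak

def tdma_rates(P, g_strong, g_weak, noise, tau):
    # tau: fraction of the time slot given to the strong user.
    return (tau * np.log2(1 + P * g_strong / noise),
            (1 - tau) * np.log2(1 + P * g_weak / noise))
```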
-
-
-
Maximizing The Efficiency Of Wind And Solar-based Power Generation By GIS And Remotely Sensed Data In Qatar
Authors: Ramin Nourqolipour and Abu Taleb GhezelsoflouQatar has a high potential to develop renewable energy generation systems, especially through solar and wind-based technologies. Although substantial initiatives have been undertaken in Qatar to reduce the high per capita emissions of greenhouse gases (GHG), solar and wind-based energy generation can also contribute significantly to the mitigation of climate change. The mean Direct Normal Irradiance (DNI) of Qatar is about 2008 kWh/m2/y, which is suitable for developing solar power systems, given that 1800 kWh/m2/y is enough to establish Concentrated Solar Power (CSP) plants. Although the cost of developing solar-based power generation is about twice that of gas-based power generation, it produces environmentally friendly energy while preserving the limited gas resources. Moreover, given that 3 m/s is the critical wind speed for generating power, Qatar experiences wind speeds above this threshold almost 80% of the time, which represents a great potential for developing wind-based energy systems. In terms of economic feasibility, the minimum required number of full-load hours is 1400, while the number for Qatar is higher than this critical value. Furthermore, establishing a wind power plant is cheaper than a gas-based one in off-shore locations, even though the power generation is lower. This paper explains a methodology to determine the most suitable sites for developing solar and wind-based power plants in order to maximize the efficiency of power generation using remote sensing and GIS. Analyses are carried out on two sets of spatial data: data derived from a recent Landsat 8 image, such as land cover, urban and built-up areas, roads, water sources, and constraints, along with bands 10 and 11 (thermal bands) of the same sensor for the year 2014; and a Digital Elevation Model (DEM) derived from SRTM V2 (Shuttle Radar Topography Mission) used to generate slope, aspect, and solar maps, together with wind data obtained from the Qatar meteorology department. The data are used to conduct two parallel Multi-Criteria Evaluation (MCE) analyses, one for each development objective (solar and wind power plant development), through the following stages: (1) data preparation and standardization using categorical data rescaling and fuzzy set membership functions, and (2) logistic regression-based analysis to determine the suitability of each pixel for the desired development objective. The analysis produces two distinct suitability maps, each addressing suitable areas for establishing solar and wind power plants, respectively. The obtained suitability maps are then processed with a multi-objective land allocation model to allocate the areas that show the highest potential for developing both solar and wind-based power generation. Results show that the off-shore suitable sites for both objectives are mainly distributed in the north and north-west regions of Qatar.
-
-
-
An Efficient Model For Sentiment Classification Of Arabic Tweets On Mobiles
Authors: Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, Wassim El-hajj and Khaled ShabanWith the growth of social media and online blogs, people freely express their opinions and sentiments by providing product reviews, as well as comments about celebrities and political and global events. These opinion-bearing texts are of great interest to companies and individuals who base their decisions and actions upon them. Hence, with the growth of available online data, opinion mining on mobiles is capturing the interest of users and researchers across the world. Many techniques and applications have been developed for English, while many other languages are still trying to catch up. In particular, there is increased interest in easy access to Arabic opinions from mobiles. Arabic presents challenges similar to English for opinion mining, but also presents additional challenges due to its morphological complexity. Mobiles, on the other hand, present their own challenges due to limited energy, limited storage, and low computational capability. Since some of the state-of-the-art methods for opinion mining in English require the extraction of large numbers of features and extensive computations, these methods are not feasible for real-time processing on mobile devices. In this work, we provide a solution that addresses the limitations of the mobile device and supplies the Arabic resources required for opinion mining on mobiles. The method is based on matching stemmed tweets against our own Arabic sentiment lexicon (ArSenL). While there have been efforts towards building Arabic sentiment lexicons, they suffer from many deficiencies, including limited size, an unclear usability plan given Arabic's rich morphology, or lack of public availability. ArSenL is the first publicly available large-scale Standard Arabic sentiment lexicon, developed using a combination of the English SentiWordNet (ESWN), Arabic WordNet, and the Standard Arabic Morphological Analyzer (SAMA). A public interface for browsing ArSenL is available at http://me-applications.com/test. The scores from the matched stems are aggregated and processed through a decision tree to determine the polarity. The method was tested on a published set of Arabic tweets, and an average accuracy of 67% was achieved versus a 50% baseline. A mobile application was also developed to demonstrate the usability of the method. The application takes as input a topic of interest and retrieves the latest Arabic tweets related to this topic. It then displays the tweets superimposed with colors representing the sentiment labels positive, negative or neutral. The application also provides visual summaries of searched topics and a history showing how the sentiment for a certain topic has been evolving.
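The core scoring idea can be sketched as follows. This is illustrative only: the tiny transliterated lexicon is hypothetical, and the real system matches SAMA stems against ArSenL and feeds the aggregated scores to a decision tree rather than a fixed margin threshold.

```python
# Minimal sketch of lexicon-based polarity scoring (hypothetical toy lexicon;
# the real pipeline uses SAMA stems, ArSenL scores and a decision tree).
TOY_LEXICON = {             # stem -> (positive score, negative score)
    "jamil": (0.8, 0.0),    # "beautiful"
    "sayyi": (0.0, 0.7),    # "bad"
}

def classify(stems, lexicon=TOY_LEXICON, margin=0.1):
    pos = sum(lexicon.get(s, (0.0, 0.0))[0] for s in stems)
    neg = sum(lexicon.get(s, (0.0, 0.0))[1] for s in stems)
    if pos - neg > margin:
        return "positive"
    if neg - pos > margin:
        return "negative"
    return "neutral"
```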
-
-
-
Email Authorship Attribution In Cyber Forensics
Email is one of the most widely used forms of written communication over the Internet, and its use has increased tremendously for both personal and professional purposes. The increase in email traffic also comes with an increase in the use of emails for illegitimate purposes to commit all sorts of crimes. Phishing, spamming, email bombing, threatening, cyber bullying, racial vilification, child pornography, virus and malware propagation, and sexual harassment are common examples of email abuse. Terrorist groups and criminal gangs also use email systems as a safe channel for their communication. The alarming increase in the number of cybercrime incidents using email is mostly due to the fact that email can be easily anonymized. The problem of email authorship attribution is to identify the most plausible author of an anonymous email from a group of potential suspects. Most previous contributions employed a traditional classification approach, such as decision trees and Support Vector Machines (SVM), to identify the author and studied the effects of different writing style features on the classification accuracy. However, little attention has been given to ensuring the quality of the evidence. In this work, we introduce an innovative data mining method to capture the write-print of every suspect and model it as combinations of features that occur frequently in the suspect's emails. This notion is called a frequent pattern, which has proven effective in many data mining applications but has not been applied to the problem of authorship attribution. Unlike traditional approaches, the write-print extracted by our method is unique among the suspects and, therefore, provides convincing and credible evidence for presentation in a court of law. Experiments on real-life emails suggest that the proposed method can effectively identify the author and that the results are supported by strong evidence.
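A minimal sketch of the frequent-pattern idea (not the paper's algorithm; the stylometric features and support threshold are placeholders) is to mine each suspect's frequent feature combinations and keep only those that no other suspect exhibits:

```python
# Toy write-print extraction: frequent feature combinations unique to one suspect.
from itertools import combinations

def frequent_patterns(emails, min_support=2, max_size=3):
    """emails: list of per-email feature sets; returns combinations seen >= min_support times."""
    counts = {}
    for feats in emails:
        for r in range(1, max_size + 1):
            for combo in combinations(sorted(feats), r):
                counts[combo] = counts.get(combo, 0) + 1
    return {c for c, n in counts.items() if n >= min_support}

corpus = {  # hypothetical stylometric features per email, per suspect
    "suspect_A": [{"short_sentences", "emoticons"},
                  {"short_sentences", "emoticons", "lowercase_i"}],
    "suspect_B": [{"long_sentences", "formal_greeting"}, {"long_sentences"}],
}
patterns = {s: frequent_patterns(mails) for s, mails in corpus.items()}
writeprint = {s: p - set().union(*(q for t, q in patterns.items() if t != s))
              for s, p in patterns.items()}
print(writeprint["suspect_A"])
```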
-
-
-
Msr3e: Distributed Logic Programming For Decentralized Ensembles
Authors: Edmund Lam and Iliano CervesatoIn recent years, we have seen many advances in distributed systems, in the form of cloud computing and distributed embedded mobile devices, drawing more research interest into better ways to harness and coordinate the combined power of distributed computation. While this has made distributed computing resources more readily accessible to mainstream audiences, the fact remains that implementing distributed software and applications that can exploit such resources via traditional distributed programming methodologies is an extremely difficult task. As such, finding effective means of programming distributed systems is more than ever an active and fruitful research and development endeavor. Our work centres on the development of a programming language known as MSR3e, designed for implementing highly orchestrated communication behaviors of an ensemble of computing nodes. Computing nodes are either traditional mainstream computer architectures or mobile computing devices. This programming language is based on logic programming, and is declarative and concurrent. It is declarative in that it allows the programmer to express the logic of synchronization between computing nodes without describing any form of control flow. It is concurrent in that its operational semantics is based on a concurrent programming model known as multiset rewriting. The result is a highly expressive distributed programming language that provides the programmer with a high-level abstraction to implement highly complex communication behavior between computing nodes. This allows the programmer to focus on specifying what processes need to synchronize between the computing nodes, rather than how to implement the synchronization routines. MSR3e is based on a traditional multiset rewriting model with two important extensions: (1) explicit localization of predicates, allowing the programmer to explicitly reference the locations of predicates as a first-class construct of the language, and (2) comprehension patterns, providing the programmer a concise means of writing synchronization patterns that match dynamically sized sets of data. This method of programming often results in more concise code (relative to mainstream programming methodologies) that is more human-readable and easier to debug. Its close foundation in logic programming also suggests the possibility of effective automated verification of MSR3e programs. We have currently implemented a prototype of MSR3e. This prototype is a trans-compiler that compiles an MSR3e program into two possible outputs: (1) a C++ program that utilizes the MPI libraries, intended for execution on traditional mainstream computer architectures (e.g., x86), or (2) a Java program that utilizes the WiFi Direct libraries of the Android SDK, intended for execution on Android mobile devices. We have conducted preliminary experiments on a small set of examples to show that MSR3e works in practice. In the future, we intend to refine our implementation of MSR3e, scale up the experiment suites, and develop more non-trivial applications in MSR3e as further proof of concept.
-
-
-
Modelling The Power Produced By Photovoltaic Systems
Authors: Fotis Mavromatakis, Yannis Franghiadakis and Frank VignolaThe development and improvement of a model that can provide accurate estimates of the power produced by a photovoltaic system is useful for several reasons. A reliable model contributes to the proper operation of a photovoltaic power system since any deviations between modeled and experimental power can be flagged and studied for possible problems that can be identified and addressed. It is also useful to grid operators to know hours or a day ahead the contribution from different PV systems or renewable energy systems in general. In this way, they will be able to manage and balance production and demand. The model was designed to use the smallest number of free parameters. Apart from the incoming irradiance and module temperature, the model takes into account the effects introduced by the instantaneous angle of incidence and the air mass. The air mass is related to the position of the sun during its apparent motion across the sky since light travels through an increasing amount of atmosphere as the sun gets lower in the sky. In addition, the model takes into account the reduction in efficiency at low solar irradiance conditions. The model is versatile and can incorporate a fixed or variable percentage for the losses due to the deviation of MPPT tracking from ideal, the losses due to the mismatch of the modules, soiling, aging, wiring losses and the deviation from the nameplate rating. Angle of incidence effects were studied experimentally around solar noon by rotating the PV module at predetermined positions and recording all necessary variables (beam & global irradiances, module temperature and short circuit current, sun and module coordinates). Air mass effects were studied from sunrise to solar noon with the PV module always normal to the solar rays (global irradiance, temperature and short circuit current were recorded). Stainless steel meshes were used to artificially reduce the level of the incoming solar irradiance. A pyranometer and a reference cell were placed behind the mesh, while the unobstructed solar irradiance was monitored with a second reference cell. The different mesh combinations allowed us to reach quite low levels of irradiance (10%) with respect to the unobstructed irradiance (100%). Seasonal dust effects were studied by comparing the transmittance of glass samples exposed to outdoor conditions, at weekly time intervals, against a cleaned one. Data from several different US sites as well as from PV systems located in Crete, Greece are currently used to validate the model. Instantaneous values as well as daily integrals are compared to check the performance of the model. At this stage of analysis, it turns out that the typical accuracy of the model is better than 10% for angles of incidence less than sixty degrees. In addition, the performance of the model as a function of the various parameters is being studied and how these affect the calculations. In addition to the functions that have been determined from our measurements, functions available in the literature are also being tested.
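The following sketch illustrates the kind of model described, in a deliberately simplified form (it is not the authors' formulation: the low-irradiance and air-mass corrections are omitted, and the coefficients are typical illustrative values only):

```python
# Simplified PV power model: irradiance, temperature, angle-of-incidence and lumped losses.
import math

def pv_power(g_poa, t_module, aoi_deg,
             p_stc=250.0,    # nameplate power at STC [W] (assumed)
             gamma=-0.004,   # power temperature coefficient [1/degC] (typical value)
             b0=0.05,        # ASHRAE incidence-angle modifier coefficient (typical value)
             losses=0.88):   # lumped mismatch/soiling/wiring/MPPT losses (assumed)
    iam = max(0.0, 1.0 - b0 * (1.0 / math.cos(math.radians(aoi_deg)) - 1.0))
    temp_factor = 1.0 + gamma * (t_module - 25.0)
    return p_stc * (g_poa / 1000.0) * iam * temp_factor * losses

# 800 W/m2 on the module plane, 45 degC cell temperature, 40 degree angle of incidence
print(round(pv_power(g_poa=800, t_module=45, aoi_deg=40), 1), "W")
```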
-
-
-
A New Structural View Of The Holy Book Based On Specific Words: Towards Unique Chapters (surat) And Sentences (ayat) Characterization In The Quran
Authors: Meshaal Al-saffar, Ali Mohamed Jaoua, Abdelaali Hassaine and Samir ElloumiIn the context of Islamic web data analysis and authentication, an important task is to be able to authenticate the holy book when it is published on the net. For that purpose, in order to detect texts contained in the holy book, it seems natural to first characterize the words which are specific to existing chapters (i.e. "Sourat") and the words characterizing each sentence in any chapter (i.e. "Aya"). In this research, we first mapped the text of the Quran to a binary context R linking each chapter to all the words contained in it, and by calculating the fringe relation F of R, we have been able to discover in a very short time all specific words in each chapter of the holy book. By applying the same approach, we have found all specific words of each sentence (i.e. "Aya") in the same chapter whenever possible. We have found that almost all sentences in the same chapter have one or many specific words. Only sentences repeated in the same chapter, or sentences included in each other, might not have specific words. Observation of words simultaneously specific to a chapter in the holy book and to a sentence in the same chapter gave us the idea of characterizing all specific sentences in each chapter with respect to the whole Quran. We found that for 42 chapters, all specific words of a chapter are also specific to some sentence in the same chapter. Such specific words might be used to detect, in a shorter time, websites containing some part of the Quran and should therefore help in checking their authenticity. As a matter of fact, by googling only two or three specific words of a chapter, we observed that the search results are directly related to the corresponding chapter in the Quran. All results have been obtained for Arabic texts with or without vowels. Utilization of adequate data structures and threads enabled us to produce efficient software written in the Java language. The present tool is directly useful for the recognition of different texts in any domain. In the context of our current project, we plan to use the same methods to characterize Islamic books in general. ACKNOWLEDGMENT: This publication was made possible by a grant from the Qatar National Research Fund through National Priority Research Program (NPRP) No. 06-1220-1-233. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
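The chapter-level step can be pictured with a small set-based sketch (toy transliterated data, not the actual Quranic text or the fringe-relation implementation): a word is specific to a chapter if it occurs in that chapter and in no other.

```python
# Toy "specific words" extraction: words unique to each chapter of a chapter->words relation.
chapters = {
    "chapter_1": {"alhamd", "rabb", "alamin"},
    "chapter_2": {"rabb", "kitab", "muttaqin"},
    "chapter_3": {"kitab", "rabb", "imran"},
}

def specific_words(chapters):
    specific = {}
    for name, words in chapters.items():
        others = set().union(*(w for n, w in chapters.items() if n != name))
        specific[name] = words - others
    return specific

print(specific_words(chapters))
# e.g. {'chapter_1': {'alhamd', 'alamin'}, 'chapter_2': {'muttaqin'}, 'chapter_3': {'imran'}}
```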
-
-
-
A Novel Approach To Detection Of Glandular Structures In Colorectal Cancer Histology Images
Authors: Korsuk Sirinukunwattana, David Snead and Nasir RajpootBackground: Glands are prevalent organs in the human body, synthesizing hormones and other vital substances. Gland morphology is an important feature in diagnosing malignancy and assessing the tumor grade in colorectal adenocarcinomas. However, good detection and segmentation of glands is required prior to the extraction of any morphological features. Objectives: The aim of this work is to generate a glandular map for a histopathological image containing glandular structures. The map indicates the likelihood of different image regions belonging to glandular structures. This information can then be used as a clue for initial detection of glands. Methods: The pipeline to generate the probability map consists of the following steps. First, a statistical region merging algorithm is employed to generate superpixels. Second, texture and color features are extracted from each superpixel. For texture features, we calculate the coefficients of the scattering transform. This transformation produces features at different scale-spaces which are translation-invariant and Lipschitz stable to deformation. To summarize the relationship across different scale-spaces, a region-covariance descriptor, which is a symmetric positive definite (SPD) matrix, is calculated. We call this image descriptor scattering SPD. For color features, we quantize colors in all training images to reduce the number of features and to reduce the effect of stain variation between different images. Color information is encoded by a normalized histogram. Finally, we train a decision tree classifier to recognize superpixels belonging to glandular and non-glandular structures, and assign the probability of a superpixel belonging to the glandular class. Results: We tested our algorithm on a benchmark dataset consisting of 72 images of Hematoxylin & Eosin (H&E) stained colon biopsies from 36 patients. The images were captured at 20× magnification and expert annotation is provided. One third of the images were used for training and the remaining for testing. Pixels with a probability value greater than 0.5 were considered as detected glands. Table 1 shows that, in terms of the Dice index, the proposed method performs 5% better than local binary patterns, and the combination of scattering SPD and color histogram results in 25% better accuracy than the baseline.

Table 1: Average segmentation performance

| Approach | Sensitivity | Specificity | Accuracy | Dice |
| --- | --- | --- | --- | --- |
| Farjam et al. (baseline) | 0.50 ± 0.13 | 0.80 ± 0.15 | 0.62 ± 0.09 | 0.59 ± 0.14 |
| Superpixels + local binary pattern | 0.77 ± 0.06 | 0.67 ± 0.10 | 0.73 ± 0.04 | 0.77 ± 0.05 |
| Superpixels + scattering SPD | 0.77 ± 0.07 | 0.85 ± 0.09 | 0.81 ± 0.06 | 0.82 ± 0.06 |
| Superpixels + color histogram | 0.74 ± 0.22 | 0.82 ± 0.17 | 0.77 ± 0.10 | 0.79 ± 0.10 |
| Superpixels + scattering SPD + color histogram | 0.78 ± 0.07 | 0.88 ± 0.07 | 0.82 ± 0.06 | 0.84 ± 0.06 |

Conclusions: We present a superpixel-based approach for glandular structure detection in colorectal cancer histology images. We also present a novel texture descriptor derived from the region covariance matrix of scattering coefficients. Our approach generates highly promising results for initial detection of glandular structures in colorectal cancer histology images.
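The region-covariance step, in particular, can be sketched in a few lines (illustrative only; random values stand in for the per-pixel scattering coefficients of one superpixel):

```python
# Region covariance descriptor: covariance of per-pixel feature vectors in a superpixel.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_features = 500, 8                       # pixels in one superpixel, feature channels
pixel_features = rng.normal(size=(n_pixels, n_features))

region_cov = np.cov(pixel_features, rowvar=False)   # n_features x n_features SPD descriptor
print(region_cov.shape, "symmetric:", np.allclose(region_cov, region_cov.T))
```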
-
-
-
City-wide Traffic Congestion Prediction In Road Networks
Authors: Iman Elghandour and Mohamed KhalefaTraffic congestion is a major problem in many big cities around the world. According to a World Bank study performed in Egypt starting in 2010 and concluded in 2012, the cost of traffic congestion was estimated at 14 billion EGP in the Cairo metropolitan area and at 50 billion EGP (4% of GDP) for Egypt as a whole. Among the reasons for the high monetary cost of traffic congestion are: (1) travel time delay, (2) travel time unreliability, and (3) excess fuel consumption. Smart traffic management addresses some of the causes and consequences of traffic congestion. It can predict congested routes, take preventive decisions to reduce congestion, disseminate information about accidents and work zones, and identify the alternate routes that can be taken. In this project, we develop a real-time and scalable data storage and analysis framework for traffic prediction and management. The input to this system is a stream of GPS and/or cellular data that has been cleaned and mapped to the road network. Our proposed framework allows us to (1) predict the roads that will suffer from traffic congestion in the near future, together with the traffic management decisions that can relieve this congestion; and (2) run a what-if traffic system that is used to simulate what will happen if a traffic management or planning decision is taken. For example, it answers questions such as: "What will happen if an additional ring road is built to surround Cairo?" or "What will happen if point of interest X is moved away from the downtown to the outskirts of the city?" This framework has the following three characteristics. First, it predicts the flow of vehicles on the road based on historical data. This is done by tracking vehicles' daily trajectories and using them in a statistical model to predict vehicle movement on the road. It then predicts the congested traffic zones based on the current vehicles on the road and their predicted paths. Second, historical traffic data are heavily exploited in the approach we use to predict traffic flow and traffic congestion. Therefore, we develop new techniques to efficiently store traffic data in the form of graphs for fast retrieval. Third, it is required to update the traffic flow of vehicles and predict congested areas in real time; therefore we deploy our framework in the cloud and employ optimization techniques to speed up the execution of our algorithms.
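A toy sketch of the history-based prediction idea (not the project's statistical model; segments, speeds and the congestion threshold are invented) is to average historical speeds per road segment and time-of-day bin and flag segments whose average falls well below free flow:

```python
# Toy congestion prediction from historical (segment, hour, speed) records.
from collections import defaultdict

history = [  # hypothetical records mapped from GPS traces
    ("ring_road_1", 8, 22.0), ("ring_road_1", 8, 18.0), ("ring_road_1", 14, 55.0),
    ("corniche", 8, 40.0), ("corniche", 8, 45.0),
]
free_flow = {"ring_road_1": 60.0, "corniche": 50.0}   # assumed free-flow speeds (km/h)

sums = defaultdict(lambda: [0.0, 0])
for seg, hour, speed in history:
    sums[(seg, hour)][0] += speed
    sums[(seg, hour)][1] += 1

def predicted_congested(seg, hour, threshold=0.5):
    total, n = sums.get((seg, hour), (0.0, 0))
    if n == 0:
        return None                      # no historical data for this segment/hour
    return (total / n) < threshold * free_flow[seg]

print(predicted_congested("ring_road_1", 8))   # True: average speed well below free flow
print(predicted_congested("corniche", 8))      # False
```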
-
-
-
Integration Of Solar Generated Electricity Into Interconnected Microgrids Modeled As Partitioning Of Graphs With Supply And Demand In A Stochastic Environment
Authors: Raka Jovanovic and Abdelkader BousselhamA significant research effort has been dedicated to developing smart grids in the form of interconnected microgrids. Their use is especially suitable for the integration of solar generated electricity, due to the fact that by separating the electrical grid into smaller subsections, the fluctuations in voltage and frequency that occur can be, to a certain extent, isolated from the main grid. For the new topology, it is essential to optimize several important properties like self-adequacy, reliability, supply security and the potential for self-healing. These problems are frequently hard to solve, in the sense that they are hard combinatorial ones for which no polynomial-time algorithm exists that can find the desired optimal solutions. Due to this fact, research has been directed at finding approximate solutions using different heuristic and metaheuristic methods. Another issue is that such systems are generally of a gigantic size. This has resulted in two types of models: detailed ones that are applied to small systems and simplified ones for large systems. In the case of the former, graph models have proven to be very suitable, especially ones that are based on graph partitioning problems [4]. One of the issues with the majority of previously developed graph models for large-scale systems is that they are deterministic. They are used for modeling an electrical grid, which is in essence a stochastic system. In this work we focus on developing a stochastic graph model for including solar generated electricity in a system of interconnected microgrids. More precisely, we focus on maximizing the self-adequacy of the individual microgrids, while trying to maximize the level of included solar generated energy with a minimal amount of necessary energy storage. In our model we include the unpredictability of the generated electricity, and under such circumstances maintain a high probability that all demands in the system are satisfied. In practice, we adapt and extend the concept of partitioning graphs with supply and demand for the problem of interest. This is done by having multiple values corresponding to the demand of one node in the graph. These values are used to represent energy usage in different time periods of one day. In a similar fashion, we introduce a probability for the amount of electrical energy that will be produced by the generating nodes, and the maximal amount of storage in such nodes. Finally, we also include a heuristic approach to optimize this multi-objective optimization problem.
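The self-adequacy idea under uncertain generation can be illustrated with a small Monte Carlo sketch (hypothetical numbers; this is not the paper's model): for one microgrid, we estimate the probability that sampled solar generation plus limited storage covers the demand of every period in a day.

```python
# Toy self-adequacy check for one microgrid under stochastic solar generation.
import numpy as np

rng = np.random.default_rng(2)
periods = 4                                   # e.g. night / morning / noon / evening
demand = np.array([2.0, 5.0, 6.0, 5.0])       # MWh per period (assumed)
storage_capacity = 4.0                        # MWh of storage at the generating node (assumed)

def adequate(gen, demand, cap):
    """True if generation plus carried-over storage meets demand in every period."""
    stored = 0.0
    for g, d in zip(gen, demand):
        balance = g + stored - d
        if balance < 0:
            return False
        stored = min(balance, cap)
    return True

samples = rng.normal(loc=[3.0, 6.0, 9.0, 5.5], scale=1.0, size=(10_000, periods))
prob = np.mean([adequate(s, demand, storage_capacity) for s in samples])
print(f"estimated probability of self-adequacy: {prob:.2f}")
```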
-
-
-
Empower The Vle With Social Computing Tools: System Prototype
Authors: Khaled Hussein, Jassim Al-jaber and Yusuf ArayiciSome lecturers have reported moving part or all of their electronic course support from the Virtual Learning Environment (VLE) to social networking systems like YouTube, MySpace and Facebook because of the greater student engagement with these kinds of social networking tools. Recent student interviews at Aspire Academy in Qatar have revealed that students are not concerned with how they are taught (e.g. through lectures, seminars, distance learning sessions, or through a blended learning approach) so long as the instruction is good. This opens great opportunities for delivering courses through social computing (SC) media on top of the VLE, but also raises the question: to what extent can the VLE and social media be leveraged as good practice in learning and teaching in different modalities? In this research, the new experience of enriching the VLE with SC tools at Aspire Academy is presented through the development of a new system prototype as a more effective solution. The prototyping process included usability testing with Aspire student-athletes and lecturers, plus heuristic evaluation by Human-Computer Interaction (HCI) experts. Implementing the prototype system in academic institutions is expected to yield better learning levels and consequently better educational outcomes.
-
-
-
Gate Simulation Of A Clinical Pet Using The Computing Grid
Authors: Yassine Toufique and Othmane BouhaliNowadays, nuclear medicine has become a research field of growing importance at universities. This can be explained, on the one hand, by the increasing number of medical imaging devices purchased by hospitals and, on the other hand, by the number of PhD students and researchers becoming interested in medical studies. A Positron Emission Tomography (PET) system is a functional medical imaging technique which provides 3D images of the living processes inside the body, relying on the use of radioisotopes. The physics of PET systems is based on the detection in coincidence of the two 511 keV γ-rays produced by an electron-positron annihilation and emitted in opposite directions, as dictated by the laws of conservation of energy and momentum. The radioactive nuclei used as sources of positron emission for PET systems are mainly 11C, 13N, 15O, and 18F, which are produced in cyclotrons and decay with half-lives of 20.3 min, 9.97 min, 124 sec, and 110 min, respectively. These radioisotopes can be incorporated in a wide variety of radiopharmaceuticals that are inhaled or injected, leading to a medical diagnosis based on images obtained from a PET system. A PET scanner consists mainly of a large number of detector crystals arranged in a ring which surrounds the patient organ (or phantom in simulations) where the radioisotope tracer (e.g. 18F-FDG) is inoculated. The final 3D image, representing the distribution of the radiotracer in the organ (or the phantom), is obtained by processing the signals delivered by the detectors of the scanner (when the γ-rays emitted from the source interact with the crystals) and using image reconstruction algorithms. This allows measuring important body functions, such as blood flow, oxygen use, and glucose metabolism, to help doctors evaluate how well organs and tissues are functioning and to diagnose and determine the severity of, or treat, a variety of diseases. The simulation of a real experiment using a GATE-modeled clinical Positron Emission Tomography (PET) scanner, namely the PHILIPS Allegro, has been carried out using a computing Grid infrastructure. In order to reduce the computing time, the PET simulation tasks are split into several jobs submitted to the Grid to run simultaneously. The splitting technique and the merging of the outputs are discussed. Results of the simulation are presented and good agreement is observed with experimental data. Keywords—Grid Computing; Monte Carlo simulation; GATE; Positron Emission Tomography; splitting
-
-
-
A Cross-platform Benchmark Framework For Mobile Semantic Web Reasoning Engines In Clinical Decision Support Systems
Authors: William Van Woensel, Newres Al Haider and Syed Sr AbidiBackground & Objectives Semantic Web technologies are used extensively in the health domain to enable expressive, standards-based reasoning. Deploying Semantic Web reasoning processes directly on mobile devices has a number of advantages, including robustness to connectivity loss and more timely results. By leveraging local reasoning processes, Clinical Decision Support Systems (CDSS) can thus present timely alerts given dangerous health issues, even when connectivity is lacking. However, a number of challenges arise as well, related to mobile platform heterogeneity and limited computing resources. To tackle these challenges, developers should be empowered to benchmark mobile reasoning performance across different mobile platforms, with rule- and datasets of varying scale and complexity, and under typical CDSS reasoning process flows. To deal with the current heterogeneity of rule formats, a uniform interface on top of mobile reasoning engines also needs to be provided. System We present a mobile, cross-platform benchmark framework, comprising two main components: 1) a generic Semantic Web layer, supplying a uniform, standards-based rule- and dataset interface to mobile reasoning engines; and 2) a Benchmark Engine, to investigate mobile reasoning performance. This framework was implemented using the PhoneGap cross-platform development tool, allowing it to be deployed on a range of mobile platforms. During benchmark execution, the benchmark rule- and dataset (encoded using the SPARQL Inferencing Notation (SPIN) and Resource Description Framework (RDF)) are first passed to the generic Semantic Web layer. In this layer, the local Proxy component contacts an external Conversion Web Service, where converters perform conversion into the different rule engine formats. Developers may develop new converters to support other engines. The results are then communicated back to the Proxy and passed on to the local Benchmark Engine. In the Benchmark Engine, reasoning can be conducted using different process flows, to better align the benchmarks with real-world CDSS. To plugin new reasoning engines (JavaScript or native), developers need to implement a plugin realizing a uniform interface (e.g., load data, execute rules). New process flows can also be supplied. In the benchmarks, data and rule loading times, as well as reasoning times, are measured. From our work in clinical decision support, we identified two useful reasoning process flows: * Frequent Reasoning: To infer new facts, the reasoning engine is loaded with the entire datastore each time a certain timespan has elapsed, and the relevant ruleset is executed. * Incremental Reasoning: In this case, the datastore is kept in-memory, whereby reasoning is applied each time a new fact has been added. Currently, 4 reasoning engines (and their custom formats) are supported, including RDFQuery (https://code.google.com/p/rdfquery/wiki/RdfPlugin), RDFStore-JS (http://github.com/antoniogarrote/rdfstore-js), Nools (https://github.com/C2FO/nools) and AndroJena (http://code.google.com/p/androjena/). Conclusion In this paper, we introduced a mobile, cross-platform and extensible benchmark framework for comparing mobile Semantic Web reasoning performance. Future work consists of investigating techniques to optimize mobile reasoning processes.
-
-
-
Automatic Category Detection Of Islamic Content On The Internet Using Hyper Concept Keyword Extraction And Random Forest Classification
Authors: Abdelaali Hassaine and Ali JaouaThe classification of Islamic content on the Internet is a very important step towards authenticity verification. Many Muslims complain that the information they get from the Internet is either inaccurate or simply wrong. With the content growing in an exponential way, its manual labeling and verification is simply an impossible task. To the extent of our knowledge, no previous work has been carried out regarding this task. In this study, we propose a new method for automatic classification of Islamic content on the Internet. A dataset has been created containing texts from four different Islamic groups, namely: Sunni (content representing Sunni Islam), Shia (content representing Shia Islam), Madkhali (content forbidding politics and warning against all scholars with different views) and Jihadi (content promoting Jihad). We collected 20 different texts for each of these groups, for a total of 80 texts, out of which 56 are used for training and 24 for testing. In order to classify this content automatically, we first preprocessed the texts using normalization, stop word removal, stemming and segmentation into words. Then, we used the hyper-concept method, which makes it possible to represent any corpus through a relation, to decompose it into non-overlapping rectangular relations, and to highlight the most representative attributes or keywords in a hierarchical way. The hyper-concept keywords extracted from the training set are subsequently used as predictors (1 when the text contains the keyword and 0 otherwise). These predictors are fed to a random forest classifier of 5000 random trees. The number of extracted keywords varies according to the depth of the hyper-concept tree, ranging from 47 keywords (depth 1) to 296 keywords (depth 15). The average classification accuracy starts at 45.79% for depth 1 and remains roughly stable at 68.33% from depth 10. This result is very interesting as there are four different classes (a random predictor would therefore score around 25%). This study is a great step towards the automatic classification of Islamic content on the Internet. The results show that the hyper-concept method successfully extracts relevant keywords for each group and helps in categorizing them automatically. The method needs to be combined with a semantic method in order to reach even higher classification rates. The results of the method are also to be compared with manual classification in order to foresee the improvement one can expect, as some texts might equally belong to more than one category.
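The classification stage can be sketched as follows (not the authors' code: the keywords and snippets are placeholders, whereas in the study the keywords come from the hyper-concept decomposition of the training corpus):

```python
# Binary keyword indicators fed to a random forest, mirroring the setup described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

keywords = ["khilafah", "ijtihad", "wilayah", "manhaj"]          # hypothetical keywords
train_texts = [                                                  # placeholder snippets
    ("... wilayah ... ijtihad ...", "shia"),
    ("... manhaj ...", "madkhali"),
    ("... khilafah ...", "jihadi"),
    ("... ijtihad ...", "sunni"),
]

def to_features(text):
    return [1 if k in text else 0 for k in keywords]

X = np.array([to_features(t) for t, _ in train_texts])
y = [label for _, label in train_texts]

clf = RandomForestClassifier(n_estimators=5000, random_state=0).fit(X, y)
print(clf.predict([to_features("a page mentioning khilafah")]))
```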
-
-
-
Optimal Communication For Sources And Channels With Memory And Delay-sensitive Applications
Shannon's theory of information was developed to address the fundamental problems of communication, such as reliable data transmission over a noisy channel and optimal data compression. Over the years, it has expanded to find a wide range of applications in many areas, ranging from cryptography and cyber security to economics and genetics. Recent technological advances designate information theory as a promising and elegant tool to analyze and model information structures within living organisms. The key characteristics of data transmission within organisms are that it involves sources and channels with memory and feedback, that organisms handle their information in a fascinating, Shannon-optimal way, and that the transmission of the data is delayless. Despite the extensive literature on memoryless sources and channels, the literature regarding sources and channels with memory is limited. Moreover, the optimality of communication schemes for these general sources and channels is completely unexplored. Optimality is often addressed via Joint Source Channel Coding (JSCC), and it is achieved if there exists an encoder-decoder scheme such that the Rate Distortion Function (RDF) of the source is equal to the capacity of the channel. This work is motivated by neurobiological data transmission and aims to design and analyze optimal communication systems consisting of channels and sources with memory, within a delay-sensitive environment. To this aim, we calculate the capacity of the given channel with memory and match it to a Markovian source via an encoder-decoder scheme, utilizing concepts from information theory and stochastic control theory. The most striking result to emerge from this research is that optimal and delayless communication for sources and channels with memory is not only feasible, but is also achieved with minimal complexity and computational cost. Though the current research is stimulated by a neurobiological application, the proposed approach and methodology, as well as the provided results, deliver noteworthy contributions to a plethora of applications. These, among others, include delay-sensitive and real-time communication systems, control-communication applications and sensor networks. The work addresses issues such as causality, power efficiency, complexity and security, and extends the current knowledge of channels and sources with memory, while contributing to the inconclusive debates on real-time communication and uncoded data transmission.
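For reference, the matching condition alluded to above can be written in its classical (memoryless) form, which this work generalizes to sources and channels with memory; this is the textbook statement, not a result of the abstract itself:

```latex
\[
  R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})]\le D} I(X;\hat{X}),
  \qquad
  C \;=\; \max_{p(x)} I(X;Y),
  \qquad
  \text{JSCC optimality: } R(D) \;=\; C .
\]
```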
-
-
-
On And Off-body Path Loss Model Using Planar Inverted F Antenna
Authors: Mohammad Monirujjaman Khan and Qammer Hussain AbbasiThe rapid development of biosensors and wireless communication devices brings new opportunities for Body-Centric Wireless Networks (BCWN), which have recently received increasing attention due to their promising applications in medical sensor systems and personal entertainment technologies. Body-centric wireless communications (BCWCs) are a central element in the development of fourth generation mobile communications. In body-centric wireless networks, various units/sensors are scattered on/around the human body to measure specified physiological data, as in patient monitoring for healthcare applications [1-3]. A body-worn base station will receive the medical data measured by the sensors located on/around the human body. In BCWCs, communications among on-body devices are required, as well as communications with external base stations. Antennas are an essential component of wearable devices in body-centric wireless networks and play a vital role in optimizing radio system performance. The human body is considered an uninviting and even hostile environment for a wireless signal. The diffraction and scattering from body parts, in addition to the tissue losses, lead to strong attenuation and distortion of the signal [1]. In order to design power-efficient on-body and off-body communication systems, an accurate understanding of the wave propagation, the radio channel characteristics and the attenuation around the human body is extremely important. In the past few years, researchers have been thoroughly investigating narrowband and ultra wideband on-body radio channels. In [4], on-body radio channel characterisation was presented at ultra wideband frequencies. In body-centric wireless communications, there is a need for communication among the devices mounted on the body as well as with off-body devices. In previous studies, researchers have designed antennas for on-body communications and investigated on-body radio channel performance in both narrowband and ultra wideband technologies. This paper presents the results of an on-body and off-body path loss model using a Planar Inverted-F Antenna (PIFA). The antenna used in this study works at two frequency bands: 2.45 GHz (ISM band) and 1.9 GHz (PCS band). The 2.45 GHz band is used for communication over the human body surface (on-body) and the 1.9 GHz band is used for communication from body-mounted devices to off-body units (off-body communication). Measurement campaigns were performed in an indoor environment and an anechoic chamber. A frequency-domain measurement set-up was applied. The antenna design and the on-body and off-body path loss model results will be presented.
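As an illustration of the kind of empirical model typically fitted from such measurement campaigns (the parameters below are generic placeholders, not the values measured in this work), a log-distance path loss model with log-normal shadowing can be written as:

```python
# Generic log-distance path loss model: PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma.
import numpy as np

def path_loss_db(d, pl_d0=40.0, d0=0.1, n=3.5, sigma=2.0, rng=None):
    """Path loss in dB at distance d (metres); pl_d0, n and sigma are assumed values."""
    rng = rng or np.random.default_rng()
    shadowing = rng.normal(0.0, sigma, size=np.shape(d))
    return pl_d0 + 10.0 * n * np.log10(np.asarray(d) / d0) + shadowing

distances = np.array([0.2, 0.4, 0.8])   # example along-body distances in metres
print(np.round(path_loss_db(distances, rng=np.random.default_rng(3)), 1))
```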
-
-
-
Performance Analysis Of Heat Pipe-based Photovoltaic-thermoelectric Generator (hp-pv/teg) Hybrid System
Authors: Adham Makki and Siddig OmerPhotovoltaic (PV) cells can absorb up to 80% of the incident solar radiation of the solar spectrum; however, only a certain percentage of the absorbed incident energy is converted into electricity, depending on the conversion efficiency of the PV cell technology used, while the remaining energy is dissipated as heat accumulating on the surface of the cells and causing elevated temperatures. Temperature rise at the PV cell level is regarded as one of the most critical issues influencing the performance of the cells, causing serious degradation and shortening the lifetime of the PV cells; hence cooling of the PV module during operation is essential. Hybrid PV designs which are able to simultaneously generate electrical energy and utilize the waste heat have been proven to be the most promising solution. In this study, an analytical investigation of a hybrid system comprising a Heat Pipe-based Photovoltaic-Thermoelectric Generator (HP-PV/TEG) for further enhanced performance is presented. The system presented incorporates a PV panel for direct electricity generation, a heat pipe to absorb excessive heat from the PV cells and assist uniform temperature distribution on the surface of the panel, and a thermoelectric generator (TEG) to perform direct heat-to-electricity conversion. A mathematical model based on the heat transfer processes within the system is developed to evaluate the cooling capability and predict the overall thermal and electrical performance of the hybrid system. Results are presented in terms of the electrical efficiency of the system. It was observed that the integration of TEG modules with PV cells helps improve the performance of the PV cells by utilizing the available waste heat, leading to higher output power. The system presented can be applied in regions with hot desert climates where electricity demand is higher than thermal energy demand.
-
-
-
Effective Recommendation Of Reviewers For Research Proposals
Authors: Nassma Salim Mohandes, Qutaibah Malluhi and Tamer ElsayedIn this project, we address the problem that a research funding agency may face when matching potential reviewers with submitted research proposals. A list of potential reviewers for a given proposal is typically selected manually by a small technical group of individuals in the agency. However, the manual approach can be an exhausting and challenging task, and (more importantly) might lead to ineffective selections that affect the subsequent funding decisions. This work presents an effective automated system that recommends reviewers for proposals and helps program managers in the assignment process. The system processes the CVs of the reviewers and ranks them by assigning a weight to each CV against the list of all the proposals. We propose an automatic method to effectively recommend, for a given research proposal, a short list of potential reviewers who demonstrate expertise in the given research field/topic. To accomplish this task, our system extracts information from the full text of proposals and the CVs of reviewers. We discuss the proposed solution and the experience of using the solution within the workflow of the Qatar National Research Fund (QNRF). We evaluate our system on a QNRF/NPRP dataset that includes the submitted proposals and approved list of reviewers from the first 5 cycles of the NPRP funding program. Experimental results on this dataset validate the effectiveness of the proposed approach and show that the best performance of our system is obtained for proposals in three research areas: natural sciences, engineering, and medicine. The system does not perform as well for proposals in the other two domains, i.e., humanities and social sciences. Our approach performs very well in the overall evaluation with 68% relevant results, i.e., roughly 7 out of every 10 recommendations are perfect matches. Our proposed approach is general and flexible. Variations of the approach can be used in other applications such as conference paper assignment to reviewers and teacher-course assignment. Our research also demonstrates that there are significant advantages to applying recommender system concepts to the proposal-to-reviewer assignment problem. In summary, the problem of automatic assignment of proposals to reviewers is challenging and time-consuming when conducted manually by program managers. Software systems can offer automated tools that significantly facilitate the role of program managers. We follow previous approaches in treating reviewer finding as an information retrieval task. We use the same basic tools, but the goal is to find relevant people rather than relevant documents. For a specific user query (proposal), the system returns a list of qualified reviewers, ranked by their relevance to the query.
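The retrieval view can be pictured with a short sketch (toy texts, not the deployed QNRF system): reviewer CVs are indexed as documents, the proposal acts as the query, and cosine similarity over TF-IDF vectors produces the ranking.

```python
# TF-IDF + cosine similarity ranking of reviewer CVs against a proposal (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

cvs = {
    "reviewer_1": "wireless networks channel estimation MIMO fading",
    "reviewer_2": "colorectal cancer histology image segmentation",
    "reviewer_3": "solar energy photovoltaic grid integration",
}
proposal = "site selection for photovoltaic and wind power plants using GIS"

matrix = TfidfVectorizer().fit_transform(list(cvs.values()) + [proposal])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

print(sorted(zip(cvs, scores), key=lambda x: -x[1]))   # reviewer_3 should rank first
```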
-
-
-
Intelligent Active Management Of Distribution Network To Facilitate Integration Of Renewable Energy Resources
Harvesting electric energy from renewable energy resources is seen as one of the solutions for securing energy sustainability, given the depleting resources of fossil fuel, the conventional source of electric energy. Renewable energy is typically connected to conventional distribution networks, which were not designed to accommodate any sources of electricity; this reduces the security of the energy supply system. Moreover, the variability of renewable resources creates many operational challenges for the distribution network operator. Higher shares of distributed energy sources lead to unpredictable network flows, greater variations in voltage, and different network reactive power characteristics, as already evidenced in many distribution networks. Local network constraints occur more frequently, adversely affecting the quality of supply. Distribution network operators are nevertheless expected to continue to operate their networks in a secure way and to provide high-quality service to their customers. Active management of the distribution network may provide some answers to these problems. Indeed, distribution management will allow grids to integrate renewable energy resources efficiently by leveraging the inherent characteristics of this type of generation. The growth of renewable energy resources requires changes to how distribution networks are planned and operated. Bi-directional flows need to be taken into account: they must be monitored, simulated and managed. This paper will describe features of the smart grid concept that can be employed in the distribution network for active management to facilitate the integration of renewable energy resources. The concepts include coordinated voltage control, microgrid operation and intelligent reactive power management, to name a few. The development of a physical testbed to test these new strategies for managing the distribution network will also be described. At the heart of these strategies is an intelligent controller acting as an energy management system. The development of this controller will also be described and its operation will be explained.
-
-
-
Dynamic Team Theory With Nonclassical Information Structures Of Discrete-time Stochastic Dynamic Systems
Static Team Theory is a mathematical formalism of decision problems with multiple Decision Makers (DMs) that have access to different information and aim at optimizing a common pay-off or reward functional. It is often used to formulate decentralized decision problems, in which the decision-making authority is distributed through a collection of agents or players, and the information available to the DMs to implement their actions is non-classical. Static team theory and decentralized decision making originated from the fields of management, organization behavior and government by Marschak and Radner. However, it has far reaching implications in all human activity, including science and engineering systems, that comprise of multiple components, in which information available to the decision making components is either partially communicated to each other or not communicated at all. Team theory and decentralized decision making can be used in large scale distributed systems, such as transportation systems, smart grid energy systems, social network systems, surveillance systems, communication networks, financial markets, etc. As such, these concepts are bound to play key roles in emerging cyber-physical systems and align well with ARC'14 themes on Computing and Information Technology and Energy and Environment. Since the late 1960's several attempts have been made to generalize static team theory to dynamic team theory, in order to account for decentralized decision-making taken sequentially over time. However, to this date, no mathematical framework has been introduced to deal with non-classical information structures of stochastic dynamical decision systems, much as it is successfully done over the last several decades for stochastic optimal control problems, which presuppose centralized information structures. In this presentation, we put forward and analyze two methods, which generalize static team theory to dynamic team theory, in the context of discrete-time stochastic nonlinear dynamical problems, with team strategies, based on non-classical information structures. Both approaches are based on transforming the original discrete-time stochastic dynamical decentralized decision problem to an equivalent one in which the observations and/or the unobserved state processes are independent processes, and hence the information structures available for decisions are not affected by any of the team decisions. The first method is based on deriving team optimality conditions by direct application of static team theory to the equivalent transformed team problem. The second method is based on discrete-time stochastic Pontryagin's maximum principle. The team optimality conditions are captured by a "Hamiltonian System" consisting of forward and backward discrete-time stochastic dynamical equations, and a conditional variational Hamiltonian with respect to the information structure of each team member, while all other team members hold the optimal values.
-
-
-
Cunws/rgo Based Transparent Conducting Electrodes As A Replacement Of Ito In Opto-electric Devices
Transparent electrodes that conduct electrical current and allow light to pass through are widely used as an essential component in various opto-electric devices such as light emitting diodes, solar cells, photodetectors and touch screens. Currently, indium tin oxide (ITO) is the best commercially available transparent conducting electrode (TCE). However, ITO is expensive owing to the high cost of indium. Furthermore, ITO thin films are too brittle to be used in flexible devices. To fulfill the demand for TCEs across a wide range of applications, high-performance ITO alternatives are required. Herein we demonstrate an approach for the successful solution-based synthesis of high-aspect-ratio copper nanowires, which were then combined with reduced graphene oxide (rGO) in order to produce smooth thin-film TCEs on both glass and flexible substrates. Structural and compositional characterization of these electrodes was carried out using four-probe measurements, spectrophotometry, scanning electron microscopy (SEM), transmission electron microscopy (TEM) and atomic force microscopy (AFM). In addition to the morphological and electrical characterization, these samples were also tested for durability through experiments that involved exposure to various environmental conditions and electrode bending. Our fabricated transparent electrodes exhibited high performance, with a transmittance of 91.6% and a sheet resistance of 9 Ω/sq. Furthermore, the electrodes showed no notable loss in performance during the durability testing experiments. Such results make them a viable replacement for indium tin oxide as a transparent electrode and present a great opportunity to accelerate the mass development of devices like high-efficiency hybrid silicon photovoltaics via simple and rapid solution processes.
-
-
-
Development Of A Remote Sma Experiment - A Case Study
Authors: Ning Wang, Jun Weng, Michael Ho, Xuemin Chen, Gangbing Song and Hamid ParsaeiA remote laboratory containing a specially designed experiment was built to demonstrate and visualize the characteristics of wire-shaped shape memory alloys (SMAs). In particular, the unit helps the study of the hysteretic behavior of SMAs as well as how the electrical driving frequency changes the hysteresis loop. One such SMA remote experiment with a novel unified framework was constructed at Texas A&M University at Qatar. In this project, we developed a new experiment data transaction protocol and software package used in the remote lab. The new platform improves on the traditional one in traversing network firewalls and in being free of software plug-ins. In order to provide a more realistic experience to the user conducting the remote SMA experiment, the new solution also implements a real-time experiment video function. Compared to the traditional remote SMA experiment that uses the LabVIEW remote panel, the new remote SMA experiment solution has three advantages. First, the user interface of the new remote SMA experiment is plug-in free and can run in different web browsers. Second, the new remote lab resolves the issue of traversing a network firewall: end users only need Internet access and a web browser to operate the SMA experiment. The experiment control webpage is developed in JavaScript, a universally used language, and any regular web browser is able to use all the features of the remote panel without requiring extra software plug-ins. Third, an additional function of the new remote lab is the real-time delivery of video from the experiment, providing a more realistic experience for the user. This new remote SMA experiment user interface can also be used on smartphones and tablet computers. Compared to LabVIEW-based experiments, the experimental data collected using the novel unified framework are similar except for the amplitude of the reference signal. The amplitude can differ because it is defined by the user. The data recorded from the new remote SMA experiment GUI have fewer samples per second compared to the remote SMA experiment with LabVIEW. The data transmission in the GUI is limited to 140 samples per second to minimize memory use and increase connection speed. In the remote SMA experiment with LabVIEW, the sampling rate is 1000 samples per second; nevertheless, the hysteresis of the SMA has been successfully demonstrated by the data recorded in the new remote SMA experiment with the novel unified framework, which matches the original results collected locally. The study compares two different implementation approaches for the remote SMA experiment: the traditional approach with the LabVIEW remote panel and the new approach with the novel unified framework. The differences between these two solutions are listed, and the advantages of the new SMA remote experiment based on the novel unified framework are presented. The capability of running remote experiments on portable devices allows users to learn by observing and interacting with the real experiment in an efficient way.
-
-
-
Semantic Web Based Execution-time Merging Of Processes
Authors: Borna Jafarpour and Syed Sibte Raza AbidiA process is a series of actions executed in a particular environment in order to achieve a goal. It is often the case that several concurrent processes coexist in an environment in order to achieve several goals simultaneously. However, executing multiple processes is not always a possibility in an environment due to the following reasons: (1) All processes might be needed to be executed by a single agent that is not capable of executing more than one process at a time; (2) Multiple processes may have interactions between them that hamper their concurrent executions by multiple agents. As an example, there might be conflicting actions between several processes that their concurrent execution will stop those processes from achieving their goals. The existing solution to address the abovementioned complications is to merge several processes into a unified conflict-free and improved process before execution. This unified merged process is then executed by a single agent in order to achieve goals of all processes. However, we believe this is not the optimal solution because (a) in some environments, it is unrealistic to assume execution of all processes merged into one single process can be delegated to a single agent; (b) since merging is performed before actual execution of the unified process, some of the assumptions made regarding execution flow in individual processes may not be true during actual execution which will render the merged process irrelevant. In this paper, we propose a semantic web based solution to merge multiple processes during their concurrent execution in several agents in order to address the above-mentioned limitations. Our semantic web Process Merging Framework features a Web Ontology Language (OWL) based ontology called Process Merging Ontology (PMO) capable of representing a wide range of workflow and institutional Process Merging Constraints, mutual exclusivity relations between those constraints and their conditions. Process Merging Constraints should be respected during concurrent execution of processes in several agents in order to achieve execution-time process merging. We use OWL axioms and Semantic Web Rule Language (SWRL) rules in the PMO to define the formal semantics of the merging constraints. A Process Merging Engine has also been developed to coordinate several agents, each executing a process pertaining to a goal, to perform process merging during execution. This engine runs the Process Merging Algorithm that utilizes Process Merging Execution Semantics and an OWL reasoner to infer the necessary modifications in actions of each of the processes so that Process Merging Constraints are respected. In order to evaluate our framework we have merged several clinical workflows each pertaining to a disease represented as processes so that they can be used for decision support for comorbid patients. Technical evaluations show efficiency of our framework and evaluations with the help of domain expert shows expressivity of PMO in representation of merging constraints and capability of Process Merging Engine in successful interpretation of the merging constraints. We plan to extend our work to solve problems in business process model merging and AI plan merging research areas.
-
-
-
Performance Of Hybrid-access Overlaid Cellular Mimo Networks With Transmit Selection And Receive Mrc In Poisson Field Interference
Authors: Amro Hussen, Fawaz Al Qahtani, Mohamed Shaqfeh, Redha M. Radaydeh and Hussein AlnuweiriThis paper analyzes the performance of a hybrid control access scheme for small cells in the context of two-tier cellular networks. The analysis considers a MIMO transmit/receive array configuration that implements transmit antenna selection (TAS) and maximal ratio combining (MRC) under Rayleigh fading channels when the interfering sources are described using Poisson field processes. The aggregate interference at each receive station is modeled as shot noise that follows a stable distribution. Furthermore, based on the interference awareness at the receive station, two TAS approaches are considered in the analysis: signal-to-noise ratio (SNR)-based selection and signal-to-interference-plus-noise ratio (SINR)-based selection. In addition, the effect of delayed TAS due to an imperfect feedback channel on the performance measures is investigated. New analytical results for the hybrid-access scheme's downlink outage probability and error rate are obtained. To gain further insight into the system's behavior in limiting cases, asymptotic results for the outage probability and error rate at high SNR are also obtained, which are useful for describing diversity orders and coding gains. The derived analytical results are validated via Monte Carlo simulations.
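The Monte Carlo validation of such a scheme follows a standard pattern; the sketch below simulates SNR-based TAS with MRC reception over Rayleigh fading only (the Poisson-field interference and feedback delay of the actual analysis are omitted, and all parameters are illustrative):

```python
# Monte Carlo outage probability for SNR-based TAS + MRC over Rayleigh fading.
import numpy as np

rng = np.random.default_rng(4)
n_tx, n_rx, trials = 4, 2, 200_000
snr_avg = 10 ** (10.0 / 10)        # 10 dB average branch SNR (assumed)
gamma_th = 10 ** (5.0 / 10)        # 5 dB outage threshold (assumed)

h2 = rng.exponential(1.0, size=(trials, n_tx, n_rx))  # |h|^2 ~ Exp(1) under Rayleigh fading
post_mrc = snr_avg * h2.sum(axis=2)                   # MRC output SNR per transmit antenna
snr_selected = post_mrc.max(axis=1)                   # SNR-based transmit antenna selection

print(f"simulated outage probability: {np.mean(snr_selected < gamma_th):.4f}")
```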
-
-
-
Oryx Gtl Data Integration And Automation System For 21st Century Environmental Reporting
Authors: Sue Sung, Pon Saravanan Neerkathalingam, Ismail Al-khabani, Kan Zhang and Arun KanchanORYX GTL is an environmentally responsible company committed to creating an efficient, diversified energy business, developing its employees, and adding value to Qatar's natural resources. ORYX GTL considers the monitoring and reporting of consistent environmental data and the setting of accurate targets a critical component of increasing operational efficiency in a sustainable manner. Monitoring key metrics such as air emissions, criteria pollutants, flaring and energy can provide opportunities to reduce environmental impacts and achieve cost savings. ORYX GTL has adopted a state-of-the-art information technology (IT) solution to enhance the data handling process in support of the company's environmental performance reports, such as the greenhouse gas (GHG) accounting and reporting (A&R) program required by Qatar Petroleum (QP). The automated system for reporting environmental data has proven to be more efficient and accurate; it also increases consistency and requires fewer resources to report the data in a reliable manner. The system selected by ORYX GTL is the Data Integration and Automation (DIA) system designed and developed by Trinity Consultants on the Microsoft® .NET platform. The objective of this paper is to share the challenges and experience during the design, development and implementation of this advanced DIA system for critical environmental reporting functions at ORYX GTL, as part of the company's commitment to improving environmental performance. The DIA application can be used as the central data storage/handling system for all environmental data reporting. The DIA software includes several functions built on a state-of-the-art IT platform to achieve near real-time environmental monitoring, performance tracking, and reporting. The key functions include:
- Hourly data retrieval, aggregation, validation, and reconciliation from the plant process historian on a pre-defined schedule. The data retrieved from the process historian may include hourly fuel usage, continuous emission monitoring data, and sampling data collected on a routine basis.
- A powerful calculation engine that allows users to build complex emission calculation equations. Calculated results are stored in the database for use in reporting. In addition to user-specified equations, the system also includes a complete calculation module to handle complex calculations for tank emissions.
- A web interface through which users can manage the system reporting entity hierarchy and user security, set up tags/sources, create manual data entries, create/modify equations, and execute emission reports. The DIA application sends email notifications of errors in tag data and calculation results at user-specified intervals, so that recipients can provide a timely response to the system with proper root causes and corrective actions.
- Custom reports that can be designed to generate regulatory reports in the format required by QP or the Qatar Ministry of Environment.
The DIA system has significantly enhanced the quality of ORYX GTL's environmental reporting by reducing the human interaction required for process data extraction, validation, reconciliation, calculation, and reporting. ORYX GTL's proactive approach to implementing and integrating the DIA system provided the opportunity to improve reporting functions and stakeholder and regulator satisfaction, and it upholds the principles of environmental data reporting: completeness, consistency, transparency and accuracy.
-
-
-
An Integrated Framework For Verified And Fault Tolerant Software
Authors: Samir Elloumi, Ishraf Tounsi, Bilel Boulifa, Sharmeen Kakil, Ali Jaoua and Mohammad Saleh
Fault tolerance techniques should let a program continue to provide service in spite of the presence of errors. They are of primary importance for mission-critical systems, whose failure may produce severe human and economic losses. For these reasons, researchers have identified software reliability as an important research area, covering both design and functional verification. Software testing aims to increase software correctness by verifying the program outputs w.r.t. an input space generated in a bounded domain. The fault tolerance approach also offers effective error detection and recovery mechanisms, such as backward recovery, forward recovery and redundancy. Our work consists of developing an integrated approach for software testing in a bounded domain that tolerates transient faults, in order to resolve deficiencies and obtain a robust and well-designed program. The developed framework comprises two types of tests: (i) a semi-automatic test, in which the user checks the software by manually entering method argument values and testing with specified values; (ii) an automatic test, which runs the test with prepared program instances and generated values for a chosen method of the software. For generating the input values of a program, we use "Korat", which requires a class invariant, a bounded domain and Java predicates (preconditions). The framework uses the reflection technique in order to verify the correctness of the method under test. Based on the pre/post-conditions (Java predicates) previously fixed by the user, the backward recovery and forward recovery algorithms are applied to tolerate transient faults. For forward recovery, an efficient original solution has been developed that reduces the number of re-executions of a block of instructions: re-execution starts from the current state instead of the initial state, under the hypothesis that no critical information has been lost. A plug-in Java library has been implemented for the fault-tolerant version. The framework was evaluated on several Java programs and applied to improve the robustness of a gas purification software. ACKNOWLEDGMENT: This publication was made possible by a grant from the Qatar National Research Fund through National Priority Research Program (NPRP) No. 04-1109-1-174. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
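As a minimal illustration of the pre/post-condition checking and forward-recovery idea (re-execution from the current inputs rather than a saved initial state), the sketch below uses Python with made-up predicates; the actual framework is a Java plug-in library built around Korat-generated inputs:

```python
# Sketch of pre/post-condition checking with bounded re-execution
# (forward-recovery style). The predicates and the method under test
# are illustrative only.
import math

def run_with_recovery(method, args, precondition, postcondition, max_retries=3):
    if not precondition(*args):
        raise ValueError("precondition violated; input rejected")
    for _ in range(1 + max_retries):
        result = method(*args)
        if postcondition(result, *args):   # no transient fault observed
            return result
    raise RuntimeError("postcondition still violated after retries")

# Example: integer square root, tested over the bounded input domain [0, 100].
def isqrt(n):
    return int(math.sqrt(n))

pre = lambda n: isinstance(n, int) and 0 <= n <= 100
post = lambda r, n: r * r <= n < (r + 1) * (r + 1)

for n in range(101):                       # bounded-domain automatic test
    assert run_with_recovery(isqrt, (n,), pre, post) == math.isqrt(n)
```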
-
-
-
Car Make And Model Detection System
Authors: Somaya Al-maadeed, Rayana Boubezari, Suchithra Kunhoth and Ahmed Bouridane
The deployment of highly intelligent and efficient machine vision systems has reached new heights in multiple fields of human activity. The successful replacement of manual intervention with automated systems has improved safety, security and alertness in the transportation field. Automatic number plate recognition (ANPR) has become a common aspect of intelligent transportation systems. In addition to the license plate information, identifying the exact make and model of a car can provide many additional cues in certain applications. Authentication systems, for instance, can use the vehicle model as an extra confirmation. Different car models are characterized by the uniqueness of the overall car shape and the position and structure of the headlights, among other features. The majority of research works rely on frontal/rear views of the car for recognition, while others operate from an arbitrary viewpoint. A template matching strategy is usually employed to find an exact match for the query image in a database of known car models. It is also possible to select and extract certain discriminative features from the region of interest (ROI) in the car image; with the help of a suitable similarity measure, such as the Euclidean distance, the system can then discriminate between the various classes/models. The main objective of the paper is to understand the significance of certain detectors and descriptors in the field of car make and model recognition. A performance evaluation of the SIFT, SURF and ORB feature descriptors for implementing a car recognition system is already available in the literature. In this paper, we have studied the effectiveness of various combinations of feature detectors and descriptors on car model detection. The combination of the six detectors DoG, Hessian, Harris Laplace, Hessian Laplace, Multiscale Harris and Multiscale Hessian with the three descriptors SIFT, LIOP and patch was tested on three car databases. The Scale Invariant Feature Transform (SIFT), a popular object detection algorithm, allows the user to match different images and spot the similarities between them. The algorithm, based on keypoint selection and description, offers features that are robust to illumination, scale, noise and rotation variations. Matching between images is performed using the Euclidean distance between descriptors: for a given keypoint in the test image, the smallest Euclidean distance between its descriptor and all the descriptors of the training image indicates the best match. Our experiments were carried out in MATLAB using the VLFeat toolbox. A maximum accuracy of 91.67% was achieved with the DoG-SIFT approach on database 1, comprising cropped ROIs of toy car images. For database 2, consisting of cropped ROIs of real car images, Multiscale Hessian-SIFT yielded a maximum accuracy of 96.88%. Database 3 comprised high-resolution real car images with background; testing was conducted on the cropped and resized ROIs of these images, and a maximum accuracy of 93.78% was obtained with the Multiscale Harris-SIFT combination. As a whole, these feature detectors and descriptors succeeded in recognizing the car models with an overall accuracy above 90%.
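A sketch of the descriptor-matching step is shown below using OpenCV's SIFT and a Euclidean-distance matcher; the original experiments used MATLAB with the VLFeat toolbox, and the file names here are placeholders:

```python
# Illustrative nearest-neighbour matching of SIFT descriptors with a Euclidean
# distance, similar in spirit to the MATLAB/VLFeat pipeline described above.
import cv2

def match_score(query_path, template_path, ratio=0.75):
    sift = cv2.SIFT_create()
    q_img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    t_img = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    _, q_desc = sift.detectAndCompute(q_img, None)
    _, t_desc = sift.detectAndCompute(t_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)          # Euclidean distance
    matches = matcher.knnMatch(q_desc, t_desc, k=2)
    # Keep matches whose best distance is clearly smaller than the second best.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    return len(good)

def classify(query_path, training_images):
    """Return the model whose training image matches the query ROI best.
    training_images: {model_name: image_path} (placeholder paths)."""
    return max(training_images,
               key=lambda m: match_score(query_path, training_images[m]))
```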
-
-
-
Visual Simultaneous Localization And Mapping With Stereo And Wide-angle Imagery
Authors: Peter Hansen, Muhammad Emaduddin, Sidra Alam and Brett Browning
Mobile robots provide automated solutions for a range of tasks in industrial settings, including but not limited to inspection. Our interest is automated inspection tasks, including gas leakage detection in natural gas processing facilities such as those in Qatar. Using autonomous mobile robot solutions removes humans from potentially hazardous environments, eliminates potential human errors caused by fatigue, and provides data logging solutions for visualization and off-line post-processing. A core requirement for a mobile robot to perform any meaningful inspection task is to localize itself within the operating environment. We are developing a visual Simultaneous Localization And Mapping (SLAM) system for this purpose. Visual SLAM systems enable a robot to localize within an environment while simultaneously building a metric 3D map using only imagery from an on-board camera head. Vision has many advantages over alternative sensors used for localization and mapping: it requires minimal power compared to Lidar sensors, is relatively inexpensive compared to Inertial Navigation Systems (INS), and can operate in GPS-denied environments. There is extensive work related to visual SLAM, with most systems using either a perspective stereo camera head or a wide-angle-of-view monocular camera. Stereo cameras enable Euclidean 3D reconstruction from a single stereo pair and provide metric pose estimates. However, the narrow angle of view can limit pose estimation accuracy, as visual features can typically be 'tracked' only across a small number of frames. Moreover, the limited angle of view presents challenges for place recognition, whereby previously visited locations can be detected and loop closure performed to correct for long-range integrated position estimate inaccuracies. In contrast, wide-angle-of-view monocular cameras (e.g. fisheye and catadioptric) trade spatial resolution for an increased angle of view. This increased angle enables visual scene points to be tracked over many frames and can improve rotational pose estimates. The increased angle of view can also improve visual place recognition performance, as the same areas of a scene can be imaged under much larger changes in position and orientation. The primary disadvantage of a monocular wide-angle visual SLAM system is a scale ambiguity in the translational component of pose/position estimates. The visual SLAM system being developed in this work uses a combined stereo and wide-angle fisheye camera system with the aim of exploiting the advantages of each. For this we have combined visual feature tracks from both the stereo and fisheye cameras within a single non-linear least-squares Sparse Bundle Adjustment (SBA) framework for localization. Initial experiments on large-scale image datasets (approximately 10 kilometers in length) collected within Education City have been used to evaluate improvements in localization accuracy using the combined system. Additionally, we have demonstrated performance improvements in visual place recognition using our existing Hidden Markov Model (HMM) based place recognition algorithm.
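The sparse bundle adjustment step can be illustrated with a toy least-squares refinement of one camera pose and a handful of 3D points against their image projections; the focal length and synthetic data are arbitrary, and a real SBA implementation handles many cameras (stereo plus fisheye here), robust losses and Jacobian sparsity:

```python
# Toy bundle-adjustment-style refinement: jointly optimise one camera pose and
# a set of 3D points by minimising image reprojection error with a non-linear
# least-squares solver. Focal length and synthetic data are illustrative.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 500.0  # assumed focal length in pixels, principal point at the origin

def project(points, rvec, tvec):
    cam = Rotation.from_rotvec(rvec).apply(points) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_pts, observations):
    rvec, tvec = params[:3], params[3:6]
    pts = params[6:].reshape(n_pts, 3)
    return (project(pts, rvec, tvec) - observations).ravel()

rng = np.random.default_rng(0)
pts_true = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
obs = project(pts_true, np.zeros(3), np.zeros(3)) + rng.normal(0, 0.5, (20, 2))
x0 = np.concatenate([np.zeros(6),
                     (pts_true + rng.normal(0, 0.1, pts_true.shape)).ravel()])

sol = least_squares(residuals, x0, args=(20, obs))
print("final reprojection RMSE [px]:", np.sqrt(np.mean(sol.fun ** 2)))
```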
-
-
-
Secured Scada System
Abstract: SCADA (Supervisory Control and Data Acquisition) is a system that allows control of remote industrial equipment over a communication channel. These legacy communication channels were designed before the cyberspace era, and hence they lack security measures, which makes them vulnerable to cyber attacks. RasGas, a joint venture between QP and ExxonMobil, was one victim of such an attack when it was hit with an unknown virus. A nuclear facility in Iran was hit with a virus called "Stuxnet", which particularly targets Siemens industrial control systems. The goal of this project is to design a model of a SCADA system that is secured against network attacks. Consider, for example, a simple SCADA system consisting of a water tank in a remote location and a local control room, where the operator controls the water level and temperature using a control panel. The communication channel uses TCP/IP protocols over WiFi. The operator raises the temperature of the water by raising the power of the heater, then reads the real temperature of the heater and the water via installed sensors. We consider a man-in-the-middle adversary who has access to the network through WiFi. With basic skills s/he is able to redirect TCP/IP traffic to his machine (tapping) and alter data. He can, for instance, raise the water level until it overflows, or increase the temperature above the "danger zone", and send back fake sensor data by modifying their responses. We introduce an encryption device that encrypts the data such that, without the right security credentials, the adversary will not be able to interpret the data and hence will not be able to modify it. The device is installed at both the control room and the remote tank, and we assume both places are physically secured. To demonstrate the model, we design and set up a SCADA emulator server that represents a water tank with its actuators and sensors, connected to a workstation through a network switch. We also set up an adversary workstation that taps and alters the communication between them. We design two hardware encryption/decryption devices using FPGA boards and connect them at the ports of both the server and the control workstation, which we assume to be in a secured zone, and then analyze the data stream in both the secured and non-secured states of the channel.
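The role of the encryption devices can be illustrated in software with an authenticated-encryption sketch: without the pre-shared key, tapped telemetry is unreadable, and any modification is rejected. The real devices implement encryption in FPGA hardware, and the message fields below are made up:

```python
# Software illustration of the principle behind the hardware encryption
# devices: telemetry is unreadable and unmodifiable without the shared key.
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # pre-shared between the two devices
aead = AESGCM(key)

def protect(reading: dict) -> bytes:
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, json.dumps(reading).encode(), b"tank-1")

def unprotect(blob: bytes) -> dict:
    nonce, ciphertext = blob[:12], blob[12:]
    return json.loads(aead.decrypt(nonce, ciphertext, b"tank-1"))  # raises on tampering

msg = protect({"level_cm": 142.5, "temp_c": 61.0})
tampered = msg[:-1] + bytes([msg[-1] ^ 0x01])   # adversary flips one bit
print(unprotect(msg))
try:
    unprotect(tampered)
except Exception:
    print("tampered frame rejected")
```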
-
-
-
Measurement Platform Of Mains Zero Crossing Period For Powerline Communication
Authors: Souha Souissi and Chiheb Rebai
Power lines, mainly dedicated and optimized for delivering electricity, are nowadays also used to transfer data (power line communication, PLC). Its low cost and wide coverage make PLC one of the pivotal technologies for building up the smart grid. The actual role of PLC technology is controversial: while some advocate PLC systems as very good candidates for certain applications, others discard them and look at wireless as a more suitable alternative. It is obvious that the smart grid will include multiple types of communication technologies, ranging from optics to wireless and wireline. Among wireline solutions, PLC appears to be the only technology with a deployment cost comparable to wireless, as the power installation and lines already exist. Narrowband PLC is a key element of the smart grid, supporting several applications such as Automatic Meter Reading (AMR), Advanced Metering Infrastructure (AMI), demand-side management, in-home energy management and vehicle-to-grid communications. A critical stage in designing an efficient PLC system remains gaining sufficient knowledge of channel behavior characteristics such as attenuation, access impedance, multiple noise scenarios and synchronization. That is why characterizing the power line network has been the interest of several research works aiming to find a compromise between the robustness of powerline communication and a higher data rate. This interest in identifying the adequate narrowband PLC system motivates a deeper focus on channel characterization and modeling methods, as a first step towards simulating channel behavior and then proposing stand-alone hardware for emulation. The authors are investigating the building blocks of a narrowband PLC channel emulator that helps designers evaluate and verify their system designs. It allows a reproduction of real conditions for any narrowband PLC equipment (single-carrier or multicarrier) by providing three major functionalities: noise scenarios, signal attenuation and a zero crossing reference, mainly used for single-carrier systems. For this purpose, the authors deploy a bottom-up approach to identify a real channel transfer function (TF) based on prior knowledge of the power cable characteristics and connected loads. A MATLAB-based simulator is then defined that generates a given TF according to defined parameters. The AC mains zero crossing (ZC) variation is also studied. Exhaustive in-field measurements of this reference have shown a perpetual fluctuation, presented as a jitter error. It reflects varying AC mains characteristics (frequency, amplitude), which could be related to the non-linearity of loads connected to the network and to the detection circuit used in PLC systems. The authors propose a ZC variation model according to the system environment (home/lab, rural). This model will be embedded in the channel emulator to reproduce the ZC reference variation. Regarding noise, few models specific to narrowband PLC are found in the literature. Some models have been implemented and tested on a DSP platform, which will also include the two previous elements: TF and ZC variation.
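A minimal sketch of a jittered zero-crossing reference, of the kind the proposed ZC variation model would reproduce, is given below; the 50 Hz mains frequency and the Gaussian jitter spread are illustrative assumptions rather than measured values:

```python
# Minimal sketch of a zero-crossing (ZC) reference with jitter: a 50 Hz mains
# waveform whose detected crossing instants are perturbed by Gaussian jitter.
import numpy as np

def zero_crossings(duration_s=1.0, mains_hz=50.0, jitter_std_us=40.0, seed=0):
    rng = np.random.default_rng(seed)
    ideal = np.arange(0, duration_s, 1.0 / (2 * mains_hz))   # one ZC per half-cycle
    return ideal + rng.normal(0.0, jitter_std_us * 1e-6, ideal.size)

zc = zero_crossings()
periods_us = np.diff(zc) * 1e6
print(f"mean half-period: {periods_us.mean():.1f} us, "
      f"peak-to-peak jitter: {np.ptp(periods_us):.1f} us")
```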
-
-
-
Social Media As A Source Of Unbiased News
By Walid Magdy
News media are usually biased toward some political views. Also, the coverage of news is limited to news reported by news agencies. Social media is currently a hub for users to report and discuss news, including news reported or missed by news media. Developing a system that can generate news reports from social media can give a global, unbiased view of what is hot in a given region. In this talk, we present the research work performed at QCRI over two years, which tackles the problem of using social media to track and follow posts on ongoing news in different regions and for different topics. Initially, we show examples of the presence of bias in the reporting of news by different news media. We then explore the nature of social media platforms and list the research questions that motivated this work. The challenges of tracking topics related to news are discussed. An automatically adapting information filtering approach is presented that allows tracking broad and dynamic topics in social media. This technique enables automatic tracking of posts on news in social media while coping with the rapid changes occurring in news stories. Our system, TweetMogaz, an Arabic news portal that generates news from Twitter, is then demoed. TweetMogaz reports in real time what is happening in hot regions in the Middle East, such as Syria and Egypt, in the form of comprehensive reports that include top tweets, images, videos, and news articles shared by users on Twitter. It also reports news on different topics such as sports. Moreover, search is enabled to allow users to get news reports on any topic of interest. The demo shows www.tweetmogaz.com live, where emerging topics in news appear live in front of the audience. At the end of the talk, we show some of the interesting examples that were noticed on the website in the past year. In addition, a quick overview is presented of one of the social studies carried out based on the news trend changes on TweetMogaz. The study shows the changes in people's behavior when reporting and discussing news during major political changes, such as the one that happened in Egypt in July 2013. This work is an outcome of two years of research in the Arabic Language Technologies group at the Qatar Computing Research Institute. The work is published in the form of six research and demo papers in tier 1 conferences such as SIGIR, CSCW, CIKM, and ICWSM. The TweetMogaz system is protected by two patent applications filed in 2012 and 2014. Currently the website serves around 10,000 users, and the number is expected to increase significantly when officially advertised. Please feel free to visit the TweetMogaz website to check the system live: www.tweetmogaz.com. Note: a new release of the website with a better design is expected by the time of the conference.
-
-
-
Semantic Model Representation For Human's Pre-conceived Notions In Arabic Text With Applications To Sentiment Mining
Authors: Ramy Georges Baly, Gilbert Badaro, Hazem Hajj, Nizar Habash, Wassim El Hajj and Khaled Shaban
Opinion mining is becoming highly important with the availability of opinionated data on the Internet and the different applications it can be used for. Intensive efforts have been made to develop opinion mining systems, particularly for the English language. However, models for opinion mining in Arabic remain challenging due to the complexity and rich morphology of the language. Previous approaches can be categorized into supervised approaches that use linguistic features to train machine learning classifiers, and unsupervised approaches that make use of sentiment lexicons. Different features have been exploited, such as surface-based, syntactic, morphological, and semantic features. However, the semantic extraction remains shallow. In this paper, we propose to go deeper into the semantics of the text when considered for opinion mining. We propose a model that is inspired by the cognitive process that humans follow to infer sentiment, where humans rely on a database of preconceived notions developed throughout their life experiences. A key aspect of the proposed approach is to develop a semantic representation of the notions. This model consists of a combination of a set of textual representations of the notion (Ti) and a corresponding sentiment indicator (Si); thus, the pair (Ti, Si) denotes the representation of a notion. Notions can be constructed at different levels of text granularity, ranging from ideas covered by words to ideas covered in full documents; the range also includes clauses, phrases, sentences, and paragraphs. To demonstrate the use of this new semantic model of preconceived notions, we develop the full representation of one-word notions by including the following set of syntactic features for Ti: word surfaces, stems, and lemmas, represented by binary presence and TF-IDF. We also include morphological features such as part-of-speech tags, aspect, person, gender, mood, and number. As for the notion sentiment indicator Si, we create a new set of features that indicate the words' sentiment scores based on an internally developed Arabic sentiment lexicon called ArSenL, and using a third-party lexicon called Sifaat. The aforementioned features are extracted at the word level and are considered raw features. We also investigate the use of additional "engineered" features that reflect the aggregated semantics of a sentence. Such features are derived from word-level information and include the count of subjective words and the average sentiment score per sentence. Experiments are conducted on a benchmark dataset collected from the Penn Arabic Treebank (PATB), already annotated with sentiment labels. Results reveal that raw word-level features do not achieve satisfactory performance in sentiment classification. Feature reduction was also explored to evaluate the relative importance of the raw features, where the results showed low correlations between individual raw features and sentiment labels. On the other hand, the inclusion of engineered features had a significant impact on classification accuracy. The outcome of these experiments is a comprehensive set of features that reflect the one-word notion or idea representation in a human mind. The results from one-word notions also show promise towards higher-level context with multi-word notions.
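The sentence-level "engineered" features described above (count of subjective words, average sentiment score) can be sketched as follows; the tiny lexicon stands in for ArSenL/Sifaat and the whitespace tokenizer ignores Arabic morphology, so this is only an illustration of the aggregation step:

```python
# Minimal sketch of turning word-level sentiment scores into sentence-level
# "engineered" features. The lexicon and threshold are made-up stand-ins.
lexicon = {"excellent": 0.9, "good": 0.6, "bad": -0.7, "terrible": -0.9}

def sentence_features(sentence: str, subjectivity_threshold: float = 0.3):
    scores = [lexicon.get(tok.lower(), 0.0) for tok in sentence.split()]
    subjective = [s for s in scores if abs(s) >= subjectivity_threshold]
    return {
        "n_tokens": len(scores),
        "n_subjective": len(subjective),
        "avg_sentiment": sum(scores) / len(scores) if scores else 0.0,
    }

print(sentence_features("the service was excellent but the food was bad"))
```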
-
-
-
Intelligent M-Health Technology For Enhanced Smoking Cessation Management
Authors: Abdullah Alsharif and Nada Philip
Abstract: Smoking-related illnesses are costly to the NHS and a leading cause of morbidity and mortality. Pharmacological treatments including nicotine replacement, some antidepressants, and nicotine receptor partial agonists, as well as individual- and group-based behavioural approaches, can help people stop smoking. Circa 40% of smokers attempt to quit smoking each year, yet most relapse rapidly. The development of new tools acceptable to a wide range of smokers should be of particular interest. Smartphone interventions such as text messaging have shown some promise in helping people stop smoking. However, most of these studies were based on text-messaging interventions with no interactive functionality that can provide better feedback to the smoker. In addition, there is increasing evidence that smart mobile phones act as a conduit to behavioural change in other forms of healthcare. A study of currently available iPhone apps for smoking cessation has shown a low level of adherence to key guidelines for smoking cessation; few, if any, recommended or linked the user to proven treatments such as pharmacotherapy, counselling, a "quit line" or a smoking cessation program. Hence there is a need for clinical validation of the feasibility of app-based intervention in supporting smoking cessation programmes in community pharmacy settings. The goal of this study is to design and develop an m-health programme platform to support smoking cessation in a community setting. The primary objectives are ascertaining what users require from a mobile app-based smoking cessation system targeting and supporting smokers, and looking into the literature for similar solutions. The study also involves the design and development of an end-to-end smoking cessation management system based on these identified needs; this includes the Patients Hub, Cloud Hub, and Physician/Social Worker Hub, as well as the design and development of a decision support system based on data mining and an artificial intelligence algorithm. Finally, it will implement the system and evaluate it in a community setting.
-
-
-
MobiBots: Risk Assessment Of Collaborative Mobile-to-Mobile Malicious Communication
Authors: Abderrahmen Mtibaa, Hussein Alnuweiri and Khaled Harras
Cyber security is moving from traditional infrastructure threats to sophisticated mobile, infrastructure-less threats. We believe that such an imminent transition is happening at a rate far exceeding the evolution of security solutions. In fact, the transformation of mobile devices into highly capable computing platforms makes the possibility of security attacks originating from within the mobile network a reality. All recent security reports emphasize the steady increase in malicious mobile applications. Trend Micro, in its latest security report, shows that the number of malicious applications doubled in just six months to reach more than 700,000 malware samples in June 2013. This represents a major issue for today's cyber security in the world, and particularly in the Middle East. The latest Trend Micro report shows that the United Arab Emirates has "by far" the highest malicious Android application download volume worldwide. Moreover, Saudi Arabia, another Middle Eastern country, registers the highest downloads of high-risk applications. We believe that today's mobile devices are capable of initiating sophisticated cyberattacks, especially when they coordinate, forming what we call a mobile distributed botnet (MobiBot). MobiBots leverage the absence of basic mobile operating system security mechanisms and the advantages of classical botnets, which makes them a serious security threat to any machine and/or network. In addition, a MobiBot's distributed architecture (see attached figure), its communication model, and its mobility make it very hard to track, identify and isolate. While there have been many Android security studies, we find that the proposed solutions cannot be adopted in the challenging MobiBot environment due to its decentralized architecture (figure). MobiBots bring significant challenges to network security; thus, securing mobile devices by vetting malicious tasks can be considered one important first step towards MobiBot security. Motivated by the trends mentioned above, in our project we first investigate the potential for, and impact of, the large-scale infection and coordination of mobile devices. We highlight how mobile devices can leverage short-range wireless technologies in attacks against other mobile devices that come within proximity. We quantitatively measure the infection and propagation rates within MobiBots using short-range wireless technologies such as Bluetooth. We adopt an experimental approach based on a Mobile Device Cloud platform we have built as well as three real-world data traces. We show that MobiBot infection can be very fast, infecting all nodes in a network in only a few minutes. Stealing data, however, requires a longer period of time and can be done more efficiently if the botnet utilizes additional sinks. We also show that, while MobiBots are difficult to detect and isolate compared to common botnet networks, traditional prevention techniques cost at least 40% of the network capacity. We also study the scalability of MobiBots in order to understand the strengths and weaknesses of these malicious networks. We base our analysis on a dataset that consists of multiple disjoint communities, each of which is a real-world mobility trace. We show that MobiBots succeed in infecting up to 10K bots in less than 80 minutes.
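The proximity-based propagation measured in the experiments can be illustrated with a toy contact-trace simulation; the encounter list below is made up, and real traces contain vastly more encounters:

```python
# Toy simulation of proximity-based infection spreading over a contact trace,
# in the spirit of the MobiBot propagation measurements. The trace is a small
# made-up list of (time_min, device_a, device_b) encounters.
contacts = [(1, "A", "B"), (3, "B", "C"), (4, "A", "D"), (7, "C", "E"),
            (9, "D", "E"), (12, "E", "F")]

def simulate(contacts, seed="A"):
    infected = {seed: 0}                        # device -> infection time (min)
    for t, a, b in sorted(contacts):            # process encounters in time order
        if a in infected and b not in infected:
            infected[b] = t
        elif b in infected and a not in infected:
            infected[a] = t
    return infected

print(simulate(contacts))   # {'A': 0, 'B': 1, 'C': 3, 'D': 4, 'E': 7, 'F': 12}
```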
-
-
-
The Infrastructure Of Critical Infrastructure: Vulnerability And Reliability Of Complex Networks
By Martin Saint
* Background & Objectives: All critical infrastructure can be modeled as networks, or systems of nodes and connections, and many systems such as the electric grid, water supply, or telecommunications exist explicitly as networks. Infrastructures are interdependent: for instance, telecommunications depend on electric power, and control of the electric grid depends increasingly upon telecommunications, creating the possibility of a feedback loop following a disturbance. The performance of these systems under disturbance is related to their inherent network characteristics, and network architecture plays a fundamental role in reliability. What characteristics of networks affect their robustness? Could, for instance, the vulnerability of the electric grid to cascading failure be reduced?
* Methods: We create a failure model of the network where each node and connection is initially in the operative state. At the first discrete time step a network element is changed to the failed state. At subsequent time steps a rule is applied which determines the state of random network elements based upon the state of their neighbors. Depending upon the rule and the distribution of the degree of connectedness of the network elements, failures may be contained to a few nodes or connections, or may cascade until the entire network fails.
* Results: Quantitative measures from the model are the probability of network failure based upon the loss of a network element, and the expected size distribution of failure cascades. Additionally, there is a critical threshold below which infrastructure networks fail catastrophically. The electrical grid is especially vulnerable as it operates close to the stability limit, and there is a low critical threshold after which the network displays a sharp transition to a fragmented state. Failures in the electrical grid result not only in the loss of capacity in the network element itself, but also in load shifting to adjacent network elements, which contributes to further instability. While most failures are small, failure distributions are heavy-tailed, indicating occasional catastrophic failure. Many critical infrastructure networks are robust to random failure, but the existence of highly connected hubs gives them a high clustering coefficient, which makes the network vulnerable to targeted attacks.
* Conclusions: It is possible to design network architectures which are robust to two different conditions: random failure and targeted attack. It is also possible to alter the architecture to increase the critical threshold at which failed network elements cause failure of the network as a whole. Surprisingly, adding more connections or capacity sometimes reduces robustness by creating more routes for failure to propagate. Qatar is in an ideal position to analyze and improve critical infrastructure from a systemic perspective. Modeling and simulation as detailed above are readily applicable to analyzing real infrastructure networks.
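The failure model described in the Methods can be sketched as a threshold cascade on a graph, as below; the scale-free topology, threshold value and hub-targeted seed are illustrative choices rather than the study's calibrated parameters:

```python
# Minimal sketch of the failure-cascade model: all nodes start operative, one
# seed node fails, and at each step a node fails if the fraction of its failed
# neighbours reaches a threshold.
import random
import networkx as nx

def cascade(graph, seed_node, threshold=0.3, rng=random.Random(0)):
    failed = {seed_node}
    changed = True
    while changed:
        changed = False
        for node in rng.sample(list(graph.nodes), len(graph.nodes)):
            if node in failed:
                continue
            nbrs = list(graph.neighbors(node))
            if nbrs and sum(n in failed for n in nbrs) / len(nbrs) >= threshold:
                failed.add(node)
                changed = True
    return failed

g = nx.barabasi_albert_graph(200, 2, seed=1)       # hub-dominated topology
hub = max(g.degree, key=lambda kv: kv[1])[0]       # targeted attack on a hub
print(f"cascade size after hub failure: {len(cascade(g, hub))} / {g.number_of_nodes()}")
```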
-
-
-
Annotation Guidelines For Non-native Arabic Text In The Qatar Arabic Language Bank
Authors: Wajdi Zaghouani, Nizar Habash, Behrang Mohit, Abeer Heider, Alla Rozovskaya and Kemal Oflazer
The Qatar Arabic Language Bank (QALB) is a corpus of naturally written, unedited Arabic and its manually edited corrections. QALB has about 1.5 million words of text written and post-edited by native speakers. The corpus was the focus of a shared task on automatic spelling correction in the Arabic Natural Language Processing Workshop held in conjunction with the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Doha, with nine research teams from around the world competing. In this poster we discuss some of the challenges of extending QALB to include non-native Arabic text. Our overarching goal is to use QALB data to develop components for the automatic detection and correction of language errors that can be used to help Standard Arabic learners (native and non-native) improve the quality of the Arabic text they produce. The QALB annotation guidelines have focused on native-speaker text. Learners of Arabic as a second language (L2 speakers) typically have to adapt to a different script and a different vocabulary with new grammatical rules. These factors contribute to errors made by L2 speakers that are of a different nature from those produced by native speakers (L1 speakers), who are mostly affected by their dialects and their levels of education and use of Standard Arabic. Our extended L2 guidelines build on our L1 guidelines with a focus on the types of errors usually found in L2 writing and how to deal with problematic, ambiguous cases. Annotated examples are provided in the guidelines to illustrate the various annotation rules and their exceptions. As with the L1 guidelines, the L2 texts should be corrected with the minimum number of edits that produce semantically coherent (accurate) and grammatically correct (fluent) Arabic. The guidelines also devise a priority order for corrections that prefers less intrusive edits, starting with inflection, then cliticization, derivation, preposition correction, word choice correction, and finally word insertion. This project is supported by the National Priority Research Program (NPRP grant 4-1058-1-168) of the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
-
-
-
Truc: Towards Trusted Communication For Emergency Scenarios In Vehicular Adhoc Networks (vanets) Against Illusion Attack
Authors: Maria Elsa Mathew and Arun Raj Kumar P.
With data proliferating at an unprecedented rate, the need for data accessibility while in motion has increased the demand for VANETs of late. VANETs use moving cars as nodes to create a mobile network. VANETs are designed with the goals of enhancing driver safety and providing passenger comfort. Providing security to VANETs is important in terms of providing user anonymity, authentication, integrity and privacy of data. Attacks in VANETs include the Sybil attack, DDoS attack, misbehaving and faulty nodes, sinkhole attack, spoofing, traffic analysis attack, position attack and illusion attack. The illusion attack is a recent threat in which the attacker generates fraudulent traffic messages to mislead passengers, thereby changing their driving behaviour. Thus, the attacker achieves his goal by moving the target vehicle along a traffic-free route and creating traffic jams in areas where he wishes. The illusion attack is devised mainly by thieves and terrorists who require a clean getaway path. The existing method used to prevent the illusion attack is the Plausibility Validation Network (PVN). In PVN, message validation is based on a set of rules depending on the message type. This is an overhead, as the rule sets for all possible message types have to be stored and updated in the database. Hence, an efficient mechanism is required to prevent illusion attacks. In this paper, our proposed system, TRUC, verifies the message using a Message Content Validation (MCV) algorithm, thus ensuring the safety of drivers and passengers. The possibilities of an attacker creating an illusion attack are explored in all dimensions, and the security goals are analyzed for our proposed design, TRUC.
-
-
-
Software-hardware Co-design Approach For Gas Identification
Authors: Amine Ait Si Ali, Abbes Amira, Faycal Bensaali, Mohieddine Benammar, Muhammad Hassan and Amine Bermak
Gas detection is one of the major processes that has to be taken into consideration as an important part of a monitoring system for the production and distribution of gases. Gas is a critical resource; therefore, for safety reasons, it is imperative to monitor in real time all parameters such as temperature, concentration and mixture. The research presented in this abstract on gas identification is part of an ongoing project aiming at the development of a low-power, reconfigurable, self-calibrated multi-sensing platform for gas applications. Gas identification can be described as a pattern recognition problem. The decision tree classifier is a widely used classification technique in data mining due to its low implementation complexity; it is a supervised learning method consisting of a succession of splits that leads to the identification of predefined classes. The decision tree algorithm has been applied and evaluated for the hardware implementation of a gas identification system. The data used for training are collected from a 16-element SnO2 gas sensor array; the array is exposed to three types of gases (CO, ethanol and H2) at ten different concentrations (20, 40, 60, 80, 100, 120, 140, 160, 180 and 200 ppm), and the experiment is repeated twice to generate 30 patterns for training and another 30 patterns for testing. Training is performed in MATLAB. It is first done using the raw data (the steady states), and then using data transformed by principal component analysis. Table 1 shows the decision tree training results. These include the trees obtained from learning on the original data and on different combinations of principal components. The resulting models are implemented in C and synthesised using the Vivado High Level Synthesis (HLS) tool for quick prototyping on the heterogeneous Zynq platform. Table 2 illustrates the on-chip resource usage, maximum running frequency and latency for the implementation of the trained decision tree models. The use of Vivado HLS helped optimise the hardware design by applying different directives, such as the one that allows loop unrolling for better parallelism. The best performance was obtained when the first three principal components were used for training, which resulted in an accuracy of 90%. The hardware implementation showed that a trade-off has to be found between the accuracy of the identification and the performance in terms of processing time. It is planned to use drift compensation techniques to obtain more accurate steady states and increase the performance of the system. The system can be easily adapted to other types of gases by exposing the new gas to the sensor array, collecting the data, performing the training and finally implementing the model.
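The software side of the flow (PCA followed by decision-tree training) can be sketched with scikit-learn as below; a small synthetic dataset stands in for the 16-element SnO2 array responses, and the on-chip version is generated separately through Vivado HLS:

```python
# Minimal sketch of the training flow: PCA on steady-state sensor responses
# followed by decision-tree training. Synthetic data stand in for the real
# 16-element array measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_per_gas, n_sensors = 10, 16
X = np.vstack([rng.normal(loc=mu, scale=0.3, size=(n_per_gas, n_sensors))
               for mu in (1.0, 2.0, 3.0)])          # CO, ethanol, H2 stand-ins
y = np.repeat(["CO", "Ethanol", "H2"], n_per_gas)

model = make_pipeline(PCA(n_components=3), DecisionTreeClassifier(random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```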
-
-
-
E-government Alerts Correlation Model
Authors: Aadil Salim Al-mahrouqi, Sameh Abdalla and Tahar Kechadi
Background & Objectives: Qatar's IT infrastructure is rapidly growing to encompass the evolution of business and the economic growth the country is witnessing throughout its industries. It is now evident that the country's e-government requirements and associated data management systems are becoming large in number, highly dynamic in nature, and exceptionally attractive for cybercrime activities. Protecting the sensitive data that e-government portals rely on for daily activities is not a trivial task. The techniques used to perform cybercrimes are becoming sophisticated relative to the firewalls protecting against them. Reaching a high level of data protection, in both wired and wireless networks, in order to face recent cybercrime approaches is a challenge that has continuously proven hard to achieve. In a common IT infrastructure, the deployed network devices contain a number of event logs that reside locally within their memory. These logs are large in number, and therefore analyzing them is a time-consuming task for network administrators. In addition, a single network event often generates many redundant, similar event logs belonging to the same class within short time intervals, which makes them difficult to manage during a forensic investigation. In most cybercrime cases, a single alert log does not contain sufficient information about the background of malicious actions and hidden network attackers; the information for a particular malicious action or attacker is often distributed among multiple alert logs and multiple network devices. The forensic investigator's mission of reconstructing incident scenarios is now very complex considering the number as well as the quality of these event logs. Methods: My research focuses on the mathematics and algorithms underlying each proposed sub-model of the alert correlation model. Alert logs are first collected from network sensors and stored in an alert log warehouse. The stored alert logs contain redundant data and irrelevant information, which the alert correlation model filters out. This model contains two stages: format standardization and redundancy management. The format standardization process aims to unify different event log formats into one format, while the redundancy management process aims to reduce the duplication of a single event. Furthermore, this research will try to utilize criminology to enhance the security level of the proposed model, and forensic experimentation tools to validate the proposed approach. Results: In response to attacks, and the potential for attacks, against network infrastructure and assets, my research focuses on how to build an organized legislative e-government environment. The idea of this approach is to forensically utilize the current network security output by collecting, analyzing and presenting evidence of network attacks in an efficient manner. After the data mining process, we can utilize our preprocessing results for e-government awareness purposes. Conclusions: This research proposes a Qatar e-government alert correlation model, used to process and normalize captured network event logs. The main point of designing the model is to find a way to forensically visualize the evidence and attack scenarios in the e-government infrastructure.
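The two stages of the correlation model, format standardization and redundancy management, can be sketched as follows; the log formats, field names and 60-second window are illustrative assumptions:

```python
# Minimal sketch of the two correlation stages: (1) standardize heterogeneous
# event logs into one schema, and (2) collapse repeated alerts of the same
# class from the same source within a short time window.
from datetime import datetime, timedelta

def standardize(raw: dict) -> dict:
    """Map device-specific field names onto one common alert schema."""
    return {
        "time": datetime.fromisoformat(raw.get("ts") or raw.get("timestamp")),
        "source": raw.get("src") or raw.get("source_ip"),
        "class": (raw.get("sig") or raw.get("event_type")).lower(),
    }

def deduplicate(alerts, window=timedelta(seconds=60)):
    """Keep one representative per (source, class) within the time window."""
    alerts, kept, last_seen = sorted(alerts, key=lambda a: a["time"]), [], {}
    for a in alerts:
        key = (a["source"], a["class"])
        if key not in last_seen or a["time"] - last_seen[key] > window:
            kept.append(a)
        last_seen[key] = a["time"]
    return kept

raw_logs = [
    {"ts": "2014-11-18T10:00:00", "src": "10.0.0.5", "sig": "PORT_SCAN"},
    {"timestamp": "2014-11-18T10:00:20", "source_ip": "10.0.0.5", "event_type": "port_scan"},
    {"ts": "2014-11-18T10:05:00", "src": "10.0.0.5", "sig": "PORT_SCAN"},
]
print(deduplicate([standardize(r) for r in raw_logs]))   # two alerts remain
```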
-
-
-
First Hybrid 1gbps/0.1 Gbps Free-space Optical /rf System Deployment And Testing In The State Of Qatar
Authors: Syed Jawad Hussain, Abir Touati, Mohammad Elamri, Hossein Kazemi, Farid Touati and Murat Uysal
I. BACKGROUND & OBJECTIVES: Owing to its high bandwidth, robustness to EMI, and operation in unregulated spectrum, free-space optical communication (FSO) is uniquely qualified as a promising alternative or complementary technology to fiber optic and wireless radio-frequency (RF) links. Despite the clear advantages of FSO technology and the variety of its applications, its widespread adoption has been hampered by rather disappointing reliability for long-range links due to atmospheric turbulence-induced fading and sensitivity to detrimental climate conditions. A major challenge of such hybrid systems is to provide a strong backup system with soft-switching capabilities when the FSO link goes down. The specific objective of this work is to study, for the first time in Qatar and the GCC, the link capacity, link availability, and link outage of an FSO system with RF backup (i.e. hybrid FSO/RF) under a harsh environment.
II. METHODS: In this work, a practical demonstration of a hybrid FSO/RF link system is presented. The system has a capacity of 1 Gbps and 100 Mbps for FSO and RF, respectively. It is installed at Qatar University on two different buildings 600 m apart and 20 feet high. This system is basically a point-to-point optical link that uses infrared laser light to wirelessly transmit data. Moreover, the proposed system has the capability of parallel transmission over both links. In order to analyze the two transport media, we used the tool iperf. Its Java-based GUI (jperf) can act as either a server or a client and is available on a variety of platforms. We tested end-to-end throughput by running iperf in server mode on one laptop and in client mode on another.
III. RESULTS: Figure 1 shows a block diagram of the system used. Initial results were obtained for the two links under the same climatic and environmental conditions, with an average ambient temperature reaching 50°C and RH above 80% (July-August 2014). Both FSO and RF links allowed transfer rates of around 80% of their full capacity. During all experiments in which both links ran simultaneously, there was no FSO link failure. In case of an FSO failure, the RF link is expected to take over within 2 seconds (hard switching), which might cause a loss of data. Detailed results on FSO-to-RF switching and the induced packet loss will be reported in the full manuscript and during the presentation.
IV. CONCLUSION: Tests on a hybrid FSO/RF link have been carried out for the first time in Qatar. Initial results showed that both FSO and RF links operated close to their capacity. During summer, Qatari weather did not induce any FSO link outages. The team is focusing on developing a seamless FSO-RF soft-switching mechanism using NetFPGA boards and raptor coding.
-
-
-
Wigest: A Ubiquitous Wifi-based Gesture Recognition System
Authors: Heba Abdelnasser, Khaled Harras and Moustafa Youssef
Motivated by freeing the user from specialized devices and leveraging natural and contextually relevant human movements, gesture recognition systems are becoming popular as a fundamental approach for providing HCI alternatives. Indeed, there is a rising trend in the adoption of gesture recognition systems in various consumer electronics and mobile devices. These systems, along with research enhancing them by exploiting the wide range of sensors available on such devices, generally adopt various techniques for recognizing gestures, including computer vision, inertial sensors, ultrasonics, and infrared. While promising, these techniques suffer various limitations, such as being tailored for specific applications, sensitivity to lighting, high installation and instrumentation overhead, requiring the mobile device to be held, and/or requiring additional sensors to be worn or installed. We present WiGest, a ubiquitous WiFi-based hand gesture recognition system for controlling applications running on off-the-shelf WiFi-equipped devices. WiGest does not require additional sensors, is resilient to changes within the environment, and can operate in non-line-of-sight scenarios. The basic idea is to leverage the effect of in-air hand motion on the wireless signal strength received by the device from an access point to recognize the performed gesture. As shown in Figure 1, WiGest parses combinations of signal primitives along with other parameters, such as the speed and magnitude of each primitive, to detect various gestures, which can then be mapped to distinguishable application actions. There are several challenges we address in WiGest, including handling noisy RSSI values due to multipath interference and other electromagnetic noise in the wireless medium; handling gesture variations and their attributes for different humans or the same human at different times; handling interference due to the motion of other people within proximity of the user's device; and finally being energy efficient to suit mobile devices. To address these challenges, WiGest leverages different signal processing techniques that can preserve signal details while filtering out the noise and variations in the signal. We implement WiGest on off-the-shelf laptops, parsing the effect of in-air hand motion on the RSSI as a signal composed of three primitives: rising edge, falling edge, and pause. We evaluate its performance with different users in apartment and engineering-building settings. Various realistic scenarios are tested, covering more than 1000 primitive actions and gestures each, in the presence of interfering users in the same room as well as other people moving on the same floor during their daily life. Our results show that WiGest can detect the basic primitives with an accuracy of 90% using a single AP for distances reaching 26 ft, including through-the-wall non-line-of-sight scenarios. This increases to 96% using three overheard APs, a typical case for many WiFi deployments. When evaluating the system using a multimedia player application case study, we achieve a classification accuracy of 96%.
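A toy version of the primitive-extraction step is sketched below: smooth the RSSI trace, then label segments as rising edge, falling edge or pause from the slope. The thresholds and synthetic trace are illustrative and not the system's actual signal-processing pipeline:

```python
# Toy detection of WiGest-style primitives (rising edge, falling edge, pause)
# from an RSSI trace: moving-average denoising followed by slope labelling.
import numpy as np

def primitives(rssi, win=5, slope_thresh=0.4):
    smooth = np.convolve(rssi, np.ones(win) / win, mode="valid")   # denoise
    labels = []
    for s in np.diff(smooth):
        if s > slope_thresh:
            labels.append("rising")
        elif s < -slope_thresh:
            labels.append("falling")
        else:
            labels.append("pause")
    return labels

trace = np.concatenate([np.full(10, -50.0),            # hand still
                        np.linspace(-50, -40, 8),      # hand moving toward device
                        np.full(10, -40.0),            # pause
                        np.linspace(-40, -50, 8)])     # hand moving away
noise = np.random.default_rng(0).normal(0, 0.2, trace.size)
print(primitives(trace + noise))
```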
-
-
-
Physical Layer Security For Communications Through Compound Channels
Authors: Volkan Dedeoglu and Joseph Boutros
Secure communications is one of the key challenges in the field of information security, as the transmission of information between legitimate users is vulnerable to interception by illegitimate listeners. State-of-the-art secure communication schemes employ cryptographic encryption methods. However, the use of cryptographic encryption methods requires the generation, distribution and management of keys to encrypt the confidential message. Recently, physical layer security schemes that exploit the difference between the channel conditions of the legitimate users and the illegitimate listeners have been proposed for enhanced communication security. We propose novel coding schemes for the secure transmission of messages over compound channels that provide another level of security at the physical layer, on top of the existing cryptographic security mechanisms in the application layer. Our aim is to provide secrecy against illegitimate listeners while still offering good communication performance for legitimate users. We consider the transmission of messages over compound channels, where there are multiple parallel communication links between the legitimate users, and an illegitimate listener intercepts one of the communication links, which one being unknown to the legitimate users. We propose a special source splitter structure and a new family of low-density parity-check code ensembles to achieve secure communications against an illegitimate listener and provide error correction capability for the legitimate listener. First, the source bit sequence is split into multiple bit sequences by using a source splitter. The source splitter is required to make sure that the illegitimate listener does not have access to the secret message bits directly. Then, a special error correction code is applied to the bit sequences, which are the outputs of the source splitter. The error correction code is based on a special parity check matrix composed of subblocks with specific degree distributions. We show that the proposed communication schemes can provide algebraic and information-theoretic security. Algebraic security means that the illegitimate listener is unable to solve for any of the individual binary bits of the secret message. Furthermore, information-theoretic security guarantees the highest level of secrecy by revealing no information to the illegitimate listener about the secret message. The error correction encoder produces multiple codewords to be sent on parallel links. Having access to the noisy outputs of the parallel links, the legitimate receiver recovers the secret message. The finite-length performance analysis of the proposed secure communication scheme for the legitimate listener shows good results in terms of the bit error rate and the frame error rate over the binary-input additive white Gaussian noise channel. The asymptotic performance analysis of our scheme for a sufficiently large block length is found via the density evolution equations. Since the proposed low-density parity-check code is a multi-edge-type code on graphs, there are two densities that characterize the system performance. The thresholds obtained by the density evolution equations of our scheme show comparable or improved results when compared to fully random low-density parity-check codes.
-
-
-
Sparsity-aware Multiple Relay Selection In Large Decode-and-forward Relay Networks
Authors: Ala Gouissem, Ridha Hamila, Naofal Al-dhahir and Sebti Foufou
Cooperative communication is a promising technology that has attracted significant attention recently thanks to its ability to achieve spatial diversity in wireless networks with only single-antenna nodes. The different nodes of a cooperative system can share their resources so that a virtual Multiple Input Multiple Output (MIMO) system is created, which leads to spatial diversity gains. To exploit this diversity, a variety of cooperative protocols have been proposed in the literature under different design criteria and channel information availability assumptions. Among these protocols, two of the most widely used are the amplify-and-forward (AF) and decode-and-forward (DF) protocols. However, in large-scale relay networks, the relay selection process becomes highly complex. In fact, in many applications such as device-to-device (D2D) communication networks and wireless sensor networks, a large number of cooperating nodes are used, which leads to a dramatic increase in the complexity of the relay selection process. To solve this problem, the sparsity of the relay selection vector has been exploited to reduce the multiple relay selection complexity for large AF cooperative networks while also improving the bit error rate performance. In this work, we extend the study from AF to large-scale decode-and-forward (DF) relay networks. Based on exploiting the sparsity of the relay selection vector, we propose and compare two different techniques (referred to as T1 and T2) that aim to improve the performance of multiple relay selection in large-scale decode-and-forward relay networks. In fact, when only a few relays are selected from a large number of relays, the relay selection vector becomes sparse. Hence, utilizing recent advances in sparse signal recovery theory, we propose to use different signal recovery algorithms, such as Orthogonal Matching Pursuit (OMP), to solve the relay selection problem. Our theoretical and simulated results demonstrate that our two proposed sparsity-aware relay selection techniques are able to improve the outage performance and reduce the computational complexity at the same time compared with the conventional exhaustive search (ES) technique. In fact, compared to the ES technique, T1 reduces the selection complexity by O(K^2 N) (where N is the number of relays and K is the number of selected relays) while outperforming it in terms of outage probability irrespective of the relays' positions. Technique T2 yields a higher outage probability than T1 but further reduces the complexity, making a compromise between complexity and outage performance. The best selection threshold for T2 is also theoretically calculated and validated by simulations, which enabled T2 to also improve the outage probability compared with the ES technique. Acknowledgment: This publication was made possible by NPRP grant #6-070-2-024 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
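A minimal Orthogonal Matching Pursuit sketch is given below to illustrate the sparse-recovery step that such relay-selection techniques build on; the dimensions and the measurement matrix are arbitrary, and the actual techniques T1 and T2 involve channel-specific formulations not shown here:

```python
# Minimal Orthogonal Matching Pursuit (OMP): recover a K-sparse selection
# vector from linear measurements. Dimensions are arbitrary examples.
import numpy as np

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
N, M, K = 100, 40, 3                      # relays, measurements, selected relays
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = 1.0
x_hat = omp(A, A @ x_true, K)
print("selected relays:", np.flatnonzero(x_hat > 0.5))
```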
-
-
-
Inconsistencies Detection In Islamic Texts Of Law Interpretations ["fatawas"]
Authors: Jameela Al-otaibi, Samir Elloumi, Abdelaali Hassaine and Ali Mohamed Jaoua
Islamic web content offers a very convenient way for people to learn more about the Islamic religion and the correct practices. For instance, via these websites they can ask for fatwas (Islamic advisory opinions) with more ease and serenity. Given the sensitivity of the subject, large communities of researchers are working on the evaluation of these websites according to several criteria. In particular, there is a huge effort to check the consistency of the content with respect to the Islamic sharia (Islamic law). In this work we propose a semi-automatic approach for evaluating Islamic web content in terms of inconsistency detection, composed of the following steps: (i) Domain selection and definition: this consists of identifying the most relevant named entities related to the selected domain as well as their corresponding values or keywords (NEVs). At this stage, we have started building the fatwa ontology by analyzing around 100 fatwas extracted from the online system. (ii) Formal representation of the Islamic content: this consists of representing the content as a formal context relating fatwas to NEVs. Here, each named entity is split into different attributes in the database, where each attribute is associated with a possible instantiation of the named entity. (iii) Rule extraction: by applying the ConImp tool, we extract a set of implications (or rules) reflecting cause-effect relations between NEVs. As an extended option aiming to provide a more precise analysis, we have proposed the inclusion of negative attributes. For example, with the word "licit" we may associate "not licit" or "forbidden"; with the word "recommended" we associate "not recommended", etc. At this stage, by using an extension of the Galois connection, we are able to find the different logical associations in a minimal way using the same tool, ConImp. (iv) Conceptual reasoning: the objective is to detect possible inconsistencies between the rules and evaluate their relevance. Each rule is mapped to a binary table in a relational database model; by joining the obtained tables we are able to detect inconsistencies. We may also check whether a new law contradicts the existing set of laws by mapping the law into a logical expression: by creating a new table corresponding to its negation, we can automatically establish its consistency as soon as the join of the total set of tables is empty. This preliminary study showed that the logical representation of fatwas gives promising results in detecting inconsistencies within the fatwa ontology. Future work includes using automatic named entity extraction and automatic transformation of laws into a formatted database, with which we should be able to build a global inconsistency detection system for the domain. ACKNOWLEDGMENT: This publication was made possible by a grant from the Qatar National Research Fund through National Priority Research Program (NPRP) No. 06-1220-1-233. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the Qatar National Research Fund or Qatar University.
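The consistency-checking idea in step (iv) can be illustrated, in a simplified propositional form rather than the ConImp/relational-join machinery, by testing whether the negation of a new statement can be satisfied together with the extracted rules; the attribute names and the single rule below are abstract placeholders:

```python
# Illustrative propositional version of the reasoning step: rules are
# implications over named-entity values; a new statement is entailed (and so
# cannot contradict the rules) when its negation is unsatisfiable with them.
from itertools import product

ATTRS = ["condition_A", "ruling_B"]
rules = [lambda v: (not v["condition_A"]) or v["ruling_B"]]   # A -> B

def satisfiable(constraints):
    for values in product([False, True], repeat=len(ATTRS)):
        v = dict(zip(ATTRS, values))
        if all(c(v) for c in constraints):
            return True
    return False

new_statement_negation = lambda v: v["condition_A"] and not v["ruling_B"]

print("rules consistent:", satisfiable(rules))                           # True
print("statement entailed:", not satisfiable(rules + [new_statement_negation]))  # True
```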
-
-
-
A Low Power Reconfigurable Multi-sensing Platform For Gas Application
The presence of toxic gases and accidental explosions in the gas industry have led researchers to develop electronic nose systems that can indicate the nature and the parameters of the gas passing through different vessels. Therefore, in this research we propose a low-power Radio Frequency Identification (RFID) based gas sensor tag which can monitor the parameters and indicate the type of gas. The research work is divided into three main parts. The first two parts cover the design and analysis of the low-power multi-sensors and the processing unit, while the last part focuses on a passive RFID module which provides communication between the sensors and the processing unit, as shown in Fig. 1. In passive RFID applications, power consumption is one of the most critical parameters because most of the power is harvested from the incoming RF signal. Therefore, a ring-oscillator-based low-power temperature sensor is designed to measure the gas thermodynamic conditions. The oscillator is designed using a thyristor-based delay element [7], in which the current source used for temperature compensation has been displaced to make the delay element temperature dependent. The proposed temperature sensor consumes 47 nW at 27 °C, a figure that increases linearly with temperature. Moreover, a 4x4 array of tin-oxide gas sensors based on convex micro-hotplates (MHPs) is also utilized to identify the type of gas. The array is designed such that each sensor of the array provides a different response pattern for the same gas. The power consumption of the temperature and gas sensors is on the order of a few µW. The prime advantage of the MHP is illustrated by the 950 °C annealed MHP, which exhibits a thermal efficiency of 13 °C/mW. Moreover, it requires a driving voltage of only 2.8 V to reach 300 °C in less than 5 ms, which makes it compatible with the power supplies used by CMOS ICs. The gas sensor array provides 16 feature points at a time, which can result in hardware complexity and throughput degradation in the processing unit. Therefore, a principal component analysis (PCA) algorithm is implemented to reduce the number of feature points. Thereafter, a binary decision tree algorithm is adopted to classify the gases. We implemented both algorithms on the heterogeneous Zynq platform. It is observed that the execution of PCA on the Zynq programmable SoC is 1.41 times faster than the corresponding software execution, with a resource utilization of only 23%. Finally, a passive ultrahigh-frequency (UHF) RFID transponder is developed for communication between the sensing block and the processing unit. The designed module is responsible for harvesting power from the incoming RF signal and meeting the power requirements of both sensors. The designed transponder IC achieves a minimum sensitivity of -17 dBm with a minimum operational power of 2.6 µW.
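The processing chain described here, PCA for feature reduction followed by a decision tree for gas classification, can be sketched in a few lines. The snippet below uses scikit-learn on randomly generated data standing in for the 16 feature points of the 4x4 sensor array; the real implementation targets the Zynq SoC, so this is only an algorithmic illustration under assumed data shapes.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# hypothetical responses of the 4x4 sensor array: 16 feature points per reading
X = rng.standard_normal((200, 16))
y = rng.integers(0, 3, size=200)          # three hypothetical gas classes

# reduce the 16 features to a few principal components, then classify with a tree
model = make_pipeline(PCA(n_components=3), DecisionTreeClassifier(max_depth=4))
model.fit(X, y)
print(model.predict(X[:5]))
```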
-
-
-
Utilizing Monolingual Gulf Arabic Data For Qatari Arabic-english Statistical Machine Translation
Authors: Kamla Al-mannai, Hassan Sajjad, Alaa Khader, Fahad Al Obaidli, Preslav Nakov and Stephan Vogel
With the recent rise of social media, Arabic speakers have started increasingly using dialects in writing, which has established dialectal Arabic (DA) as a field of interest in natural language processing (NLP). DA NLP is still in its infancy, both in terms of computational resources and tools, e.g. the lack of dialectal morphological segmentation tools. In this work, we present a 2.7M-token collection of monolingual corpora of Gulf Arabic extracted from the Web. The data is unique since it is genre-specific (the romance genre) despite the various sub-dialects of Gulf Arabic that it covers, e.g., Qatari, Emirati and Saudi. In addition to the monolingual Qatari data collected, we use existing parallel corpora of Qatari (0.47M-token), Egyptian (0.3M-token), Levantine (1.2M-token) and Modern Standard Arabic (MSA) (3.5M-token) to English to develop a Qatari Arabic to English statistical machine translation system (QA-EN SMT). We exploit the monolingual data to 1) develop a morphological segmentation tool for Qatari Arabic, 2) generate a uniform segmentation scheme for the various variants of Arabic employed, and 3) build a Qatari language model for the opposite translation direction. Proper morphological segmentation of Arabic plays a vital role in the quality of an SMT system. Using the monolingual Qatari data collected in combination with the QA side of the small existing QA-EN parallel data, we trained an unsupervised morphological segmentation model, Morfessor, to create a word segmenter for Qatari Arabic. We then extrinsically compare the impact of the resulting segmentation (as opposed to using tools for MSA) on the quality of QA-EN machine translation. The results show that this unsupervised segmentation can yield better translation quality. Unsurprisingly, we found that removing the monolingual data from the training set of the segmenter degrades the translation quality by 0.9 BLEU points. Arabic dialect resources, when adapted for the translation of one dialect, are generally helpful in achieving better translation quality. We show that a standard segmentation scheme can improve vocabulary overlap between dialects by segmenting words with different morphological forms in different dialects to a common root form. We train a generic segmentation model for Qatari Arabic and the other variants using the monolingual Qatari data and the Arabic side of the parallel corpora. We train the QA-EN SMT system using the different parallel corpora (one at a time) in addition to the QA-EN parallel corpus segmented using the generic statistical segmenter. We show a consistent improvement of 1.5 BLEU points compared with the respective baselines with no segmentation. In the reverse translation direction, i.e. EN-QA, we show that adding a small amount of in-domain data to the language model results in a relatively large improvement, in contrast to the degradation caused by adding a large amount of out-of-domain data.
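As an illustration of the segmentation step, and assuming the Morfessor 2.0 Python API, training and applying the unsupervised segmenter might look like the following sketch; the corpus file name and the sample word are placeholders, not the actual data used in the paper.

```python
# sketch assuming the Morfessor 2.0 Python package; paths and the sample word are placeholders
import morfessor

io = morfessor.MorfessorIO()
# training data: the monolingual Gulf Arabic corpus plus the QA side of the parallel data
train_data = list(io.read_corpus_file("qa_monolingual_plus_parallel.txt"))

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()                              # unsupervised segmentation training

# segment a word before the corpus is fed to the SMT training pipeline
segments, _cost = model.viterbi_segment("walmaktaba")
print(" ".join(segments))
```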
-
-
-
Securing The E-infrastructure In Qatar Through Malware Inspired Cloud Self-protection
Authors: Elhadj Benkhelifa and Thomas Welsh
Whilst the state of security within the Cloud is still a contentious issue, some privacy and security issues are well known or deemed to be a likely threat. When considering the ongoing threat of malicious insiders, the security expertise promised by providers might itself be deemed untrustworthy. The focus of our research is determining the extent of issues related to the underlying technology which supports Cloud environments, mainly virtualization platforms. It is often argued that virtualization is more secure than conventional shared resources due to its inherent isolation. However, much of the literature cites examples to the contrary, and as such it should be recognized that, as with all software, virtualization applications are susceptible to exploitation and subversion. In fact, it might be argued that the complexity and heterogeneous nature of the environment may even facilitate further security vulnerabilities. To illustrate and investigate this point we consider the security threat of malware within the context of cloud environments. Given the evolution of malware, combined with the knowledge that Cloud software is susceptible to vulnerabilities, we argue that complex malware might exist for the Cloud and, if it were successful, it would shed light on the security of these technologies. Whilst there are many examples of state-of-the-art malware detection and protection for Cloud environments, such work tends to focus on examining virtual machines (VMs) from another layer. The primary flaw identified in all of the current approaches is the failure to take into account malware that is aware of the Cloud environment and is thus in a position to subvert the detection process. Traditional malware security applications tend to take a defensive approach by looking for existing malware through signature analysis or behavior monitoring. Whilst such approaches are acceptable for traditional environments, they become less effective for distributed and dynamic ones. We argue that, due to the dynamic nature of the Cloud as well as its uncertain security posture, a malware-like application may be a suitable security defense and thus operate as a proactive, self-protecting element. We present an architecture for Multi-Agent Cloud-Aware Self-Propagating Agents for Self-Protection. By adapting this architecture to include constraints (such as a kill switch), the application may be effectively controlled and thus any negative effects minimized. This application will then cross the multiple layers within the network, having high privilege. Its dynamic and distributed architecture will allow it to survive removal attempts by malware whilst hunting down malicious agents and patching systems as necessary. In order to survive in the hostile and dynamic cloud environment, the software incorporates a multi-component and multi-agent architecture, which has shown success in the past with malware that propagates in heterogeneous environments. The components consist of passive and active sensors to learn about the environment, distributed storage to provide redundancy, and controller/constructor agents for localized coordination. The proposed architecture has been implemented successfully and the desired results were achieved. The research outputs hold significant potential, particularly for complex and highly dynamic infrastructures such as those envisaged for DigitalQatar.
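Purely as a toy illustration of the kill-switch constraint and the patch-and-survive loop, not the proposed multi-agent architecture itself, a self-protecting agent could be sketched as follows; the class name, node statuses, and shared-dictionary "environment" are invented for the example.

```python
import threading, time

class SelfProtectionAgent(threading.Thread):
    """Toy agent: periodically scans a shared 'environment' and patches entries
    flagged as malicious. A shared kill switch bounds its behaviour."""
    def __init__(self, name, environment, kill_switch):
        super().__init__(name=name, daemon=True)
        self.environment = environment
        self.kill_switch = kill_switch

    def run(self):
        while not self.kill_switch.is_set():
            for node, status in self.environment.items():
                if status == "malicious":
                    self.environment[node] = "patched"   # stand-in for remediation
            time.sleep(0.1)

environment = {"vm1": "clean", "vm2": "malicious", "vm3": "clean"}
kill_switch = threading.Event()
agents = [SelfProtectionAgent(f"agent{i}", environment, kill_switch) for i in range(2)]
for a in agents:
    a.start()
time.sleep(0.5)
kill_switch.set()          # constraint: operator-controlled stop, as discussed above
print(environment)
```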
-
-
-
Watch Health (w-health)
Smart watches have been available for quite some time now. However, with the announcement of the "Apple Watch" by Apple Inc., a strong buzz about smart watches has been created. The highly anticipated success of the Apple Watch can also be linked to an expected increase in the smart watch market in the very near future. Apart from Apple, other big companies such as Sony, Motorola and Nike have their own brands of smart watches. Therefore, strong market competition will arise, leading to competitive prices, technologies and designs, which will likely increase the popularity of smart watches. Following the recent announcement of the Apple Watch, several online and newspaper articles have suggested that its most important application would be in the field of healthcare. This is also backed by the features available in the Apple Watch, which include GPS tracking, a gyroscope, an accelerometer, a pulse monitor, a calorie counter, an activity tracker, the Siri voice assistant and a host of other applications. Further, the Apple Watch is backed by powerful operating systems and hardware processors. This buzz about smart watches raises one question: how effectively can smart watches be used to provide healthcare solutions? The use of smart devices for healthcare services has been a topic of extensive research for the last decade, which has resulted in several healthcare solutions for disease management and patient monitoring, especially for chronic lifelong diseases and the elderly. With the emergence of smart watches, it is now time to further explore the possibility of using them for healthcare services. Some of the advantages of smart watches for healthcare services are:
- They are easily wearable and portable and can almost be a part of everyday attire, similar to regular watches.
- They are relatively cheaper than other smart devices such as smart mobile phones and similar gadgets.
- With advances in hardware and software technologies, they are now as powerful as a high-end smart phone and can host several types of applications.
- They can be adapted and customised to support various kinds of disease diagnosis and management according to individual patient needs.
- They can include several sensors and also provide a platform for software applications for patient health monitoring and reporting.
- With voice-based applications such as Siri, patients who have difficulty using modern gadgets or reading and writing can also use the device more easily.
There is no doubt that the Apple Watch and other smart watches not only provide numerous possibilities for adapting and implementing existing smart healthcare solutions, but also a new platform for developing novel healthcare solutions. Research and development in current mobile-health solutions should now also turn towards smart-watch-based healthcare solutions. Hence, watch health (w-health) is a division of electronic health practice, defined as "a medical practice supported by the use of smart watch technology, which includes smart watches, smart mobile phones, wireless devices, patient monitoring devices and many others".
-
-
-
An Enhanced Locking Range 10.5-ghz Divide-by-3 Injection-locked Frequency Divider With Even-harmonic Phase Tuning In 0.18-um Bicmos
Authors: Sanghun Lee, Sunhwan Jang and Cam Nguyen
A frequency divider is one of the key building blocks in phase-locked loops (PLLs) and frequency synthesizers, which are essential subsystems in wireless communication and sensing systems. A frequency divider divides the output frequency of a voltage-controlled oscillator (VCO) in order to compare it with a reference clock signal. Among the different types of frequency dividers, the injection-locked frequency divider (ILFD) is becoming more popular due to its low power and high frequency characteristics. However, the ILFD has an inherent problem of narrow locking range, which defines the range over which a frequency-division operation is supported. One of the most obvious ways to increase the locking range is to inject higher power. In fully integrated PLLs, however, the injection signal is supplied by an internal VCO, which typically has limited, fixed output power; hence, enhancing the locking range through a large injection signal power is difficult. In this work, we present the development of a fully integrated 10.5-GHz divide-by-3 (1/3) injection-locked frequency divider (ILFD) that can provide extra locking range with a small, fixed injection power. The ILFD consists of a previously measured on-chip 10.5-GHz VCO functioning as an injection source, a 1/3 ILFD core, and an output inverter buffer. A phase tuner implemented using an asymmetric inductor is proposed to increase the locking range through even-harmonic (the 2nd harmonic in this design) phase tuning. With a fixed internal injection signal power of only -18 dBm (the measured output power of the standalone VCO with a 50-Ω reference), a 25% enhancement in the locking range, from 12 to 15 MHz, is achieved with the proposed phase tuning technique without consuming additional DC power. The frequency tuning range of the integrated 1/3 ILFD is from 3.3 GHz to 4.2 GHz. The proposed 1/3 ILFD is realized in a 0.18-µm BiCMOS process, occupies 0.6 mm × 0.7 mm, and consumes 10.6 mA, while the ILFD alone consumes 6.15 mA from a 1.8-V supply. The main objective of this work is to propose a new even-harmonic phase-tuning technique that can further increase the locking range beyond what can be achieved by other techniques. Since the developed technique enhances the locking range at a fixed injection power, it can be used in conjunction with other techniques for further enhancing the locking range. For instance, the locking range can be increased by using different injection powers and then further enhanced by tuning the phase of the even harmonics at each power level. The extra locking range, not achievable without the even-harmonic phase tuning, amounts to 25%, which is very attractive for PLL applications. Furthermore, additional tuning mechanisms, such as the use of a capacitor bank, can be employed to achieve an even wider tuning range for applications such as PLLs.
-
-
-
Experimental Study Of Mppt Algorithms For Pv Solar Water Pumping Applications
By Badii Gmati
The energy utilization efficiency of commercial photovoltaic (PV) pumping systems can be significantly improved by employing one of the many MPPT methods available in the literature, such as Constant Voltage, Short-Current Pulse, Open Voltage, Perturb and Observe, Incremental Conductance, and non-linear methods (fuzzy logic and neural networks). This paper presents a detailed experimental study of two DSP-implemented techniques, Constant Voltage and Perturb and Observe (P&O), used for standalone PV pumping systems. The influence of the algorithm parameters on system behavior is investigated, and the advantages and disadvantages of each technique are identified for different weather conditions. Practical results obtained using a dSpace DS1104 board show excellent performance, and optimal system operation is attained regardless of changing weather conditions.
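For readers unfamiliar with P&O, the following Python sketch shows a simplified hill-climbing variant of the algorithm on a made-up PV curve; the toy model, step size, and numbers are illustrative only and do not reflect the dSpace implementation studied in the paper.

```python
def perturb_and_observe(measure_pv, v_ref, step=0.5):
    """One P&O-style update: perturb the operating voltage and keep the
    direction that increased the extracted power."""
    v1, i1 = measure_pv(v_ref)
    v2, i2 = measure_pv(v_ref + step)
    return v_ref + step if v2 * i2 > v1 * i1 else v_ref - step

def measure_pv(v):
    """Toy PV current curve (illustrative numbers); its maximum power point
    lies near 22.5 V, which the tracking loop climbs toward."""
    i = max(0.0, 5.0 - 0.02 * (v - 17.5) ** 2)
    return v, i

v = 12.0                      # arbitrary starting operating voltage
for _ in range(40):
    v = perturb_and_observe(measure_pv, v)
print(f"operating voltage after P&O tracking: {v:.1f} V")
```

As the loop approaches the maximum power point it oscillates around it by one step, which is the well-known trade-off between P&O step size, tracking speed, and steady-state ripple that the experimental study examines.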
-
-
-
Geometrical Modeling And Kinematic Analysis Of Articulated Tooltips Of A Surgical Robot
INTRODUCTION: The advent of the da Vinci surgical robot (Intuitive Surgical, California, USA) has allowed complex surgical procedures in urology, gynecology, cardiothoracic surgery, and pediatrics to be performed with better clinical outcomes. The end effectors of these robots exhibit enhanced dexterity with an improved range of motion, leading to better access and more precise control during surgery. Understanding the design and kinematics of these end effectors (which imitate surgical instruments' tooltips) would assist in replicating their complex motion in a computer-generated environment. This would further support the development of computer-aided robotic surgical applications. The aim of this work is to develop a software framework comprising the geometric three-dimensional models of the surgical robot tooltips along with their kinematic analysis. METHODS: The geometric models of the surgical tooltips were designed based on the EndoWrist™ instruments of the da Vinci surgical robot. The link shapes and inter-link distances of the EndoWrist™ instruments were measured in detail. A three-dimensional virtual model was then recreated using CAD software (Solidworks, Dassault Systems, Massachusetts, USA). The kinematic analysis was performed considering the trocar as the base frame for actuation. The actuation mechanism of the tool is composed of a prismatic joint (T1) followed by four revolute joints (Ri; i = 1 to 4) in tandem (Figure 1). The relationship between consecutive joints was expressed in the form of transformation matrices using the Denavit-Hartenberg (D-H) convention. Equations corresponding to the forward and inverse kinematics were then computed using the D-H parameters and a geometrical approach. The kinematics equations of the designed tooltips were implemented through a modular, cross-platform software framework developed using C/C++. In the software, graphical rendering was performed using OpenGL, and a multi-threaded environment was implemented using the Boost libraries. RESULTS AND DISCUSSION: Five geometric models simulating the articulated motion of the EndoWrist™ instruments were designed (Figure 2). These models were selected based on the five basic interactions of the surgical tooltip with anatomical structures: cauterization of tissue, stitching using needles, applying clips to vascular structures, cutting using scissors, and grasping of tissue. The developed software framework, which includes kinematics computation and graphical rendering of the designed components, was evaluated for applicability in two scenarios (Figure 3). The first scenario demonstrates the integration of the software with a patient-specific simulator for pre-operative surgical rehearsal and planning (Figure 3a). The second scenario shows the applicability of the software in generating virtual overlays of the tooltips superimposed on the stereoscopic video stream and rendered on the surgeon's console of the surgical robot (Figure 3b). This would further assist in the development of vision-based guidance for the tooltips. CONCLUSION: The geometrical modeling and kinematic analysis allowed the generation of tooltip motion in a virtual space that can be used both pre-operatively and intra-operatively, i.e. before and during surgery. The resulting framework can also be used to simulate and test new tooltip designs.
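As a small illustration of the D-H step, the standard link transform and a forward-kinematics chain for a T1-R1..R4 arrangement can be written as follows; the actual framework is in C/C++, and the joint parameters below are placeholders rather than the measured EndoWrist dimensions or the authors' exact frame assignment.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links using the standard
    Denavit-Hartenberg parameters (theta, d, a, alpha)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(d1, joint_angles, link_params):
    """Chain the transforms: prismatic insertion (T1) followed by four revolute joints."""
    T = dh_transform(0.0, d1, 0.0, 0.0)          # prismatic insertion through the trocar
    for theta, (d, a, alpha) in zip(joint_angles, link_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# placeholder link parameters (d, a, alpha) for R1..R4, in metres and radians
link_params = [(0.0, 0.010, np.pi / 2), (0.0, 0.009, 0.0),
               (0.0, 0.009, np.pi / 2), (0.0, 0.005, 0.0)]
tip_pose = forward_kinematics(0.05, [0.1, -0.2, 0.3, 0.0], link_params)
print(np.round(tip_pose, 4))
```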
-
-
-
Relate-me: Making News More Relevant
Authors: Tamim Jabban and Ingmar Weber
To get readers of international news stories interested and engaged, it is important to show how a piece of far-away news relates to them and how it might affect their own country. As a step in this direction, we have developed a tool to automatically create textual relations between news articles and readers. To produce such connections, the first step is to detect the countries mentioned in the article. Many news sites, including Al Jazeera (http://www.aljazeera.com), use automated tools such as OpenCalais (http://opencalais.com) to detect place references in a news article and list them as a list of countries in a dedicated section at the bottom (say, List A: [Syria, Germany]). If not already included, relevant countries could be detected using existing tools and dictionaries. The second step is to use the reader's IP address to infer the country they are currently located in (say, Country B: Qatar). Knowing this country gives us a "bridge" to the reader, as we can now try to relate the countries from List A to the reader's country, Country B. Finally, we have to decide which types of contextual bridges to build between the pairs of countries. Currently, we are focusing on four aspects: 1) Imports & Exports: this section displays imports and exports between Country B and the countries in List A, if any. For instance: "Qatar exports products worth $272m every year to Germany, 0.27% of total exports." Clicking on this information redirects the user to another website showing a breakdown of these imports and exports. 2) Distances: this simply states the direct distance in kilometers from Country B to every country in List A. For instance: "Syria is 2,110km away from Qatar." Clicking on this information navigates to a Google Maps display showing this distance. 3) Relations: this provides a link to the dedicated Wikipedia page on the relations between Country B and every country in List A, for instance "Germany - Qatar Relations (Wikipedia)". It also shows a link relating the countries using Google News: "More on Qatar - Germany (Google News)". 4) Currencies: this shows the currency conversion between Country B's currency and that of every country in List A, for instance: "1 QAR = 0.21 EUR (Germany's currency)." Our current tool, which will be demonstrated live during the poster presentation, was built using JavaScript. Using tools such as Greasemonkey, we were able to test and display the results of the project on Al Jazeera (http://www.aljazeera.com) without having site ownership. We believe that the problem of making connections between countries explicit is of particular relevance to small countries such as Qatar. Whereas a user from, say, Switzerland, might not usually be interested in events in Qatar, showing information regarding trade between the two could change their mind.
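The distance bridge, for example, reduces to a great-circle computation. The sketch below uses Python with rough, illustrative country centroids; the production tool is written in JavaScript and presumably uses more precise reference points, so the coordinates and output here are placeholders.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# approximate centroids (illustrative values only)
centroids = {"Qatar": (25.3, 51.2), "Syria": (35.0, 38.5), "Germany": (51.1, 10.4)}

reader_country = "Qatar"                      # in practice inferred from the reader's IP
for country in ["Syria", "Germany"]:          # countries detected in the article (List A)
    d = haversine_km(*centroids[country], *centroids[reader_country])
    print(f"{country} is {d:,.0f}km away from {reader_country}.")
```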
-
-
-
Energy And Spectrally Efficient Solutions For Cognitive Wireless Networks
Authors: Zied Bouida, Ali Ghrayeb and Khalid Qaraqe
Although different spectrum bands are allocated to specific services, it has been observed that these bands are unoccupied or only partially used most of the time. Indeed, recent studies show that 70% of the allocated spectrum is not utilized. As wireless communication systems evolve, an efficient spectrum management solution is required in order to satisfy the needs of current spectrum-greedy applications. In this context, cognitive radio (CR) has been proposed as a promising solution to optimize spectrum utilization. Under the umbrella of cognitive radio, spectrum-sharing systems allow different wireless communication systems to coexist and cooperate in order to increase their spectral efficiency. In these spectrum-sharing systems, primary (licensed) users and secondary (unlicensed) users are allowed to coexist in the same frequency spectrum and transmit simultaneously as long as the interference caused by the secondary user to the primary user stays below a predetermined threshold. Several techniques have been proposed in order to meet the required quality of service of the secondary user while respecting the primary user's constraints. While these techniques, including multiple-input multiple-output (MIMO) solutions, are optimized from a spectral-efficiency perspective, they are generally not well designed to address the related complexity and power consumption issues. Thus, the achievement of high data rates with these techniques comes at the expense of high energy consumption and increased system complexity. Due to these challenges, a trade-off between spectral and energy efficiencies has to be considered in the design of future transmission technologies. In this context, we have recently introduced adaptive spatial modulation (ASM), which combines adaptive modulation (AM) and spatial modulation (SM), with the aim of enhancing the average spectral efficiency (ASE) of multiple-antenna systems. This technique was shown to offer high energy efficiency and low system complexity thanks to the use of SM, while achieving high data rates thanks to the use of AM. Motivated by this technique and the need for such performance in a CR scenario, we study in this work the concept of ASM in spectrum-sharing systems. We propose the ASM-CR scheme as an energy-efficient, spectrally efficient, and low-complexity scheme for spectrum-sharing systems. The performance of the proposed scheme is analyzed in terms of ASE and average bit error rate and confirmed with selected numerical results using Monte Carlo simulations. These results confirm that the use of such techniques comes with an improvement in terms of spectral efficiency, energy efficiency, and overall system complexity.
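To make the SM building block concrete, the following Python sketch shows how a spatial-modulation mapper assigns part of the bit stream to the index of the single active antenna and the rest to a conventional symbol; the antenna count, QPSK alphabet, and bit split are illustrative choices, not the exact ASM-CR configuration.

```python
import numpy as np

# spatial modulation with Nt = 4 transmit antennas and QPSK:
# log2(4) = 2 bits select the active antenna, 2 further bits select the symbol
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def sm_map(bits, n_tx=4):
    """Map 4 information bits to one SM transmission (active antenna, symbol)."""
    antenna = bits[0] * 2 + bits[1]               # spatial (antenna-index) bits
    symbol = QPSK[bits[2] * 2 + bits[3]]          # conventional modulation bits
    x = np.zeros(n_tx, dtype=complex)
    x[antenna] = symbol                           # only one RF chain is active
    return x

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=4)
print(bits, "->", sm_map(bits))
```

Keeping a single antenna active per channel use is what gives SM its low energy consumption and complexity, which is the property the ASM-CR scheme exploits in the spectrum-sharing setting.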
-
-
-
Spca: Scalable Principal Component Analysis For Big Data
Web sites, social networks, sensors, and scientific experiments today generate massive amounts of data, i.e. Big Data. Owners of this data strive to obtain insights from it, often by applying machine learning algorithms. Thus, designing scalable machine learning algorithms that run on a cloud computing infrastructure is an important area of research. Many of these algorithms use the MapReduce programming model. In this poster presentation, we show that MapReduce machine learning algorithms often face scalability bottlenecks, commonly because the distributed MapReduce algorithms for linear algebra do not scale well. We identify several optimizations that are crucial for scaling various machine learning algorithms in distributed settings. We apply these optimizations to the popular Principal Component Analysis (PCA) algorithm. PCA is an important tool in many areas including image processing, data visualization, information retrieval, and dimensionality reduction. We refer to the proposed optimized PCA algorithm as sPCA. sPCA is implemented in the MapReduce framework. It achieves scalability by employing efficient large-matrix operations, effectively leveraging matrix sparsity, and minimizing intermediate data. Experiments show that sPCA outperforms the PCA implementation in the popular Mahout machine learning library by wide margins in terms of accuracy, running time, and volume of intermediate data generated during the computation. For example, on a 94 GB dataset of tweets from Twitter, sPCA achieves almost 100% accuracy and terminates in less than 10,000 s (about 2.8 hours), whereas the accuracy of Mahout PCA only reaches up to 70% after running for more than 259,000 s (about 3 days). In addition, both sPCA and Mahout PCA are iterative algorithms, where the accuracy improves by running more iterations until a target accuracy is achieved. In our experiments, when we fix the target accuracy at 95%, Mahout PCA takes at least two orders of magnitude longer than sPCA to achieve that target accuracy. Furthermore, Mahout PCA generates about 961 GB of intermediate data, whereas sPCA produces about 131 MB of such data, a factor of 3,511x less. This means that, compared to Mahout PCA, sPCA achieves more than three orders of magnitude of savings in network and I/O operations, which enables it to scale well.
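The core idea of leveraging sparsity and avoiding large dense intermediates can be illustrated on a single machine: the sketch below computes the leading principal component of a sparse matrix by power iteration, folding the mean-centring into the matrix-vector products so that neither the dense centred matrix nor the covariance matrix is ever materialized. This is only an illustration of the principle, not the MapReduce-based sPCA implementation, and the random sparse matrix is a placeholder for real data.

```python
import numpy as np
from scipy import sparse

def pca_power_iteration(X, n_iter=50, seed=0):
    """Leading principal component of a (possibly sparse) data matrix via power
    iteration, without forming the dense centred matrix or covariance matrix."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mean = np.asarray(X.mean(axis=0)).ravel()
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        # (X - 1 mean^T)^T (X - 1 mean^T) v using two sparse products and rank-1 fixes
        Xv = X @ v - (mean @ v) * np.ones(n)
        w = X.T @ Xv - mean * Xv.sum()
        v = w / np.linalg.norm(w)
    return v

X = sparse.random(1000, 200, density=0.01, random_state=0, format="csr")
print(pca_power_iteration(X)[:5])   # first entries of the leading component
```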
-
-
-
How To Improve The Health Care System By Predicting The Next Year Hospitalization
By Dhoha Abid
It is very common to study patient hospitalization data to extract useful information for improving the health care system. According to the American Hospital Association, in 2006, over $30 billion was spent on unnecessary hospital admissions. If patients who are likely to be hospitalized can be identified, admissions can be avoided, as these patients will receive the necessary treatment earlier. In this context, in 2013, the Heritage Provider Network (HPN) launched the $3 million Heritage Health Prize in order to develop a system that uses the available patient data (health records and claims) to predict and avoid unnecessary hospitalizations. In this work we take this competition's data and try to predict each patient's number of hospitalizations. The data encompasses more than 2,000,000 patient admission records over three years. The aim is to use the data of the first and second years to predict the number of hospitalizations in the third year. In this context, a set of operations is applied, mainly data transformation, outlier detection, clustering, and regression. The data transformation operations are mainly: (1) as the data is too large to be processed at once, dividing it into chunks is mandatory; (2) missing values are either replaced or removed; (3) as the data is raw and cannot be labeled directly, different aggregation operations are applied. After transforming the data, outlier detection, clustering, and regression algorithms are applied in order to predict the third-year hospitalization count for each patient. Results show that, by applying regression algorithms directly, the relative error is 79%. However, by applying the DBSCAN clustering algorithm followed by the regression algorithm, the relative error decreases to 67%. This is because the attribute generated by the clustering pre-processing step helps the regression algorithm predict the number of hospitalizations more accurately, which is why the relative error drops. The relative error can be decreased further if we apply the clustering pre-processing step twice: the clusters generated in the first clustering step are re-clustered to generate sub-clusters, and the regression algorithm is then applied to these sub-clusters. The relative error then drops significantly, from 67% to 32%. Patients sharing a common hospitalization history are grouped into clusters, and this clustering information is used to enhance the regression results.
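A single-machine sketch of the two-step pipeline (clustering as a pre-processing step whose labels feed a regressor) is shown below using scikit-learn. The random data stands in for the aggregated patient features, and the choice of a random-forest regressor is an illustrative assumption, since the abstract does not name a specific regression algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# hypothetical aggregated claim features per patient and year-3 hospitalization counts
X = rng.standard_normal((500, 8))
y = rng.poisson(1.5, size=500).astype(float)

# step 1: cluster patients and append the cluster label as an extra feature
labels = DBSCAN(eps=2.0, min_samples=5).fit_predict(X)
X_aug = np.column_stack([X, labels])

# step 2: regress the next-year hospitalization count on the augmented features
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_aug, y)
print(reg.predict(X_aug[:5]))
```

The second round of clustering described in the abstract would simply repeat step 1 within each cluster before fitting the regressor on the resulting sub-cluster labels.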
-