Qatar Foundation Annual Research Conference Proceedings Volume 2014 Issue 1
- Conference date: 18-19 Nov 2014
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2014
- Published: 18 November 2014
301 - 350 of 480 results
Heme Oxygenase Isoform 1 Regresses Cardiomyocyte Hypertrophy Through Regressing Sodium Proton Exchanger Isoform 1 Activity
Authors: Ahmed Sobh, Nabeel Abdulrahman, Soumaya Bouchoucha and Fatima Mraiche

Background: Pathological cardiac hypertrophy is a worldwide problem and an independent risk factor that predisposes the heart to failure. Enhanced activity or expression of the sodium proton exchanger isoform 1 (NHE1) has been implicated in conditions of cardiac hypertrophy. Induction of cGMP has previously been demonstrated to reduce NHE1 activity and expression, which could be through the expression of heme oxygenase isoform 1 (HO-1), a stress-induced enzyme that shows cardioprotective properties. In our study, we aimed to investigate the role of inducing HO-1 in a cardiac hypertrophy model that expresses active NHE1 to determine whether HO-1 could protect against NHE1-induced cardiomyocyte hypertrophy. Methods: H9c2 cardiomyocytes were infected with the active form of the NHE1 adenovirus in the presence and absence of cobalt protoporphyrin (CoPP), which was used to induce HO-1. Protein and mRNA expression of HO-1 were investigated in H9c2 cardiomyocytes in the presence and absence of the expression of the active form of the NHE1 adenovirus. The effects of HO-1 induction on NHE1 protein expression and cardiomyocyte hypertrophic markers were measured by western blotting and by analyzing the cell surface area of H9c2 cells, respectively. Results: Our results showed a significant decrease in HO-1 mRNA expression in cardiomyocytes expressing active NHE1 (74.84 ± 9.19 % vs. 100 % normal NHE1 expression, p<0.05). However, we did not see any changes in NHE1 protein expression following HO-1 induction. A trend towards a decrease in cardiomyocyte hypertrophy was observed in H9c2 cardiomyoblasts infected with the active form of NHE1 following HO-1 induction (NHE1, 154.93 ± 14.87 % vs. NHE1 + CoPP, 109 ± 16.44 %). Conclusion: In our model, HO-1 may be a useful means to reduce NHE1-induced cardiomyocyte hypertrophy, although the mechanism by which it does so requires further investigation.
Qatar's Youth Is Putting On Weight: The Increase In Obesity Between 2003 And 2009
The profound economic growth in the Arabian Gulf states over the past few decades has had a great impact on the lifestyle of the Qatari population. There has been a rapid appearance of fast food restaurants and other hallmarks of western society. These influences have been accompanied by a higher energy intake and decreased levels of physical activity. The potential impact on the younger population is particularly alarming. Throughout the recent past, the prevalence of type 2 diabetes (T2DM) among children and adolescents has increased significantly due to the high prevalence of obesity. Obese children can have a higher risk of premature mortality due to consequent cardiometabolic morbidity. According to WHO, obesity-induced medical conditions have now led to excess mortality surpassing that associated with tobacco. However, data on the body weight status of Qatari children are lacking for the past ten years. This study estimates the magnitude of the increase in BMI among Qatari adolescents (aged 12-18 years) by comparing our data (obtained during 2009) with published results from 2003 [Bener et al, JHPN 2005;23(3):250-8]. The present data originate from a pilot study on lung function conducted in Qatar in 2009. The subjects were chosen by random sampling of Qatari students attending government schools (grades 7-12). For our study, only students aged 12-18 years are included, resulting in a total number of 705 participants (400 girls and 305 boys). Although a large variety of data was collected, our study focused only on height, weight and BMI. The results reveal a substantial increase in BMI during this 7-year period for both Qatari boys and girls. For boys aged 12 years, mean BMI increased by 2 kg m-2, which became a 5 kg m-2 increase at the age of 17 years, and possibly as much as 8 kg m-2 by 18 years. By contrast, the increase in mean BMI for girls remained more or less constant between the ages of 12 years and 17 years, fluctuating between 2 kg m-2 and 4 kg m-2, before reaching almost 7 kg m-2 at the age of 18 years. Using International Obesity Task Force (IOTF) criteria, the overall prevalence of Qatari children who were overweight or obese was 26.5% (boys) and 23.1% (girls) in 2003 [Bener, Food Nutr Bull 2006;27(1):39-45], and 47.2% (boys) and 40.8% (girls) in 2009. For boys, this represents an increase of 21 percentage points, with a corresponding increase of 18 percentage points for girls. These results show that during this 7-year period, the prevalence of overweight and obesity among both boys and girls has increased by more than 75%. Based on these figures, the prevalence of childhood obesity is alarmingly high and points to an acute need for intervention, and a need for local research into the most appropriate and effective actions. In addition, there is also a need to systematically collect regular and ongoing observational data regarding the body weight status of adolescent Qataris in order to continue to monitor this situation.
Effect Of Diabetes On Gastric Stem Cell Lineage In Rat Models
Authors: Ali Abdullah Al Jabri and Sheriff Karam

In 2013, it was estimated that over 382 million people throughout the world suffered from diabetes. Despite the numerous treatment approaches to manage this condition, diabetic patients continue to suffer from various symptoms and complications. These include, but are not limited to, retinopathy, nephropathy, peripheral and autonomic neuropathy, and gastrointestinal, genitourinary, cardiovascular, and cerebral symptoms. In this study, we attempt to investigate the variations in the gastric stem cell lineage in order to further understand the gastrointestinal symptoms experienced by diabetic patients and reveal possible avenues for treatment. Using rats as an animal model, we divided them into age groups of 3, 6, 9, and 12 months. For each age group, there were twelve animals, six of which were kept as controls. The other six were injected with streptozotocin to destroy the beta cells of the islets of Langerhans in the pancreas. The diabetic rats' plasma glucose was closely monitored; those that naturally recovered from diabetes were studied separately. Antibodies against Ki-67 and Oct3/4 were used to examine cellular proliferation, antibodies against ghrelin for cells secreting this hormone, antibodies against H,K-ATPase for parietal cells, and UEA and GSII lectins for the surface and neck mucous cells, respectively. For more quantitative results, qRT-PCR using primers specific for genes of gastric stem cell differentiation pathways was used. Statistical calculations, including a one-tailed t-test, were used to determine whether the changes between the control and diabetic groups were significant. Results suggest an increase in the proliferative activity of stem cells, an increase in the number of some cell types such as surface mucous cells, and a decrease in the number of ghrelin-secreting cells. Future tests to be used in order to support these results include antibodies against gastrin, CCK, and LGR5.
Electronic Library Institute-SeerQ (ELISQ)
An electronic library is a computer-managed set of collections with services tailored for its user communities. The project team—a collaboration of four universities (Qatar University - QU, Virginia Tech, Pennsylvania State University, Texas A & M University), the Qatar National Library - QNL, and consultants—focused on the two project aims for Qatar: building community and building infrastructure (i.e., collections and information services). Thus we fit with Qatar's Thematic Pillar of Research on Computing and Information Technology, and overlap with a number of Research Grand Challenges (e.g., Cyber-security; Managing the Transition to a Diversified, Knowledge-based Society; and Culture, Arts, Heritage, Media and Language within the Arabic Context). With regard to our aim of building an electronic library community in Qatar, we have: 1. Participated in the Special Library Association Gulf Chapter, hosted in Qatar, to create awareness about electronic libraries; 2. Launched a consulting center at QU Library—with more than 30 new reference works, online educational resources, and specialized databases—and are sharing knowledge with librarians and information professionals to support those interested in collections and services; 3. Established a collaboration with Gulf Studies at QU, so we can identify and host content on this topic, and assist QU researchers and students; 4. Collected citation-based and non-citation-based metrics (altmetrics) for Qatar and 35 nations that compete with Qatar in annual scholarly production, and published a new approach for comparing the metrics and evaluating country-level scholarly impact; and 5. Studied the evolving scholarly activities and needs of researchers in Qatar, and compared them with our findings from the USA, informing ELISQ about requirements and solutions appropriate for international electronic libraries. With regard to our aim of building electronic library infrastructure in Qatar, we have built collections and provided related services: 1. Penn State's SeerSuite software is running at QU, allowing users to search the metadata and full text of collections of PDF files from scholarly articles, e.g., QScience papers. SeerSuite gathers scholarly documents and automatically extracts metadata (authors, venues, etc.) from crawled WWW content, allowing QNL and other libraries to harvest that metadata using OAI-PMH. SeerSuite is being improved to support searching on the content of the figures and tables in scholarly documents. 2. A historical collection of old Arabic documents has been assembled, indexed, and made accessible, as well as data/text mined. 3. Using our QU server running Heritrix, we gathered our first Arabic collection (8 GB from 2,200 PDF files) from Qatari newspapers (Al-Rayah, Al-Watan, Qatar News Agency, Al-Arab, and Al-Sharq). This news collection was indexed with Apache Solr and is available for searching. Building upon the IPTC system, we created a categorization system (taxonomy) for news stories, and then applied machine learning to train classifiers that aid browsing. 4. Both QNL and QU are building Web archives of portions of the WWW in Qatar, adapting Heritrix and the Wayback Machine, thus preserving history, culture, and Arabic content (including news, sports, government information, and university webpages) for future use and scholarly study.
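As a rough illustration of the metadata-harvesting step mentioned above (not the project's own code), the short Python sketch below issues an OAI-PMH ListRecords request and prints Dublin Core titles; the endpoint URL is a placeholder and any SeerSuite-specific details are assumptions.

```python
# Hedged sketch: harvesting Dublin Core metadata over OAI-PMH with plain HTTP.
# The endpoint URL is a placeholder, not the real QU SeerSuite address.
import urllib.request
import xml.etree.ElementTree as ET

ENDPOINT = "http://example.org/oai"   # hypothetical SeerSuite OAI-PMH endpoint
NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

url = ENDPOINT + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as resp:
    tree = ET.fromstring(resp.read())

# Each <record> carries Dublin Core fields that a harvesting library (e.g. QNL) can reuse.
for record in tree.findall(".//oai:record", NS):
    title = record.find(".//dc:title", NS)
    if title is not None:
        print(title.text)
```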
OpenADN: Middleware Architecture For Cloud Based Services
Authors: Mohammed Samaka, Deval Bhamare, Aiman Erbad, Subharthi Paul and Raj Jain

Any global enterprise, such as Qatar National Bank, with branches in many countries is an example of an Application Service Provider (ASP) that uses multiple cloud data centers to serve its customers. Depending upon the time of day, the number of users at different locations changes, and ASPs need to rescale their operation at each data center to meet the demand at that location. ASPs face a great challenge in leveraging the benefits provided by such multi-cloud distributed environments without a service-centric Internet Service Provider (ISP) infrastructure. In addition, each ASP's requirements are different, and since these ASPs are large customers of ISPs, they want the network traffic handling to be tailored to their requirements. While the ASP wants to control the forwarding of its traffic on the ISP's network, the ISP does not want to relinquish control of its resources to the ASPs. In this work we present an innovative architecture, which enables ASPs to automate the deployment and operation of their applications over multiple clouds. We have developed a middleware architecture for cloud-based applications using Software Defined Networking (SDN) concepts. In particular, we discuss how the interface between the ASP and ISP control planes, as well as a generic packet header abstraction, is implemented. Using our system, ASPs may specify policies in the control plane, and the control plane is responsible for enforcing these policies in the data plane. In the OpenADN architecture, each application consists of multiple workflows, which are created dynamically, and the required virtual servers and middleboxes are automatically created at the appropriate clouds. OpenADN allows both new applications that are designed specifically for it as well as legacy applications. It implements a "Proxy-Switch Port" (pPort) to provide an interface between OpenADN-aware and OpenADN-unaware services. Depending on the available resources in the host, the controller launches a pPort with a pre-configured number of workflows that it can support. The pPort automatically starts a proxy server. The proxy service acts as the interface between OpenADN-aware services and OpenADN-unaware applications. We support both packet-level middleboxes (such as intrusion detection systems) and message-level middleboxes (such as firewalls). A cross-layer design is proposed in the current architecture that allows application-layer flow information to be placed in the form of labels at layer 3.5 (packet level) and at layer 4.5 (message level). Layer 3.5 is called the "Application Label Switching" (APLS) layer. APLS is used by the path policy (routing/switching) component, while layer 4.5 information is used to initiate and terminate application sessions. In addition to traditional applications, OpenADN can also be used for other multi-cloud applications such as the Internet of Things, virtual worlds, online games, and smart wide area network services.
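The label-driven forwarding idea behind APLS can be pictured with a toy example. The sketch below is purely illustrative (labels, middlebox names and clouds are invented, and this is not OpenADN code): a policy table maps an application label to a middlebox chain and a serving cloud, and the data plane simply follows that policy.

```python
# Toy illustration of the APLS idea (not OpenADN code): an application-layer
# label attached to each packet selects the middlebox chain and serving cloud
# that the ASP's policy dictates; names and labels below are invented.
PATH_POLICY = {
    "web-session":   (["firewall", "load-balancer"], "cloud-eu"),
    "video-session": (["intrusion-detection"], "cloud-asia"),
}

def forward(packet):
    chain, cloud = PATH_POLICY.get(packet["apls_label"], ([], "cloud-default"))
    for middlebox in chain:              # data plane enforcing the control-plane policy
        print(f"steering through {middlebox}")
    print(f"delivering to {cloud}: {packet['payload']}")

forward({"apls_label": "web-session", "payload": "GET /index.html"})
```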
Load Follows Generation: The New Paradigm For Future Power System Control In Presence Of High Penetration Of Variable Renewable Resources
In the 130 years since the invention of the legacy electric power system concept, electrical generation has been adjusted to match electrical consumption (i.e., the "load") as it varies throughout the time of day and the seasons. This "generation follows load" paradigm is a major roadblock to the large-scale incorporation of renewable energy into the national power grid, since energy sources such as wind and solar provide inconsistent, variable power that cannot easily be controlled to follow consumption. As a result, today's centrally planned and controlled power system design is no longer adequate. This paper introduces a new control approach to enable a "load follows generation" paradigm in which a flexible engineered system with distributed control at the users' sites will revolutionize the power industry. Customers will be able to generate power on-site, purchase power from a variety of sources (including each other), sell power back to the grid, select the level of supply reliability they wish to purchase, and choose how to manage their electricity use. The resulting solution will make the electricity grid significantly more efficient and robust by facilitating extensive use of renewable energy sources and reducing end-use losses in the system. With renewable energy sources widely distributed (e.g., roof-top solar panels and wind farms), the proposed approach will allow exchange of power among utilities, market service providers, consumers, and aggregators (services representing many loads), and also allow power exchange within a customer site. By incorporating this flexibility into their operations, utilities will be able to adjust the overall load they must serve to match the available power. This "load follows generation" approach is essential to allow unhindered inclusion of low-emission renewables in the electricity grid. At the same time, it makes consumers active agents in the energy exchange. Earlier research addressed discrete aspects of the scientific challenges, but the new approach, called Flexible Load Energy eXchange (FLEX), will provide the system-level, holistic approach needed to achieve its vision. The overarching goal is to develop the science, technology, and control system design required to enable energy exchange between the customer and other parties in the electricity energy exchange ecosystem, in order to unlock the huge potential of renewable generation. A central impediment is the stochastic variability of renewable energy sources such as wind and solar, prompting some to question whether dependence on renewable sources will ever be viable. Even when wind (which tends to produce at night) and solar (produced during the day) are combined, severe variations in generation require huge adjustments (termed "ramping" within the industry) and spinning (on-line) reserves using conventional generation. This paper explores the missing critical component of smart grid development: smart and flexible loads. In addition, power interruptions, which are often caused by generation/load imbalances or faults on end-user radial connections, will be greatly reduced in a FLEX-enabled power grid by on-site or alternate customer generation. FLEX will also enable customer participation to create new market arrangements, such as wholesale and retail options, and incentives to increase energy efficiency not previously available.
PLATE: Problem-based Learning Authoring And Transformation Environment
Authors: Mohammed Samaka, Yongwu Miao, John Imagliazzo, Disi Wang, Ziad Ali, Khulood Aldus and Mohamed Ally

The project entitled Problem-based Learning Authoring and Transformation Environment (PLATE) is housed at Qatar University. It is under the auspices of the Qatar National Priority Research Program (NPRP). The PLATE project seeks to improve student learning using innovative approaches to problem-based learning (PBL) in a cost-effective, flexible, interoperable, and reusable manner. Traditional subject-based learning that focuses on passively learning facts and reciting them out of context is no longer sufficient to prepare potential engineers and all students to be effective. Within the last two decades, the problem-based learning approach to education has started to make inroads into engineering and science education. This PBL educational approach comprises an authentic, ill-structured problem with multiple possible routes to multiple possible solutions. A systematic approach to supporting online PBL is the use of a pedagogy-generic e-learning platform such as IMS Learning Design (IMS-LD 2003), an e-learning technical standard useful for scripting a wide range of pedagogical strategies as formal models. The project seeks to research and develop a process modeling approach, together with software tools, to support the development and delivery of face-to-face, online, and hybrid PBL courses or lessons in a cost-effective, flexible, interoperable, and reusable manner. The research team seeks to prove that the PLATE authoring system optimizes learning and that the PLATE system improves learning in PBL activities. For this poster presentation, the research team will demonstrate the progress it has made within the second year of research. This includes the development of a PBL scripting language to represent a wide range of PBL models, the creation of transformation functions to map PBL models represented in the PBL scripting language into executable models represented in IMS-LD, and the architecture of the PLATE authoring tool. In addition, the project team designed the run-time environment and developed an initial version of a run-time engine and a run-time user agent. A teacher can instantiate a PBL script and execute a script instance as a course. The user can manipulate the diagram-based script instance in the user agent, and the engine will respond to the user's actions. Because of this, the system supports the user in executing a course module according to the definition of the PBL script. The research team plans to illustrate that the research and development of a PBL scripting language and the associated authoring and execution environment can provide a significant thrust toward further research of PBL by using meta-analysis, designing effective PBL models, and extending or improving a PBL scripting language. The PLATE project can enable PBL practitioners to develop, understand, customize, and reuse PBL models at a high level by relieving them of the burden of handling the complex details needed to implement a PBL course. The research team believes that the project will stimulate the application and use of PBL in curricula with online learning practice by incorporating PBL support into popularly used e-learning platforms and by providing a repository of PBL models and courses.
World Cybersecurity Indicator Using Computational Intelligence
Authors: Ahmad Al Shami and Simeon Coleman

The aim of this research is to investigate the use of Computational Intelligence (CI) methods for constructing a World Cybersecurity Indicator (WCI) to enable consistent and transparent assessments of the cybersecurity capabilities of nations, using the concept of Synthetic Composite Indicators (SCIs) to rank their readiness and progress. SCIs are assessment tools usually constructed to evaluate and contrast entities' performance by aggregating intangible measures in many areas such as technology and innovation. An SCI's key value lies in its capacity to aggregate complex and multi-dimensional variables into a single meaningful value. As a result, SCIs have been considered one of the most important tools for macro-level and strategic decision making. Considering the shortcomings of existing SCIs, this study proposes a CI approach to develop a new WCI. The suggested approach uses a fuzzy proximity knowledge mining technique to build the qualitative taxonomy initially, and fuzzy c-means is employed to form a macro-level cybersecurity indicator. To illustrate the method of construction, a fully worked application is presented. The application employs real variables of possible threats to Information and Communication Technology (ICT). The weighting and aggregation results obtained were compared against classical approaches for weighting and aggregating SCIs, namely Principal Component Analysis, Factor Analysis and the Geometric Mean. The proposed model has the capability of weighting and aggregating major cybersecurity indicators into a single value that ranks nations even with limited data points. The validity and robustness of the WCI are evaluated using Monte Carlo simulation. In order to show the value added by the new cybersecurity index, the WCI is applied to the Middle East and North Africa (MENA) region as a special case study and then generalised. In total, seventy-three countries were included, representative of developed, developing and under-developed nations. The final and overall ranking results obtained suggest a novel and unbiased way of building the WCI compared to traditional statistical methods.
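To make the aggregation step concrete, the following minimal fuzzy c-means sketch (written from the standard textbook update rules, not the study's implementation) groups synthetic country indicator profiles into clusters; cluster membership can then be read as a readiness tier.

```python
# Minimal fuzzy c-means sketch in NumPy (textbook update rules, illustrative only).
# Rows of X are countries, columns are normalized cybersecurity sub-indicators.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(X)))
    u /= u.sum(axis=0)                                   # memberships sum to 1 per country
    for _ in range(iters):
        um = u ** m
        centers = um @ X / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)
    return centers, u

X = np.random.default_rng(1).random((73, 5))             # 73 countries, 5 indicators (synthetic)
centers, u = fuzzy_cmeans(X)
print(u.argmax(axis=0)[:10])                             # readiness tier assigned to each country
```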
Residential Load Management System For Future Smart Energy Environment
Authors: Shady Samir Khalil and Haitham Abu-rub

Electricity consumption has increased substantially over the last decade. According to the Gulf Research Center (2013), the residential sector represents the largest portion (about 50%) of electricity consumption in the GCC region, due to the substantial growth in residential electrical appliances. Therefore, we present a novel online smart residential load management system that is used to monitor and control the power consumption of loads in order to minimize energy consumption, balance the electric power supply, reduce peak demand, and minimize the energy bill while considering residential customers' preferences and comfort level. The presented online algorithm manages power consumption by assigning the residential load according to utility power supply events. The input data to the management algorithm are set based on the loads categorized according to: importance (vital, essential, and non-essential electrical loads), electrical power consumption, electricity bill limitation, utility power limitation, and load priority. The data are processed and fed to the presented algorithm, which accurately manages the power of dwelling loads using externally controlled disconnectors. The proposed online algorithm improves the overall grid efficiency and reliability, especially during demand response periods. Simulation results demonstrate the validity of the proposed algorithm.
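A greatly simplified sketch of the kind of priority-driven decision such an algorithm makes is shown below; the load list, categories and power limit are hypothetical, and the real algorithm also accounts for bill limits, comfort preferences and utility events.

```python
# Illustrative sketch (not the authors' algorithm): keep loads connected in
# priority order, vital loads first, without exceeding the utility power limit.
loads = [
    # (name, category, priority, power_kW) -- values are hypothetical
    ("fridge",          "vital",         1, 0.2),
    ("lighting",        "essential",     2, 0.3),
    ("air_conditioner", "essential",     3, 2.0),
    ("water_heater",    "non-essential", 4, 1.5),
    ("pool_pump",       "non-essential", 5, 1.0),
]

def schedule(loads, limit_kW):
    used, decisions = 0.0, {}
    for name, _category, _priority, power in sorted(loads, key=lambda l: l[2]):
        on = used + power <= limit_kW
        used += power if on else 0.0
        decisions[name] = "ON" if on else "SHED"
    return decisions, used

decisions, used = schedule(loads, limit_kW=3.0)   # e.g. during a demand-response event
print(decisions, f"total = {used} kW")
```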
Using Social Computing For Knowledge Translation: Exploiting Social Network And Semantic Content Analyses To Facilitate Online Knowledge Translation Within An Online Social Community Of Medical Practitioners
Authors: Samuel Stewart and Syed Sibte Raza Abidi

Social computing has led to new approaches for Knowledge Translation (KT) by overcoming the temporal and geographical barriers experienced in face-to-face KT settings. Social computing based discussion forums allow the formation of communities of practice whereby a group of professionals disseminate their knowledge and experiences through online discussions on specialized topics. In order to successfully build an online community of practice, it is important to improve the connectivity between like-minded community members and between like-topic discussions. In this paper we present a Medical Online Discussion Analysis and Linkages (MODAL) method to identify affinities between members of an online social community by applying: (a) social network analysis to understand their social communication patterns during KT; and (b) semantic content analysis to establish affinities between different discussions and professionals based on their communicated content. Our approach is to establish linkages between users and discussions at the semantic and contextual levels—i.e. we do not just link discussions that share exact medical terms; rather, we link practitioners and discussions that share semantically and contextually similar medical terms, thus accounting for vocabulary variations, concept hierarchies and specialized clinical scenarios. MODAL incorporates two novel semantic similarity methods to analyze online discussions: (i) the Generalized Vector Space Model (GVSM), which leverages semantic and contextual similarity to find similarities between discussion threads and between practitioners; and (ii) an extension of the Balanced Genealogy Model (BGM) that addresses non-leaf mappings and the issues of homonymity noted in medical terminologies, and further contextualizes the similarity measures using information content measures. We have implemented a similarity metric that captures the concept of "interest" between users or threads, i.e., a numeric measure of how interested user A is in user B, or how much of the information contained in thread A is related to thread B. MODAL measures the "interest" one professional has in another professional within the online community, and then uses this metric to identify those professionals that are sought by other professionals for expert advice—the content experts. Furthermore, by incorporating the interest measures with SNA, MODAL is able to identify the content experts within the community, and analyze the content of their conversations to determine their areas of expertise. Given the short and unstructured nature of online communications, we use the MeSH medical lexicon and medical text analysis tools, i.e. MetaMap, to map the unstructured narrative of online discussions to formal medical keywords based on the MeSH lexicon. MODAL is tested on two online professional communities of healthcare practitioners: (a) the Pediatric Pain Mailing List, a community of 460 clinicians from around the world; over a four-year period, 2505 messages were shared on 783 different discussion threads; and (b) SURGINET, a community of 865 clinicians from around the world that use the forum to discuss general surgical issues; it contains over 17000 messages on 2111 threads by 231 users. MODAL is able to identify content experts and link like-minded practitioners based on the content of their conversations rather than on direct ties between them.
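The GVSM component can be illustrated with a small numeric sketch (not MODAL's code): instead of treating terms as orthogonal, document vectors are compared through a term-term semantic similarity matrix, so two threads that share related but not identical MeSH terms still obtain a non-zero similarity. The term list and similarity values below are invented.

```python
# Rough sketch of the GVSM idea (not MODAL's implementation): thread vectors are
# compared through a term-term semantic similarity matrix S, so related terms
# contribute to the score even when no exact term is shared. Values are invented.
import numpy as np

terms = ["analgesia", "opioid", "morphine", "appendectomy"]
S = np.array([[1.0, 0.6, 0.5, 0.0],   # hypothetical similarities, e.g. from the MeSH hierarchy
              [0.6, 1.0, 0.8, 0.0],
              [0.5, 0.8, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

def gvsm_sim(d1, d2):
    return (d1 @ S @ d2) / np.sqrt((d1 @ S @ d1) * (d2 @ S @ d2))

thread_a = np.array([2, 0, 1, 0])     # term counts extracted from one discussion thread
thread_b = np.array([0, 3, 0, 0])     # a second thread with no term in common
print(round(float(gvsm_sim(thread_a, thread_b)), 3))   # still > 0 thanks to related terms
```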
Innovative Data Collection And Dissemination In V2X Networks
Authors: Wassim Drira and Fethi Filali

The emergence of V2X (Vehicle-to-Vehicle and Vehicle-to-Infrastructure) networks lends significant support to Intelligent Transportation Systems, improving many applications for different purposes such as safety, traffic efficiency and added-value services. Typically, such an environment is distinguished by its mobility and topology dynamics over space and time. Moreover, these applications need to collect and disseminate data reactively or proactively from vehicles or the traffic control center (TMC) to run efficiently. Thus, in this abstract we provide an extended framework to collect and disseminate data in V2X networks based on the emerging Named Data Networking (NDN) architecture, which treats content as a primitive - decoupling location from identity, security and access, and retrieving content by name. The communication paradigm in this network is based on a Request/Response pattern where request messages are Interest packets and responses are Data packets. In order to provide an efficient reactive data collection mechanism, we propose an NDN Query mechanism (NDN-Q) to allow any node to submit a query in the network to collect data which is built on the fly. NDN-Q then uses a reduce mechanism to aggregate data hop by hop towards the query source. Thus, in NDN-Q, data collection is performed in two steps: the first is the query dissemination towards the data sources, while the second concerns the collection and aggregation of the data sources' responses. We then extended NDN with a Publish/Subscribe (Pub/Sub) capability in order to provide an efficient data collection and dissemination mechanism in V2X networks. Therefore, a node, a vehicle or the TMC can subscribe to a content topic through a rendezvous node to receive zero or many messages as and when they are published, without re-expressing its interest. Many nodes may thus subscribe to the same topic, which allows the use of NDN's multicast capabilities to reduce the communication load in the network. The framework has been designed to meet V2X communication requirements in terms of mobility and dynamics. It has been implemented based on the NS-3 module ndnSIM, and SUMO has been used to generate the vehicle mobility scenario. The evaluation shows that this extension reduces the number of packets disseminated to subscribers and efficiently handles the mobility of vehicles. Moreover, the data collection fulfills the delay requirement of traffic safety applications. The evaluation of NDN-Q shows that it reduces the number of packets collected from data sources, efficiently handles the mobility of vehicles, and delivers query results in a reasonable time, i.e. 280 ms when a reduce process is performed in intermediate nodes or less than 50 ms without any reduce process. In summary, we presented an innovative data collection and dissemination framework for V2X networks based on NDN. The framework provides a fully distributed query mechanism that does not require knowledge of the network organization and vehicles, and a Pub/Sub mechanism that takes into consideration V2X network characteristics and needs in terms of mobility, periodic disconnectivity, publish versioning, tagging with location, etc.
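The two-step NDN-Q behaviour (query dissemination followed by hop-by-hop reduction) can be mimicked with a toy forwarding tree; the sketch below is illustrative only and has nothing to do with the ndnSIM implementation.

```python
# Toy model of NDN-Q's two steps (not the ndnSIM implementation): a query is pushed
# down a forwarding tree toward data sources, and replies are reduced (here: summed)
# hop by hop on the way back to the query source. Topology and readings are invented.
TREE = {"rsu": ["car1", "car2", "aggregator"], "aggregator": ["car3", "car4"]}
SPEED = {"car1": 52, "car2": 47, "car3": 61, "car4": 58}     # km/h readings at the vehicles

def query(node):
    if node in SPEED:                         # leaf: the vehicle answers the Interest with Data
        return SPEED[node], 1
    parts = [query(child) for child in TREE.get(node, [])]      # disseminate the query
    return sum(v for v, _ in parts), sum(n for _, n in parts)   # reduce on the way back

total, count = query("rsu")
print("average speed:", total / count)
```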
Maximum Power Transfer Of PV-fed Inverter-based Distributed Generation With Improved Voltage Regulation Using Flywheel Energy Storage Systems
Authors: Hisham El Deeb, Ahmed Massoud, Ahmed Abbas, Shehab Ahmed, Ayman Abdel-khalik and Mohamed Daoud

One of the main issues associated with the high penetration of PV distributed generation (DG) systems in low voltage (LV) networks is the overvoltage challenge. The amount of power injected into the grid is directly related to the voltage at the point of common coupling (PCC), which necessitates limiting the injected power to conservative values compared to the capacity available from the PV panels, particularly at light loading. In order to mitigate the tradeoff between injecting the maximum amount of electrical power and the voltage rise phenomenon, many control schemes have been suggested to optimize the operation of PV DG energy sources while maintaining safe voltage levels. Unlike these conventional methods, this paper proposes a combined PV inverter-based distributed generation and flywheel energy storage system to ensure improved voltage regulation while making use of the maximum available power from the PV source at any instant, decoupling its relation with the terminal voltage. The proposed scheme was simulated in Matlab/Simulink and verified experimentally.
Enhancing RPL Resilience Against Packet Dropping Insider Attacks
Authors: Karel Heurtefeux, Ochirkhand Erdene-ochir, Nasreen Mohsin and Hamid Menouar

To gather and transmit data, low-cost wireless devices are often deployed in open, unattended and possibly hostile environments, making them particularly vulnerable to physical attacks. Resilience is needed to mitigate such inherent vulnerabilities and the risks related to security and reliability. In this work, the Routing Protocol for Low-Power and Lossy Networks (RPL) is studied in the presence of packet-dropping malicious compromised nodes. Random behavior and data replication have been introduced into RPL to enhance its resilience against such insider attacks. The classical RPL and its resilient variants have been analyzed through simulations. The resilient techniques introduced into RPL significantly enhance its resilience against attacks by providing route diversification that exploits the redundant topology created by wireless communications. In particular, the proposed resilient RPL exhibits better performance in terms of delivery ratio (up to 40%), fairness and connectivity while remaining energy efficient.
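The two resilience ingredients, random next-hop choice and data replication, can be illustrated with a toy Monte Carlo sketch; the topology, the single malicious node and the delivery model below are invented for illustration.

```python
# Simplified Monte Carlo sketch of the two resilience ingredients added to RPL
# (random next-hop choice and data replication); topology and the single
# packet-dropping insider are invented for illustration.
import random

PARENTS = {"n7": ["n3", "n4"], "n3": ["root"], "n4": ["root"]}
MALICIOUS = {"n3"}

def deliver(node, rng):
    if node == "root":
        return True
    if node in MALICIOUS:
        return False                                     # insider silently drops the packet
    return deliver(rng.choice(PARENTS[node]), rng)       # random behavior: pick any valid parent

def deliver_replicated(node, rng, copies=2):
    return any(deliver(node, rng) for _ in range(copies))   # data replication over two routes

rng, trials = random.Random(0), 10000
print("random parent only :", sum(deliver("n7", rng) for _ in range(trials)) / trials)
print("with replication   :", sum(deliver_replicated("n7", rng) for _ in range(trials)) / trials)
```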
Transformation Of Online Social Networks And Communication Tools: A Technological Point Of View
Online social networks and online real-time communication tools have gained great popularity in recent years. In the Arab Spring they emerged as one of the main tools to exchange ideas and to organize activities. Many users were unaware that their communication and their interaction patterns were being traced. After the Arab Spring and after the information revealed by Edward Snowden, users are becoming security aware and are looking for alternative technologies to communicate securely. Our contribution in this presentation is twofold. First, we present the current technological developments in computer science and networking that aim to provide tools to users so that their communication is secured and impossible to track. Based on peer-to-peer technologies, known from file-sharing applications, several solutions for secure, reliable and untraceable communication already exist. In addition, opportunistic and delay-tolerant networks provide asynchronous "online" communication which does not even use the Internet infrastructure, thus making it even harder to identify and to trace. As a second contribution, we present a technological framework based on peer-to-peer networking which allows platforms for online social networks to be built. In depth, we show how this framework is capable of providing the common functionality of online social networks, while at the same time being privacy-aware and impossible to trace or to shut off. The framework uses fully decentralized data storage and sophisticated cryptography for user management, secure communication and access control to the users' data. The quality of the presented framework, in terms of performance and communication costs, has been evaluated in simulations with up to 10,000 network nodes as well as in tests with up to 30 participants. Our findings are that, until now, the usability, economic pressure and security of (centralized) communication tools have been incompatible. This observation changes with decentralized solutions, which do not come with economic pressure. Solutions that are both secure and usable are about to emerge, fully distributed, which will be impossible to shut down or to trace. For the future it will be interesting to see whether the presented decentralized technology will be ignored or whether it will disrupt the economics and the market of social networks, as was the case with Napster and the music industry, a case in which decentralized peer-to-peer networking unfolded its potential.
Data Science at QCRI
Authors: Divy Agrawal, Laure Berti, Hossam Hammady, Prasenjit Mitra, Mourad Ouzzani, Paolo Papotti, Jorge Quiane Ruiz, Nan Tang, Yin Ye, Si Yin and Mohamed Zaki

The Data Analytics group at QCRI has embarked on an ambitious endeavor to become a premier world-class research group in Data Science by tackling diverse research topics related to information extraction, data quality, data profiling, data integration, and data mining. We will present our ongoing projects to overcome different challenges encountered in Big Data Curation, Big Data Fusion, and Big Data Analytics. (1) Big Data Curation: Due to complex processing and transformation layers, data errors proliferate rapidly and sometimes in an uncontrolled manner, thus compromising the value of information and impacting data analysis and decision making. While data quality problems can have crippling effects and no end-to-end off-the-shelf solutions to (semi-)automate error detection and correction existed, we built a commodity platform, NADEEF, that can be easily customized and deployed to solve application-specific data quality problems. This project implements techniques that exploit several facets of data curation including: * assisting users in the semi-automatic discovery of data quality rules; * involving users (and crowds) in the cleaning process with simple and effective questions; * and unifying logic-based methods, such as declarative data quality rules, together with quantitative statistical cleaning methods. Moreover, the implementation of error detection and cleaning algorithms has been revisited to work on top of distributed processing platforms such as Hadoop and Spark. (2) Big Data Fusion: When data is combined from multiple sources, it is difficult to assure its veracity and it is common to find inconsistencies. We have developed tools and systems that tackle this problem from two perspectives: (a) In order to find the true value among two or more conflicting ones, we automatically compute the reliability (accuracy) of the sources and the dependencies among them, such as who is copying from whom. Such information allows much higher precision than simple majority voting and ultimately leads to values that are closer to the truth. (b) Given an observed problem over the integrated view of the data, we compute explanations for it over the sources. For example, given erroneous values in the integrated data, we can explain which source is making mistakes. (3) Big Data Analytics: Data analysis tasks typically employ complex algorithmic computations that are hard and tedious to express in current data processing platforms. To cope with this problem, we are developing Rheem, a data processing framework that provides an abstraction on top of current data processing platforms. This abstraction allows users to focus only on the logic of their applications and developers to provide ad-hoc implementations (optimizations) over existing data processing platforms. We have already created two different applications using Rheem, namely data repair and data mining. Both have shown benefits in terms of the expressivity of the Rheem abstraction as well as in terms of query performance through ad-hoc optimizations. Additionally, we have developed a set of scalable data profiling techniques to understand relevant properties of big datasets in order to be able to improve data quality and query performance.
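A tiny example of the declarative-rule facet of data curation (NADEEF-style in spirit, but not NADEEF code) is sketched below: a functional dependency zip -> city is checked with pandas and the violating tuples are surfaced for cleaning. The table is synthetic.

```python
# Tiny NADEEF-style data quality rule (illustrative, not NADEEF's code): the
# functional dependency zip -> city flags tuples that disagree on the city for
# the same zip code, so a cleaning step (or the crowd) can resolve them.
import pandas as pd

df = pd.DataFrame({
    "name": ["a", "b", "c", "d"],
    "zip":  ["10001", "10001", "20002", "20002"],
    "city": ["New York", "New York", "Washington", "Doha"],  # the last row violates the rule
})

def fd_violations(df, lhs, rhs):
    # groups whose right-hand side is not unique violate lhs -> rhs
    bad = df.groupby(lhs)[rhs].transform("nunique") > 1
    return df[bad]

print(fd_violations(df, "zip", "city"))
```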
Up & Away: A Visually-controlled Easy-to-deploy UAV Cyber Physical Testbed
Authors: Ahmed Saeed, Azin Neishaboori, Khaled Harras and Amr Mohamed

Cyber-Physical Systems (CPS) rely on advancements in fields such as robotics, mobile computing, sensor networks, controls, and communications to advance complex real-world applications including aerospace, transportation, factory automation, and intelligent systems. The multidisciplinary nature of effective CPS research diverts specialized researchers' efforts towards building expensive and complex testbeds for realistic experimentation, thus delaying or taking the focus away from the core potential contributions to be made. We present Up and Away (UnA), a cheap and generic testbed composed of multiple autonomously controlled Unmanned Aerial Vehicle (UAV) quadcopters. Our choice of UAVs is due to their deployment flexibility and maneuverability, enabling a wide range of CPS research evaluations in areas such as 3D localization, camera sensor networks, target surveillance, and traffic monitoring. Furthermore, we provide a vision-based localization solution that uses color tags to identify different objects in varying light intensity environments, and we use that system to control the UAVs within a specific area of interest. However, UnA's architecture is modular, so the localization system can be replaced by any other system (e.g. GPS) as deployment conditions change. UnA enables interaction with real-world objects, treating them as CPS input, and uses the UAVs to carry out CPS-specific tasks while providing sensory information from the UAVs' array of sensors as output. We also provide an API that allows the integration of simulation code that obtains input from the physical world (e.g. targets to track) and then provides control parameters (i.e. number of quadcopters and destination coordinates) to the UAVs. The UnA architecture is depicted in Figure 1a. To demonstrate the promise of UnA, we use it to evaluate another research contribution we make in the area of smart surveillance, particularly that of target coverage using mobile cameras. Optimal camera placement to maximize coverage has been shown to be NP-complete for both area and target coverage. Motivated by the need for practical, computationally efficient algorithms to autonomously control mobile visual sensors, we propose efficient near-optimal algorithms for finding the minimum number of cameras to cover a high ratio of targets. First, we develop a basic method, called cover-set coverage, to find the location/direction of a single camera for a group of targets. This method is based on finding candidate points for each possible camera direction and spanning the direction space by discretizing camera pans. We then propose two algorithms, (1) Smart Start K-Camera Clustering (SSKCAM) and (2) Fuzzy Coverage (FC), which divide targets into multiple clusters and then use the cover-set coverage method to find the camera location/direction for each cluster. Overall, we were able to integrate the implementation of these algorithms with the UnA testbed to witness real-time assessment of our algorithms, as shown in Figure 1b.
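The clustering stage shared by SSKCAM and FC can be approximated with an ordinary k-means pass; the sketch below (synthetic target positions, an arbitrary camera offset, and none of the cover-set refinements) only illustrates grouping targets and deriving a pan angle and required field of view per cluster.

```python
# Toy version of the clustering stage (not SSKCAM/FC themselves): targets are grouped
# with k-means and, for each cluster, a camera pan and the field of view needed to see
# the whole cluster are derived from a candidate camera position. All values are synthetic.
import numpy as np
from sklearn.cluster import KMeans

targets = np.random.default_rng(0).uniform(0, 50, size=(30, 2))   # target positions in metres

k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(targets)
for c in range(k):
    pts = targets[labels == c]
    camera = pts.mean(axis=0) - np.array([30.0, 0.0])   # arbitrary candidate position west of the cluster
    bearings = np.arctan2(pts[:, 1] - camera[1], pts[:, 0] - camera[0])
    pan = np.degrees(bearings.mean())                   # direction the camera should face
    fov = np.degrees(bearings.max() - bearings.min())   # angular spread it must cover
    print(f"cluster {c}: pan {pan:.1f} deg, required field of view {fov:.1f} deg")
```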
Measuring the Effectiveness of Building and Developing a Computerized System for Managing and Analyzing Arabic Citations
Authors: Saleh Alzeheimi and Akram Zeki

Among the indicators reported by several studies on the weakness of Arabic scholarly output and of Arabic digital content in international databases is the absence of computerized programs that analyze the references and (Arabic) citations appearing in this output, and the lack of automated tools that provide statistical (bibliometric) indicators highlighting Arab research trends, helping decision makers in research and academic institutions, as well as researchers, to steer Arabic scientific research, identify points of strength and weakness in the Arabic disciplines under study, and present the systematic map of the Arabic sciences and its branches. In contrast, there are advanced tools and international systems that provide solutions for analyzing scholarly output published in English, such as Scopus and the Journal Citation Report (JCR); however, these systems do not support scholarly output published in Arabic, are not free, and do not allow free and open use by institutions and individuals. This study therefore aims to measure the effectiveness of building and developing a computerized program for managing and analyzing Arabic bibliometric studies and citations. The study will follow two approaches to achieve its objectives: (1) building an open-source program using PHP and MYSQL, so that developers can continue to improve it and it can be offered free of charge to specialists in bibliometric studies, making it the first open-source Arabic program for automated bibliometric analysis; and (2) testing the program on peer-reviewed Arabic journals, measuring the effectiveness of the bibliometric reports and statistics it produces, the extent to which they conform to bibliometric laws such as Lotka's law and Bradford's law, and the extent to which the program provides the features and indicators offered by international systems supporting non-Arabic scholarly output such as Scopus, Google Scholar and Science Direct. Accordingly, the study will rely on constructive and experimental methods to measure the effectiveness of an open-source computerized program for Arabic bibliometric studies (Arabic Citation Engine). The program will be evaluated by entering data from a sample of Arabic articles published in the journals issued by the International Islamic University Malaysia (MUII) and examining the reports and scientific indicators the program produces, and the accuracy of the results against the bibliometric laws.
Energy Storage As An Enabling Technology For The Smart Grid
Authors: Omar Ellabban and Haitham Abu-rub

In today's world, the need for more energy seems to be ever increasing. The high cost and limited sources of fossil fuels, in addition to the need to reduce greenhouse gases, have made renewable energy sources (RES) attractive in today's world economy. However, the fluctuating and intermittent nature of these RES causes variations of power flow that can significantly affect the stability and operation of the electrical grid. In addition, the power output of these RES is not as easy to adjust to changing demand cycles as the output from traditional power sources. To overcome these problems, energy from these RES must be stored when excess is produced and then released when production levels are less than the required demand. Therefore, in order for RES to become completely reliable as primary sources of energy, energy storage systems (ESS) are a crucial factor. The impact of the ESS on the future grid is receiving more attention than ever from system designers, grid operators and regulators. Energy storage technologies have the potential to support our energy system's evolution; they can be used for multiple applications, such as energy management, backup power, load leveling, frequency regulation, voltage support, and grid stabilization. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Furthermore, an assessment of the dynamic performance of energy storage technologies for the stabilization and control of the power flow of the emerging smart grid will be presented. The ESS can enhance operational security and ensure the continuity of energy supply in future smart grids.
A Novel Solution For Addressing Network Firewall Issues In Remote Laboratory Development
Authors: Ning Wang, Xuemin Chen, Michael Ho, Hamid Parsaei and Gangbing Song

The increased use of remote laboratories for online education has made network security issues ever more critical. To defend against numerous new types of potential attacks, the complexity of network firewalls has been significantly increased. Consequently, the network firewall will inevitably limit real-time remote experimental data transmission. To solve the issue of traversing network firewalls, we designed and implemented a novel real-time experimental data transmission solution, which includes two parts, real-time experiment control data transmission and real-time experiment video streaming transmission, for remote laboratory development. To implement real-time experiment control data transmission, a new web server software architecture was designed and developed to allow the traversal of network firewalls. With this new software architecture, the public network port 80 can be shared between Node.js, which is a stable server-side software engine supporting real-time communication web applications, and the Apache web server software system. With this new approach, the Apache web server application still listens on the public network port 80, but any client requests for the Node.js web server application arriving through that port are forwarded to a special network port on which the Node.js web server application is listening. Accordingly, a new solution in which both Apache and Node.js web server applications work together via an HTTP proxy, developed with the Node-HTTP-Proxy software package, is implemented on the server side. With this new real-time experiment control and data transmission solution, the end user can control remote experiments and view experimental data in the web browser without firewall issues and without the need for third-party plug-ins. It also provides a new approach for remote experiment control and real-time data transmission based purely on the HTTP protocol. To implement the real-time experiment video transmission part, we developed a complete novel solution via the HTTP Live Streaming (HLS) protocol and FFMPEG, a powerful cross-platform command-line video transcoding/encoding software package, on the server side. In this paper, a novel real-time video streaming transmission approach based on HLS for remote laboratory development is presented. With this new solution, end users can view the real-time experiment live video streaming on any portable device without any firewall issues or the need for a third-party plug-in. We have successfully implemented this novel real-time experiment data transmission solution in the remote SVP experiment and remote SMA experiment development. End users can now conduct the SVP and SMA remote experiments and view the experiment data and video in real time through web browsers anywhere that has an internet connection, without any third-party plug-in. Consequently, this novel real-time experiment data transmission solution gives the unified framework significant improvements.
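The port-sharing idea can be pictured with a small, language-agnostic analogue. The Python sketch below (an assumption for illustration, not the paper's Apache/Node-HTTP-Proxy configuration) runs one front end on a public port and forwards requests under a chosen path prefix to a back-end server listening on a private port.

```python
# Conceptual Python analogue of sharing one public port (an assumption for
# illustration, not the paper's Apache/Node-HTTP-Proxy setup): a front end
# listens on the public port and forwards requests under /rt/ to a back-end
# "real-time" server on a private port, serving everything else itself.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"            # hypothetical real-time back end

class FrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/rt/"):
            with urlopen(BACKEND + self.path) as resp:   # relay to the back end
                body, status = resp.read(), resp.status
        else:
            body, status = b"static content would be served here", 200
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), FrontEnd).serve_forever()     # the single shared public port
```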
Power System Stabilizer Design Based On Honey-bee Mating Optimization Algorithm
Authors: Abolfazl Halvaei Niasar, Dariush Zamani and Hassan Moghbeli

Power system stability is one of the main factors in the performance of an electrical system. A control system must keep the frequency and voltage magnitude at a constant level under any disturbance, such as a sudden increase in load, the loss of a generator from the circuit, or the disconnection of a transmission line. In this paper, the Honey-Bee Mating Optimization (HBMO) algorithm has been used to design a power system stabilizer. The algorithm is modeled on the mating process between the queen and the bees, and this meta-heuristic honey-bee mating optimization algorithm is considered an intelligent algorithm. Simulation results show that the HBMO algorithm is simple to apply to optimization problems. It adapts to changes in the system and offers more flexible parameters in comparison with other methods. Furthermore, in terms of performance, it shows a suitable standard deviation and convergence of the approximation.
Sensorless Direct Power Control Of Induction Motor Drive Using Artificial Neural Network
Authors: Abolfazl Halvaei Niasar, Hassan Moghbeli and Hossein Rahimi Khoei

This paper proposes the design of a sensorless induction motor drive based on the direct power control (DPC) technique. The principle of the DPC technique is presented, and the possibilities of direct power control for induction motors (IMs) fed by a space vector pulse-width modulation inverter (SV-PWM) are studied. The simulation results show that the DPC technique enjoys all the advantages of the previous method, such as fast dynamics and ease of implementation, without its problems. Some simulations are carried out for the closed-loop speed control systems under various load conditions to verify the proposed methods. Results show that DPC of IMs works well with output power and flux control. Moreover, to reduce the cost of the drive and enhance reliability, an effective sensorless strategy based on an artificial neural network (ANN) is developed to estimate the rotor position and motor speed. The developed sensorless scheme is a new model reference adaptive system (MRAS) speed observer for direct power control of induction motor drives. The proposed MRAS speed observer uses the current model as an adaptive model. The neural network has been designed and trained online by employing a back propagation network (BPN) algorithm. The estimator is simulated in Matlab/Simulink. Simulation results confirm the performance of the ANN-based sensorless DPC induction motor drive in various conditions.
Sensor Fault Detection And Isolation System
Authors: Cheng-ken Yang, Alireza Alemi and Reza Langari

This paper is mainly aimed at providing an energy security strategy for petroleum production and processing within the grand challenges. Fault detection and diagnosis is the central component of abnormal event management (AEM) [1]. According to the International Federation of Automatic Control (IFAC), a fault is defined as an unpermitted deviation of at least one characteristic property or parameter of the system from the acceptable/usual/standard condition [2-4]. Generally, there are three parts in a fault diagnosis system: detection, isolation, and identification [5, 6, 7]. Depending on the performance of fault diagnosis systems, they are called FD (for fault detection), FDI (for fault detection and isolation) or FDIA (for fault detection, isolation and analysis) systems [5]. As the need for energy grows rapidly, energy security becomes an important issue, especially in petroleum production and processing. Its importance can be considered mainly from the following perspectives: higher system performance, product quality, human safety, and cost efficiency [5, 8]. With this in mind, the purpose of this research is to develop a Fault Detection and Isolation (FDI) system which is capable of diagnosing multiple sensor faults in nonlinear cases. In order to bring this study closer to real-world applications in the oil industry, the system parameters of the applied system are assumed to be unknown. In the first step of the proposed method, phase space reconstruction techniques are used to reconstruct the phase space of the applied system. This step is aimed at inferring the system properties from the collected sensor measurements. The second step is to use the reconstructed phase space to predict future sensor measurements, and residual signals are generated by comparing the actual measurements to the predicted measurements. Since, in practice, residual signals will not be exactly zero in the fault-free situation, the Multiple Hypothesis Shiryayev Sequential Probability Test (MHSSPT) is introduced to further process these residual signals, and the diagnostic results are presented in terms of probability. In addition, the proposed method is extended to a non-stationary case by using the conservation/dissipation property in phase space. The proposed method is examined with both simulated data and real process data. A three-tank model, built according to the nonlinear laboratory setup DTS200, is introduced for generating simulated data. On the other hand, real process data collected from a sugar factory actuator system are also used to examine the proposed method. According to our results obtained from simulations and experiments, the proposed method is capable of indicating both healthy and faulty situations. In the end, we emphasize that the proposed approach is not limited to applications in petroleum production and processing. For example, our approach can also be applied to enhance water quality and avoid discharges, such as leakage, in water resource management. Therefore, the proposed approach benefits not only the issue of energy security but also other issues in the grand challenges.
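A compact sketch of the first two steps (phase space reconstruction by time-delay embedding, prediction from the reconstructed space, and residual thresholding) is given below; the signal, the nearest-neighbour predictor and the threshold are simplifications assumed for illustration, not the paper's exact procedure.

```python
# Compact sketch of the first two steps (assumed details, not the paper's code):
# reconstruct the phase space by time-delay embedding, predict each new sample from
# the successor of the nearest past state, and flag residuals above a threshold.
import numpy as np

def delay_embed(x, dim=3, tau=2):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

t = np.arange(0, 60, 0.05)
x = np.sin(t)                                  # healthy sensor signal (synthetic)
x[900:] += 0.8                                 # injected sensor bias fault

E = delay_embed(x)
residuals = []
for k in range(2, len(E)):
    hist = E[:k - 1]                                             # strictly earlier states
    nn = int(np.argmin(np.linalg.norm(hist - E[k - 1], axis=1)))
    predicted = E[nn + 1, -1]                                    # successor of the nearest past state
    residuals.append(abs(E[k, -1] - predicted))                  # compare with the actual new sample

alarm = int(np.argmax(np.array(residuals) > 0.5)) + 2
print("fault first flagged at embedded state", alarm)
```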
Accurate Characterization Of BiCMOS BJT Across DC-67 GHz With On-wafer Measurement And EM De-embedding
Authors: Juseok Bae, Scott Jordan and Nguyen Cam

The complementary metal oxide semiconductor (CMOS) and bipolar-complementary metal oxide semiconductor (BiCMOS) technologies offer low power dissipation, good noise performance, high packing density, and more in analog and digital circuit design. They have contributed significantly to the advancement of wireless communication and sensing systems and are presently indispensable devices in these systems. As the technologies and device performance have advanced into the millimeter-wave regime over the last two decades, accurate S-parameters of CMOS and BiCMOS devices at millimeter-wave frequencies are in high demand for millimeter-wave radio-frequency integrated-circuit (RFIC) design. The accuracy of these S-parameters is absolutely essential for accurately extracting the device parameters and small- and large-signal models. The conventional extraction techniques, using an impedance standard substrate together with a de-embedding technique, have been replaced by on-wafer calibration techniques implementing calibration standards fabricated on the same wafer together with the device under test (DUT), by virtue of accurate characterization over wide frequency ranges and at high frequencies. However, some challenges for on-wafer calibration remain when the calibration is conducted over a wide frequency range covering millimeter-wave frequencies with a DUT such as a bipolar junction transistor (BJT). The ends of the interconnects for the open and load standards are inherently very close to each other, since their spacing depends on the spacing between the base (or collector) and emitter of the BJT (about 0.25 μm); this not only causes significant gap and open-end fringing capacitances, which lead to substantial undesired effects for device characterization at millimeter-wave frequencies, but also makes it impossible to place resistors within such narrow gaps for the load standard design. In order to resolve the structural issue of the conventional on-wafer calibration standards, a new method implementing both on-wafer calibration and electromagnetic (EM)-based de-embedding has been developed. In the newly developed technique, appropriate spacing in the on-wafer calibration standards, which minimizes the parasitic capacitance between the close open-ends and provides enough space to place the load standard's resistors, is determined based on EM simulations, and the non-calibrated part within the spacing, consisting of interconnects and vias, is extracted by the EM-based de-embedding. The developed procedure with on-wafer calibration and EM-based de-embedding characterizes the S-parameters of BJTs in 0.18-µm SiGe BiCMOS technology from DC to 67 GHz. The measured results show sizable differences in insertion loss and phase between the on-wafer characterizations with and without the EM-based de-embedding, demonstrating that the developed on-wafer characterization with EM-based de-embedding is needed for accurate characterization of devices at millimeter-wave frequencies, which is essential for the design of millimeter-wave wireless communication and sensing systems.
-
-
-
Visual Scale-adaptive Tracking For Smart Traffic Monitoring
Authors: Mohamed Elhelw, Sara Maher and Ahmed Salaheldin
This paper presents a novel real-time scale-adaptive visual tracking framework and its use in smart traffic monitoring, where the framework robustly detects and tracks vehicles from a stationary camera. Existing visual tracking methods often employ semi-supervised appearance modeling, where a set of samples is continuously extracted around the vehicle to train a classifier that discriminates between the vehicle and the background. While these methods have proven advantageous, many issues remain to be addressed. One is the tradeoff between high adaptability (prone to drift) and preserving the original vehicle appearance (susceptible to tracking loss under significant appearance variations). Another is vehicle scale change due to the perspective camera effect, which increases the potential for inaccurate updates and, subsequently, visual drift. Still, scale adaptability has received little attention in vision-based discriminative trackers. In this paper we propose a three-step Scale Adaptive Object Tracking (SAOT) framework that adapts to scale and appearance changes. The framework is divided into three phases: (1) vehicle localization using a diverse ensemble, (2) scale estimation, and (3) data association, where detected and tracked vehicles are correlated. The first step computes the vehicle position using an ensemble based on compressed low-dimensional feature subsets projected from a high-dimensional feature space by random projections. This provides the diversity needed to accommodate individual classifier errors and different adaptability rates. The scale estimation step, applied after vehicle localization, is computed from matched points between a pre-stored template and the localized vehicle. This not only estimates the new scale of the vehicle but also serves as a correction step that prevents drift by estimating the displacements between correspondences. The data association step is subsequently applied to link the vehicles detected in the current frame with the tracked vehicles; it must consider factors such as the absence of a detected target, false detections, and ambiguity. Figure 1 illustrates the framework in operation. While the vehicle detection phase is executed per frame, the continuous tracking procedure ensures that all vehicles in the scene, no matter how complex the scene is, are correctly accounted for. The performance of the proposed Scale Adaptive Object Tracking (SAOT) algorithm is further evaluated on a different set of sequences with scale and appearance changes, blurring, a moving camera, and illumination changes. SAOT was compared to three established trackers from the literature: Compressive Tracking, Incremental Learning for Visual Tracking, and the Random Projection with Diverse Ensemble Tracker, using standard visual tracking evaluation datasets [4]. The initial target position for all sequences was initialized using manual ground truth. Centre Location Error (CLE) and recall are calculated to evaluate the methods. Table 1 presents the CLE errors and recall (in parentheses) measured on a set of 2 sequences with different challenges. It clearly demonstrates that SAOT performs better than the other trackers.
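A minimal sketch of the scale-estimation idea: given point correspondences between the stored template and the localized vehicle, the relative scale can be estimated from ratios of pairwise distances, with a median for robustness. The matching front end and the exact estimator used by the authors are not specified here, so this is only an illustrative assumption.

```python
import numpy as np
from itertools import combinations

def estimate_scale(template_pts, target_pts):
    """Estimate relative scale from matched 2D points (rows correspond)."""
    ratios = []
    for i, j in combinations(range(len(template_pts)), 2):
        d_t = np.linalg.norm(template_pts[i] - template_pts[j])
        d_s = np.linalg.norm(target_pts[i] - target_pts[j])
        if d_t > 1e-6:
            ratios.append(d_s / d_t)
    return np.median(ratios)   # robust to a few bad correspondences

# toy example: the target is the template scaled by 1.5 plus a translation
template = np.array([[0, 0], [10, 0], [10, 20], [0, 20]], dtype=float)
target = 1.5 * template + np.array([30.0, 12.0])
print(estimate_scale(template, target))   # ~1.5
```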
-
-
-
"ElectroEncephaloGram (EEG) Mental Task Discrimination", Digital Signal Processing -- Master Of Science, Cairo University
More Less"ElectroEncephaloGram "EEG" Mental Task Discrimination", Master of Science dissertation, Cairo University, 2010. Recent advances in computer hardware and signal processing have made possible the use of EEG signals or "brain waves" for communication between humans and computers. Locked-in patients have now a way to communicate with the outside world, but even with the last modern techniques, such systems still suffer communication rates on the order of 2-3 tasks/minute. In addition, existing systems are not likely to be designed with flexibility in mind, leading to slow systems that are difficult to improve. This Thesis is classifying different mental tasks through the use of the electroencephalogram (EEG). EEG signals from several subjects through channels (electrodes) have been studied during the performance of five mental tasks: Baseline task for which the subjects were asked to relax as much as possible, Multiplication task for which the subjects were given nontrivial multiplication problem without vocalizing or making any other movements, Letter composing task for which the subject were instructed to mentally compose a letter without vocalizing (imagine writing a letter to a friend in their head),Rotation task for which the subjects were asked to visualize a particular three-dimensional block figure being rotated about its axis, and Counting task for which the subjects were asked to imagine a blackboard and to visualize numbers being written on the board sequentially. The work presented here maybe a part of a larger project, whose goal is to classify EEG signals belonging to a varied set of mental activities in a real time Brain Computer Interface, in order to investigate the feasibility of using different mental tasks as a wide communication channel between people and computers.
-
-
-
Designing A Cryptosystem Based On Invariants Of Supergroups
Authors: Martin Juras, Frantisek Marko and Alexander Zubkov
The work of our group falls within the area of Cyber Security, which is one of Qatar's Research Grand Challenges. We are working on designing a new public key cryptosystem that can improve the security of communication networks. The most widely used cryptosystems at present (such as RSA) are based on the difficulty of factoring numbers constructed as the product of two large primes. The security of such systems has been put in doubt, since they can be attacked with the help of quantum computers. We are working on a new cryptosystem based on different (noncommutative) structures, such as algebraic groups and supergroups. Our system is based on the difficulty of computing invariants of the actions of such groups. We have designed a system that uses invariants of (super)tori of general linear (super)groups. Effectively, we are building a "trapdoor function" that enables us to find a suitable invariant of high degree and encode the message quickly and efficiently, but which leaves an attacker with the computationally very expensive and difficult task of finding an invariant of that high degree. As with every cryptosystem, the possibility of breaking it has to be scrutinized very carefully, and the system has to be investigated independently by other researchers. We have established theoretical results about the minimal degrees of invariants of a torus that inform the possible selection of parameters of our system. We continue to obtain more general theoretical results, and we are working towards an implementation and testing of this new cryptosystem. The second part of our work is an extension from the classical case of algebraic groups to the case of algebraic supergroups, concentrating especially on general linear supergroups. We have described the center of the distribution superalgebras of general linear supergroups GL(m|n) using the concept of an integral in the sense of Haboush, and we have computed explicitly all generators of the invariants of the adjoint action of the group GL(1|1) on its distribution algebra. The center of the distribution algebra is related via the Harish-Chandra map to infinitesimal characters. Understanding these characters and blocks would lead us to a description of the linkage principle, that is, of the composition factors of induced modules. Finding and proving a linkage principle for supergroups over fields of positive characteristic is one of our main interests. This extends classical results from representation theory that give scientists, mathematicians and physicists a tool for finding a theoretical model in which the fundamental rules of symmetries of the space-time continuum are realized. A better theoretical background could lead to a better understanding of experimental data and to predictions confirming or contradicting our current understanding of the universe. As has happened many times in the past, finding the right point of view and developing a new language can often lead to a different level of understanding. Therefore we value the theoretical part of our work as much as the practical work related to the cryptosystem.
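For readers unfamiliar with invariants of torus actions, a minimal classical example (deliberately far simpler than the (super)tori of general linear (super)groups used in the proposed system) is the following.

```latex
% One-dimensional torus T = GL_1 acting on the polynomial ring k[x_1, x_2]
% with weights (1, -1):
\[
  t \cdot x_1 = t\, x_1, \qquad t \cdot x_2 = t^{-1} x_2,
  \qquad t \cdot (x_1^{a} x_2^{b}) = t^{\,a-b}\, x_1^{a} x_2^{b}.
\]
% A monomial is invariant exactly when a = b, so the invariant ring is
\[
  k[x_1, x_2]^{T} = k[x_1 x_2].
\]
% The cryptosystem relies on the analogous, but computationally much harder,
% problem of producing invariants of high degree for (super)tori.
```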
-
-
-
Experimental Results On The Performance Of Visible Light Communication Systems
Authors: Mohamed Kashef, Mohamed Abdallah and Khalid Qaraqe
Energy-efficient wireless communication networks have become essential due to the associated environmental and financial benefits. Visible light communication (VLC) is a promising candidate for achieving energy-efficient communications. Light emitting diodes (LEDs) have been introduced as energy-efficient light sources, and their light intensity can be modulated to transfer data wirelessly. As a result, VLC using LEDs can be considered an energy-efficient solution that exploits, for data transmission, illumination energy that is already being consumed. We set up a fully operational VLC testbed composed of both the optical transmitters and the receivers, and we use this system to test the performance of VLC systems. In this work, we discuss the results obtained by running the experiment with different system parameters. We apply different signaling schemes at the LED transmitter, including optical orthogonal frequency division multiplexing (O-OFDM). We also validate our previously obtained analytical results for applying power control in VLC cooperative networks.
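One widely used O-OFDM variant is DC-biased optical OFDM, in which Hermitian symmetry is imposed on the subcarriers so that the IFFT output is real-valued and can be added to a DC bias to drive the LED. The sketch below illustrates only this generic idea; the FFT size, constellation and bias are assumptions, not the parameters of the testbed.

```python
import numpy as np

def dco_ofdm_symbol(data_syms, n_fft=64, dc_bias=1.0):
    """Build one DC-biased optical OFDM symbol from complex QAM symbols.

    Hermitian symmetry X[k] = conj(X[N-k]) forces the IFFT output to be real,
    so it can modulate LED intensity after adding a DC bias and clipping at 0.
    """
    assert len(data_syms) == n_fft // 2 - 1
    X = np.zeros(n_fft, dtype=complex)
    X[1 : n_fft // 2] = data_syms
    X[n_fft // 2 + 1 :] = np.conj(data_syms[::-1])   # Hermitian symmetry
    x = np.fft.ifft(X).real * np.sqrt(n_fft)         # real-valued waveform
    return np.clip(x + dc_bias, 0.0, None)           # non-negative drive signal

qam = (np.random.choice([-1, 1], 31) + 1j * np.random.choice([-1, 1], 31)) / np.sqrt(2)
tx = dco_ofdm_symbol(qam)
print(tx.min() >= 0, np.iscomplexobj(tx))            # True False
```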
-
-
-
Cerebral Blood Vessel Segmentation Using Gauss-Hermite Quadrature Filtering And Automatic Seed Selection
More LessBackground & Objective: Blood vessel segmentation has various applications such as proper diagnosis, surgical planning, and simulation. However, the common challenges faced in blood vessel segmentation are mainly vessels of varying width and contrast changes. In this paper, we propose a segmentation algorithm, where: (1) a histogram-based approach is proposed to determine the initial patch (seeds) and (2) on this patch, a Gauss- Hermite quadrature filter is applied across different scales to handle vessels of varying width with high precision. Subsequently, a level set method is used to perform segmentation on the filter output. Methods: In spatial domain, a Gauss-Hermite quadrature filter is basically a complex filter pair, where the real component is a line filter that can detect linear structures, and the imaginary component is an edge filter that can detect edge structures; the filter pair is used for combined line-edge detection. The local phase is the argument of the complex filter response that determines the type of structure (line/edge), and the magnitude of the response determines the strength of the structure. Since the filter is applied in different directions, all filter responses are then combined to produce an orientation invariant phase map by summing filter responses for all directions. We use 6 filters with center frequency pi/2. To handle vessels of varying width, a multi-scale integration approach is implemented. Vessels of different width appear both as lines and edges across different scales. These scales are combined to produce a global phase map that is used for segmentation. The resulting global phase map contains detailed information about line and edge structures. For blood vessel segmentation, a local phase of 90 degree indicates edge structures. Therefore, it is necessary to consider only the real part of the quadrature filter response. Edges will be found at zero crossing whereas positive and negative values will be obtained for inside and outside of line structures. Therefore, level set (LS) approach is utilized that uses the real part of the phase map as a speed function to drive the deforming contour towards the vessel boundary. In this way, the blood vessel boundary gets extracted. An initial patch on the desired image object is a requirement in this algorithm to start calculating the local phase map. It is obtained by first selecting a few possible partitions using peaks (local maxima) in the intensity histogram. Then, optimal number of seeds is obtained by an iterative clustering of these peaks using their histogram heights and grey scale difference. The seeds around the object form the patch. Results & Conclusion: The proposed method has been tested on 6 subjects of Head MRT Angiography having resolution 416×512×112. We use 6 filters of size 7×7×7 and 4 scales in this experiment. The average time required by MATLAB R14 to perform segmentation is 3 m for one subject by a 2 GB RAM and core2duo processor (without optimization). The resulted segmentation is promising and robust in terms of boundary leakage as can be observed from the Figure.
-
-
-
Self-learning Control Of Thermostatically Controlled Loads: Practical Implementation Of State Of The Art In Machine Learning.
Optimal control of thermostatically controlled loads such as air conditioning and hot water storage plays a pivotal role in the development of demand response, which is an enabling technology for a society with increasing electrification and growing production from intrinsically stochastic renewable energy sources. Optimal control, however, often relies on the availability of a system model in combination with an optimizer, a popular approach being model predictive control. Building such a controller is considered a cumbersome endeavor requiring custom expert knowledge, making large-scale deployment of such solutions challenging. To this end we propose to leverage recent developments in machine learning, enabling a practical implementation of a model-free controller. This model-free controller interacts with the system within safety and comfort constraints and learns from this interaction to make near-optimal decisions, all within a limited convergence time on the order of 20-40 days. When successful, self-learning control allows for a large-scale, cost-effective deployment of demand response applications supporting a future with increased uncertainty in energy production. More precisely, recent results in the field of batch reinforcement learning and regression algorithms such as extremely randomized trees open the door for practical implementations. To support this, in this work we show our most recent experimental results in implementing generic self-learning controllers for thermostatically controlled loads, showing that near-optimal policies can indeed be obtained within a limited time.
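Batch reinforcement learning with extremely randomized trees is typically realized as fitted Q-iteration on logged (state, action, reward, next state) tuples. The sketch below shows the generic algorithm only; the state definition, action set, reward, and horizon are illustrative assumptions, not the experimental setup reported here.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iter=20, gamma=0.95):
    """transitions: tuple of arrays (state, action, reward, next_state)."""
    s, a, r, s_next = transitions
    q = None
    for _ in range(n_iter):
        if q is None:
            target = r                                   # first iteration: Q = reward
        else:
            # max over actions of the current Q estimate at the next state
            q_next = np.column_stack([
                q.predict(np.column_stack([s_next, np.full(len(r), act)]))
                for act in actions])
            target = r + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50).fit(np.column_stack([s, a]), target)
    return q

# toy batch: state = indoor temperature, actions = heater off/on (0/1)
rng = np.random.default_rng(1)
s = rng.uniform(18, 24, 5000)
a = rng.integers(0, 2, 5000)
s_next = s + np.where(a == 1, 0.5, -0.3) + rng.normal(0, 0.1, 5000)
r = -np.abs(s_next - 21) - 0.1 * a                        # comfort minus energy cost
q = fitted_q_iteration((s, a, r, s_next), actions=[0, 1])
greedy = lambda temp: int(np.argmax([q.predict([[temp, act]])[0] for act in [0, 1]]))
print(greedy(19.0), greedy(23.0))                          # likely 1 (heat) and 0 (off)
```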
-
-
-
Self-powered Sensor Systems Based On Small Temperature Differences: Potential For Building Automation And Structural Health Monitoring
Authors: Jana Heuer, Hans-fridtjof Pernau, Martin Jägle, Jan D. König and Kilian Bartholomé
Sensors are the eyes and ears in the service of people - especially in inaccessible areas where regular maintenance or battery replacement is extremely difficult. By using thermoelectric generators, which are capable of directly converting heat flux into electrical energy, self-powered sensor systems can be established wherever temperature differences of a few Kelvin exist. Once installed, the sensors collect and transmit their data without any need for further maintenance such as battery replacement. Intelligent building automation, for instance, is key to significant energy reduction in buildings. Through precise control of sun blinds and of the set temperature for thermostats and air conditioning, radio-signal sensors help to increase a building's efficiency substantially. Thermoelectric self-powered sensors have the additional potential to introduce more flexibility into building technology, since complex wiring is avoided, and buildings can be adapted more easily to changing uses. Structural health monitoring is another field where energy-autarkic sensors could be of vital use. In-situ measurements of, e.g., temperature, humidity, strain and cracks are essential in order to determine the condition of construction materials. The respective sensors are often hard to access, and wiring or battery replacement is costly or even impossible. Sensors driven by thermoelectric generators are maintenance-free and can help enhance the longevity of buildings as well as reduce maintenance costs. Furthermore, leakage in water transport systems can be detected by in-situ monitoring with self-powered sensors, thereby reducing unnecessary water losses. The great progress in the development of low-power sensors, power management and radio communication, combined with the availability of high-efficiency thermoelectric generators, makes it possible to run a self-powered sensor node from temperature differences as low as 0.8 K. This potential will be presented with respect to selected fields of application.
-
-
-
Smart Consumers, Customers And Citizens: Engaging End-users In Smart Grid Projects
Authors: Pieter Valkering and Erik Laes
There is no smart grid without a smart end-user! Smart grids will be essential in future energy systems to allow for major improvements in energy efficiency and for the integration of solar energy and other renewables into the grid, thereby contributing to climate change mitigation and environmental sustainability at large. End-users will play a crucial role in these smart grids, which aim to link end-users and energy providers in a better balanced and more efficient electricity system. The success of smart grids depends on effective active load and demand side management, facilitated by appropriate technologies and financial incentives and requiring end-user, market and political acceptance. However, current smart grid pilot projects typically focus on technological learning and not so much on learning to understand consumer needs and behaviour in a connected living environment. The key question thus remains: how can end-users be engaged in smart grid projects so as to satisfy end-user needs and stimulate active end-user participation, thereby realizing as much as possible of the potential for energy demand reduction, energy demand flexibility, and local energy generation? The aim of the European S3C project (www.s3c-project.eu) is to further the understanding of how to engage end-users (households and SMEs) in smart grid projects and of the ways this may contribute to the formation of new 'smart' behaviours. This research is based upon three key pillars: 1) the analysis of a suite of (recently finished or well-advanced) European smart-grid projects to assess success factors and pitfalls; 2) the translation of lessons learned into the development of concrete engagement guidelines and tools; and 3) the further testing of the guidelines and tools in a collection of ongoing smart grid projects, leading to a finalized 'toolbox' for end-user engagement. Crucially, it differentiates findings for three key potential end-user roles: 'Consumer' (a rather passive role primarily involving energy saving), 'Customer' (a more active role offering demand flexibility and micro-scale energy production), and 'Citizen' (the most pro-active role involving community-based smart grid initiatives). Within this context, this paper aims to deliver a coherent view on current good practice in end-user engagement in smart grid projects. Starting from a brief theoretical introduction, it highlights the success factors - such as underscoring the local character of a smart energy project - and barriers - such as the lack of viable business cases - that the S3C case study analysis has revealed. It furthermore describes how these insights are translated into a collection of guidelines and tools on topics such as understanding target groups, designing adequate incentives, implementing energy monitoring systems, and setting up project communication. An outlook towards future testing of those guidelines and tools within ongoing smart grid projects is also given. Consequently, for each of the three typical end-user roles we argue which principles of end-user engagement should be considered good (or bad) practice. We conclude by highlighting promising approaches for end-user engagement that require further testing, as input for a research agenda on end-user engagement in smart grids.
-
-
-
Illustrations Generation Based On Arabic Ontology For Children With Intellectual Challenges
Authors: Abdelghani Karkar, Amal Dandashi and Jihad Al Ja'am
Digital devices and computer software have the potential to help children with intellectual challenges (IC) with learning, professional growth, and self-reliant living. However, most tools and existing software applications that these children use are designed without regard to their particular impairments. We have developed an Arabic ontology-based learning system that automatically presents illustrations characterizing the content of stories for children with IC in the State of Qatar. We utilize several mechanisms to produce these illustrations: Arabic natural language processing, an animal domain ontology, word-to-word relationship extraction, and automatic online search-engine querying. The main purpose of the proposed system is to improve the education, comprehension, perception, and reasoning of children with IC through the generated illustrations.
-
-
-
Application Of Design For Verification To Smart Sensory Systems
Authors: Mohammed Gh Al Zamil and Samer Samarah
Wireless Sensor Networks (WSNs) have enabled researchers and developers to propose a series of smart systems that serve the needs of societies and enhance the quality of services. WSNs consist of a set of sensors that sense environmental variables, such as temperature, humidity, and the speed of objects, and report them back to a central node. Although such an architecture seems simple, it suffers from many limitations that might affect its scalability, the modularity of coded programs, and correctness with respect to synchronization problems such as nested monitor lockouts, missed or forgotten notifications, or slipped conditions. This research investigated the application of the design-for-verification approach in order to come up with a design-for-verification framework that takes into account the specialized features of smart sensory systems. Such a contribution facilitates 1) verifying coded programs to detect temporal and concurrency problems, 2) automating the verification process of such complex and critical systems, and 3) modularizing the coding of these systems to enhance their scalability. Our proposed framework relies on separating the design of the system's interfaces from the coded body (separation of concerns). We do not aim at recompiling the coded programs but, instead, at discovering design errors resulting from the concurrent, temporal interactions among different programming objects. For this reason, our proposed framework adapts the concurrency controller design pattern to model the interaction modules. As a result, we were able to study the interactions among different actions and automatically recognize the transitions among them. Such recognition guarantees the construction of a finite-state automaton that formulates the input description for a model checker, which verifies given temporal properties. To evaluate the proposed methodology, we have verified a real-time smart irrigation system that consists of a set of different sensors controlled by a single controller unit. The system has already been installed at our research field to control the irrigation process for the purpose of saving water. Further, we designed a set of temporal specifications to check whether the system conforms to them during the interactions among heterogeneous sensors. If not, the model checker returns a counterexample: a sequence of states that violates the given specification. The counterexample is then invaluable for fixing the design error, which minimizes the risk of encountering that error at run time. The results showed that applying the proposed framework facilitates producing scalable, modular, and error-free sensory systems. The framework allowed us to detect critical design errors and fix them before deploying the smart sensory system. Finally, we were also able to check the power consumption model of the installed sensors and the effect of data aggregation on saving power during future operations.
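To make the verification idea concrete, the sketch below checks a simple safety property over a recorded event trace and returns whether a violating prefix (a counterexample) exists. It is only a toy stand-in for the automaton fed to a model checker; the event names and the property ("the valve must never open unless a moisture reading arrived first in the current cycle") are hypothetical, not taken from the actual irrigation system.

```python
def violates_property(trace):
    """Return True if the trace violates the assumed safety property."""
    moisture_seen = False
    for event in trace:
        if event == "moisture_report":
            moisture_seen = True
        elif event == "valve_open" and not moisture_seen:
            return True            # counterexample prefix found
        elif event == "cycle_reset":
            moisture_seen = False
    return False

good = ["moisture_report", "valve_open", "cycle_reset", "moisture_report", "valve_open"]
bad = ["cycle_reset", "valve_open"]
print(violates_property(good), violates_property(bad))   # False True
```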
-
-
-
NEGATIVE FOUR CORNER MAGIC SQUARES OF ORDER SIX WITH A BETWEEN 1 AND 5
In this paper we introduce and study special types of magic squares of order six. We list some enumerations of these squares, and we present a parallelizable code based on the principles of genetic algorithms. A magic square is a square matrix in which the sum of the entries in each row, in each column, and along both main diagonals is the same number, called the magic constant. A natural magic square of order n is an n×n matrix whose entries consist of all integers from 1 to n². We define a new class of magic squares and present listings of the counting carried out over two years.
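A small verification helper matching the definition above can be sketched as follows; the genetic-algorithm search itself is not reproduced here.

```python
import numpy as np

def is_natural_magic_square(m):
    """Check equal row/column/diagonal sums and entries forming {1, ..., n^2}."""
    m = np.asarray(m)
    n = m.shape[0]
    if m.shape != (n, n) or sorted(m.flatten()) != list(range(1, n * n + 1)):
        return False
    constant = m[0].sum()
    return (all(row.sum() == constant for row in m)
            and all(col.sum() == constant for col in m.T)
            and np.trace(m) == constant
            and np.trace(np.fliplr(m)) == constant)

# classic order-3 example with magic constant 15
print(is_natural_magic_square([[2, 7, 6], [9, 5, 1], [4, 3, 8]]))   # True
```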
-
-
-
Arabic Natural Language Processing: Framework For Translative Technology For Children With Hearing Impairments
Authors: Amal Dandashi, Abdelghani Karkar and Jihad Aljaam
Children with hearing impairments (HI) often face many educational, communicational and societal challenges. Arabic Natural Language Processing can be used to develop several key technologies that may alleviate the cognitive and language-learning difficulties that children with HI face in the Arab world. In this study, we propose a system design that provides the following component functionalities: (1) multimedia translation elements that can be dynamically generated based on Arabic text; (2) 3D-avatar-based text-to-video translation (from Arabic text to Qatari Sign Language), involving manual and non-manual signals; (3) an emergency phone-based system that translates Arabic text to Qatari Sign Language video and vice versa; and (4) a multi-component system designed to be mobile and usable on various platforms. The system involves the use of Arabic Natural Language Processing, Arabic word and video ontologies, and customized engine querying. The objective of the presented system framework is to provide translational and cognitive assistive technology to individuals with HI and to empower their autonomous capacities.
-
-
-
Optimized Search Of Corresponding Patches In Multi-scale Stereo Matching: Application To Robotic Surgery
Authors: Amna Alzeyara, Jean-marc Peyrat, Julien Abinahed and Abdulla Al-ansari
INTRODUCTION: Minimally-invasive robotic surgery benefits the surgeon with increased dexterity and precision, more comfortable seating, and depth perception. Indeed, the stereo-endoscopic camera of the daVinci robot provides the surgeon with a high-resolution 3D view of the surgical scene inside the patient's body. To leverage this depth information using advanced computational tools (such as augmented reality or collision detection), we need a fast and accurate stereo matching algorithm, which computes the disparity (pixel shift) map between the left and right images. To improve this trade-off between speed and accuracy, we propose an efficient multi-scale approach that overcomes standard multi-scale limitations due to interpolation artifacts when upsampling intermediate disparity results from coarser to finer scales. METHODS: Standard stereo matching algorithms perform an exhaustive search for the most similar patch between the reference and target images (along the same horizontal line when the images are rectified). This requires a wide search range in the target image to ensure that the pixel corresponding to the reference pixel is found (Figure 1). To optimize this search, we propose a multi-scale approach that uses the disparity map resulting from the previous iteration at a lower resolution. Instead of directly using the pixel position in the reference image to place the search region in the target image, we shift it by the corresponding disparity value from the previous iteration and reduce the width of the search region, since it is expected to be closer to the optimal solution. We also add two additional search regions shifted by the disparity values at the left and right adjacent pixel positions (Figure 2) to avoid errors typically related to interpolation artifacts when resizing the disparity map. To avoid large overlaps between different search regions, we only add them where the disparity map has strong gradients. MATERIAL: We used stereo images from the Middlebury dataset (http://vision.middlebury.edu/stereo/data/) and stereo-endoscopic images captured at full HD 1080i resolution using a daVinci S/Si HD Surgical System. Experiments were performed with a GPU implementation on a workstation with 128GB RAM, an Intel Xeon Processor E5-2690, and an NVIDIA Tesla C2075. RESULTS: We compared the accuracy and speed of the standard and proposed methods using ten images from the Middlebury dataset, which has the advantage of providing ground-truth disparity maps. We used the sum of squared differences (SSD) as the similarity metric between patches of size 3x3 in the left and right rectified images, resized to half their original size (665x555). For the standard method, we set the search range offset and width to -25 and 64 pixels, respectively. For the proposed method, we initialize the disparity to 0, followed by five iterations with a search range width of 16. Results in Table 1 show that we managed to improve the average accuracy by 27% without affecting the average computation time of 120 ms. CONCLUSION: We proposed an efficient multi-scale stereo matching algorithm that significantly improves accuracy without compromising speed. In future work, we will investigate the benefits of a similar approach using temporal consistency between successive frames and its use in more advanced computational tools for image-guided surgery.
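A minimal sketch of the per-pixel SSD search with a prior disparity, as described above: the search window along the target scanline is centered at the reference position shifted by the disparity estimated at the previous (coarser) iteration, with a reduced width. Patch size, search width and image handling are simplified assumptions.

```python
import numpy as np

def ssd_disparity(left, right, y, x, prior_disp, half_width=8, patch=1):
    """Search along rectified scanline y for the disparity of pixel (y, x) in
    `left`, restricting the search to prior_disp +/- half_width."""
    ref = left[y - patch : y + patch + 1, x - patch : x + patch + 1]
    best_d, best_cost = prior_disp, np.inf
    for d in range(prior_disp - half_width, prior_disp + half_width + 1):
        xr = x - d                              # candidate position in the right image
        if xr - patch < 0 or xr + patch + 1 > right.shape[1]:
            continue
        cand = right[y - patch : y + patch + 1, xr - patch : xr + patch + 1]
        cost = np.sum((ref.astype(float) - cand.astype(float)) ** 2)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# toy pair: the right image is the left image shifted by a disparity of 12 pixels
rng = np.random.default_rng(0)
left = rng.random((50, 120))
right = np.roll(left, -12, axis=1)
print(ssd_disparity(left, right, y=25, x=60, prior_disp=10))   # -> 12
```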
-
-
-
On The Use Of Pre-equalization To Enhance The Passive UHF RFID Communication Under Multipath Channel Fading
Authors: Taoufik Ben Jabeur and Abdullah Kadri
Background: We consider a monostatic passive UHF RFID system composed of one RFID reader, with a single antenna for transmission and reception, and RF tags. The energy of the continuous signal transmitted by the RFID reader is used to power up the internal circuitry of the RF tags and to backscatter their information to the reader. In passive UHF RFID there is no energy source other than the continuous wave. Experiments show that multipath channel fading can dramatically reduce the received power at the tag, so that the received energy may not be sufficient to power up the RF tag. To remedy this problem, we propose a pre-equalizer applied to the reader's transmitted signal in order to maintain a received power sufficient to power up the tag. Objectives: This work aims to design a pre-equalizer specific to passive UHF RFID systems, able to combat the effect of multipath channel fading and thus maintain a high received power at the tag. Methods: (a) In the first stage, we assume knowledge of the multipath channel fading and of the continuous wave; the pre-equalizer is then designed for a fixed Rayleigh multipath channel in order to maximize the energy of the received signal at the tag. (b) Properties are extracted from the pre-equalizer designed for this fixed channel. (c) These properties are used to design a more general equalizer that can be applied to any unknown multipath Rayleigh channel. Simulation results: Simulation results show that the proposed pre-equalizers combat the effect of multipath channel fading and thus increase the received power at the RF tag. The energy consumption of the tag remains the same, and all operations are performed at the RFID reader side.
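For intuition, a generic frequency-domain zero-forcing pre-equalizer is sketched below: the transmitted waveform is pre-distorted by the inverse of an assumed-known channel frequency response, so the waveform arriving at the tag is approximately the intended one. This is only a textbook illustration of pre-equalization with a toy wideband probe signal standing in for the reader waveform; it is not the specific design proposed for unknown Rayleigh channels.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
tx = rng.standard_normal(N)                    # toy probe signal
h = np.array([1.0, 0.6, 0.3])                  # toy multipath channel taps

# Zero-forcing pre-equalizer: divide by the channel response in the frequency
# domain, then normalize so the transmit power budget is unchanged.
H = np.fft.fft(h, N)
pre = np.real(np.fft.ifft(np.fft.fft(tx) / H))
pre *= np.linalg.norm(tx) / np.linalg.norm(pre)

received_plain = np.convolve(tx, h)[:N]
received_preeq = np.convolve(pre, h)[:N]
corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(round(corr(received_plain, tx), 3), round(corr(received_preeq, tx), 3))
# the pre-equalized reception matches the intended waveform much more closely
```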
-
-
-
Determination Of Magnetizing Characteristic Of A Single-phase Self Excited Induction Generator
Authors: Mohd Faisal Khan, Mohd Rizwan Khan, Atif Iqbal and Moidu Thavot
The magnetizing characteristic of a Self Excited Induction Generator (SEIG) defines the relationship between its magnetizing reactance and air-gap voltage. This characteristic is essential for steady-state, dynamic or transient analysis of SEIGs, as the magnetizing inductance is the main factor responsible for voltage build-up and its stabilization in these machines. In order to obtain the data needed for this characteristic, the induction machine is subjected to a synchronous speed test; the data yielded by this test can be used to extract the complete magnetizing behaviour of the test machine. In this paper a detailed study is carried out on a single-phase induction machine to obtain its magnetizing characteristic. The procedure for performing the synchronous speed test and recording the necessary data is explained in detail, along with the relevant expressions for calculating the different parameters. The magnetizing characteristic of the investigated machine is reported in the paper.
-
-
-
Control Of Packed U-cell Multilevel Five-phase Voltage Source Inverter
Authors: Atif Iqbal, Mohd Tariq, Khaliqur Rahman and Abdulhadi Al-qahtani
A seven-level, five-phase voltage source inverter with a packed U-cell topology is presented in this paper. The topology is called packed U-cell because each unit of the inverter is U-shaped. Fig. 1 presents the power circuit configuration of a five-phase, seven-level inverter using packed U-cells. Depending upon the number of capacitors in the investigated topology, different numbers of voltage levels can be achieved. In the presented topology, two capacitors are used to obtain seven levels (Vdc, 2Vdc/3, Vdc/3, 0, -Vdc/3, -2Vdc/3, -Vdc). The voltage across the second capacitor (C) must be maintained at one-third of the dc-link voltage.
-
-
-
An Ultra-low-power Processor Architecture For High-performance Computing And Other Compute-intensive Applications
Authors: Toshikazu Ebisuzaki and Junichiro Makino
The GRAPE-X processor is an experimental processor chip designed to achieve extremely high performance per watt. It was fabricated using TSMC's 28 nm technology and has achieved 30 Gflops/W. This number is three times higher than the performance of the best GPGPU cards announced so far using the same 28 nm technology. Power consumption has been the main factor limiting the performance improvement of HPC systems, because of the breakdown of the so-called CMOS scaling law. Until the early 2000s, when the design rule of silicon devices was larger than 130 nm, shrinking the transistor size by a factor of two resulted in four times more transistors, twice the clock frequency, half the supply voltage, and the same power consumption. Thus, one could achieve an 8x performance improvement. With transistors smaller than the 130 nm design rule, however, it has become difficult to reduce the supply voltage, resulting in only a factor-of-two performance improvement for the same power consumption. As a result, reducing the power consumption of the processor when it is fully in operation has become the most important issue. In addition, it has also become important to achieve high parallel efficiency on relatively small problems. With large parallel machines, high peak performance is realized, but that peak performance is in many cases not so useful, since it is achieved only for unrealistically large problems. For problems of practical interest, the efficiencies of large-scale parallel machines are sometimes surprisingly low. In order to achieve high performance per watt and high parallel efficiency on small problems, we developed a highly streamlined processor architecture. To reduce communication overhead and improve parallel efficiency, we adopted an SIMD architecture. To reduce power consumption, we adopted a distributed-memory-on-chip architecture, in which each SIMD processor core has its own main memory. Based on the GRAPE-X architecture, an exa-flops (10^18 flops) system with a power consumption of less than 50 MW will be possible in the 2018-2019 time frame. For many real applications, including those in the cyber security area that require 10 TB or less of memory, a parallel system based on the GRAPE-X architecture will simultaneously provide the highest parallel efficiency and the shortest time to solution.
-
-
-
Energy Storage System Sizing For Peak Hour Utility Applications In Smart Grid
Authors: Islam Safak Bayram, Mohamed Abdallah and Khalid Qaraqe
Energy Storage Systems (ESS) are expected to play a critical role in future energy grids. ESS technologies are primarily employed to reduce the stress on the grid and the use of hydrocarbons for electricity generation. However, in order for the ESS option to become economically viable, proper sizing is essential to recover the high capital cost. In this paper we propose a system architecture that enables us to optimally size the ESS according to the number of users. We model the demand of each customer by a two-state Markovian fluid, and the aggregate demand of all users is multiplexed at the ESS. The proposed model also draws a constant power from the grid, which is used to accommodate the customer demand and, if required, to charge the storage unit. Then, given the population of customers, their stochastic demands, and the power drawn from the grid, we provide an analytical solution for ESS sizing using the underflow probability as the main performance metric, defined as the percentage of time that the system resources fall short of demand. Such insights are very important in the planning phases of future energy grid infrastructures.
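The underflow-probability metric can be illustrated with a small Monte Carlo sketch: each customer's demand alternates between the ON and OFF states of a two-state Markov chain, the ESS absorbs the difference between grid supply and aggregate demand, and underflow is counted whenever demand exceeds what the grid plus the storage can deliver. All parameters (rates, capacities) are illustrative, and the paper derives this probability analytically rather than by simulation.

```python
import numpy as np

def underflow_probability(n_users, grid_power, ess_capacity,
                          demand_on=1.0, p_on=0.1, p_off=0.2,
                          steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    state = rng.random(n_users) < p_on / (p_on + p_off)   # start near steady state
    soc = ess_capacity                                      # state of charge
    underflow = 0
    for _ in range(steps):
        # two-state Markov transitions for every customer
        flip_on = (~state) & (rng.random(n_users) < p_on)
        flip_off = state & (rng.random(n_users) < p_off)
        state = (state | flip_on) & ~flip_off
        demand = demand_on * state.sum()
        surplus = grid_power - demand            # positive charges the ESS, negative drains it
        if surplus < 0 and soc < -surplus:
            underflow += 1                       # demand not fully served this slot
            soc = 0.0
        else:
            soc = min(ess_capacity, soc + surplus)
    return underflow / steps

print(underflow_probability(n_users=50, grid_power=18.0, ess_capacity=30.0))
```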
-
-
-
An Enhanced Dynamic-programming Technique For Finding Approximate Overlaps
Authors: Maan Haj Rachid and Qutaibah Malluhi
Next-generation sequencing technology creates a huge number of sequences (reads), which constitute the input for genome assemblers. After prefiltering the sequences, it is necessary to detect exact overlaps between the reads to prepare the ingredients needed to assemble the genome. The standard method is to find the maximum exact suffix-prefix match between each pair of reads after executing an error-detection technique. This is the approach used in most assemblers; however, a few studies have worked on finding approximate overlaps instead. This direction can be useful when error detection and prefiltering techniques are very time consuming and not very reliable. However, there is a huge difference in terms of complexity between exact and approximate matching, so any improvement in running time is valuable when approximate overlap is the target. The naive technique for finding approximate overlaps applies a modified version of dynamic programming (DP) to every pair of reads, which consumes O(n²) time, where n is the total size of all reads. In this work, we take advantage of the fact that many reads share prefixes, which means that some work is repeated over and over. For example, consider the sequences in Figure 1: if dynamic programming is applied to S1 and S2, and S2 and S3 share a prefix of length 4, then the calculation of a portion of the DP table of size |S1| × 5 can be avoided when applying the algorithm to S1 and S3 (the shaded area in Figure 1). Figure 1. DP table for the S1,S2 alignment (gap = 1, match = 0, mismatch = 1); no calculation of the shaded area is required when computing the S1,S3 table, since S2 and S3 share the prefix AGCC. The modification is based on this observation: first, the reads are sorted in lexicographical order and the longest common prefix (LCP) between every two consecutive reads is found. Let G denote the group of reads after sorting. For every string S, we find the DP table for S and every other string in G. Since the reads are sorted, a portion of the DP table can be skipped for every string, depending on the size of the LCP, which has already been calculated in the previous step. We implemented the traditional technique for finding approximate overlaps with and without the proposed modification. The results show an improvement of 10-61% in running time. The interpretation of this wide range is that the gain in performance depends on the number of strings: the larger the number of strings, the better the gain in performance, since the sizes of the LCPs are typically larger.
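A minimal sketch of the idea: the DP table for an approximate suffix(A)-prefix(B) overlap is computed column by column, so when consecutive sorted reads share a prefix of length L, the first L+1 columns computed for the previous read can be copied instead of recomputed. The scoring (gap = 1, match = 0, mismatch = 1) follows the figure; everything else is an illustrative simplification, not the authors' implementation.

```python
def overlap_columns(A, B, reuse=None, lcp=0):
    """Columns of the DP table aligning a suffix of A with a prefix of B.
    Column j holds costs against B[:j]; `reuse` lets columns 0..lcp be copied
    from the table of a previous read sharing that prefix with B."""
    cols = []
    for j in range(len(B) + 1):
        if reuse is not None and j <= lcp:
            cols.append(reuse[j])              # shared-prefix column: no recomputation
            continue
        if j == 0:
            cols.append([0] * (len(A) + 1))    # a suffix of A may start anywhere: cost 0
            continue
        prev, col = cols[-1], [j]              # D[0][j] = j: B's prefix must be consumed
        for i in range(1, len(A) + 1):
            cost = 0 if A[i - 1] == B[j - 1] else 1
            col.append(min(prev[i - 1] + cost, prev[i] + 1, col[i - 1] + 1))
        cols.append(col)
    return cols

def best_overlap(cols):
    """Best (cost, overlap length) over non-trivial overlaps (last DP row)."""
    costs = [col[-1] for col in cols]
    j = min(range(1, len(costs)), key=lambda k: costs[k])
    return costs[j], j

S1, S2, S3 = "TTAGCCA", "AGCCTT", "AGCCGT"
cols_12 = overlap_columns(S1, S2)
cols_13 = overlap_columns(S1, S3, reuse=cols_12, lcp=4)   # S2, S3 share "AGCC"
print(best_overlap(cols_12), best_overlap(cols_13))
```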
-
-
-
Practical Quantum Secure Communication Using Multi-photon Tolerant Protocols
This paper presents an investigation of practical quantum secure communication using multi-photon tolerant protocols. Multi-photon tolerant protocols loosen the limit on the number of photons imposed by currently used quantum key distribution protocols. The multi-photon tolerant protocols investigated in this paper are multi-stage protocols that do not require any prior agreement between a sender, Alice, and a receiver, Bob. The security of such protocols stems from the fact that the optimal detection strategies of the legitimate users and of the eavesdropper are asymmetrical, allowing Bob to obtain measurement results deterministically while imposing unavoidable quantum noise on the eavesdropper Eve's measurements. Multi-photon tolerant protocols are based on the use of transformations known only to the communicating party applying them, i.e., either Alice or Bob. In this paper, multi-photon tolerant protocols are used to share a key or a message between Alice and Bob; thus such protocols can be used either as quantum key distribution (QKD) protocols or as quantum communication protocols. In addition, multi-stage protocols can be used to share a key between Alice and Bob, with the shared key then used as a seed key for a single-stage protocol, a scheme called the braiding concept. This paper presents a practical study of multi-photon tolerant multi-stage protocols. Security aspects as well as challenges to practical implementation are discussed. In addition, secret raw key generation rates are calculated with respect to both losses and distances over a fiber-optic channel. It is well known that raw key generation rates decrease with increasing channel losses and distances. In this paper, coherent non-decoying quantum states are used to transfer the encoded bits from Alice to Bob. Raw key generation rates are calculated for different average photon numbers µ and compared with the case of µ=0.1, which is the average number of photons used in most single-photon-based QKD protocols. Furthermore, an optimal average number of photons to be used within the secure region of the multi-photon tolerant protocols is calculated. It is worth noting that, with the increased key generation rates and communication distances offered by the multi-photon tolerant protocols, quantum secure communication need not be restricted to quantum key distribution; it can be elevated to attain direct quantum secure communication.
-
-
-
Power Grid Protection
Authors: Enrico Colaiacovo and Ulrich Ottenburger
Due to its inherent short-term dynamics, the power grid is a critical component of the energy system. When a dangerous event occurs in a section of the grid (i.e. a power line or a plant fails), the overall system is exposed to the risk of a blackout. The time available to counteract the risk is very short (only a few milliseconds), and there are no tools to guarantee power to a number of selected critical facilities. One way to tackle the blackout risk, and to implement smart management of the remaining part of the grid, is a distributed control system with preemptive commands. It is based on the idea that, when a dangerous event occurs, there is definitely no time to inform the control center, make a decision, and send commands to the active components of the power grid where they would finally be executed. The idea consists in implementing an intelligent distributed control system that continuously supervises the critical components of the power grid. It monitors the operating conditions and evaluates the ability of individual components to work properly and their probability of an outage. In parallel, the control system continuously issues preemptive commands to counteract, if necessary, the outages expected on a probabilistic basis. The preemptive commands can be defined taking into account the sensitivity of different network elements to specific outages and, of course, on the basis of a priority rule that preserves power for strategic sites. In case of a dangerous event, the monitoring device directly sends messages to all actuator devices, where the action is performed only if a preemptive command was previously delivered. This means that the latency of the traditional control chain is reduced to the latency of the communication between monitoring and actuator devices. The first consequence of this policy is that an event that could potentially cause a complete blackout affects only a limited portion of the grid. The second consequence is that the control system chooses the network elements involved in the emergency procedure, preserving the strategic plants. The third consequence is that, with this kind of control, the power grid goes from one N-1 stable state to another N-1 stable state: the system loses generation and load contributions, but it keeps its stability and its standard operation.
-
-
-
A Conceptual Model For Tool Handling In The Operation Room
Authors: Juan Wachs and Dov Dori
Background & Objectives: There are 98,000 deaths in the US annually due to errors in the delivery of healthcare causing inpatient mortality and morbidity. Among these errors, ineffective team interaction in the operating room (OR) is one of the main causes. Recently, it has been suggested that developing a conceptual model of verbal and non-verbal exchanges in the OR could lead to a better understanding of the dynamics among the surgical team, and this, in turn, could result in a reduction in miscommunication in the OR. In this work, we describe the main principles characterizing the Object-Process Methodology (OPM). This methodology makes it possible to describe the complex interactions between the surgeon and the surgical staff during the delivery of surgical instruments in a procedure. The main objective of such a conceptual model is to assess when and how errors occur during the request and delivery of instruments, and how to avoid them. Methods: The conceptual model was constructed from direct observations of surgical procedures and of miscommunication cases in the OR. While the interactions in the OR are rather complex, the compact ontology of OPM allows stateful objects and processes to interact mutually and generate measurable outcomes. The instances modeled are related to verbal and non-verbal communication (e.g. gestures, proxemics), and the potential mistakes are modeled as processes that deviate from the "blue ocean" scenario. The OPM model was constructed through an iterative process of data collection through observation, modeling, brainstorming, and synthesis. This conceptual model provides the basis for new theories and frameworks needed to characterize OR communication. Results: The adopted model can accurately express the intricate interactions that take place in the OR during a surgical procedure. A key capability of the conceptual model is the ability to specify features at various levels of detail, with each level represented by a different diagram; nevertheless, each diagram is contextually linked to all the others. The resulting model thus provides a powerful and expressive ontology of verbal and non-verbal communication exchanges in the OR. Concretely, the model is validated through structured questionnaires, which allow assessing the level of consensus on criteria such as flexibility, accuracy, and generality. Conclusion: A conceptual model was presented describing the tool-handling processes during operations conducted in the OR, with the focus placed on communication exchanges between the main surgeon and the surgical technician. The objective is to create a tool to "debug" and identify the exact circumstances in which surgical delivery errors can happen. Our next step is the implementation of a robotic assistant for the OR which can deliver and retrieve surgical instruments. A necessary requirement for the introduction of such a cybernetic solution is the development of a concise specification of these interactions in the OR. The development of this conceptual model can have a significant impact on both the reduction of tool-handling-related errors and the formal design of robots that could complement surgical technicians in their routine tool-handling activities during surgery.
-
-
-
Efficient Multiple Users Combining And Scheduling In Wireless Networks
Authors: Mohammad Obaidah Shaqfeh and Hussein Alnuweiri
Wireless networking plays a vital role in our daily lifestyle and has tremendous applications in almost all fields of the economy. The wireless medium is a shared medium, and hence user scheduling is needed to allow multiple users to access the channel jointly. Furthermore, the wireless channel is characterized by time-based and location-based variations due to physical phenomena such as multi-path propagation and fading. Therefore, unlike the traditional persistent round-robin scheduling schemes, the current standards of telecommunication systems support channel-aware opportunistic scheduling in order to exploit the varying channels of the users when they are at their peak conditions. The advantages of these schemes in enhancing network throughput are evident and well demonstrated. However, these schemes are basically based on selecting a single user to access a certain frequency sub-channel at a given time, in order to avoid creating interference if more than one user accesses the same channel. Nevertheless, allowing multiple users to access the same channel is feasible by using special coding techniques such as superposition coding with successive interference cancellation at the receivers. The main advantage of this is to improve the spectral efficiency of the precious wireless spectrum and to enhance the overall throughput of the network while maintaining the quality-of-service requirements of all users. Despite their advantages, multiple-user scheduling schemes require proper resource allocation algorithms that process the channel condition measurements in order to decide which users should be served in a given time slot and frequency sub-channel, as well as the allocated data rate and power of each link, in order to maximize the transmission efficiency. Failure to use a suitable resource allocation and scheduling scheme can degrade the performance significantly. We design and analyze the performance of efficient multiple-user scheduling schemes for wireless networks. One scheme is proven theoretically to be the most efficient; however, its computational load is significant. The other is a sub-optimal scheme with low computational load that achieves very good performance, comparable to that of the optimal scheme. Furthermore, we evaluate the performance gains of multiple-user scheduling over conventional single-user scheduling under different constraints, such as hard fairness and proportional fairness among the users, and for fixed merit weights of the users based on their service class. In all of these cases, our proposed schemes can achieve a gain that may exceed 10% in terms of the data rate (bits/sec). This gain is significant taking into consideration that we use the same air-link and power resources as the conventional single-user scheduling schemes.
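The benefit of letting two users share a channel via superposition coding with successive interference cancellation can be illustrated with the standard two-user rate expressions: the strong user decodes and cancels the weak user's signal, while the weak user treats the strong user's signal as noise. The channel gains, power split and baseline below are textbook assumptions, not the paper's optimized allocation.

```python
import numpy as np

def sic_rates(g_strong, g_weak, power, alpha, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user superposition coding with SIC.
    alpha: fraction of the transmit power allocated to the strong user."""
    r_strong = np.log2(1 + alpha * power * g_strong / noise)
    r_weak = np.log2(1 + (1 - alpha) * power * g_weak
                     / (alpha * power * g_weak + noise))
    return r_strong, r_weak

def tdma_rates(g_strong, g_weak, power, share=0.5, noise=1.0):
    """Single-user (time-shared) baseline with the same power budget."""
    return (share * np.log2(1 + power * g_strong / noise),
            (1 - share) * np.log2(1 + power * g_weak / noise))

print(sic_rates(g_strong=4.0, g_weak=0.5, power=10.0, alpha=0.3))
print(tdma_rates(g_strong=4.0, g_weak=0.5, power=10.0))
```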
-
-
-
Maximizing The Efficiency Of Wind And Solar-based Power Generation By GIS And Remotely Sensed Data In Qatar
Authors: Ramin Nourqolipour and Abu Taleb Ghezelsoflou
Qatar has a high potential to develop renewable energy generating systems, especially through solar and wind-based technologies. Although substantial initiatives have been undertaken in Qatar to reduce the high per capita emissions of greenhouse gases (GHG), solar and wind-based energy generation can also contribute significantly to the mitigation of climate change. The mean Direct Normal Irradiance (DNI) of Qatar is about 2008 kWh/m2/y, which is suitable for developing solar power systems, given that 1800 kWh/m2/y is enough to establish Concentrated Solar Power (CSP) plants. Although the cost of developing solar-based power generation is about twice that of gas-based generation, it produces environmentally friendly energy while preserving the limited gas resources. Moreover, given that 3 m/s is the critical wind speed for power generation, Qatar experiences wind speeds above this threshold almost 80% of the time, which represents a great potential for developing wind-based energy systems. In terms of economic feasibility, the minimum required number of full-load hours is 1400, and the figure for Qatar is higher than this critical value. Furthermore, establishing a wind power plant is cheaper than a gas-based plant in off-shore locations, even though the power generation is lower. This paper explains a methodology to determine the most suitable sites for developing solar and wind-based power plants, using remote sensing and GIS, in order to maximize the efficiency of power generation. Analyses are carried out on spatial data derived from a recent Landsat 8 image (land cover, urban and built-up areas, roads, water sources, and constraints), together with bands 10 and 11 (the thermal bands) of the same sensor for the year 2014; a DEM (Digital Elevation Model) derived from SRTM V2 (Shuttle Radar Topography Mission) used to generate slope, aspect, and solar maps; and wind data obtained from the Qatar meteorology department. The data are used to conduct two parallel Multi-Criteria Evaluation (MCE) analyses, one for each development objective (solar and wind power plant development), through the following stages: (1) data preparation and standardization using categorical data rescaling and fuzzy set membership functions, and (2) logistic regression-based analysis to determine the suitability of each pixel for the desired development objective. The analysis produces two distinct suitability maps, one addressing suitable areas for solar plants and the other for wind power plants. The suitability maps are then processed with a multi-objective land allocation model to allocate the areas that show the highest potential for developing both solar and wind-based power generation. Results show that the suitable off-shore sites for both objectives are mainly distributed in the north and north-west regions of Qatar.
-
-
-
An Efficient Model For Sentiment Classification Of Arabic Tweets On Mobiles
Authors: Gilbert Badaro, Ramy Baly, Hazem Hajj, Nizar Habash, Wassim El-hajj and Khaled Shaban
With the growth of social media and online blogs, people express their opinions and sentiments freely through product reviews as well as comments about celebrities and political and global events. These opinion-bearing texts are of great interest to companies and individuals who base their decisions and actions upon them. Hence, opinion mining on mobiles is capturing the interest of users and researchers across the world as the amount of available online data grows. Many techniques and applications have been developed for English, while many other languages are still trying to catch up. In particular, there is increased interest in easy access to Arabic opinions from mobiles. Arabic presents challenges similar to those of English for opinion mining, but also additional challenges due to its morphological complexity. Mobiles, on the other hand, present their own challenges due to limited energy, limited storage, and low computational capability. Since some of the state-of-the-art methods for opinion mining in English require the extraction of large numbers of features and extensive computations, these methods are not feasible for real-time processing on mobile devices. In this work, we provide a solution that addresses both the limitations of mobile devices and the Arabic resources required for opinion mining on mobiles. The method is based on matching stemmed tweets against our own Arabic sentiment lexicon (ArSenL). While there have been efforts towards building Arabic sentiment lexicons, they suffer from many deficiencies, including limited size, unclear usability given Arabic's rich morphology, or lack of public availability. ArSenL is the first publicly available large-scale Standard Arabic sentiment lexicon, developed using a combination of the English SentiWordNet (ESWN), the Arabic WordNet, and the Standard Arabic Morphological Analyzer (SAMA). A public interface for browsing ArSenL is available at http://me-applications.com/test. The scores from the matched stems are aggregated and processed through a decision tree to determine the polarity. The method was tested on a published set of Arabic tweets, and an average accuracy of 67% was achieved versus a 50% baseline. A mobile application was also developed to demonstrate the usability of the method. The application takes as input a topic of interest and retrieves the latest Arabic tweets related to this topic. It then displays the tweets superimposed with colors representing the sentiment labels positive, negative or neutral. The application also provides visual summaries of searched topics and a history showing how the sentiments for a certain topic have been evolving.
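The core lexicon-matching step can be sketched as follows: each stem in a tweet is looked up in a sentiment lexicon, per-stem positive/negative scores are aggregated, and a simple rule (standing in for the decision tree mentioned above) assigns the polarity. The toy lexicon entries, the trivial stemmer and the thresholds are illustrative placeholders, not the actual ArSenL entries or the deployed classifier.

```python
# toy lexicon: stem -> (positive score, negative score), in the spirit of ArSenL
LEXICON = {
    "jamil": (0.8, 0.0),    # "beautiful" (transliterated placeholder)
    "sayyi": (0.0, 0.7),    # "bad"
    "khidma": (0.1, 0.1),   # "service"
}

def stem(token):
    return token.lower().strip(".,!?")     # placeholder for a real Arabic stemmer

def classify_tweet(text, margin=0.1):
    pos = neg = 0.0
    for token in text.split():
        p, n = LEXICON.get(stem(token), (0.0, 0.0))
        pos, neg = pos + p, neg + n
    if pos - neg > margin:
        return "positive"
    if neg - pos > margin:
        return "negative"
    return "neutral"

print(classify_tweet("khidma jamil !"))    # positive
print(classify_tweet("khidma sayyi"))      # negative
```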
-
-
-
Email Authorship Attribution In Cyber Forensics
Email is one of the most widely used forms of written communication over the Internet, and its use has increased tremendously for both personal and professional purposes. The increase in email traffic has been accompanied by an increase in the use of email for illegitimate purposes to commit all sorts of crimes. Phishing, spamming, email bombing, threatening, cyber bullying, racial vilification, child pornography, virus and malware propagation, and sexual harassment are common examples of email abuse. Terrorist groups and criminal gangs also use email systems as a safe channel for their communication. The alarming increase in the number of cybercrime incidents involving email is largely due to the fact that email can be easily anonymized. The problem of email authorship attribution is to identify the most plausible author of an anonymous email from a group of potential suspects. Most previous contributions employed a traditional classification approach, such as decision trees or Support Vector Machines (SVM), to identify the author and studied the effects of different writing-style features on classification accuracy. However, little attention has been given to ensuring the quality of the evidence. In this work, we introduce an innovative data mining method to capture the write-print of every suspect and model it as combinations of features that occur frequently in the suspect's emails. Such a combination is called a frequent pattern, a notion that has proven effective in many data mining applications but has not previously been applied to the problem of authorship attribution. Unlike traditional approaches, the write-print extracted by our method is unique among the suspects and therefore provides convincing and credible evidence for presentation in a court of law. Experiments on real-life emails suggest that the proposed method can effectively identify the author, and the results are supported by strong evidence.
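The sketch below illustrates the frequent-pattern write-print idea in a simplified form: each email is assumed to be already reduced to a set of stylometric feature identifiers, and the feature names, support threshold, and pattern size are hypothetical choices rather than values from the study.

```python
# Illustrative sketch of frequent-pattern "write-print" extraction.
from itertools import combinations
from collections import Counter

def frequent_patterns(emails, min_support=2, max_size=2):
    """Return feature combinations occurring in at least `min_support` emails."""
    counts = Counter()
    for features in emails:
        for size in range(1, max_size + 1):
            for pattern in combinations(sorted(features), size):
                counts[pattern] += 1
    return {p for p, c in counts.items() if c >= min_support}

def write_print(suspect, corpus, **kwargs):
    """Patterns frequent in `suspect`'s emails but frequent for no other suspect."""
    own = frequent_patterns(corpus[suspect], **kwargs)
    others = set()
    for name, emails in corpus.items():
        if name != suspect:
            others |= frequent_patterns(emails, **kwargs)
    return own - others

# Hypothetical corpus: each email is a set of stylometric feature IDs.
corpus = {
    "suspect_a": [{"short_sentences", "emoticons"},
                  {"short_sentences", "emoticons", "greeting"}],
    "suspect_b": [{"long_sentences", "greeting"},
                  {"long_sentences", "formal_signoff"}],
}
print(write_print("suspect_a", corpus))
```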
-
-
-
Msr3e: Distributed Logic Programming For Decentralized Ensembles
Authors: Edmund Lam and Iliano Cervesato
In recent years, we have seen many advances in distributed systems, in the form of cloud computing and distributed embedded mobile devices, drawing more research interest into better ways to harness and coordinate the combined power of distributed computation. While this has made distributed computing resources more readily accessible to mainstream audiences, the fact remains that implementing distributed software and applications that can exploit such resources via traditional distributed programming methodologies is an extremely difficult task. As such, finding effective means of programming distributed systems is more than ever an active and fruitful research and development endeavor. Our work centres on the development of a programming language known as MSR3e, designed for implementing highly orchestrated communication behaviors across an ensemble of computing nodes. Computing nodes are either traditional mainstream computer architectures or mobile computing devices. The language is based on logic programming, and is declarative and concurrent. It is declarative in that it allows the programmer to express the logic of synchronization between computing nodes without describing any form of control flow. It is concurrent in that its operational semantics is based on a concurrent programming model known as multiset rewriting. The result is a highly expressive distributed programming language that provides the programmer with a high-level abstraction for implementing complex communication behavior between computing nodes. This allows the programmer to focus on specifying what processes need to synchronize between the computing nodes, rather than how to implement the synchronization routines. MSR3e is based on a traditional multiset rewriting model with two important extensions: (1) explicit localization of predicates, allowing the programmer to reference the locations of predicates as a first-class construct of the language, and (2) comprehension patterns, providing the programmer with a concise means of writing synchronization patterns that match dynamically sized sets of data. This style of programming often results in more concise code (relative to mainstream programming methodologies) that is more human-readable and easier to debug. Its close foundation in logic programming also suggests the possibility of effective automated verification of MSR3e programs. We have implemented a prototype of MSR3e: a trans-compiler that compiles an MSR3e program into one of two possible outputs: (1) a C++ program that uses the MPI libraries, intended for execution on traditional mainstream computer architectures (e.g., x86), or (2) a Java program that uses the Wi-Fi Direct libraries of the Android SDK, intended for execution on Android mobile devices. We have conducted preliminary experiments on a small set of examples to show that MSR3e works in practice. In future work, we intend to refine our implementation of MSR3e, scale up the experiment suites, and develop more non-trivial applications in MSR3e as further proof of concept.
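To give a feel for the underlying computational model (this is plain Python, not MSR3e syntax), the sketch below implements a toy multiset rewriting loop over located facts; the facts and the single synchronization rule are hypothetical examples, and MSR3e's variables and comprehension patterns are not modeled.

```python
# Toy multiset rewriting: each rule consumes the facts on its left-hand side
# (if all are present in the store) and produces its right-hand side facts.
from collections import Counter

def rewrite(store, rules, max_steps=100):
    """Apply ground multiset rewrite rules until no rule can fire."""
    store = Counter(store)
    for _ in range(max_steps):
        fired = False
        for lhs, rhs in rules:
            need = Counter(lhs)
            if all(store[f] >= n for f, n in need.items()):
                store.subtract(need)   # consume the matched multiset
                store.update(rhs)      # produce the new facts
                fired = True
                break
        if not fired:
            break
    return +store  # drop zero-count entries

# Located facts are (node, predicate, payload) tuples; the rule below consumes
# a request at node A together with a token at node B and produces an ack at A.
rules = [
    ([("A", "request", 1), ("B", "token", 1)], [("A", "ack", 1)]),
]
store = [("A", "request", 1), ("A", "request", 1), ("B", "token", 1)]
print(rewrite(store, rules))
```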
-