Qatar Foundation Annual Research Conference Proceedings Volume 2016 Issue 1
- Conference date: 22-23 Mar 2016
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2016
- Published: 21 March 2016
Identification and Structural Analysis of Natural Product Inhibitors of Human Alpha-Amylase to Target Diabetes Mellitus
Introduction: Noninsulin dependent diabetes mellitus (NIDDM), or Type 2 diabetes, is a major health challenge worldwide. In 2014 the World Health Organization (WHO) reported that the percentage of adults with fasting glucose ≥ 7.0 mmol/L was 9% globally. Moreover, the highest percentage was in the Eastern Mediterranean Region, at 26.8 ± 0.4%. Type 2 diabetes is a complex metabolic disorder associated with a high level of glucose in the blood (hyperglycemia), which leads to long-term pathogenic conditions such as neuropathy, retinopathy and nephropathy, with a consequent decrease in quality of life and increased mortality.
Starch is the main source of energy for most living organisms; in humans it is digested in several stages involving different amylolytic enzymes, such as α-Amylase and α-Glucosidase. Alpha-Amylase is the major secretory product (about 5–6%) of the pancreas and salivary glands, playing a core role in starch and glycogen digestion. The control of postprandial hyperglycemia is an important strategy in the management of Type 2 diabetes; lifestyle modification and/or the use of medications such as insulin and α-Glucosidase inhibitors are the available treatments to date. Acarbose is a prominently used α-Glucosidase inhibitor for diabetes and obesity control; however, it has many side effects and limitations. Numerous in vivo studies (REFS) have shown that many plant extracts inhibit the key enzymes of digestion (α-Amylase and α-Glucosidase), and the use of naturally occurring inhibitors is potentially one of the most effective and safest approaches for treating diabetes. Aims: Short-term aim: To clone, express and purify human α-Amylase protein using different yeast expression systems, followed by protein (co-)crystallization and structural analysis.
Long-term aim: Screening natural products (plant extracts) based on their traditional use, followed by co-crystallisation of selected inhibitors. This will be complemented by inhibitors designed in silico based on the ANCHOR.QUERY approach (REF). Finally, identified compounds will be characterised by biophysical and kinetic studies. Methodology: Human α-Amylase has been cloned into two vectors, pHIPZ-4 and pPIC9k, each with its own set of primers, restriction enzymes and dedicated expression host (Hansenula polymorpha for pHIPZ-4 and Pichia pastoris for pPIC9k, respectively). After transformation, the grown colonies were tested for the presence of the target gene by colony PCR and digestion with the cloning enzymes. Positive colonies were re-inoculated in growth media and the recombinant plasmid was recovered. The plasmids were then transformed into competent yeast cells by electroporation. For H. polymorpha, cells were grown in minimal media with glucose (MM/G) for two days, followed by induction of expression with 0.5% (v/v) MeOH. Protein was purified by lysing the cells and passing the lysate through Ni-NTA beads. Finally, the protein was identified by Western blot using a HisProbe-HRP antibody. Results: The human α-Amylase gene was successfully cloned into both vectors, pHIPZ-4 and pPIC9k, as judged by colony PCR and restriction digestion on an agarose gel. However, the expression of α-Amylase protein in the H. polymorpha system is insufficient to support the downstream work. Analysis of secreted expression using Pichia pastoris is currently underway and the results will be reported at the meeting. Conclusion: Human α-Amylase was successfully cloned into both vectors and stably transformed into E. coli competent cells. The positive colonies were confirmed for the presence of the target gene and then transformed into yeast cells.
Elucidating T7-like Bacteriophage Isolated from Qatar's Sand
By Aya Abdelaal
A bacteriophage is a virus that infects bacteria; it uses the bacterium as a host to replicate by taking control of the cell's replication and protein synthesis machinery. Bacteriophages are composed of a protein capsule and a DNA or RNA genome. They are used in the treatment of bacterial infections, known as phage therapy. This works because phages invading the bacteria undergo a lytic cycle, in which the replication and protein synthesis machinery is used to produce virions that later cause the cell to lyse, killing the bacterium. This technique is more efficient than the use of antibiotics, since bacteria can develop resistance against antibiotics while remaining susceptible to infection by the phage. In addition, phages can evolve and adapt to new mutations that might arise in the bacteria. Lastly, phages can be engineered to carry a survival gene needed by the bacteria, ensuring that the bacteria will replicate and synthesize new virions, leading to lysis and death.
This research explores the potential use of bacteriophages in water treatment, to clear the water of microbial contaminants and of bacteria that were previously used to detoxify the water. Phages previously extracted from Qatar's sand were used as the infecting agents. The model host chosen is Arthrobacter, a denitrifying bacterium, as it is similar to the bacteria used in water treatment. Denitrifying bacteria reduce nitrates to nitrogen-containing gases, allowing nitrogen to be recycled back to the atmosphere.
The goal of my research is to analyze the genome of a phage extracted from Qatar's sand. A culture of Arthrobacter grown in Smeg media was used as a host. Although the normal optimal growth temperature of Arthrobacter is 32 °C, a higher phage titer was produced when the culture was grown to saturation in Smeg media at 37 °C. In addition, the culture was initially grown in Luria broth, which is rich in nutrients including tryptone, yeast extract and NaCl. However, this produced a low phage titer compared to Smeg media, which contains Middlebrook 7H9 broth base supplemented with albumin, dextrose and salts.
After manipulating the growth conditions of the host culture to obtain the highest titer, the optimum conditions were found to be a saturated culture of Arthrobacter grown in Smeg media at 37 °C. Next, the lysis conditions of the phage were optimized by varying several factors. First, the type of top agar and plates used for pour plating were varied. Several top agars were used, including LB top agar, LB top agar with 1 mM CaCl2 and PYCA top agar. The top agar that resulted in the highest titer was LB top agar with 1 mM CaCl2. Several plate types were used for pour plating, including LB plates and PYCA plates. LB plates produced unclear (turbid) plaques, an indication of persistence of the lysogenic state rather than the lytic state. In the lysogenic state the phage genome is incorporated into the host genome and replicates along with the cell cycle, remaining dormant. In the lytic cycle, on the other hand, the phage uses the replication and protein synthesis machinery to produce more phages, which later lead to lysis. In contrast, PYCA plates containing CaCl2 formed clearer plaques, indicating dominance of the lytic state. Therefore, the addition of calcium to the top agar and the plates aided the phages' shift from the lysogenic cycle to the lytic cycle. This correlates with previous work, where calcium was shown to be essential for the penetration of the phage's genome into the host.
Finally, the incubation temperature of the plates containing Arthrobacter infected with the Qatar sand phage was varied. Initially the plates were incubated at 32 °C; however, no plaques formed. When the temperature was increased to 37 °C, lysis was observed and a high phage titer was obtained.
After obtaining a high-titer lysate, the DNA was isolated and a sample of the phage was sent for electron micrograph imaging. A restriction enzyme digest was performed on the isolated DNA and the resulting profile was compared to that of lambda phage. Both phages are lytic and lyse at similar temperatures, which allows for comparison.
Lastly, motifs shared between the Qatar sand bacteriophage and the T7 bacteriophage were identified using BLAST tools in a previously extracted and sequenced Qatar sand phage; primers will then be designed to verify whether these motifs exist in the actual phage.
Future work will focus on analyzing the sequence of the phage to identify potential lysis genes, and on strengthening the lytic ability of the phage by cloning the lysozyme gene into a T7-based vector system and testing its adaptability to temperature using Arthrobacter as the host system.
A Digitally Controlled Pseudo-Hysteretic Buck Converter for Low Power Biomedical Implants
Authors: Paul Jung-Ho Lee, Amine Bermak and Man Kay Law
Background: Low-power biomedical implants usually harvest energy from a small inductor coil or optical energy sources. These sources can supply only a very limited amount of energy to the target system because of poor power transfer efficiency and size limitations. Thus, as much energy as possible must be delivered directly to the load, without wasting energy in auxiliary driving circuitry. To save the energy dissipated as heat when the power supply voltage is excessively large compared to the voltage at the load, we can choose a class-H amplifier-like strategy, where the supply voltage tracks the voltage waveform at the load. Among the many power conversion topologies that can modulate the supply voltage, the switching-mode power supply (SMPS) is the most promising, because reverse energy recovery can be used by taking back the charge accumulated on the load capacitor. The CCM buck converter, shown in Fig. 1, can work as a voltage-tracking power supply modulator. However, it requires complicated auxiliary circuit components, such as a Type-III compensator, whose several external passive components greatly hamper its use in biomedical implant applications. Proposed Power Converter: We therefore propose a digitally controlled pseudo-hysteretic buck converter composed of three parts: power conversion, digital control, and pulse generation. Its controller can be implemented without bulky external passive components, yet can quickly adapt to fast transients with a simple digital controller that incorporates just one comparator.
Figure 2 shows the power conversion part of the proposed buck converter. It is composed of a power PMOS (W/L = 2 mm/0.5 μm), an NMOS (W/L = 1 mm/0.5 μm), an active diode amplifier driving the NMOS, a 1 μH inductor, and a 1 μF capacitor. The power supply is 3.3 V. It adopts a typical buck converter configuration, with a target switching frequency of 10 MHz. The NMOS active diode circuit is employed to minimize the conduction loss across the NMOS body diode when the energy stored in the inductor is released. The first stage of the active diode is a common-gate differential amplifier, whose positive terminal is connected to GND and whose negative input terminal is connected to the switching side of the inductor. The second stage of the active diode is a common-source amplifier stage, which serves to boost the gain and increase the slew rate. Because it uses negative feedback, stability should be carefully checked. The simulated gain of the active diode was around 60 dB, with a 3 dB bandwidth of about 100 kHz.
In biomedical implant applications, fast transient response is important because the required power supply voltage can change abruptly, e.g. when an electrical stimulator changes phase from an anodic pulse to a cathodic pulse. Thus, we propose a digital controller that supports such a fast transient response, able to make a voltage excursion of 1 V in less than 1 μs. Figure 3 shows the proposed pseudo-hysteretic controller for driving the power PMOS of the proposed power converter. It receives the reference voltage and the current output voltage as inputs and compares them. It asserts ‘1’ to the digital pulse generator (fsm_pulse_gen) when the reference voltage is higher than the output voltage. The digital pulse generator increases the duty ratio when the comparator output is ‘1’, and decreases the duty ratio when the comparator output is ‘0’. The key of the control mechanism is binary-weighted duty control. In this scheme, the duty cycle initially jumps by a predetermined maximum step of 16; then, as the SMPS output approaches the initial target voltage, the incremental amount is halved to 8, then again to 4, and so on. The simulation result of the proposed power converter with the pseudo-hysteretic controller tracking the reference voltage is shown in Fig. 4. In the beginning the converter operates in CCM, where the output voltage rapidly catches up with the reference voltage; this is done by the digital controller cranking up the duty cycle to the maximum in a short period. Once the output supply voltage begins to stabilize by crossing the reference voltage line, the converter changes its operation mode from CCM to DCM. Conclusion: We introduce a digitally controlled hysteretic buck converter with an active diode, intended for biomedical implant applications and featuring low power consumption and fast transient response. To achieve these features, the associated digital controller employs a binary-weighted duty update scheme, with the overhead of only a simple input-processing comparator. The NMOS active diode further decreases wasted energy by removing the loss that would be incurred from body-diode conduction.
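The binary-weighted duty update described above can be illustrated with a minimal sketch. The code below is not from the paper; the function name, duty limits and the per-cycle sampling of the comparator are assumptions made purely for illustration. The step starts at the maximum of 16 and is halved each time the comparator output flips, i.e. each time the output crosses the reference.

```python
# Illustrative sketch of a binary-weighted duty update (all names/limits assumed).
def binary_weighted_duty_controller(comparator_samples, duty=0, max_step=16,
                                    duty_min=0, duty_max=100):
    """comparator_samples: 1 if v_ref > v_out, else 0, one value per switching cycle."""
    step = max_step
    prev = None
    history = []
    for cmp_out in comparator_samples:
        if prev is not None and cmp_out != prev:
            step = max(1, step // 2)                   # halve the step after each crossing
        duty += step if cmp_out == 1 else -step
        duty = min(max(duty, duty_min), duty_max)      # clamp to a valid duty range
        history.append(duty)
        prev = cmp_out
    return history

# Example: output starts well below the reference (comparator = 1), then
# oscillates around it once the target is reached.
print(binary_weighted_duty_controller([1, 1, 1, 0, 1, 0, 1, 0]))
```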
Diabetes Awareness Among High School Students in Qatar
By Sara Amani
Diabetes is a disease in which glucose builds up in the bloodstream because the pancreas cannot produce enough insulin to move the sugar from the blood into other areas of the body for energy. Type 1 and Type 2 diabetes are both prevalent in Qatar; however, Type 2 diabetes is more common and is the main driver of the epidemic Qatar is facing. Type 1 diabetes is inherited, is not related to eating habits or lifestyle, and is diagnosed in juvenile years. Type 2 diabetes, however, can be caused by obesity, unhealthy eating habits, and lack of exercise and overall fitness. It is treatable but not curable, and can be managed by maintaining a healthy diet, regular exercise, insulin intake, and oral medication.
Diabetes is the 3rd most common cause of death in Qatar after road accidents and heart disease (http://www.worldlifeexpectancy.com/country-health-profile/qatar). In fact, in 2011, Qatar ranked 6th in the world for the highest prevalence of diabetes. The number of overweight adolescents in Qatar is very large, resulting in higher diabetes rates among children and young adults. The percentage of overweight children doubled from 17% in 1997 to 35% in 2007, according to The Peninsula newspaper. Awareness is crucial at this age because much of the body's fat is put on during the teenage years, when the body is constantly growing, and it is very important to maintain a healthy weight at this time.
Students at the secondary school level should understand whether or not they are at risk of developing this disease compared to the general population. Each student should know their level of risk based on their family relations, history, and habits. In Qatar, the parents of many students our age have diabetes, in part because of the habits of their own parents, and so on; therefore, all of us are at higher risk. The availability and convenience of fast food keeps people, especially the young, constantly consuming such unhealthy products. With obesity statistics so alarmingly high, education and awareness are the key to solving the problem. We plan to increase diabetes awareness at our school by conducting surveys, presenting posters, interviewing students, and working hand-in-hand with the Qatar Diabetes Association (QDA) to be as successful as possible.
Although many efforts are being made by the government, many students do not know enough about the danger they may be putting themselves in and the consequences that result from their actions. Many students at this age are allowed to choose what they eat, and their diets are no longer regulated by parents, so they need to learn to make proper choices. The high temperatures in Qatar also make it inconvenient to exercise outside; however, there are numerous alternatives. The goal of this research is to enlighten our peers with this information and compare their knowledge of diabetes before and after. Awareness is the first step to prevention, and that needs to begin at a young age. Research Methods: The first step of our project on spreading awareness about diabetes is to know how aware the students already are. Therefore, using Google Forms, we will create a survey with questions about how much the students already know about diabetes, whether they know of family history, whether they have been tested, whether they know the symptoms, etc. Once we have created the survey, we will email the link to all of the high school students at our school, asking them to fill out the survey with appropriate responses. To ensure that we gather accurate data, we will each interview an equal number of randomly selected students and record their answers.
After receiving 189 responses from the high school students, we began to analyze the results. Only 22% of the students had ever been tested for diabetes. Of those who had been tested, 48% had been tested more than a year ago. 58% of the students knew of family members with diabetes. Most had checked the correct symptoms for diabetes, along with many unrelated ones like coughing or vomiting. When asked what one could do to prevent diabetes, 8% responded with “washing your hands”, 30% responded with “not smoking”, and surprisingly 3% responded with “dressing modestly”. After witnessing the high percentage of wrong responses, it was clear that we had to increase our efforts in educating our peers.
For the open-ended question “What do you know about diabetes?”, we ranked each student's response out of 10 (based on the scale below). We then performed a chi-squared statistical test to compare our expected value of 7/10 to the average student rank of 3.323/10. This showed a significant difference between the level of awareness we expected from the students and what we actually observed.
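For readers unfamiliar with the test, the kind of comparison described above can be sketched as follows. The category bins and all counts below are hypothetical illustrations, not the study's data; only the test procedure is shown, assuming the 189 responses are grouped into low, medium and high awareness scores and compared against the distribution a roughly 7/10 average would imply.

```python
# Hedged sketch of a chi-squared goodness-of-fit comparison on awareness scores.
# All counts are invented for illustration.
from scipy.stats import chisquare

# Students scoring low (0-3), medium (4-6) and high (7-10) out of 189 responses.
observed = [110, 55, 24]   # skewed low, consistent with a ~3.3/10 average
expected = [19, 57, 113]   # roughly what a ~7/10 average might look like

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")  # a small p-value indicates awareness below expectation
```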
The data from the survey confirmed the lack of awareness among students at the high school level. We created fun awareness posters and posted them on the glass doors of the cafeteria, in the science wing and on specific lockers where they were most visible to the student population. Results and Discussion: Upon completion of our presentation to the 10th grade health class, we conducted a second survey regarding what the students had learned and the quality of our presentation. The table below shows the responses received from the students after the presentation. Conclusions: At the beginning of this research, we were shocked by the extreme lack of awareness among our student population. Considering the high rates of diabetes in the State of Qatar, we expected a basic understanding of the disease, especially since many people here are prone to it. It is alarming that, based on current trends, 73% of men and 69% of women in the country are expected to be obese by 2030. Clearly, a lot of work needs to be done.
One of the major purposes of this research was to gain insight into this issue, and it is quite obvious that these high rates are leading to a huge problem in the nation. Fortunately, Qatar has been working hard to increase awareness with events like National Sports Day and by hosting events like the 2022 World Cup to promote fitness in the country. As mentioned previously, since Type 2 diabetes is the major concern, because it develops later in life and is preventable, young students are the target audience for spreading information to. Habits develop during the adolescent years, so spreading the word among students at this age is critical.
This research was very successful in promoting awareness in our own school, but from what we have learned, there is a long way to go to reduce the high rates of diabetes. Benefit to students: The Qatar National Research Strategy (QNRS) principles were designed to increase the quality of research occurring in Qatar. The third most important area of research in Qatar is health, and under the Addressing National Health Priorities column, Type II diabetes is first in importance. We feel that by following the research plan addressed by H.H. Sheikha Moza, our research could have a strong impact on the research culture in Qatar.
Some of the specific benefits to the students participating in this project include the following:
- How to conduct a literature survey
- An increased knowledge and understanding about Diabetes, its causes, preventions and symptoms
- The seriousness of the issue worldwide and in Qatar
- The realization that the awareness of the students they spoke to increased so much that they can now give a lecture about the topic to their peers
- Presentation skills, video, poster, lecture
- Survey design
- How to conduct statistical analysis using statistical software
- Incorporating creative ways in educating their peers
- Communicating with professional societies such as Qatar Diabetes Association
- Took part in the joint program of Qatar Diabetes Association Sports Day event for the Action on Diabetes Campaign
- Interviewing skills, as they asked questions of students and health care professionals working with diabetes
- Time management with creating a Letter of Intent and Proposal Report on time
- How to work as a team and how to divide the tasks to cover all requirements efficiently and effectively
- It is noteworthy to mention that the entire research methodology and the final report were fully prepared and conducted by the two students involved. The teacher's role was solely advisory
The Qatar National Research Strategy (QNRS) identifies diabetes as part of its 3rd most important area of research.
Presenting the information about diabetes to the high school student body will not only benefit the students and teachers themselves by ensuring their health and making them aware of what may happen if they do not control some unnecessary desires, but it will also inform them about the SSREP program in Qatar. As students participating in a research project during our secondary school years, presenting to our peers will allow them to realize the benefits that come from conducting research projects and inspire them to contribute in the following years.
Acknowledgement
Qatar National Research Fund – SSREP Program
Is it Time for Hepatitis E Virus (HEV) Testing of Blood Donors in Qatar?
Authors: Gheyath Nasrallah, Laila Hedaya, Fatima Ali, Abdellatif Alhusaini, Enas Al Absi, Mariam Sami and Sara Taleb
Background: HEV is the etiologic agent of acute hepatitis E. Although HEV usually causes a self-limiting infection, the disease may develop into a chronic or fulminant form of hepatitis. Sporadic HEV infections occur in several developed countries; however, outbreaks usually occur in regions where sanitation is poor, in particular in developing countries where water flooding frequently occurs. In addition, religious background, lifestyle, hygienic practices, and economic status have been linked to HEV infection. Fecal-oral is the established route of transmission; however, infections through blood transfusion were recently documented in many developed and developing countries. This recent finding raises the question: is there a need for HEV screening prior to transfusion or transplantation? Studies related to this issue in the Middle East are scarce. Although the CDC HEV epidemiological map classifies the Arabian Gulf countries, including Qatar, as endemic or highly endemic, to the best of our knowledge no HEV population-based epidemiological study has been conducted in Qatar. HEV infection is usually detected using IgM and IgG serological tests and confirmed by molecular tests for detection of viral RNA. Yet commercially available HEV serological kits are not validated and need further investigation. Aim and Methods: Qatar has a diverse population due to the increased number of expatriate workers. The majority of these workers come from countries of low economic status that are highly endemic for HEV, such as Egypt, Sudan, India and other South East Asian countries. This fact highlights the need for an epidemiological study of HEV prevalence in Qatar. Accordingly, we hypothesize that HEV seroprevalence in Qatar is elevated and that there is therefore a risk of HEV transfusion-transmitted infections in Qatar's blood bank. The goals of this study are (i) to investigate the seroprevalence of HEV (anti-HEV IgM/IgG) among healthy blood donors in Qatar and (ii) to evaluate the performance of 5 common commercially available anti-HEV IgG and IgM kits (manufactured by Wantai Biological Pharmacy, China; MP Biomedicals and Diagnostic Automation, USA; and Euroimmun and Mikrogen Diagnostik, Germany). All of these kits are solid-phase ELISA based, except the Mikrogen kit, which is immunoblot based. A total of 4056 blood donor samples were collected from healthy blood donors who visited the Blood Donation Center at Hamad Medical Corporation (HMC) over a period of three years (2013-2015). For the seroprevalence study, plasma was separated and tested for the presence of HEV IgG and IgM using the Wantai ELISA kit, which is the most commonly used serological kit according to the literature. For statistical analysis, the chi-square test was performed and results were considered statistically significant when the p-value was less than 0.05. Results: Out of the 4056 analyzed samples, about one fifth of blood donors, 829 (20.45%), tested positive for anti-HEV IgG and only 21 (0.52%) blood samples tested positive for anti-HEV IgM. As shown in Fig. 1, HEV seroprevalence was associated with age group (P
Antimicrobial and Cytotoxic Activity of Streptomyces sp. Isolated from Desert Soils, Qatar
Streptomyces are Gram-positive aerobic bacteria from the phylum Actinobacteria with close to 570 known species. They are well known for providing a variety of compounds with medicinal properties, including antibiotics, antifungals and antitumor agents, among others. Various studies in the past have tested these properties of Streptomyces sp., and species including Streptomyces avermitilis, Streptomyces venezuelae, Streptomyces aureofaciens, Streptomyces clavuligerus and Streptomyces erythraeus have been found effective in producing these varied compounds. For instance, S. avermitilis produces avermectins, which are used to treat river blindness, while S. venezuelae secretes chloramphenicol. Additionally, S. venezuelae has been suggested as an ideal test organism for studies of physiology and for the analysis of differentiation on a biochemical basis (Chater, 2013). Although a high number of Streptomyces metabolites are now available in the health care industry as effective drugs for a variety of diseases, the increasing number of cases of antibiotic resistance threatens global public health. The emergence of resistance has resulted in drug ineffectiveness, and there is a wide search for ways of suppressing these strains. Such resistance has been identified to occur through both phenotypic and genotypic modifications (Suzuki, Horinouchi, & Furusawa, 2015). The properties of Streptomyces and the increasing cases of antibiotic resistance have fuelled research to identify more species of Streptomyces and to look for novel metabolites released from them. Other reasons for the need to identify newer compounds include the outbreak of new diseases in the second half of the last century, incompetence in fighting naturally resistant bacteria such as P. aeruginosa, which causes fatal infections, and the toxic effects resulting from consumption of currently available antibiotic drugs (Sanchez & Demain, 2011).
Hence this research attempted to study the antimicrobial properties of three Streptomyces species isolated from the desert soil of Qatar. The antimicrobial properties were assessed first through direct testing against five test organisms: Escherichia coli and Pseudomonas sp. as Gram-negative bacteria, Candida albicans as a fungus, and Staphylococcus aureus and Streptococcus faecalis as Gram-positive bacteria. The three strains, designated sp. A, sp. B and sp. D, exhibited good inhibition of the test organisms. Acetone, ethanol, ethyl acetate and methanol were used to prepare extracts of the three species, which were used to re-assess antibacterial properties and also to determine anticancer and antifungal properties. Antimicrobial properties were re-tested using disc-diffusion and puncture methods, while anticancer properties were studied by subjecting HCT-116 cancer cells to two different concentrations of extracts, 0.05% (v/v) and 0.5% (v/v). Acetone extracts showed an inhibitory pattern, hence a third concentration of 5% (v/v) was tested. Antifungal properties were examined by testing all extracts at 10% (v/v) against Aspergillus niger and Penicillium sp. Acetone extracts of all three species A, B and D displayed high inhibition of Aspergillus niger, with inhibition percentages of 99.07% ± 0.12, 99.2% ± 0.01 and 99.19% ± 0.00 respectively, and also inhibited growth of Penicillium sp., with inhibition percentages of 82.62% ± 1.62, 79.63% ± 0.11 and 87.44% ± 0.2 respectively (% compared to acetone control). These extracts were then re-tested at two other concentrations, 2.5% (v/v) and 5% (v/v). While the extracts at these concentrations were effective against Aspergillus niger, they could not inhibit growth of Penicillium sp.
References
Chater, K. F. (2013). Streptomyces. In S. M. Hughes (Ed.), Brenner's Encyclopedia of Genetics (Second Edition) (pp. 565–567). San Diego: Academic Press.
Sanchez, S., & Demain, A. L. (2011). 1.12 – Secondary Metabolites. In M. Moo-Young (Ed.), Comprehensive Biotechnology (Second Edition) (pp. 155–167). Burlington: Academic Press.
Suzuki, S., Horinouchi, T., & Furusawa, C. (2015). Suppression of antibiotic resistance acquisition by combined use of antibiotics. Journal of Bioscience and Bioengineering. doi: http://dx.doi.org/10.1016/j.jbiosc.2015.02.003
A Close Look at the Genome and the Unbalanced Stratification of the Qatari Population
Microsatellites are segments of DNA composed of repeated sequences of 4 to 8-base pair units found throughout the genome of eukaryotes. Most microsatellites are located in non-coding regions of the genome, and consequently mutations in microsatellite regions often do not cause disease. This allows these regions to be highly polymorphic in a population and provides a signature DNA marker for each individual. At the same time, a wide genetic diversity of alleles is generally expected in populations. In humans, microsatellites, or short tandem repeats (STRs), are standard genetic markers used for human identification in forensic cases and parentage determination.
Databases of allele frequencies from various ethnic groups have been established in various parts of the world. In Qatar, as close-kin marriages are customary, homozygosity and possible reduced genetic variability have been a concern. A previous study, however, has concluded that the standard forensic markers are a valid tool for human identification because no substantive reduction of genetic variation has been observed as a result of consanguinity in the Qatari community.
In a more recent study, it has been determined that the Qatari population is subdivided into three main ethnic groups, of Bedouin, African or Persian ancestry. This stratification has been shown to be genetically significant through single nucleotide polymorphism (SNP) studies. Since SNPs and microsatellite DNA are inherited in a similar fashion, different allele frequencies are expected at the assessed microsatellite loci for each of the subpopulations. Moreover, the allelic heterogeneity in a population is closely linked to interbreeding. Since the Qatari Bedouin population has been closely associated with the practice of consanguinity, as evidenced through SNP studies, higher homozygosity is therefore also expected in the Bedouin subpopulation compared to the other two subpopulations.
In recent years, diabetes occurrence in the Qatari population has reached epidemic levels. As with many other diseases in which both lifestyle and genetics may play a role in onset, microsatellite loci may serve as markers genetically linked to some non-communicable diseases such as diabetes.
The main aim of this study is to understand the genetic variability across the subpopulations of Qatari nationals. The results can be used to develop new forms of personalized health care specific to members of the stratified Qatari subpopulations. The information allows for more efficient treatments and better management of the growing Qatari population. To accomplish these goals, blood samples are collected from 300 individuals, 100 from each subpopulation. The AmpFISTR® Identifiler® Plus PCR Amplification Kit is used for multiplex analysis of 15 tetranucleotide loci. The resulting data are analyzed to produce allele frequencies at each locus for the corresponding subpopulations. The gene diversity within and among the subpopulations is analyzed, and the detection of consanguinity through application of the Hardy-Weinberg principle is discussed. The sub-profiles for each of the three Qatari subpopulations – Bedouin, African and Persian – are presented. Finally, the concept of personalized health care with respect to diabetes is introduced and the clinical applications relevant to these populations are discussed.
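As a concrete illustration of the per-locus analysis described above, the sketch below computes allele frequencies and compares observed with Hardy-Weinberg expected heterozygosity for one STR locus within one subpopulation. The genotype data and locus are hypothetical and do not come from the study.

```python
# Hedged sketch (hypothetical genotype counts) of allele frequencies and an HWE check.
from collections import Counter

# Hypothetical genotypes at one tetranucleotide locus for one subpopulation;
# each tuple is the pair of repeat-number alleles carried by one individual.
genotypes = [(8, 9), (9, 9), (8, 11), (11, 11), (9, 11), (8, 8), (9, 11), (8, 9)]

# Allele frequencies
alleles = [a for g in genotypes for a in g]
n = len(alleles)
freq = {a: c / n for a, c in Counter(alleles).items()}
print("allele frequencies:", freq)

# Observed vs. HWE-expected heterozygosity; a deficit of heterozygotes is one
# signal of consanguinity within a subpopulation.
obs_het = sum(1 for a, b in genotypes if a != b) / len(genotypes)
exp_het = 1 - sum(p ** 2 for p in freq.values())
print(f"observed heterozygosity = {obs_het:.2f}, expected under HWE = {exp_het:.2f}")
```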
Reflex System for Intelligent Robotics
Authors: Ahmad Yaser Alhaddad and John-John Cabibihan
Background and Purpose: Great advances have occurred in the field of robotics in the past few years. The integration of robotics into our daily life is no longer limited to manufacturing or industrial usage, but extends to health care delivery, aerospace, humanitarian aid and other areas. Most existing robotic systems rely on a programmer to set the rules they follow within the working environment, or on a trainer to teach the system what needs to be done and where to move. Other robotic systems may involve more intelligent mechanisms to explore and handle tasks within their environment. Most of these systems are designed to work within well-organized and planned environments. Modifying any parameter of the environment may produce unpredictable consequences. Depending on the complexity of the system and how intelligent it is, the consequences might be unfavorable for achieving the intended goals and avoiding self-damage.
Species in nature represent a rich source of innovative ideas and creative concepts that can be investigated by researchers. Nature has been inspiring scientists to develop new ways of looking at things through observing the behaviors of various living organisms in their own habitats. Behavior-based roboticists are concerned with the development of robots based on observation and study of the neuroscience, psychology and ethology of animals in nature. Human, animal and plant physiology is yet another rich source of research potential (Fig. 1). For example, reflexes in living organisms represent a means of survival in the outer environment and a means of regulating internal body operations. If we could observe and mimic some of these reflex behaviors, we could end up with a machine (e.g. a robot) that has the ability to avoid dangerous situations and keep its outer structure intact.
Figure 1: The potential of reflex systems in intelligent robotics. Objective: Adopting an intelligent reflex system in a robot, similar to that found in humans, animals and plants, can have significant advantages for the overall behavior of the system. A reflex system can improve risk-avoidance capabilities in unfavorable scenarios. Design: The approach toward a reflex-based robotic system involves intensive investigation and review of the fundamental concepts found in the reflex systems of humans, animals and plants. Attention to details, such as the behavior of the organism when subjected to a certain stimulus and the latency it takes for the reflex arc to execute the right response, is among the most important considerations when trying to mimic the behavior of a living organism. A deduced conceptual model should be based on the distinguishing components found in the reflex arc. An actual design based on this proposed model will include the basic components, realized with electronic/mechanical components that are analogous in function to those found in the reflex arc. For example, to mimic the temperature-sensing capabilities of a human hand, a simple one-point temperature sensor will not be sufficient to give a desirable, realistic result. Instead, a sophisticated flexible array capable of sensing the temperature at any point must be used. Another design consideration is the controlling method to be used: will it be centralized, decentralized, or a mix of both? Regardless of the answer, the controlling mechanism should be independent of a central controller (i.e. the brain) and must be localized to achieve the desirably fast response found in the reflex arc. Conclusion: The reflex-based robotic system will be unique and innovative for the intended applications. The system can be incorporated with pre-existing systems to add value, especially in the field of medical robotics and more specifically in prosthetics. Artificial reflex systems will add great value, protective features and life-like sensation to smarter prosthetic artefacts. With the implementation of the reflex arc at the right latencies and in the right order, the gap between the artificial and the actual hand should narrow.
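The decentralized, low-latency reflex loop argued for above can be sketched in a few lines. Everything in this example (threshold, sensor readings, function names) is a hypothetical illustration of the concept, not part of the proposed design.

```python
# Hypothetical sketch of a localized reflex loop: sense -> compare -> act,
# with no central "brain" in the loop, so the latency is only the local cycle time.
import time

PAIN_THRESHOLD_C = 55.0   # assumed temperature above which a withdrawal reflex fires


def read_temperature_array():
    """Stand-in for a flexible multi-point temperature sensor array."""
    return [36.5, 37.0, 61.2, 36.8]   # one hot spot to trigger the reflex


def withdraw_actuator():
    """Stand-in for the local actuator command that pulls the limb away."""
    print("reflex: withdrawing limb")


def reflex_loop(read_sensors, respond, threshold=PAIN_THRESHOLD_C):
    start = time.perf_counter()
    if max(read_sensors()) > threshold:
        respond()
    return (time.perf_counter() - start) * 1000.0   # local response latency in ms


print(f"local loop latency: {reflex_loop(read_temperature_array, withdraw_actuator):.3f} ms")
```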
Acknowledgment
This publication was made possible by the support of an NPRP grant from the Qatar National Research Fund (NPRP 7-673-2-251). The statements made herein are solely the responsibility of the authors.
Keywords
Intelligent Robotics, Reflex System, Prosthetics
Preparation and Characterization of Letrozole-Loaded Poly (D,L-Lactide) Nanoparticles for Breast Cancer Therapy
Authors: Bayan Alemrayat, Abdelbary Elhissi and Husam Younes
Introduction: Breast cancer is ranked first as the most prevalent type of cancer and the leading cause of cancer-related mortality among women worldwide. Letrozole (LTZ), an aromatase inhibitor, has been shown to be an effective and relatively safe agent for the treatment of hormone-positive breast cancer in postmenopausal women. However, the drug suffers from poor water solubility and rapid metabolism, leading to low oral bioavailability and thus lower anticancer effects at target sites. Interestingly, polymer-based nanoparticles (NPs) have been reported to be effective drug delivery systems, as integrating drugs into these carriers has yielded substantial improvements in drug tissue distribution and tissue selectivity with superior pharmacokinetic profiles. Therefore, this study was designed to incorporate LTZ into nanoparticles of an FDA-approved polymer, poly(D,L-lactide) (PDLLA), to improve its physicochemical properties and bioavailability. Methods: An emulsion-solvent evaporation technique was used to produce LTZ-PDLLA NPs. Briefly, 250 mg PDLLA was mixed with different w/w ratios of LTZ (10-30%) in 20 ml dichloromethane. The prepared solution was slowly poured via a syringe into an aqueous phase (140 ml) to form an emulsion, which was followed by a two-step sonication. The emulsion was sonicated using a Branson® B5510 ultrasonic cleaner at 40 kHz for 30 minutes, vortexed for 2 minutes, then sonicated again for another 30 minutes. The solvent was allowed to evaporate completely by stirring for 2 hours at room temperature. The resultant dispersed particles were centrifuged at 8500 rpm and 5 °C for 2 hours. The supernatant was discarded and the pellet comprising the NPs was dried under vacuum for 48 hours. The obtained NPs were characterized using scanning electron microscopy (SEM), a Zetasizer, differential scanning calorimetry (DSC), X-ray diffraction (XRD), and ultra-performance liquid chromatography (UPLC). Results: LTZ-PDLLA nanoparticles were prepared with a high yield that reached 85%. The NPs were spherical in shape with smooth surfaces across all LTZ loadings. Particle size increased from 242 nm to 365 nm upon increasing LTZ concentration from 0% to 30% w/w. This finding was expected, since a larger LTZ content contributes to an increase in the diameter of the enclosing polymer particles. Particles were polydisperse in general, with a polydispersity index (PDI) ranging from 0.38 to 0.44, mainly because a non-uniform force was applied to each droplet injected into the aqueous medium while producing the emulsion. DSC and XRD analyses confirmed the crystalline nature of LTZ, which was lost after incorporation into the amorphous polymer PDLLA. This will have a great impact on the dissolution rate and later on the release rate from PDLLA, since amorphous particles tend to be released more easily and in a more controlled fashion than their crystalline counterparts. The actual content of LTZ loaded inside PDLLA was expressed as entrapment efficiency and calculated via UPLC analysis by subtracting the amount of LTZ present in the supernatant from the initial amount of LTZ added in each formulation. Very high entrapment efficiency was obtained with all formulations, ranging from 87.9% with 10% LTZ-PDLLA up to 96.7% with 30% LTZ-PDLLA. As such, high concentrations of LTZ can be delivered to the target sites with minimum drug loadings.
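The entrapment efficiency definition used above (initial LTZ minus LTZ recovered in the supernatant, relative to the initial amount) can be illustrated with a small calculation; the numbers below are hypothetical, chosen only to fall near the reported range.

```python
# Illustrative entrapment-efficiency calculation (hypothetical numbers).
def entrapment_efficiency(initial_drug_mg, supernatant_drug_mg):
    """EE% = (initial - unentrapped) / initial * 100."""
    return (initial_drug_mg - supernatant_drug_mg) / initial_drug_mg * 100.0

# Example: 25 mg LTZ added (10% w/w of 250 mg PDLLA) and 3 mg found in the
# supernatant by UPLC would give EE ~= 88%, near the value reported for the
# 10% LTZ-PDLLA formulation.
print(f"EE = {entrapment_efficiency(25.0, 3.0):.1f}%")
```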
Conclusion: LTZ-PDLLA nanoparticles were successfully prepared with high entrapment efficiency using emulsion-solvent evaporation technique. The physiochemical properties and entrapment efficiency were dependent on LTZ concentration. Future work should focus on reducing the wide size distribution by formulating monodisperse particles which would allow uniform tissue distribution and longer sustained release actions upon administration. Additionally, in-vitro testing is needed to evaluate the efficacy and safety of the new formulations.
Favoring Inhibitory Synaptic Drive Mediated by GABAA Receptors in the Basolateral Nucleus of the Amygdala Efficiently Reduces Pain Symptoms in Neuropathic Mice
Pain is an emotion, and neuropathic pain symptoms are modulated by supraspinal structures such as the amygdala. While the central nucleus of the amygdala is often called the “nociceptive amygdala”, little is known about the role of the basolateral amygdala (BLA). Here, we monitored the mechanical nociceptive thresholds in a mouse model of neuropathic pain and infused modulators of glutamate/GABAergic transmission into the BLA via chronically implanted cannulas. First, we found that an NMDA-type glutamate receptor antagonist (MK-801) exerted a potent anti-allodynic effect, whereas a transient allodynia was induced after perfusion of bicuculline, a GABAA receptor antagonist. Potentiating GABAA receptor function using diazepam (DZP) or etifoxine (EFX, a nonbenzodiazepine anxiolytic) fully but transiently alleviated mechanical allodynia. Interestingly, the anti-allodynic effect of EFX disappeared in animals incapable of producing 3α-steroids. DZP had a similar effect but of shorter duration. As indicated by patch-clamp recordings of BLA neurons, these effects were mediated by a potentiation of GABAA receptor-mediated synaptic transmission. Together with a presynaptic elevation of miniature IPSC frequency, the duration and amplitude of GABAA mIPSCs were also increased (a postsynaptic effect). The analgesic contribution of endogenous neurosteroids seemed to be exclusively postsynaptic. This study highlights the importance of the BLA and of the local inhibitory/excitatory neuronal network activity in setting the mechanical nociceptive threshold. Furthermore, it appears that promoting inhibition in this specific nucleus can fully alleviate pain symptoms. Therefore, the BLA could be an interesting novel target for developing pharmacological or non-pharmacological therapies.
Cathepsin B Induced Cardiomyocyte Hypertrophy Requires Activation of the Na+/H+ Exchanger Isoform-1
By Sadaf Riaz
Background: Progression of the heart to failure is primarily caused by significant remodeling of both the extracellular matrix (ECM) and subcellular organelles, a hallmark of cardiac hypertrophy (CH). Uncontrolled ECM remodeling occurs as a result of the activation and increased proteolytic activities of proteases such as cathepsin B (Cat B) and matrix metalloproteinase-9 (MMP-9) (1, 2). Previous studies have suggested that the activation of Cat B is induced by acidification of the peri- and extracellular space (3-5). In various forms of carcinoma, this pericellular acidification coincides with activation of the cardiac-specific pH regulator, the Na+/H+ exchanger isoform-1 (NHE1) (5, 6). Increased activation of NHE1, like that of Cat B, is involved in the pathogenesis of various cardiac diseases including CH (7-10). Moreover, activation of NHE1 has been shown to activate Cat B in several reports. CD44 was shown to interact with NHE1, creating an acidic microenvironment that led to Cat B activation in a breast cancer model (5). Moreover, NHE1 and Cat B have been shown to interact directly with each other and cause ECM degradation in another breast cancer model (4). Taken together, the evidence suggests that NHE1, through its pH-regulating property, might be mediating the activity of Cat B in pathological states. A previous report demonstrated that pericellular acidification redistributed Cat B-containing lysosomes to the cell surface and caused the secretion of Cat B into the extracellular compartment (3). Interestingly, the NHEs have also been shown to cause an acidic extracellular pH, which induced lysosome trafficking and subsequent release of Cat B into the ECM in prostate cancer cells (11). Moreover, several broad and specific NHE inhibitors were able to inhibit this effect (11). Once in the extracellular compartment, Cat B can degrade the ECM (12) and facilitate further ECM degradation by activating other proteases such as MMP-9 (13, 14). MMP-9 activity has been shown to be increased in various models of heart failure (15, 16) (17, 18). Previous studies have also shown that MMP-9 activity was increased in CCL39 cells upon stimulation of NHE1 with phenylephrine (19). Interestingly, Cat B and MMP-9 were shown to interact directly with NHE1 and cause ECM degradation in breast cancer (4). Whether NHE1 induces the activation of Cat B, which in turn activates MMP-9 and contributes to cardiomyocyte hypertrophy, remains unclear. Methods: H9c2 cardiomyocytes were treated with 10 μM angiotensin (Ang) II for 24 hours to stimulate NHE1 and induce cardiomyocyte hypertrophy. Cells were further treated with or without 10 μM EMD, an NHE1 inhibitor, or 10 μM CA-074 methyl ester (CA-074Me), a Cat B inhibitor, for 24 hours. After treatment, Cat B messenger ribonucleic acid (mRNA) levels were measured by reverse transcription-polymerase chain reaction (RT-PCR). Furthermore, changes in the cardiomyocyte hypertrophic marker ANP mRNA were also assessed by RT-PCR analysis. The localization of Cat B in lysosomes was measured using LysoTracker Red dye. Autophagy was measured through analysis of the autophagic marker, microtubule-associated light chain 3-II (LC3-II). The secretion of Cat B from the intracellular to the extracellular space was assessed by measuring Cat B protein expression in the media. MMP-9 activity was also measured in the media by gelatin zymography and assessed for its contribution to the Cat B hypertrophic response.
Results: Immunoblot analysis revealed that Cat B protein expression, both pro and active forms, was significantly elevated at the 10 μM Ang II concentration (136.56 ± 9.4% Ang II vs. 100% control, 37 kDa and 169.84 ± 14.24% Ang II vs. 100% control, 25 kDa; P
The Social and Spiritual Factors Affecting Chronic Renal Dialysis Patients in Gaza Strip
Background: End-Stage Renal Disease (ESRD) is a progressive worsening of kidney function over a period of months or years. It is a complex, debilitating disease that needs lifelong treatment. Because patients with ESRD cannot be cured of their underlying conditions and mostly undergo a hemodialysis program, the disease usually leads to many physical and medical consequences and complications; besides these, there are many concealed social and spiritual factors that can affect people who have this disease or are on renal dialysis. Some studies about the medical and clinical consequences of ESRD and renal dialysis have been conducted, but this study is the first to determine the factors affecting the social and spiritual wellbeing of patients who are on renal dialysis in the Gaza Strip. Objectives: It is important to give a detailed picture of the social and spiritual wellbeing of patients who are on renal dialysis, to help medical professionals recognize the social and spiritual variables so that early intensive intervention can be performed when necessary. Methods: A total of 120 patients who had ESRD and were treated with hemodialysis completed face-to-face questionnaires. A self-designed questionnaire was used, consisting of 6 sections: demographic data; physical, social, psychological and spiritual wellbeing; degree of coping with the current condition; uncertainties about health in the future; self-esteem and dependency; and the impact on marital relationships. Results: Among the 120 participants, 55% were female and the mean age was 48.5 (SD: 16.7).
Of the participants, 81.7% were unemployed and 81.7% were of low educational level. Thirty percent of the patients had a family history of hemodialysis; 55.6% of these were first-degree relatives and 44.4% second- or third-degree relatives. Seventy-two point two percent of patients had co-morbidities, mostly hypertension (49.4%). Fatigue (93.8%) and insomnia (56.2%) were the two major physical complaints after the process of hemodialysis; however, 53.3% of the patients felt more comfortable after it.
Seventy-seven percent of the patients suffered a financial impact and 60.3% had weak social relationships. Sixty percent considered that the process of hemodialysis makes their life restless, to the extent that daily activities were negatively affected in 73.8% of patients.
Among the 85 married patients, sexual performance and sexual desire were negatively affected in 54.2% and 52.2%, respectively. Only 50% of the patients stated that they have a goal they want to achieve in their life. Seventy-eight percent of the patients were uncertain about their health and 67.3% were worried about the future. However, 70% of the participants claimed that spiritual devotion and stronger faith have made them more able to accept their disease and deal in a positive manner with being involved in the hemodialysis program. Conclusion: Social and spiritual well-being should be considered important predictive factors for a better quality of life in hemodialysis patients. The results also suggest that assessing and addressing social and spiritual well-being among hemodialysis patients may help in providing holistic medical care.
Origanum Syriacum Inhibits Proliferation, Migration, Invasion and Induces Differentiation of Human Aortic Smooth Muscle Cells
Authors: Sara AlDisi and Ali Eid
Cardiovascular diseases (CVDs) are still the number one cause of morbidity and mortality both in Qatar and worldwide. A major risk factor for CVDs is atherosclerosis, the hardening of blood vessels caused by decreased diameter and formation of plaque. A key player in atherosclerosis prognosis is the switch of vascular smooth muscle cells (VSMCs) from their undifferentiated state to a synthetic phenotype. The synthetic state of VSMCs is characterized by an increase in proliferation, migration and invasion into the lumen of blood vessels, contributing to the atherosclerotic plaque. The ineffectiveness of current treatments has led to increasing interest in herbal medicine, possibly because such remedies are cheap and produce few side effects. Origanum syriacum, commonly known as Zataar, is an important constituent of the Mediterranean diet, a diet correlated with lower risk of CVDs. O. syriacum is also reported to have antioxidant and anti-inflammatory activities, an indication of possible anti-atherosclerotic activity. However, the effect of O. syriacum on atherosclerosis or CVDs is not well studied. We therefore studied the effect of the ethanolic extract of O. syriacum (OSEE) on the proliferation, migration, invasion and differentiation of human aortic smooth muscle cells (HASMCs). A CellTiter-Glo assay was used to study the effect of OSEE on HASMC viability. Cells were incubated with OSEE (0, 0.5, 0.1 and 0.2 mg/ml) for 24, 48 and 72 hours. OSEE was shown to exert a significant anti-proliferative effect on HASMCs. This effect appears to be concentration-dependent, but not time-dependent. The optimum concentration, 0.2 mg/ml, significantly decreased HASMC viability at 24 and 72 hours by 52.5 ± 10.39% and 47.6 ± 9.83% compared to control, respectively. A scratch-wound assay was used to determine the effect of OSEE on HASMC migration. A monolayer of cells was scratched and the wound size was measured every 2 hours for 24 hours. OSEE significantly inhibited the migratory capacity of HASMCs compared to untreated cells. Cells incubated with 0.2 mg/ml of OSEE for 24 hours showed 65.07 ± 12.58% less migration than the control. To measure the invasive capacity of HASMCs, Matrigel-coated BD BioCoatTM filter inserts were used. Cells were incubated in serum-free media with or without 0.2 mg/ml of OSEE, and the number of invading cells was counted after 24 hours. OSEE was shown to significantly decrease the invasive capacity of HASMCs, by 79.82 ± 5.69% compared to control. To study the effect of OSEE on HASMC differentiation, western blotting was used to measure calponin-h1 expression. Cells were incubated with or without 0.2 mg/ml OSEE for 24 hours and the lysate was analyzed. OSEE increased the expression of calponin-h1 by 147.19 ± 72.33% compared to control. These results indicate that OSEE possesses anti-atherosclerotic abilities by modulating the phenotype of HASMCs. This modulation returns HASMCs to their differentiated state, as shown by the increase in calponin-h1. It also inhibits the synthetic-state phenotypes of proliferation, migration and invasion of HASMCs. This anti-atherosclerotic effect should be further studied, possibly by investigating OSEE's effect on specific pathways that lead to migration and invasion of HASMCs, such as the ERK1/2 and MAPK pathways, as well as MMP expression.
Self-assessed Attitude, Perception, Preparedness and Perceived Barriers to Provide Pharmaceutical Care in Practice Among Final Year Pharmacy Students: A Comparative Study between Qatar and Kuwait
Authors: Rasha Abdullkader Mousa Bacha and Alaa Talal El-Gergawi
Background: Pharmaceutical care (PC) is changing pharmacy practice into a patient-centered care and personalized medicine approach. It focuses on maximizing drug therapy outcomes and improving the patient's quality of life (QOL). Pharmacy students, who are the future pharmacy practitioners, need to have adequate knowledge, skills and positive attitudes to apply PC when they graduate. However, comparative studies among pharmacy students from different pharmacy schools within the Middle East region about the PC teaching received, preparedness to deliver the service in practice, and expected barriers are limited. Objectives: The aims of this study were to explore the attitudes and perceptions of final-year pharmacy students in the College of Pharmacy at Qatar University (QU-CPH) and the Faculty of Pharmacy at Kuwait University (KU-FoP) towards PC, to assess students' preparedness to provide PC when they graduate, and to investigate the perceived barriers to the application of PC. Methods: A descriptive, cross-sectional, web-based survey was used to collect data. The study instrument was developed based on validated tools: the Pharmaceutical Care Attitude Survey (PCAS) and Preparedness to Provide Pharmaceutical Care (PREP). The data were analyzed using the IBM Statistical Package for Social Sciences (SPSS®) Version 22. Chi-square tests and the independent t-test were used to compare the two universities for categorical data. P ≤ 0.05 was considered statistically significant. The results were summarized using tables and figures generated with Excel. The final questionnaire included five sections: demographics of the students (6 items); perception of PC (7 items); attitudes towards PC (13 items); preparedness to deliver PC (25 items); and barriers that may affect applying PC (5 items). The survey was administered using SurveyMonkey®. Results: Of a total of 77 students, 63 completed the questionnaire (21 from QU and 42 from KU), an overall response rate of 82%. The mean age of the students was similar between the two universities. The majority of the respondents (95.2%) from both universities were female (QU-CPH is a female-only college). KU-FoP had a significantly higher proportion of national students than QU-CPH. Both QU-CPH and KU-FoP students preferred to work in the hospital setting in the future (57.1% and 64.3%, respectively). There were no significant differences between the two universities in terms of students' confidence and perception in applying PC (P ≥ 0.05). There were no significant differences between the students' attitudes in the two programs about the provision of PC (P ≥ 0.05), and all respondents believed that PC services will improve health outcomes. There was a statistically significant difference in documenting information related to detecting, resolving and preventing drug-related problems (p = 0.044). Some of the barriers identified by students from both institutions included lack of a private counseling area, limited pharmacist time, lack of patient records, and lack of a policy for the pharmacists' patient care role. There were no differences in opinion between QU and KU students regarding the most important barrier that may affect PC provision. There was a statistically significant difference between students' opinions on considering the poor image of the pharmacist's role in society as a barrier.
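A hedged sketch of the between-university comparisons described in the Methods is given below; the response counts and scores are invented for illustration, and only the test procedure (chi-square for categorical items, independent t-test, significance at P ≤ 0.05) mirrors the abstract.

```python
# Hypothetical sketch of the statistical comparisons (not the study's data).
from scipy.stats import chi2_contingency, ttest_ind

# Hypothetical counts of agree/disagree responses to one attitude item.
#                agree  disagree
qu_counts     = [18,    3]
ku_counts     = [35,    7]
chi2, p_cat, _, _ = chi2_contingency([qu_counts, ku_counts])
print(f"chi2 = {chi2:.2f}, p = {p_cat:.3f}")

# Hypothetical preparedness scores (out of 5) for a few students in each group.
qu_scores = [4.1, 3.8, 4.5, 3.9, 4.2]
ku_scores = [3.9, 4.0, 4.3, 3.7, 4.1]
t, p_cont = ttest_ind(qu_scores, ku_scores)
print(f"t = {t:.2f}, p = {p_cont:.3f}, significant = {p_cont <= 0.05}")
```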
Conclusion: Final-year pharmacy students from Qatar and Kuwait demonstrated positive attitudes towards PC and its potential application in practice when they graduate. They did, however, identify some potential barriers. Students at KU-FoP ranked the low expectation of the pharmacist's role by society and within the healthcare team as an important barrier, while students at QU-CPH thought that documentation and communication between pharmacists and healthcare providers can have an impact on PC services. More efforts should be directed towards resolving the perceived barriers in order to optimize PC provision and ultimately patient care outcomes.
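To make the comparison workflow concrete, the following minimal Python sketch (hypothetical counts and scores, not the study data, which were analyzed in SPSS) shows a chi-square test on a categorical item and an independent t-test on a continuous item for two cohorts.

```python
# Minimal sketch of the between-university comparison; all numbers are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# rows: university (QU-CPH, KU-FoP); columns: agree / disagree with a PC attitude item
table = np.array([[18, 3],
                  [33, 9]])
chi2, p_chi, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_chi:.3f}")

# hypothetical preparedness scores for the two cohorts
qu_scores = np.array([4.1, 3.8, 4.5, 4.0, 3.9])
ku_scores = np.array([3.7, 4.2, 3.9, 4.0, 4.1])
t, p_t = ttest_ind(qu_scores, ku_scores)
print(f"t = {t:.2f}, p = {p_t:.3f}")  # P <= 0.05 considered significant
```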
-
-
-
Applying a Novel Smart Insole System that Reduces Re-Ulceration Risk among Diabetics with Peripheral Neuropathy: Do Users Adhere and Comply?
Authors: Eyal Ron, Javad Razjouyan, Bijan Najafi and David Armstrong
Background & Aim: People with diabetes carry a 25% lifetime risk of foot ulceration. It is well established that high plantar pressures increase the risk of developing foot ulcers, and that managing peak pressure is an important strategy in reducing such risk. This study tested a novel smart insole system designed to reduce ulceration risk by alerting patients via a smartwatch when their plantar pressure was too high. The device was tested for degree of adherence, compliance, and successful offloading responses among users. Outcomes of users triggering many alerts were compared to those triggering few alerts to see whether alert frequency affected adherence and compliance with a novel mobile health device. Method: Participants with diabetes, peripheral neuropathy, and a history of foot ulcers were instructed to wear a smart insole system. Pressure sensors inside the insole were placed in strategic areas where foot ulceration risk has been shown to be high. The sensors were wirelessly connected to a smartwatch through a transmitter. The smartwatch alerted participants when plantar pressure exceeded 50 mmHg over 95% of a moving 15-minute window. Adherence, defined as the number of hours the device was worn, was determined with sensor data and via questionnaires. A successful response to an alert was recorded when patient-initiated offloading occurred within 20 minutes. The length of time an alert lasted (measured as the median time between alert onset and successful offloading) served as a measure of compliance. Results: Participants who increased adherence over time tended to have more alerts (0.82 ± 0.31 alerts/hr) than those who did not improve (0.36 ± 0.46 alerts/hr, p = 0.09). Users receiving a high number of alerts (HA) began with levels of successful response similar to those receiving a low number of alerts (LA), but the HA group successfully offloaded significantly more often than the LA group by the last segment of the study (55.0 ± 6.6% vs. 16.6 ± 11.9%, p < 0.01). Median alert durations increased for LA relative to HA (p = 0.10). Participants tended to overestimate their adherence compared to objective sensor measurements (7.60 ± 2.50 hours/day vs. 5.38 ± 3.43 hours/day, p = 0.10). Conclusion: The results of this study suggest that there is a minimum number of alerts a user must experience (1 alert every 2 hours of wear time) to maintain adherence and successful response to alerts over time. Above this level, median alert durations decrease, user adherence improves, and successful response rates increase. This suggests that, within the range of alerts typically received by someone wearing a smart insole system, relatively more alerts may be preferred, and increasing the number of alerts a user receives by lowering the pressure threshold may be a viable path to maintaining adherence. In addition, self-reported adherence measures may exaggerate usage of novel mobile health devices.
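A minimal sketch of the alerting rule described above, assuming a fixed sampling rate for the pressure signal; the 50 mmHg threshold, 95% window occupancy and 15-minute window come from the abstract, while the sampling period and data are placeholders.

```python
# Sketch of the sustained-overpressure alert; sampling rate and samples are assumed.
from collections import deque

SAMPLE_PERIOD_S = 1.0                              # assumed: one pressure sample per second
WINDOW_SAMPLES = int(15 * 60 / SAMPLE_PERIOD_S)    # 15-minute moving window
PRESSURE_THRESHOLD_MMHG = 50.0
OCCUPANCY_THRESHOLD = 0.95                         # alert if >95% of the window is above threshold

def alert_stream(pressure_samples):
    """Yield True for samples at which a sustained-overpressure alert should fire."""
    window = deque(maxlen=WINDOW_SAMPLES)
    for p in pressure_samples:
        window.append(p > PRESSURE_THRESHOLD_MMHG)
        full = len(window) == WINDOW_SAMPLES
        frac_high = sum(window) / len(window)
        yield full and frac_high > OCCUPANCY_THRESHOLD

# Example: 20 minutes of constant 60 mmHg pressure triggers an alert once the window fills.
samples = [60.0] * (20 * 60)
print(any(alert_stream(samples)))  # True
```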
-
-
-
Applying Novel Body-Worn Sensors to Measure Stress: Does Stress Affect Wound Healing Rates in the Diabetic Foot?
Authors: Eyal Ron, Javad Razjouyan, Talal K Talal, David G Armstrong and Bijan Najafi
Background and Aim: In the United States alone, diabetic limb complications and amputations are estimated to cost $17 billion. Significant risk factors that may lead to amputation of the diabetic foot include ineffective wound healing and infection of a wound or ulcer. Previous studies have shown that wound healing is slowed and susceptibility to infection is increased when a patient is under chronic stress. To date, objective measures of stress have not been used to determine whether stress affects the rate at which wounds heal. Our study used novel real-time monitoring of patients' heart rate variability to objectively determine the stress levels of patients visiting a surgery clinic for wound dressing changes. The wound healing rates of patients with high stress levels were compared to the healing rates of low-stress individuals to assess the effect of stress on wound healing among diabetics with a history of foot ulceration. Methods: Twenty patients (age: 56.7 ± 12.2 years) with diabetic foot ulcers were equipped with a chest-worn sensor (Bioharness 3, Zephyr Technology Corp., Annapolis, MD) during their 45-minute appointments at which the wound was re-dressed. The chest sensor contained a single-channel ECG recorder, and a novel algorithm was developed to determine heart rate variability (HRV) from the sensor output. Low-frequency (0.04 to 0.15 Hz) HRV signals were isolated from high-frequency signals (0.15 to 0.40 Hz), and the ratio of their amplitudes was used as a measure of stress. Patients were categorized as low-stress if the ratio of the signals was less than 1, and were otherwise categorized as high-stress. Regardless of classification, each patient's wound size (length, width, depth) was recorded at baseline and at follow-up visits. High- and low-stress patients were compared to see whether wound sizes decreased more rapidly in either group. Results: Patients with low levels of stress reduced their wound size by 79% between baseline and the first follow-up appointment (1.36 mm³ to 0.28 mm³). In contrast, patients with high levels of stress had adverse outcomes, with their wound sizes increasing nearly four-fold between baseline and follow-up (0.17 mm³ vs. 0.84 mm³). Although high-stress individuals had smaller wound sizes than low-stress individuals initially (0.17 mm³ vs. 1.36 mm³, p < 0.05), the wound sizes of high-stress individuals were nearly 3 times larger by the first follow-up (0.84 mm³ vs. 0.28 mm³, p = 0.10). Conclusion: Our research proposes that an individual's stress level can be objectively measured using an algorithm that processes ECG data from a single body-worn sensor that is lightweight and comfortable to wear. The stress levels measured with our algorithm are predictive of clinical outcomes: individuals with low levels of recorded stress at baseline have faster healing rates and greater reductions in wound size by their second clinical appointment. This indicates that real-time patient stress monitoring using body-worn sensors may help clinicians identify risk factors that prolong wound healing times. In addition, it can be inferred that managing stress in diabetic patients may quicken the pace of wound healing. Surprisingly, however, our results suggest that initial wound sizes are not good indicators of stress levels during initial clinical appointments; in fact, wound sizes of high-stress individuals were significantly smaller than those of low-stress individuals at baseline.
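The following sketch illustrates a generic way to derive the LF/HF stress index described above from RR intervals (resample to an even grid, estimate the spectrum with Welch's method, integrate the two bands); it is not the authors' proprietary algorithm, and all inputs are synthetic.

```python
# Generic LF/HF heart-rate-variability recipe; band limits follow the abstract,
# everything else (sampling rate, synthetic RR intervals) is an assumption.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

def lf_hf_ratio(rr_intervals_s, fs_resample=4.0):
    t = np.cumsum(rr_intervals_s)                          # beat times (s)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs_resample)  # evenly spaced grid
    rr_uniform = interp1d(t, rr_intervals_s)(t_uniform)
    freqs, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs_resample, nperseg=256)
    lf_band = (freqs >= 0.04) & (freqs < 0.15)
    hf_band = (freqs >= 0.15) & (freqs <= 0.40)
    lf = trapezoid(psd[lf_band], freqs[lf_band])
    hf = trapezoid(psd[hf_band], freqs[hf_band])
    return lf / hf

# Patients with a ratio below 1 would be labelled "low stress" per the criterion above.
rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(600)                 # ~8 minutes of synthetic RR intervals
print(lf_hf_ratio(rr) < 1.0)
```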
-
-
-
Dexamethasone-induced MicroRNA Regulation for Pancreatic Cancer Progression
Pancreatic cancer is one of the leading causes of cancer-related mortality worldwide and is highly therapy-resistant, e.g. toward the standard chemotherapy gemcitabine. Glucocorticoids like dexamethasone (DEX) are often co-medicated to reduce inflammation and the side effects of tumor growth and therapy. Our group showed DEX to be a potent stimulator of epithelial-to-mesenchymal transition (EMT), cancer progression and metastasis, but the underlying mechanisms are poorly understood. MicroRNAs are a group of small non-coding RNAs that post-transcriptionally regulate gene expression. In this study, I evaluated the effect of DEX on the microRNA profile of pancreatic cancer cell lines. By microRNA array I observed a deregulation of several miRNAs. The most significantly deregulated miRNA, miR-XYZ, was predicted to target key members of the TGFß pathway. Forced expression of miR-XYZ by liposomal transfection of mimics resulted in significant repression of TGFß-2 mRNA and protein levels. 3'UTR luciferase reporter and site-directed mutagenesis assays confirmed TGFß-2 to be a direct target of miR-XYZ. Functionally, I found that miR-XYZ significantly reduced proliferation, migration and colony formation. My preliminary in vivo data show that miR-XYZ reduces xenograft tumor growth and abolishes the teratogenic effect of DEX. I conclude that miR-XYZ is a tumor suppressor gene that inhibits EMT by regulating oncogenes and/or genes that control EMT, and that DEX is able to activate EMT by suppressing miR-XYZ.
-
-
-
Gliptins: Does this New Class of Antidiabetic Drugs Possess Endothelial-Vasculoprotective Effects?
Background & objective: The gliptins, dipeptidyl peptidase-4 (DPP-4) inhibitors, are a relatively new class of antidiabetic drugs that, via their inhibition of DPP-4, a cell membrane–associated serine-type protease, promote the effects of the endogenous incretins such as glucagon-like peptide 1 (GLP-1) and enhance glucose disposal. The advantage of the gliptins over GLP-1 analogues is that they are orally effective, whereas the clinically used incretin analogues, exenatide and liraglutide, have to be administered by subcutaneous injection. The first gliptin, sitagliptin, was approved by the FDA in 2006 and six other gliptins have subsequently been approved – vildagliptin (2007, Europe); saxagliptin (2009, FDA); linagliptin (2011, FDA); anagliptin (2012, Japan); teneligliptin (2012, Japan); alogliptin (2013, FDA). There is a close association between diabetes and cardiovascular disease (CVD), and vascular complications associated with diabetes are responsible for 75% of the deaths of diabetics. Therefore, for any new class of antidiabetic drugs introduced into clinical use, it is important to determine not only whether such drugs show therapeutic efficacy as antidiabetic drugs, but also whether they are vasculoprotective and reduce both cardiovascular morbidity and mortality. Endothelial dysfunction can be functionally defined as a reduced vasorelaxation response to an endothelium-dependent vasodilator, such as acetylcholine, or, at the molecular level, as a reduction in the bioavailability of nitric oxide (NO) and/or reduced activity of the enzyme responsible for the generation of NO, namely endothelial nitric oxide synthase (eNOS). Endothelial dysfunction is a very early indicator of the onset of vascular disease, and thus determining whether the gliptins also reduce endothelial dysfunction is very important. The literature concerning the vasculoprotective effects of the gliptins is contradictory, with some of the clinical data suggesting a negative effect of the gliptins on vascular function. Thus, the objective of this study was to determine whether the gliptins have positive or negative effects on endothelial function. Materials & methods: In the present study we used a cell culture protocol with mouse microvascular endothelial cells (MS1-VEGF; CRL-2460, ATCC, USA). The endothelial cells (MMECs) were cultured either under normoglycaemic conditions for a mouse (NG, 11 mM glucose) or under high glucose (HG, 40 mM) – a level that equates to the plasma glucose levels seen in mouse models of type 2 diabetes, such as the db/db leptin receptor mutant model. The gliptin alogliptin was chosen for this study, and the protocols were designed to determine whether this gliptin reduced, or prevented, the high glucose-induced reduction in eNOS phosphorylation at serine 1177 (p-eNOSser1177) as determined by western immunoblot densitometry. A reduction in p-eNOSser1177 results in reduced activity of eNOS and hence a reduction in the generation of NO; thus, the quantification of p-eNOSser1177 serves as a measure of endothelial function. The band densities of the western blot images for eNOS and p-eNOSser1177 were quantified using the basic Quantity One software (Bio-Rad, Inc., CA, USA). Statistical analysis was performed using one-way analysis of variance (ANOVA), and post-hoc comparisons between groups were performed with Tukey's multiple comparison test.
‘p’ values less than 0.05 were considered statistically significant. Results: Our data indicate that a 24-hour exposure to HG reduced p-eNOSser1177 phosphorylation, whereas the presence of alogliptin reversed the effects of HG and significantly increased the phosphorylation of eNOS, suggesting that this gliptin does protect the microvascular endothelium against hyperglycaemia-induced endothelial dysfunction. Furthermore, the effects of alogliptin were concentration-dependent and were significant at 50 or 100 μM, but not at 10 μM alogliptin. Conclusion: Our findings indicate that, in a concentration-dependent manner, alogliptin protects endothelial cells against the negative effects that hyperglycaemia (high glucose) has on endothelial function, as measured by alogliptin-induced changes in the phosphorylation of eNOS at serine 1177. Further studies are underway, using a functional myograph assay, to determine whether alogliptin can also prevent hyperglycaemia-induced endothelial dysfunction in mouse aortic vessels.
Acknowledgement
This work was supported by a Summer Student Research Fellowship (SSRF) from Weill Cornell Medicine-Qatar and an Undergraduate Research Experience Program grant, UREP 18-055-3-012 from the Qatar National Research Fund (QNRF). The statements made herein are solely the responsibility of the authors.
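For illustration only, the sketch below mirrors the statistical workflow described in the methods (one-way ANOVA followed by Tukey's multiple comparison test) on hypothetical p-eNOSser1177/eNOS densitometry ratios; group names, concentrations and values are assumptions, not the measured data.

```python
# Hypothetical normalized band ratios; mirrors the ANOVA + Tukey workflow only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ratios = {
    "NG (11 mM)":            [1.00, 0.95, 1.05, 0.98],
    "HG (40 mM)":            [0.55, 0.60, 0.50, 0.58],
    "HG + alogliptin 50 uM": [0.90, 0.85, 0.95, 0.88],
}
groups = list(ratios.values())
f_stat, p_val = f_oneway(*groups)                       # one-way ANOVA across groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(groups)
labels = np.repeat(list(ratios.keys()), [len(g) for g in groups])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))    # Tukey post-hoc comparisons
```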
-
-
-
Analysis of Date Pits and Food Product Development from Date Pits
Authors: Eman Faisal Mushaima, Rehab Hussain Taradah and Sara Mohammed Alhajiri
Background: In Qatar and the GCC countries, palm trees are considered a symbol of culture owing to their great number. Qatar is classified as the third-largest importer of dates for consumption worldwide, and it produces around 16,500 tons of dates every year, mostly for local consumption. Despite the high production and consumption of dates in the GCC countries, there are not enough studies and investigations of date pits, even though the few studies and analyses available from other countries point to their high nutritional value. This study aims to produce innovative food products from date pits, as there is currently no investment in date pits and their nutritional value in Qatar. Therefore, proximate analysis and mineral analysis were carried out before developing a product containing date pit powder as the main ingredient, in order to have accurate data about the chemical composition of each product. The main purpose of the mineral analysis is to measure the overall mineral composition and, in particular, the lead content of date pits, which is related to the safety of the food products that we will produce.
Objective: The main objectives of this research are:
1. To perform proximate and mineral analyses of date pits (Phoenix dactylifera) from varieties cultivated in the GCC countries (three varieties: Khalas, Khunaizi and Sagei).
2. To develop food products from date pit powder (cookies and muffins).
Method: • Minerals: mineral constituents present in the date pits of three varieties (Saqei, Khalas and Khunaizi) were analyzed using an Inductively Coupled Plasma Spectrometer.
• Proximate analysis:
1. Seed material and sample preparation: Date pits were obtained from Qatar and Bahrain. Pits of the two varieties under investigation (Saqei and Khunaizi; Khalas date pits were excluded due to their high lead content) were isolated directly from 60 kg of date fruit collected at the “Tamr” (full ripeness) stage. Date pits of each variety were separated and milled in a heavy-duty grinder to obtain a homogeneous blended powder, which was kept in durable, leak-proof containers.
2. Analytical methods: All analytical determinations were performed in triplicate for each analysis, and values are expressed as mean ± standard deviation. Chemical analyses of the powdered pits followed AOAC (Association of Official Analytical Chemists) methods.
3. Fat content: The weight of total fat extracted from the date pits was determined using the Soxhlet extraction method. Results are expressed as percentages.
4. Protein content: Total protein was determined by the Kjeldahl method and calculated using the general nitrogen-to-protein factor of 6.25.
5. Ash content: Ash was determined by removing carbon: 2 g of each variety was incinerated in a muffle furnace for 30 min at 600°C; after breaking up the ash with a few drops of water, the samples were returned to the furnace for 3 hours.
6. Carbohydrate content: Total carbohydrate (including fiber) was calculated by subtracting from 100% the sum of the percentages of moisture, protein, fat and ash (see the sketch after this list).
• Food product development: cookies and muffins were produced with four different proportions of date pit powder (100%, 75%, 50% and 25%).
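As referenced above, a minimal sketch of the Kjeldahl protein conversion and the carbohydrate-by-difference calculation; the input numbers are placeholders rather than the measured values.

```python
# Proximate-composition arithmetic only; the nitrogen, moisture, fat and ash values
# below are hypothetical placeholders.
def protein_from_nitrogen(nitrogen_pct, factor=6.25):
    """Crude protein (%) from Kjeldahl nitrogen using the general 6.25 factor."""
    return nitrogen_pct * factor

def carbohydrate_by_difference(moisture, protein, fat, ash):
    """Total carbohydrate incl. fiber (%) = 100 - (moisture + protein + fat + ash)."""
    return 100.0 - (moisture + protein + fat + ash)

protein = protein_from_nitrogen(0.94)            # hypothetical nitrogen content (%)
cho = carbohydrate_by_difference(moisture=5.4, protein=protein, fat=3.8, ash=0.5)
print(f"protein = {protein:.2f}%, carbohydrate = {cho:.2f}%")
```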
Results: Proximate analysis (average of two trials for each variety). Khunaizi: CHO 84.32%, protein 5.89%, fat 3.8%, ash 0.48%, moisture 5.39%. Sagei: CHO 79.81%, protein 5.57%, fat 2.89%, ash 0.72%, moisture 0.86%. Mineral analysis (ppm): Khalas – Mn 22.881, Pb 0.498, Cr 40.827, Zn 29.212, Mg 838.983, Fe 207.078, Cu ND, Cd 0.348, Ca 438.434. Khunaizi – Mn 19.288, Pb ND, Cr 11.193, Zn 15.540, Mg 859.984, Fe 77.003, Cu ND, Cd ND, Ca 328.652. Sagei – Mn 8.202, Pb ND, Cr 0.323, Zn 13.686, Mg 559.365, Fe 309.465, Cu ND, Cd 18.925, Ca ND.
Quality rating for the food products developed using date pits (muffins and cookies), total quality score out of 20 points: muffins and cookies containing 25% date pit powder scored 20 points; 50%, 19 points; 75%, 15 points; and muffins and cookies containing 100% date pit powder scored 3 points.
Conclusion: Date palm pits could be an excellent source of functional food components, as they are an inexpensive and rich source of carbohydrate (mostly fiber), as shown in the analysis results of date pits from two leading varieties in Qatar (Sagei and Khunaizi). The results also show that date pit powder from the Khalas variety is unsafe for food production, as it contains a significant amount of lead.
-
-
-
Pharmacoeconomic Evaluations of Oral Anticancer Agents: A Thematic Systematic Review
Authors: Ahmad Amer Alkadour, Daoud Al-Badriyeh and Wafa Al-Marridi
Background: Around 14.1 million new cancer cases and 8.2 million cancer deaths were reported in 2012, figures expected to rise to 22 million within the next two decades. The parenteral route (intravenous dosage form) has been the most common administration route for chemotherapeutic agents, and it is associated with the need for hospitalization and a range of significant adverse drug reactions. A new generation of chemotherapies that are orally administered has been introduced into practice as a superior and more efficient therapeutic alternative. Oral anticancer drugs (OACDs) have been shown to eliminate the need for hospitalization, decrease the rate of adverse drug reactions and, ultimately, improve patients' quality of life. Economically, this translates into a reduction in inpatient hospitalization costs, including several of the associated costs, such as the cost of treating side effects. A disadvantage of OACDs, however, is their increased acquisition cost compared to that of the intravenously administered alternatives. This has resulted in resistance to including OACDs in several international insurance schemes and drug formulary practices, including in Qatar. Objectives: The current project sought to analyze the medical literature on published economic evaluations (pharmacoeconomics) of OACDs, especially in comparison with the parenteral alternatives. This will identify the decision-analytic modeling conducted as well as the variety of methods used. Strengths and weaknesses of study designs will be determined, including gaps in knowledge. Methodology: A thematic systematic review was conducted using the search engines PubMed, Medline, EconLit, Embase and the Economic Evaluation Database. The following three categories were considered: (i) therapy (chemotherapy [Mesh]); (ii) dosage form (oral [Mesh]); and (iii) research design (economics [Mesh] OR cost-benefit analysis [Mesh]). Included studies were full-text, English articles incorporating comparative economic evaluations of oral chemotherapies. Excluded studies were non-comparative, not based on economic models, of secondary indications (not cancer), and/or reviews. This process was followed by two stages of manual exclusion, based first on title/abstract content and then on the full-text article content. A data extraction form was developed and pilot tested for the purpose of data collection. Article inclusion and data collection were conducted twice, each by a different investigator. Included articles were finally summarized according to methodological themes of interest. Results: A total of 235 records were identified. After screening and removing duplicates, 18 studies were deemed eligible for inclusion. The pharmacoeconomic evaluations were mostly cost-utility analyses (13 out of 18), measuring cost per quality-adjusted life year (QALY) gained, and mostly from the payer perspective (15 out of 18). Primary sources of clinical and economic data were randomized clinical trials, expert panels and medical charts. Other sources included medicine databases, reimbursement schedules, drug policies and price lists, treatment guidelines, case reports and patient interviews. In 13 out of 18 cases, dominance status was reported in favor of OACDs, in relation to cost and/or clinical effect. Decision-analytic modeling was used in the majority of studies, mostly Markov modeling for the simulation of lifelong use of the drugs.
Sensitivity analyses were conducted in most studies, mostly one-way sensitivity analyses to ensure the robustness of study results. The types of cancers in which the effect of OACDs was studied were metastatic renal carcinoma, gastrointestinal tumors, colon cancer, chronic myeloid leukemia and non-small cell lung cancer. Most included articles were published during the last seven years. Most studies were conducted in the UK, US and Europe, while none were conducted in Australia or the Middle East. Conclusion: This is the first systematic review of the economic methods used in the evaluation of OACDs. There seems to be a recent, growing interest in this type of research, in which the QALY measurement is the priority for decision making on the comparative value of OACDs in practice. Most importantly, despite the higher acquisition cost, OACDs were demonstrated to be mostly superior to the parenteral alternatives. Furthermore, decision-analytic modeling, mostly Markov modeling, is valued and enables a structured decision analysis of therapies. Pharmacoeconomic research is difficult to generalize, as published economic evaluations are locally specific, especially for the purpose of practical interpretation. The current review of the literature proposes valuable methods for local Qatari implementation and for the guidance of decision makers. This is most relevant to the National Center for Cancer Care & Research (NCCCR), the only tertiary provider of cancer therapy in Qatar, where confusion over the use of oral chemotherapies exists, particularly for the therapies vinorelbine and capecitabine.
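As a hedged illustration of the decision-analytic modeling summarized above, the sketch below runs a tiny two-state Markov cohort model for an oral versus a parenteral regimen and reports the incremental cost per QALY gained; all inputs are invented for demonstration and are not taken from the reviewed studies.

```python
# Two-state (alive/dead) Markov cohort model with monthly cycles; inputs are invented.
def markov_cohort(p_death_per_cycle, cost_per_cycle, utility, n_cycles=120):
    """Return (total cost, total QALYs) per patient over monthly cycles."""
    alive, cost, qalys = 1.0, 0.0, 0.0
    for _ in range(n_cycles):
        cost += alive * cost_per_cycle
        qalys += alive * utility / 12.0          # monthly fraction of a QALY
        alive *= (1.0 - p_death_per_cycle)       # cohort fraction surviving the cycle
    return cost, qalys

cost_oral, qaly_oral = markov_cohort(p_death_per_cycle=0.020, cost_per_cycle=4000, utility=0.75)
cost_iv,   qaly_iv   = markov_cohort(p_death_per_cycle=0.025, cost_per_cycle=3500, utility=0.65)

icer = (cost_oral - cost_iv) / (qaly_oral - qaly_iv)   # incremental cost-effectiveness ratio
print(f"ICER = {icer:,.0f} per QALY gained")
```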
-
-
-
Assessment of HER2 Status of Metastatic Breast Carcinoma on Cell Block Preparations of Fine Needle Aspirates is Unreliable
Authors: Vignesh Shanmugam, Syed Hoda, Thomas Dilcher, Adam Pacecca and Rana Hoda
Objectives: HER2 status of breast carcinoma is a powerful prognostic and predictive biomarker, particularly in the metastatic setting. Limited data are available regarding the assessment of HER2 on cell block preparations (CBP). The primary objective of this study was to assess the correlation between HER2 results obtained via immunohistochemistry (IHC) and fluorescence in-situ hybridization (FISH) in cases of metastatic breast carcinoma (MBC) on CBP. Secondary objectives included the study of inter-observer variability in the interpretation of HER2 on IHC, and concordance between HER2 results on CBP and formalin-fixed paraffin-embedded material (FFPEM). Materials and Methods: Cases of MBC diagnosed on fine needle aspirates (FNA) with HER2 testing performed via IHC and FISH on CBP over 5 years (2010–2015) were reviewed. CBP material was fixed in an ethanol-based fixative (CytoRich Red Fixative system, BD). HER2 IHC was performed using polyclonal antibodies against Cerb-2 (Dako 0485). HER2 FISH testing was performed using the LSI HER2/neu/CEP17 probes (Vysis/Abbott Molecular Inc., Des Plaines, IL). Results: Seventeen cases (all female, median age: 59) were analyzed. 41% of CBP were products of bone FNA (7/17). Other sites included lymph node (3), lung (2), pleural fluid (2), liver (1), skeletal muscle (1) and mesentery (1). The median interval between diagnosis of the primary carcinoma and FNA of the metastasis was 5 years (range: 10 months–32 years). FISH was inconclusive due to suboptimal specimen quality in 2 cases. Correlation between IHC and FISH results was as follows: IHC 0/1+ (0/2; 0% amplification), IHC 2+ (2/12; 16.7% amplification) and IHC 3+ (0/1; 0% amplification). Inter-observer agreement in IHC scoring between 2 pathologists who independently reviewed the IHC slides was fair (66.7% agreement, κ = 0.31). Comparison of HER2 results on CBP with FFPEM (primary carcinoma or metastasis) showed high discordance and only slight agreement (discordance rate = 37.5%; κ = 0.02). Conclusions: In this study, (a) 16.7% of MBC cases that scored 2+ on IHC showed amplification on FISH, (b) there was poor inter-observer agreement in HER2 scoring of IHC on CBP, and (c) there was high discordance between HER2 results obtained on CBP and FFPEM. Our results indicate that HER2 testing of MBC on CBP may be unreliable.
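For readers unfamiliar with the agreement statistic reported above, the following sketch computes raw agreement and Cohen's kappa on hypothetical HER2 IHC scores from two observers; the actual study scores are not reproduced here.

```python
# Hypothetical IHC scores from two pathologists; illustrates the kappa calculation only.
from sklearn.metrics import cohen_kappa_score

pathologist_1 = ["2+", "2+", "1+", "3+", "2+", "0", "2+", "1+", "2+", "2+", "3+", "2+"]
pathologist_2 = ["2+", "1+", "1+", "3+", "2+", "0", "1+", "1+", "2+", "3+", "3+", "2+"]

kappa = cohen_kappa_score(pathologist_1, pathologist_2)   # chance-corrected agreement
agreement = sum(a == b for a, b in zip(pathologist_1, pathologist_2)) / len(pathologist_1)
print(f"raw agreement = {agreement:.1%}, kappa = {kappa:.2f}")
```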
-
-
-
Decision-Analytic Modeling in the Economic Evaluations of Systemic Antifungals for the Prophylaxis against Invasive Fungal Infections – A Thematic Systematic Review
Background: The interest in economic evaluations of “prophylactic” systemic antifungals is on the rise, especially with the emergence of newer, expensive agents for the prophylaxis of invasive fungal infections (IFI). Decision-analytic modeling is a systematic approach that has become integral to the economic evaluation process for the purpose of simplifying decision making. This systematic review aims to identify the prevalence of decision-analytic modeling in the pharmacoeconomic literature on prophylactic therapies for systemic fungal infections, to identify variations in the model designs used, and to define specific areas of strengths and weaknesses. Method: A systematic literature search was conducted using the e-databases PubMed, Medline, Embase, Economic Evaluation, EconLit, and Cochrane to obtain all model-based economic evaluations of antifungal agents. Search terms fell under three categories: (i) therapy (antifungal agent [Mesh] OR prophylaxis); (ii) disease (mycosis [Mesh] OR fungal disease [Mesh] OR invasive OR systemic); and (iii) research design (economics [Mesh] OR decision analysis [Mesh] OR costs and cost analysis [Mesh]). Publications were included if they were journal articles, full-text publications, human studies, and in English. Articles were excluded if they were reviews, or studied topical antifungals, non-invasive infections, or non-economic models. Journal article inclusion and data extraction, via a data collection form, were conducted twice, each by a different researcher. Results: Out of 841 citations, 19 articles were eligible for inclusion. Most studies were relatively recent, conducted in 2008–2013. Seventeen of them sourced clinical data from pooled randomized controlled trials. Evaluations were mostly from the USA (7), the remainder from Australia, Canada, Spain, the Netherlands, Korea, Greece, France, Germany, and Switzerland (1–2 articles each). All articles utilized the cost-effectiveness method using decision tree models, including 10 that used Markov modeling for simulating the future use of medications; this was, as appropriate, associated with discounting as the type of cost adjustment. Drug comparisons in the included studies (27/29 comparisons) were mostly between older, cheaper antifungals and newer, more expensive ones. The 19 articles incorporated 15 studies with a cost per life year gained measure, six with cost per IFI avoided, one with cost per quality-adjusted life year, and four with a cost saving per patient measure. Importantly, the same clinical measures were defined differently in different studies. Most studies reported a dominance status, the majority in favor of posaconazole (9 out of 12), and five studies required incremental cost-effectiveness ratio analysis. Only direct medical costs were considered in the studies, even though six articles adopted a societal perspective rather than the hospital perspective. All articles adjusted costs either for inflation (9/19 articles) or for discounting. Fourteen articles used only one-way sensitivity analysis, while a few combined it with multivariate (2) or scenario (3) analyses. Conclusion: Decision making in relation to prophylactic antifungals is not complex, including in its economic considerations, as a straightforward therapy dominance status was demonstrated in the majority of studies.
Most importantly, the literature evidence on the cost-effectiveness of systemic antifungals is not cumulative in nature, because the same outcomes are defined differently across studies. This also means that the economic models in the literature are incomparable and not generalizable, since different decision makers appear to be interested in different outcomes, including for the same antifungal agent. Studies are limited by not considering the cost of side effects and alternative therapy options. Further studies are needed to compare among the newer, more expensive agents, where evidence is lacking. Studies should also be enhanced by better adherence to guidelines on standardized definitions of health states, enabling cumulative evidence generation and generalizability of findings.
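As a rough sketch of the decision-tree logic and one-way sensitivity analysis common to the reviewed evaluations (with invented numbers, not data from the included studies), each prophylaxis arm branches into IFI/no-IFI outcomes and the expected costs and effects are compared:

```python
# One-level decision tree with an IFI / no-IFI branch per strategy; inputs are invented.
def expected_outcome(p_ifi, cost_prophylaxis, cost_ifi_episode, ly_no_ifi, ly_ifi):
    """Expected (cost, life-years) for one prophylaxis strategy."""
    exp_cost = cost_prophylaxis + p_ifi * cost_ifi_episode
    exp_ly = p_ifi * ly_ifi + (1 - p_ifi) * ly_no_ifi
    return exp_cost, exp_ly

# hypothetical newer vs. older prophylactic antifungal
cost_new, ly_new = expected_outcome(0.05, 6000, 40000, 8.0, 5.0)
cost_old, ly_old = expected_outcome(0.11, 1500, 40000, 8.0, 5.0)
icer = (cost_new - cost_old) / (ly_new - ly_old)
print(f"base case: {icer:,.0f} per life-year gained")

# one-way sensitivity analysis on the IFI probability under the newer agent
for p in (0.03, 0.05, 0.08):
    c, ly = expected_outcome(p, 6000, 40000, 8.0, 5.0)
    print(f"p_IFI = {p:.2f}: ICER = {(c - cost_old) / (ly - ly_old):,.0f}")
```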
-
-
-
Criminalizing Domestic Violence in Qatar: A Case Study of Student Activism
Globally, gender-based violence affects one out of every three women. Recently, the alarming rise in reported cases of domestic violence in Qatar has led to a national call to find an effective way to deal with the issue. This paper documents the efforts of a group of Qatar University students to do just that: draft legislation to criminalize domestic violence. The research project involves eight Qatar University male and female undergraduate students from five different countries (Bahrain, Pakistan, Egypt, Nigeria and Qatar), and three faculty members from different countries (Palestine, Egypt and Saudi Arabia).
In order to determine the status of the societal and legal protection currently provided to victims of domestic violence, interviews were conducted with law enforcement authorities, judges, religious scholars/leaders, medical professionals and victims of domestic violence themselves. Analysis of the interviews, along with the official documentation provided by institutions (such as hospitals, police departments, and shelters), identified systematic weaknesses and legal loopholes. A benchmarking of legislation in the Arab and Muslim world was then conducted in order to develop a conceptual framework for a comprehensive protection system for female victims of domestic violence in Qatar.
-
-
-
Knowledge of MERS-Corona Virus among Female Students at Qatar University
Authors: Zahra Al-Muhafda, Maryam Qaher, Eman Faisal, Heba Abushahla, Rana Kurdi and Ghadir Al-Jayyousi
Middle East Respiratory Syndrome (MERS) is a severe, acute respiratory illness caused by a coronavirus (CDC, 2015). Globally, 1599 cases of MERS coronavirus infection and at least 574 related deaths have been reported since 2012 (WHO, 2015). In Qatar, a total of 17 cases were reported between November 2013 and May 2015 (WHO, 2015). The routes of transmission of MERS-CoV include direct contact with an infected person, touching contaminated objects or surfaces and then touching one's mouth, nose or eyes, and direct contact with infected camels. A study in Qatar detected MERS-CoV in nose swabs from 3 of 14 camels; it also showed that the virus fragments were similar to those found in two human cases from the same farm (Haagmans et al., 2014). Another study, conducted at Al-Shahaniya, Qatar in 2014, confirmed the presence of MERS-CoV in the milk of five camels, indicating that camels are a source of transmission of MERS-CoV in the State of Qatar (Reusken et al., 2014). The purpose of this study was to examine knowledge of MERS-CoV transmission, symptoms and prevention techniques among female students at Qatar University, and further to evaluate the effect of an awareness event organized by the Public Health Program. Participants (N = 33) were female students at Qatar University aged 18–26 years. Public health students designed a survey to test knowledge of MERS-CoV transmission, symptoms and prevention techniques among female students at Qatar University. A pre-test survey was distributed at an awareness event on MERS-CoV held on 19 November 2015 at the College of Arts and Sciences. The survey included questions about demographics such as age, college, and nationality. It also included five questions to examine the level of knowledge about transmission routes of the virus, symptoms associated with the infection, prevention techniques, and the preferred strategy for educating students about the disease. Later, participants attended activities organized by public health students to be educated about MERS-CoV. They were exposed to epidemiological facts through distributed flyers and screen slides. The transmission routes were explained to the students using a creative and meaningful poster; students were also informed about the symptoms by another poster and a demonstration of a MERS-CoV model. The prevention techniques for MERS-CoV were also explained through a poster and attractive, colorful brochures. The same respondents answered the same questions as a post-test to measure changes in knowledge about MERS-CoV. For the analysis, SPSS software was used to analyze the pre- and post-test data. McNemar's test was used to compare the pre- and post-test results, and a p-value less than 0.05 was considered significant. The results showed that the percentages of respondents aged 18–20 years and 21–23 years were the same (45.5% for both age groups). The majority of the respondents were from the College of Science (57.6%); however, none were from the College of Medicine or the College of Law. Moreover, owing to the high diversity at Qatar University, students from different nationalities participated in the survey, including Qatari, other Gulf countries, Egyptian, Palestinian, Iranian, Jordanian, Sudanese, Pakistani and others.
Most of the students were Qatari (21.2%), whereas Iranians and Pakistanis had the lowest numbers of respondents (3.0% each) (see Table 1). The results showed that prior to the educational event, the majority of the respondents thought that they did not have enough knowledge about MERS coronavirus (54.5%). However, after the event, the majority agreed that they had enough knowledge about MERS-CoV (McNemar's test, P < 0.001). In addition, the findings regarding transmission routes showed that the largest proportion of respondents (33.3%) did not know any of the transmission routes, whereas after the event 78% of the respondents were aware of all the transmission routes (McNemar's test, P < 0.001). Next, most respondents knew about the symptoms associated with MERS coronavirus in the pre-test (51.5%); in the post-test the majority again reported knowing these symptoms, and their knowledge improved compared to the pre-test (90.9%) (McNemar's test, P = 0.001). Regarding prevention, most respondents chose hand washing as a preventive method (33.3%) in the pre-test. After the event, respondents were aware of all of the preventive methods, but the change was not statistically significant (McNemar's test, P = 0.424). Finally, in the pre-test regarding the best educational methods, most respondents indicated that all the proposed strategies were effective for education about MERS coronavirus (75.8%). The percentage increased in the post-test (84.4%), but without statistical significance (McNemar's test, P = 0.375) (see Table 2). Future research should focus on the comprehensive educational interventions that are needed to facilitate the adoption of precautions against MERS-CoV, and on follow-up studies to see whether such educational interventions promote changes in students' knowledge and behavior.
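A minimal sketch of the paired pre/post comparison described above, using McNemar's test on a hypothetical 2×2 table of correct/incorrect answers; the counts are illustrative, not the study data.

```python
# Paired pre/post comparison with McNemar's test; counts are hypothetical.
from statsmodels.stats.contingency_tables import mcnemar

table = [[10, 2],    # correct pre-test:   10 stayed correct, 2 became incorrect
         [15, 6]]    # incorrect pre-test: 15 became correct, 6 stayed incorrect
result = mcnemar(table, exact=True)
print(f"McNemar p = {result.pvalue:.3f}")   # p < 0.05 -> knowledge changed significantly
```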
References
Centers for Disease Control and Prevention (2015). Middle East Respiratory Syndrome (MERS). Retrieved from: http://www.cdc.gov/coronavirus/mers/.
Haagmans, B. L., Al Dhahiry, S. H., Reusken, C. B., Raj, V. S., Galiano, M., Myers, R.,… & Koopmans, M. P. (2014). Middle East respiratory syndrome coronavirus in dromedary camels: an outbreak investigation. The Lancet infectious diseases, 14(2), 140–145.
Reusken, C. B., Farag, E. A., Jonges, M., Godeke, G. J., El-Sayed, A. M., Pas, S. D., & Koopmans, M. P. (2014). Middle East respiratory syndrome coronavirus (MERS-CoV) RNA and neutralising antibodies in milk collected according to local customs from dromedary camels, Qatar, April 2014. Euro Surveill, 19(23), pii20829.
World Health Organization (2015). Middle East respiratory syndrome coronavirus (MERS-CoV) – Qatar. Retrieved from: http://www.who.int/csr/don/11-february-2015-mers-qatar/en/. On: November 12, 2015.
World Health Organization (2015). Middle East respiratory syndrome coronavirus (MERS-CoV) – Republic of Korea. Retrieved from: http://www.who.int/csr/don/25-october-2015-mers-korea/en/. On: November 12, 2015.
-
-
-
Picture Archive Communication (PAC) System with extended Image Analysis and 3D Visualization for Cardiac Abnormality
PAC systems are extending increasingly from the now well-established radiology applications into hospital-wide PACS, and new challenges are accompanying this spread into other clinical fields. With awareness of the importance of PAC systems among various medical experts, the system presented here has been enhanced along the PAC system pipeline, with simplified image display for analysis through interaction with the user. Generally, a PAC system consists of medical image and patient data acquisition, storage, and display subsystems integrated by digital networks and application software; it facilitates the systematic utilization of medical imaging for patient care. However, most available PAC systems do not provide the image analysis required by the clinical expert. Where a PAC system does have this element, it needs interaction or intervention from the clinical expert as the user, and the PAC system storage is mostly unstructured, with no analysis element or report modules. Unfortunately, in most cases of web-based PAC systems, retrieval and visualization of the required images from outside the hospital are delayed. Most PACS with a 3D display function do not communicate information clearly and efficiently to users (clinical experts), and most of the visualizations do not present accurate information as required by the clinical expert. These constraints limit the clinical expert's perspective when making decisions.
From market validation observations, we concluded that most PAC systems available on the market do not have medical image processing functions for the purposes of decision analysis with minimum user interaction. Research addressing this limitation has been conducted in accordance with the needs of clinical experts. Among these studies are: (i) angiography image processing for stenosis position detection and measurement of its dimensions, (ii) echocardiography image processing for detection of ventricular cardiac abnormalities (walls and volume) and (iii) reconstruction of 2D angiography images into 3D images for display purposes and to identify the location of the arterial tree.
Based on the results of these studies, the PAC system will be integrated with extra modules, namely: (i) a 3D reconstruction function from a single-image angiogram with identification of the stenosis location, (ii) identification of heart wall and chamber abnormalities, (iii) 3D reconstruction of the left and right ventricles from echocardiography, and (iv) 3D fusion of CTA, angiography and MRI.
As stated above, the common limitation of web-based PAC systems is the delay in retrieving and visualizing the required images from outside the hospital/clinics. To overcome this limitation, we proposed a technique, and integrated a related function, for faster transmission of the processed image without sacrificing any important information. To complete the PAC system so that it is able to compete with the PAC systems currently available on the market, we link the PACS with our Patients Clinical Record Database, with report modules as required by the medical expert.
The outputs of all these studies will be integrated with the PAC system. Each research output has been tested and validated by a number of cardiac experts, the patients' clinical record database has been tested at UKMMC, and the PAC system is currently being beta tested in a private clinic in Kelang, Selangor, Malaysia. Eventually, the PAC system with the Patients Clinical Record Database will be integrated with these image analysis and 3D image visualization modules, and it is planned to be tested at the Veterinary Hospital of Universiti Putra Malaysia.
To move this project towards commercialization, we have distributed questionnaires to clinics in the areas of Bangi (Selangor, Malaysia) and Nilai (Negeri Sembilan, Malaysia). Fifty-eight medical, veterinary and dental clinics received the questionnaire (we are currently expanding the distribution towards Serdang and Kajang, Selangor, Malaysia). Of the 58 clinics: (a) 16% are interested in collaborating and look forward to seeing the PAC system, (b) 8% are interested in the PAC system but are not willing to have a demo, (c) 44% are not interested but are open to a demo of the system, (d) 28% are not interested and not willing to have a demo, and (e) 4% returned the form without answering that particular question.
To secure and protect ownership, each research output has been submitted for patent filing in Malaysia and in three chosen countries, and two of the filed patents have been granted in Malaysia. We have also copyrighted each module. This project has been selected by Universiti Putra Malaysia to be commercialized by a startup company seeded by UPM (CASD Medical Private Limited) under the INNOHUB program.
We realize that in implementing the complete PAC system in a hospital there are 10 main problems that we need to overcome or whose consequences we need to minimize. These problems are: i) integration with the Hospital Information System. Although a lack of inter-vendor device and IT integration can often make the problem worse, the market is improving as providers and meaningful-use requirements demand greater integration. Unfortunately, many radiologists and PACS administrators still prefer to make full use of hospital IT to configure their own systems and achieve a bit more autonomy; ii) every system has downtime, both scheduled and unscheduled, for which alternate workflows need to be established; downtime need not be serious if its effect on patient care is minimized; iii) non-standardized hanging-protocol display is a common and pesky challenge for PACS users. Images from different modalities are not organized by default, even though each of them is generally transmitted through a DICOM format gateway, so each study takes a little longer to read. As the number of scanners increases and the mix of vendors expands, the problem grows worse; iv) integration problems concern hardware, from digitizing pre-DICOM modalities to integrating systems for advanced image reconstruction. Add-ons like a DICOM converter can help squeeze additional value out of older CT, angiography and fluoroscopy systems; v) as with downtime, failures are unavoidable, so there is a need to demonstrate strong support activities; vi) effective training can be a cost-effective way to demonstrate to administrators and physicians many of PACS' underused and undervalued features. Training will help expose staff to what the system can do to make their jobs easier and more efficient; vii) the migration of data to the new PACS is often the most challenging part of the process, both in negotiating the release of data from the current PACS and in sorting out all the data entry errors that have accumulated over the lifespan of the system; viii) as other specialties realize the value of PACS, the system is slowly being taken out of radiologists' hands. PACS has become a mission-critical, enterprise-wide tool used by nearly all specialties. With this change, decision-making for PACS-related purchases, upgrades and configurations has, in some cases, shifted from radiologists to a more central process; ix) hiring a certified professional ergonomist to evaluate the department's workstations can ease radiologists' repetitive stress symptoms and contribute substantially to productivity. Despite accelerating advances in technology, many interface tools have changed little since the introduction of PACS; and finally x) like business continuity, disaster recovery can prevent a painful experience from becoming fatal. Many hospitals opt for redundant servers, cloud storage or both. At the very least, preparation for downtime can spare physicians and patients from experiencing significant losses.
To minimize the consequences of these 10 main problems, this project (in its early stage) targets potential customers among the owners of small private clinics, where patient numbers are lower and administrative bureaucracy is limited.
-
-
-
Public-Key Cryptosystem Based on Invariants of Groups
Authors: Frantisek Marko, Martin Juras and Alexandr N. Zubkov
The presented work falls within one of Qatar's Research Grand Challenges, namely the area of Cyber Security. We have designed a new public-key cryptosystem that can improve the security of communication networks. This new cryptosystem combines two important ideas. The first idea is the difficulty of finding an invariant of an infinite diagonalizable group. More specifically, for the coding purposes, we build an infinite diagonalizable group that has a given polynomial invariant of high degree. One possible attack on this cryptosystem is to find an invariant of this group. However, during the design of the system we guarantee that the minimal degree of invariants of this group is very high, which makes a direct attempt to find any invariant using linear algebra techniques computationally expensive. The second idea is based on our discovery that, when working over the ring Z of integers, another attack to break the cryptosystem is possible. This attack is based on replacing the prime factorization of integers by finding a factorization of “atoms” and can be implemented using the Euclidean algorithm. A similar algorithm works for rings that are unique factorization domains. To prevent this type of attack, we work over number fields that are not unique factorization domains (there are many such suitable number fields of small degree). By doing so we invoke the well-known problem of prime factorization (used in commercial cryptosystems like RSA), which becomes fundamentally more complicated over number fields that are not unique factorization domains. We have also shown that similar systems based on finite diagonalizable groups are not secure, because such a system can be broken in polynomial time using an algorithm that finds a root of every polynomial p(x) with complex coefficients all of whose roots are roots of unity. All invariants considered for diagonalizable groups are linear combinations of monomial invariants. The situation is more complicated for diagonalizable supergroups. Since it could improve the safety of the cryptosystem, we also investigated the case of supergroups and derived theoretical results about the minimal degrees of invariants. Since diagonalizable supergroups enjoy more complicated structural properties, a cryptosystem based on them would be even more secure. In order to move to supergroups, a better understanding of general linear supergroups was desired. We have established theoretical results describing and proving the linkage principle for these supergroups, and we gained an understanding of how the composition factors of highest weight modules are related. For future work, we plan to implement the algorithm for the public-key cryptosystem that we have designed, test the speed of coding, and test the security of the system by determining the time and space complexities of known attacks on the system.
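As a toy illustration of why the minimal degree of invariants can be forced to be high (our own simplified example, not the authors' actual construction), consider a one-dimensional diagonalizable group acting on two variables:

```latex
% Toy example (not the authors' construction): minimal invariant degree for a
% one-dimensional diagonalizable group acting on two variables.
Let the torus $T=\mathbb{G}_m$ act on $k[x,y]$ by $t\cdot x = t^{a}x$ and
$t\cdot y = t^{-b}y$ with $a,b>0$. A monomial $x^{i}y^{j}$ is invariant iff
$ai - bj = 0$, hence
\[
  k[x,y]^{T} \;=\; k\!\left[\,x^{\,b/g}\,y^{\,a/g}\,\right],
  \qquad g=\gcd(a,b),
\]
and the minimal degree of a nonconstant invariant is $(a+b)/g$. Taking $a$ and
$b$ coprime and large forces this minimal degree to be $a+b$, which is the kind
of lower bound that makes a direct linear-algebra search for invariants expensive.
```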
-
-
-
Fast Prototyping of KNN Based Gas Discrimination System on the Zynq SoC
Electronic noses (EN), or machine olfaction systems, are used for the detection and identification of odorous compounds and gas mixtures. The accuracy of such systems is as important as the processing time; therefore, the choice of the algorithm and the implementation platform are both crucial. In this abstract, the design and implementation of a gas identification system on the Zynq platform, which shows promising results, is presented. Zynq-7000 based platforms are increasingly being used in different applications, including image and signal processing. The Zynq system on chip (SoC) architecture combines a processing system based on a dual-core ARM Cortex processor with programmable logic (PL) based on Xilinx 7-series field programmable gate arrays (FPGAs). Using the Zynq platform, real-time hardware acceleration of classification algorithms can be performed on the PL and controlled by software running on the ARM-based processing system (PS). The gas identification system is based on an in-house fabricated 16-element SnO2 gas sensor array and k-Nearest Neighbors (KNN) classification. The KNN algorithm is executed on the PL for hardware acceleration. The implementation takes the form of an IP developed in C and synthesized using Vivado High Level Synthesis (HLS); the synthesis includes the conversion from C to register transfer level (RTL). The implementation requires the creation of a hardware design for the entire system that allows the execution of the IP on the PL and the remaining parts of the identification system on the PS. The hardware design is developed in Vivado using IP Integrator. The communication between the PS and PL is performed using the advanced extensible interface (AXI) protocol. A software application is written and executed on the ARM processor to control the hardware acceleration of the previously designed IP core on the PL, and the board is programmed using the Software Development Kit (SDK). An overview of the system architecture can be seen in Figure 1. The system is designed to discriminate five types of gases, namely C6H6, CH2O, CO, NO2 and SO2, at various concentrations: from 0.25 to 5 parts per million (ppm) for C6H6 and CH2O, from 5 to 200 ppm for CO, from 1 to 10 ppm for NO2 and finally from 1 to 25 ppm for SO2. The experimental setup used in the laboratory to collect the data is shown in Figure 2. It consists of a gas chamber where the sensor array is placed. The gas chamber has two orifices, one serving as an input for the in-flow of gases and the other as an exhaust to evacuate the gases. The gases are stored in various cylinders and connected to the gas chamber individually through several Mass Flow Controllers (MFCs). A control unit is connected to the MFCs to control the in-flow of gases, and to the sensor array via a Data Acquisition (DAQ) system to collect and sample the response of the sensor array. In total, 192 samples were collected; 50% are used for training and the other 50% for testing. Simulations with different k values were performed in the MATLAB environment prior to the hardware implementation. The Euclidean distance was used as the metric for computing distances between points. The best results were obtained for k = 1 and k = 2, with classification accuracies of 97.91% and 98.95%, respectively. The system implemented on hardware is based on k = 1, since the accuracies are almost similar while the hardware resources required for k = 2 are much higher than for k = 1.
This can be explained by the fact that in the case of k = 2 we need to sort the vector of distances to find the nearest two neighbours, while for k = 1 we only need to find the smallest distance. The target hardware implementation platform of the proposed KNN is the heterogeneous Zynq SoC. The implementation is based on the use of Vivado HLS. A summary of the design flow is presented in Figure 3. The starting point is Vivado HLS, where the KNN block is converted from a C/C++ implementation to an RTL-based IP core. This allows a considerable gain in development time without sacrificing high parallelism, because Vivado HLS provides a large number of powerful optimization directives. The generated IP core is then exported and stored in the Xilinx IP Catalog before being used in Vivado IP Integrator to create the hardware block design with all needed components and interconnections. The next step is to export the generated hardware along with the IP drivers to the SDK tool. The SDK tool is used to program the Xilinx ZC702 prototyping board via the joint test action group (JTAG) interface, and the terminal in SDK is used to communicate with the board via the universal asynchronous receiver/transmitter (UART) interface. The KNN IP is implemented on the PL of the Zynq SoC and communicates with the PS part via the Xilinx AXI-Interconnect IP. A software application is written in C/C++ and executed on the PS to manage the IP present in the PL in terms of sending the input data, waiting for the interrupt and then reading the output data. The block design and the resulting chip layout are shown in Figure 4. It is worth mentioning that the running frequency of the ARM processor is set to the maximum of 667 MHz, while the PL frequency is set to 100 MHz, which is the maximum for the KNN IP generated in HLS. The real execution of KNN on the PL side of the ZC702 board shows that one sample can be processed for gas identification in 0.0078 ms, while the same sample requires 0.9228 ms if executed on the PS side in the ARM processor in a pure software manner. This means that a speed-up of 118 times has been achieved. The main directive in Vivado HLS that helped reach this performance is “loop pipelining”, which allows the operations in a loop to be implemented in a concurrent manner. The hardware resource usage can be seen in Figure 5: 24% of lookup tables (LUT), 12% of flip-flops (FF), 6% of BRAM and 58% of DSP blocks have been used. As shown in Figure 6, the total power consumption is 1.895 W; 1.565 W is consumed by the PS and the remaining 0.33 W by the PL.
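For reference, a minimal sketch of the classification step itself: a 1-nearest-neighbour classifier with Euclidean distance over 16-element sensor-array feature vectors. This mirrors the k = 1 algorithm rather than the HLS C implementation, and the feature values and labels are placeholders.

```python
# 1-NN with Euclidean distance; training vectors and labels are placeholders.
import numpy as np

def knn1_predict(train_features, train_labels, sample):
    """Return the label of the training vector closest (Euclidean) to `sample`."""
    distances = np.linalg.norm(train_features - sample, axis=1)
    return train_labels[int(np.argmin(distances))]

# hypothetical 16-dimensional sensor responses for two gases
train_features = np.array([[0.10] * 16, [0.12] * 16, [0.80] * 16, [0.85] * 16])
train_labels = ["CO", "CO", "NO2", "NO2"]
print(knn1_predict(train_features, train_labels, np.array([0.78] * 16)))  # -> "NO2"
```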
-
-
-
Robust Controller and Fault Diagnoser Design for Linear Systems with Event-based Communication
Authors: Nader Meskin, Mohammadreza Davoodi and Kash Khorasani
In order to improve the effectiveness and safety of control systems, the problem of integrated fault diagnosis and control (IFDC) design has attracted significant attention in recent years, both in the research and in the application domains. The integrated design unifies the control and diagnosis units into a single unit, which leads to less complexity as compared to the case of separate designs. Nowadays, IFDC modules are implemented on digital platforms. However, in almost all of these implementations, the IFDC task is executed periodically with a constant sampling period, which is called “time-triggered” sampling. The time-triggered sampling scheme produces many useless messages if the current sampled signal has not significantly changed with respect to the previous sampled signal, which leads to wasteful usage of the communication bandwidth. This is especially disadvantageous in applications where the measured outputs and/or the actuator signals have to be transmitted over a shared (and possibly wireless) communication network, where the bandwidth of the network (and power consumption of the wireless radios) should be constrained. To mitigate the unnecessary waste of computation and communication resources in conventional time-triggered IFDC design, the problem of event-triggered integrated fault diagnosis and control (E-IFDC) for discrete-time linear systems is considered in this paper. A single E-IFDC module based on a dynamic filter is proposed which produces two signals, namely the residual and the control signals. The parameters of the E-IFDC module should be designed such that the effects of disturbances on the residual signals are minimized (for accomplishing the fault detection objective), subject to the constraint that the mapping matrix function from the faults to the residuals is equal to a pre-assigned diagonal mapping matrix (for accomplishing the fault isolation objective), while the effects of disturbances and faults on the specified control output are minimized (for accomplishing the fault-tolerant control objective). Two event-triggered conditions are proposed and designed to reduce the transmissions from the sensor to the E-IFDC module and from the E-IFDC module to the actuator. These event-triggered conditions determine whether the newly measured data or control output, respectively, should be transmitted or not. Indeed, the sensor measurement (controller output) is sent to the E-IFDC module (actuator) only when the difference between the latest transmitted sensor (controller) value and the current sensor measurement (controller output) is sufficiently large as compared to the current sensor (controller) value. This property reduces the burden on the network communication and saves communication bandwidth in the network. Consequently, it is possible to significantly reduce the usage of communication resources for diagnosis and control tasks as compared to a conventional time-triggered IFDC approach. A multi-objective formulation of the problem is presented based on the H∞ and H- performance indices. The sufficient conditions for solvability of the problem are obtained in terms of linear matrix inequality (LMI) feasibility conditions. Indeed, the filter parameters and the event-triggered conditions are simultaneously obtained using strict LMI conditions.
The main advantage of the proposed LMI formulation is that it is convex and can therefore be solved efficiently using interior-point methods. Application of our methodology to a linearized model of the Subzero III ROV is presented to illustrate its effectiveness and capabilities. Remotely operated vehicles (ROVs) are underwater robotic platforms that have become increasingly important tools in a wide range of applications including offshore oil operations, fisheries research, dam inspection, salvage operations and military applications, among others. Since transmission resources are limited under water, using an event-triggered scheme for communication is more efficient. Therefore, the results of this paper are applied to the design of an event-triggered IFDC module for the Subzero III ROV.
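The relative event-triggering rule described above can be written in a few lines. The following is a minimal sketch under our own naming and an assumed threshold parameter sigma; it is not the authors' implementation, and in the paper the triggering parameters are co-designed with the filter through the LMI conditions.

```cpp
// Illustrative sketch of the relative event-triggering rule described above (not the
// authors' implementation). The threshold sigma is an assumed tuning parameter.
#include <cmath>
#include <cstddef>
#include <vector>

class EventTrigger {
public:
    explicit EventTrigger(double sigma) : sigma_(sigma), initialized_(false) {}

    // Returns true if the new measurement (or controller output) y should be transmitted.
    bool shouldTransmit(const std::vector<double>& y) {
        if (!initialized_ || norm(diff(y, last_sent_)) > sigma_ * norm(y)) {
            last_sent_ = y;   // remember the latest transmitted value
            initialized_ = true;
            return true;
        }
        return false;         // change is not significant: skip this transmission
    }

private:
    static std::vector<double> diff(const std::vector<double>& a,
                                    const std::vector<double>& b) {
        std::vector<double> d(a.size());
        for (std::size_t i = 0; i < a.size(); ++i) d[i] = a[i] - b[i];
        return d;
    }
    static double norm(const std::vector<double>& v) {
        double s = 0.0;
        for (double x : v) s += x * x;
        return std::sqrt(s);
    }

    double sigma_;
    bool initialized_;
    std::vector<double> last_sent_;
};
```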
-
-
-
Annotation Guidelines and Framework for Arabic Machine Translation Post-Edited Corpus
Authors: Wajdi Zaghouani, Nizar Habash, Ossama Obeid, Behrang Mohit, Houda Bouamor and Kemal Oflazer
1. Introduction
Machine translation (MT) has become widely used by translation companies to reduce their costs and improve their speed; therefore, the demand for quick and accurate machine translations is growing. MT systems often produce incorrect output with many grammatical and lexical choice errors. Correcting machine-produced translation errors, or MT post-editing (PE), can be done automatically or manually.
The availability of annotated resources is required for such approaches. When it comes to the Arabic language, to the best of our knowledge, there are no manually post-edited MT corpora available to build such systems. Therefore, there is a clear need to build such valuable resources for the Arabic language. In this abstract, we present our guidelines and annotation procedure to create a human-corrected MT corpus for Modern Standard Arabic (MSA). The creation of any manually annotated corpus usually presents many challenges. In order to address these challenges, we created comprehensive yet simplified annotation guidelines, which were used by a team of five annotators and one lead annotator. In order to ensure high agreement between the annotators, multiple training sessions were held and regular inter-annotator agreement (IAA) measures were performed to check the annotation quality.
2. Corpus
We collected a corpus of 100K words of English news articles taken from the collaborative journalism website Wikinews. The collected corpus was then automatically translated from English to Arabic using the paid Google Translate API service.
3. Guidelines
In order to annotate the MT corpus, we use the general annotation correction guidelines we designed previously for L1, described in Zaghouani et al. (2014), and we add specific MT post-editing correction rules. In the general correction guidelines we place the errors to be corrected into seven categories: spelling, word choice, morphology, syntax, proper names, dialectal usage and punctuation. We refer to Zaghouani et al. (2014) for more details about these errors. In the MT post-editing guidelines, we provide the annotators with a detailed annotation procedure and explain how to deal with borderline cases. We include many annotated examples to illustrate specific cases of machine translation correction rules. Since there are equally accurate alternative ways to edit the machine translation output, all considered correct but some using fewer edits than others, the guidelines state that the machine-translated texts should be corrected with the minimum number of edits necessary to achieve acceptable translation quality. However, correcting accuracy errors and producing a semantically coherent text is more important than minimizing the number of edits; the annotators were therefore asked to pay attention to the following three aspects: accuracy, fluency and style.
4. Annotation Pipeline
The annotation team consisted of a lead annotator and six annotators. The lead annotator is also the annotation workflow manager of this project. He frequently evaluates the quality of the annotation and monitors and reports on the annotation progress. A clearly defined protocol is set, including a routine for post-editing annotation job assignment and inter-annotator agreement evaluation. The lead annotator is also responsible for the corpus selection and normalization process, besides the annotation of the gold standard used to compute the inter-annotator agreement (IAA) portion of the corpus.
The annotation itself is done using an in-house web annotation framework originally built for the manual correction of errors in L1 and L2 texts (Obeid et al., 2013). This framework includes two major components: 1. the annotation management interface, which is used to assist the lead annotator in the general workflow process and allows the user to upload, assign, monitor, evaluate and export annotation tasks; and 2. the MT post-editing annotation interface, the actual annotation tool, which allows the annotators to manually correct the MT Arabic output.
5. Evaluation
The low average WER of 4.92 obtained shows high agreement among the three annotators in the first round of post-editing. The results obtained with the MT corpus are comparable to those obtained with the L2 corpus; this can be explained by the difficult nature of both corpora and the multiple acceptable corrections in both.
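To make the agreement metric concrete, a word-level WER between two corrections of the same sentence can be computed as a normalized edit distance. The sketch below is illustrative only; the whitespace tokenization and the example sentences are assumptions, and it is not the project's evaluation code.

```cpp
// Illustrative word error rate (WER) computation between two corrected versions of the
// same sentence (not the project's evaluation code). Tokenization is a simple whitespace
// split, which is an assumption.
#include <algorithm>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

static std::vector<std::string> tokenize(const std::string& s) {
    std::istringstream iss(s);
    std::vector<std::string> tokens;
    std::string tok;
    while (iss >> tok) tokens.push_back(tok);
    return tokens;
}

static double wer(const std::vector<std::string>& ref, const std::vector<std::string>& hyp) {
    // Word-level Levenshtein distance (insertions, deletions, substitutions).
    std::vector<std::vector<size_t>> d(ref.size() + 1, std::vector<size_t>(hyp.size() + 1));
    for (size_t i = 0; i <= ref.size(); ++i) d[i][0] = i;
    for (size_t j = 0; j <= hyp.size(); ++j) d[0][j] = j;
    for (size_t i = 1; i <= ref.size(); ++i)
        for (size_t j = 1; j <= hyp.size(); ++j) {
            size_t cost = (ref[i - 1] == hyp[j - 1]) ? 0 : 1;
            d[i][j] = std::min({ d[i - 1][j] + 1,          // deletion
                                 d[i][j - 1] + 1,          // insertion
                                 d[i - 1][j - 1] + cost }); // substitution
        }
    return 100.0 * d[ref.size()][hyp.size()] / ref.size();
}

int main() {
    // Hypothetical example: two annotators' corrections of the same MT sentence.
    auto a1 = tokenize("the ministry announced the new budget yesterday");
    auto a2 = tokenize("the ministry announced a new budget yesterday");
    std::cout << "WER between annotators: " << wer(a1, a2) << "%\n";
    return 0;
}
```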
6. Related Work
Large-scale manually corrected MT corpora are not yet widely available due to the high cost of building such resources. For the Arabic language, we cite the effort of Bouamor et al. (2014), who created a medium-scale human judgment corpus of Arabic machine translation using the output of six MT systems, with a total of 1892 sentences and 22k rankings. Our corpus is part of the Qatar Arabic Language Bank (QALB) project, a large-scale manual annotation project (Zaghouani et al., 2014; Zaghouani et al., 2015). The project goal was to create an error-corrected 2M-word corpus covering online user comments on news websites, native speaker essays, non-native speaker essays and machine translation output.
7. Conclusion
We have presented in detail the methodology used to create a 100K-word English-to-Arabic manually post-edited MT corpus, including the development of the guidelines as well as the annotation procedure and the quality control procedure using frequent inter-annotator measures. The created guidelines will be made publicly available, and we look forward to distributing the post-edited corpus in a planned shared task on automatic error correction and to getting feedback from the community on its usefulness, as was the case in the previous shared tasks we organized for the L1 and L2 corpora (Mohit et al., 2014; Rozovskaya et al., 2015). We believe that this corpus will be valuable to advance research efforts in machine translation, since manually annotated data is often needed by the MT community. We believe that our methodology for guideline development and annotation consistency checking can be applied in other projects and other languages as well.
8. Acknowledgement
This project is supported by the National Priority Research Program (NPRP grant 4-1058-1-168) of the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
9. References
Obeid, O., Zaghouani, W., Mohit, B., Habash, N., Oflazer,K., and Tomeh, N. (2013). A Web-based Annotation Framework For Large-Scale Text Correction. In The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations, Nagoya, Japan, October.
Mohit, B., Rozovskaya, A., Habash, N., Zaghouani, W., and Obeid, O. (2014). The first QALB shared task on automatic text correction for Arabic. ANLP 2014, page 39.
Rozovskaya, A., Bouamor, H., Habash, N., Zaghouani, W., Obeid, O., and Mohit, B. (2015). The Second QALB Shared Task on Automatic Text Correction for Arabic. In Proceedings of the ACL 2015 Workshop on Arabic Natural Language Processing (ANLP), Beijing, China, July.
Zaghouani, W., Mohit, B., Habash, N., Obeid, O., Tomeh,N., Rozovskaya, A., Farra, N., Alkuhlani, S., and Oflazer, K. (2014). Large scale Arabic error annotation: Guidelines and framework. In International Conference on Language Resources and Evaluation (LREC 2014).
Zaghouani, W., Habash, N., Bouamor, H., Rozovskaya, A., Mohit, B., Heider, A., and Oflazer, K. (2015). Correction annotation for non-native Arabic texts: Guidelines and corpus. Proceedings of The 9th Linguistic Annotation Workshop, pages 129-139.
-
-
-
Crowd Inventing: An Innovation about Innovation
This paper presents the blueprint for the design of a practical system that would promote sharing of ideas among researchers, to allow them to identify optimal partners, to protect their intellectual capital, to ensure attribution of their ideas and to create equitable sharing in the ownership and revenue of any eventual commercializable invention. The title, Crowd Inventing, reflects the fact that the signaling among researchers in search of the best partners – the "crowd" – is itself an inventive element in their larger enterprise. The combination of legal and technological components that the Crowd Inventing system offers allows it to reduce the transaction costs of this search. It is an invention about the inventive process that promises theoretical and practical advantages that will hopefully attract research sponsors or private enterprise to invest in promising projects and to thereby better promote and reward innovation. Qatar's rich research environment and entrepreneurial aspirations make it an ideal forum to implement the Crowd Inventing system as a platform that will allow it to capitalize on its investments through commercialization of products and processes. Crowd Inventing is designed to help researchers and innovators:
• Find valuable complementary ideas and research collaborators that are missing from their research team and whose absence threatens to impair or, worse still, cripple their project. The drive to commercialize research, as well as the basic goal of ensuring attribution of one's research, confronts huge challenges when all the inventive elements are not part of one large enterprise that has designed its own proprietary signaling and invention protection system. The open community of researchers who publish in conferences and journals confronts these challenges, with the result that researchers engage in greater secrecy, which limits communication and impairs collaboration, and, if they do publish, they risk losing all attribution and commercial value to the inventor who builds a successful product on their ideas.
• Address the inability of intellectual property to protect against a downstream user's failure to attribute or share revenue. Intellectual property imposes little or no legal obligations on such users to attribute the source of the ideas they employ. It also focuses all reward on the last person to combine the inventive ideas into a final, commercializable product; there is no legal requirement of sharing revenues with contributors that fall outside its corporate or contractual network. Neither copyright nor trademark protect the functional ideas that a researcher discloses in his/her publications. These ideas are deemed to pass into the public domain to become free for competitors to use. Nor do they provide the researcher adequate protection against false attribution. As a result, a downstream user can pluck the idea from the public domain, use it without attributing its source, and even itself claim full attribution.
• Reach their patent goal. Patent law provides researchers with protection for their novel functional ideas, but its reduction-to-practice requirement means that the proprietary reward it offers is very distant from many research projects. In many areas of innovation the authors lack the collaboration and capital to put together all of the pieces, with the result that the entity adding the last needed element can capture the full commercial benefit of the invention. In a world where ever more complex projects result in cost-prohibitive and prolonged research and a demanding search for collaborators to stay the course to eventual invention, the patent hurdle produces huge unintended consequences. Increased secrecy is one consequence, and this has strong negative effects on publication and on the signaling required to find the missing elements for successful invention. Large enterprises whose rich financial and human capital is congregated in a single corporate silo often emerge as the winners in this environment – and when they do achieve a patent they have the financial resources to protect and defend their resulting property rights. However, academic or dispersed research communities do not typically have these resources or systems in place to compete. Even though their decentralized structure and flexibility mean they are increasingly the sites of path-breaking discovery, they are unable to successfully achieve patenting, and their ideas pass to others to commercialize or fade into obscurity. The proposed Crowd Inventing system crafts a legal and a technological solution that offers the following practical solutions to research and innovation problems:
• The signaling mechanism needed for successful collaboration; more effective sharing of ideas; full attribution to all contributors; equitable sharing in the rewards of patenting; and the enhanced ability to attract financial investment to projects. Researchers that subscribe to Crowd Inventing would contractually enter the system through a master agreement that defines and protects members' attribution and revenue-sharing rights. Underlying this is a technological information-sharing platform whereby data is shared in standardized form and access is monitored and controlled using a system of digital management that would help secure and control information flows. The researcher submitting the information (the source) could track access to its information and ensure attribution, while the user could more easily verify the lineage of ideas and link it more closely to the reputation of its source. Thereby both sides could gain. The Crowd Inventing master agreement would also supply contractual templates to provide standardized resolution of the revenue-sharing aspects of any resulting joint ventures, and its trained intermediaries could facilitate agreements, either of which could be less expensive than employing the lawyers and other professionals that typically exact heavy taxes on every venture or technology transfer transaction.
• The means to find out the conditions under which the research data was generated and to identify and approach the source through the clearing function of the information-sharing system. Once the parties had identified themselves they could begin to work together to learn more about one another's work and address the terms of any relationship between them based on this information. The Crowd Inventing system anticipates the contractual needs of their relationship by facilitating an ensuing master agreement, which avoids the potentially long delays of initial contracting. (If necessary, the identity of the source could be held back until a formal approach is made by the interested party to the source.)
• The apparatus to more successfully monitor and publish the reputations of the members of its user community. Crowd Inventing would facilitate a community where not only the reputation of products (academic or applied) could be charted by registering the number of transactions, but also the reputations of companies and academic labs as joint venture partners (their good faith and candor) could be logged thereby creating a system of accountability and verifiable reputation.
• The means to create a "market" based on failed experiments conducted by other researchers in related areas. While the Crowd Inventing system is designed principally to promote maximum innovation and successful invention, it could also be adapted to create a "market" based on failed or stalled experiments conducted by other researchers in related areas. Currently there is a dearth of communication within the scientific community concerning unsuccessful experiments and failed hypotheses. Only successful experiments are published; scientists do not expose their failures, perhaps out of fear that it will lower their prestige and because of the lack of an appropriate, widely disseminated forum for this purpose. The Crowd Inventing system could be used to address the sharing of information about failed or shelved experiments as readily as about successful experiments, and in the process it could create market value for such information.
• A useful complement to the internal governance structure of large corporations. While it is contemplated that the Crowd Inventing system would be ideally suited to communities of scattered researchers or small independent companies, it could also be employed as a useful complement to the internal governance structure of large corporations. Such enterprises confront, at the level of their inter-departmental relationships, problems of how to share information between departments, employee attribution, what budgets to set for each department, and the departments' compensation, all of which the Crowd Inventing system could address. In summary, Crowd Inventing aspires to offer the research community a solution to the impediments to collaborative communication and the inequities of an intellectual property system that rewards the last contributor and fails to protect attribution of prior inputs. In the process it promises to help researchers more easily find the collaborators, commercial funding and other resources they need to reach invention. It is itself an academic-conceived invention that, with adequate community or commercial funding, could become a reality that makes Qatar a leader in facilitating innovation. This proposal is led by Professor Clinton Francis, the Founding Dean of the Hamad bin Khalifa University, who is joined by HBKU Juris Doctorate students who will assist in conducting the research for, and design of, Crowd Inventing – an innovation about innovation.
-
-
-
Analysis of In-band Full-Duplex OFDM Signals Affected by Phase Noise and I/Q Imbalance
Authors: Lutfi Samara, Ozgur Ozdemir, Mohamed Mokhtar, Ridha Hamila and Tamer Khattab
The idea of the simultaneous transmission and reception of data using the same frequency is a potential candidate for deployment in the next-generation wireless communications standard, 5G. The In-Band Full-Duplex (IBFD) concept theoretically permits doubling the spectral efficiency as well as reducing the medium access control (MAC) signaling, which will improve the overall throughput of a wireless network. However, IBFD radios suffer from loopback self-interference (LSI), a major drawback that hinders the full exploitation of the potential benefits that this system is capable of offering. Recently, there has been an increased interest in modeling and analyzing the effect of LSI on the performance of an IBFD communication system, as well as in developing novel LSI mitigation techniques at the radio-frequency (RF) front-end and/or at the baseband stage.
LSI mitigation is approached in three different ways. The first approach is a propagation-domain approach, where the transmitter and receiver antennas of the full-duplex node are designed in a manner that minimizes the interference between them. Although this method seems promising, the risk of nulling the received signal of interest is always present. Motivated by this risk, researchers have resorted to the use of analog circuitry to regenerate the LSI effect by adjusting the gain, phase and delay of the known transmitted data to mimic the effect of the LSI channel on the transmitted data, and finally subtracting the estimated signal from the received signal. However, this turns out to be a formidable task, since the surrounding environment of the full-duplex node is always varying and the LSI channel variations are difficult to track using analog circuit components. Both of the discussed approaches are classified as passive LSI mitigation approaches, given that they lack the ability to adapt to the constantly varying LSI channel. To overcome this drawback, a third technique of LSI cancellation is implemented, where the complex implementation of an adaptive LSI mitigation technique is moved to the digital domain, and the receiver actively updates the estimate of the LSI channel depending on the performance of the communication system, finally combining it with the known transmit data and subtracting it from the received signal. Given that the LSI mitigation process can be easily performed in the digital domain using digital signal processing (DSP) algorithms, one might ask: why isn't the whole LSI mitigation process performed in the digital domain? The answer is that the signal entering the analog-to-digital converter (ADC) is limited by the ADC's dynamic range. Consequently, a combination of the three aforementioned LSI mitigation techniques must be deployed towards the implementation of an efficient and reliable IBFD communication node.
Orthogonal frequency division multiplexing (OFDM) is the preferred modulation scheme adopted by many wireless communication standards. Its implementation using a direct-conversion receiver architecture, which is favored over its super-heterodyne counterpart, suffers from inter-carrier interference (ICI) introduced by RF impairments such as oscillator phase noise (PHN) and in-phase/quadrature-phase imbalance (IQI). The effect of PHN is manifested in the spread of the energy of an OFDM subcarrier over its neighboring subcarriers, while IQI introduces ICI between image subcarriers. In this work, we analyze the joint effect of PHN and IQI on the process of LSI mitigation in an IBFD communication scenario. The analysis is performed to yield the average per-subcarrier residual LSI signal power after the final stage of digital LSI cancellation. The analysis shows that, even with perfect knowledge of the LSI channel-state information, the residual LSI power is still considerably high, and more sophisticated LSI mitigation algorithms must be designed to achieve a better-performing IBFD communication scheme. Acknowledgement: This work was made possible by GSRA grant # GSRA2-1-0601-14011 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.
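As a deliberately simplified picture of the final digital cancellation stage, the sketch below estimates a single-tap LSI channel by least squares from the known transmit block and subtracts the regenerated LSI. It assumes an ideal front-end, i.e., exactly the PHN- and IQI-free baseline whose degradation this work quantifies; all signal names and parameter values are illustrative, not taken from the paper.

```cpp
// Illustrative sketch of digital LSI cancellation for one received block (not the
// authors' analysis). A single-tap LSI channel and an ideal front-end (no PHN/IQI)
// are assumed; with PHN and IQI the residual would be larger, which is the point
// made in the abstract.
#include <cmath>
#include <complex>
#include <iostream>
#include <random>
#include <vector>

using cd = std::complex<double>;

int main() {
    const size_t N = 1024;
    std::mt19937 gen(1);
    std::normal_distribution<double> noise(0.0, 0.01);
    std::uniform_int_distribution<int> bit(0, 1);

    // Known transmitted QPSK block (the full-duplex node knows its own signal).
    std::vector<cd> x(N), y(N);
    for (auto& s : x) s = cd(bit(gen) ? 1 : -1, bit(gen) ? 1 : -1) / std::sqrt(2.0);

    const cd h_lsi(0.8, -0.3);   // assumed single-tap LSI channel
    for (size_t n = 0; n < N; ++n)
        y[n] = h_lsi * x[n] + cd(noise(gen), noise(gen));   // LSI plus receiver noise

    // Least-squares estimate of the LSI channel: h_hat = sum(y x*) / sum(|x|^2).
    cd num(0, 0);
    double den = 0.0;
    for (size_t n = 0; n < N; ++n) { num += y[n] * std::conj(x[n]); den += std::norm(x[n]); }
    cd h_hat = num / den;

    // Subtract the regenerated LSI and measure the residual power.
    double residual = 0.0, before = 0.0;
    for (size_t n = 0; n < N; ++n) {
        residual += std::norm(y[n] - h_hat * x[n]);
        before   += std::norm(y[n]);
    }
    std::cout << "LSI suppression: " << 10.0 * std::log10(before / residual) << " dB\n";
    return 0;
}
```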
-
-
-
A System for Big Data Analytics over Diverse Data Processing Platforms
Authors: Jorge Quiane, Divy Agrawal, Sanjay Chawla, Ahmed Elmagarmid, Zoi Kaoudi, Mourad Ouzzani, Paolo Papotti, Nan Tang and Mohammed Zaki
Data analytics is at the core of any organization that wants to obtain measurable value from its growing data assets. Data analytic tasks may range from simple to extremely complex pipelines, such as data extraction, transformation and loading, online analytical processing, graph processing, and machine learning (ML). Following the dictum "one size does not fit all", academia and industry have embarked on a race of developing data processing platforms to support all of these different tasks, e.g., DBMSs and MapReduce-like systems. Semantic completeness, high performance and scalability are key objectives of such platforms. While there have been major achievements towards these objectives, users are still faced with many roadblocks.
MOTIVATING EXAMPLE
The first roadblock is that applications are tied to a single processing platform, making the migration of an application to new and more efficient platforms a difficult and costly task. As a result, the common practice is to re-implement an application on top of a new processing platform; e.g., Spark SQL and MLlib are the Spark counterparts of Hive and Mahout. The second roadblock is that complex analytic tasks usually require the combined use of different processing platforms where users will have to manually combine the results to draw a conclusion.
Consider, for example, the Oil & Gas industry and the need to produce reports by using SQL or some statistical method to analyze the data. A single oil company can produce more than 1.5 TB of diverse data per day. Such data may be structured or unstructured and come from heterogeneous sources, such as sensors, GPS devices, and other measuring instruments. For instance, during the exploration phase, data has to be acquired, integrated, and analyzed in order to predict if a reservoir would be profitable. Tens of thousands of downhole sensors in exploratory wells produce real-time seismic structured data for monitoring resources and environmental conditions. Users integrate these data with the physical properties of the rocks to visualize volume and surface renderings. From these visualizations, geologists and geophysicists formulate hypotheses and verify them with ML models, such as regression and classification. Training of the models is performed with historical drilling and production data, but oftentimes users also have to go over unstructured data, such as notes exchanged by email or text from drilling reports filed in a cabinet. Therefore, an application supporting such a complex analytic pipeline should access several sources for historical data (relational, but also text and semi-structured), remove the noise from the streaming data coming from the sensors, and run both traditional (such as SQL) and statistical analytics (such as ML algorithms).
RESEARCH CHALLENGES
Similar examples can be drawn from other domains such as healthcare: e.g., IBM reported that North York hospital needs to process 50 diverse datasets, which reside on a dozen different internal systems. These applications show the need for complex analytics coupled with a diversity of processing platforms, which raises several challenges. These challenges relate to the choices users are faced with on where to process their data, each choice with possibly orders-of-magnitude differences in terms of performance. For example, one may aggregate large datasets with traditional queries on top of a relational database such as PostgreSQL, but the subsequent analytic tasks might be much faster if executed on Spark. However, users have to be intimate with the intricacies of the processing platform to achieve high efficiency and scalability. Moreover, once a decision is taken, users may still end up tied to a particular platform. As a result, migrating the data analytics stack to a different, more efficient processing platform often becomes a nightmare. In the above example, one has to re-implement the myriad of PostgreSQL-based applications on top of Spark.
RHEEM VISION
To tackle these challenges, we are building RHEEM, a system that provides both platform independence and interoperability across multiple platforms. RHEEM acts as a proxy between user applications and existing data processing platforms. It is fully based on user-defined functions (UDFs) to provide adaptability as well as extensibility. The major advantages of RHEEM are its ability to free applications and users from being tied to a single data processing platform (platform independence) and to provide interoperability across multiple platforms (multi-platform execution).
RHEEM exposes a three-layer data processing abstraction that sits between user applications and data processing platforms (e.g., Hadoop or Spark). The application layer models all application-specific logic; the core layer provides the intermediate representation between applications and processing platforms; and the platform layer embraces all processing platforms. In contrast to DBMSs, RHEEM decouples the physical and execution levels. This separation allows applications to express physical plans in terms of algorithmic needs only, without being tied to a particular processing platform. The communication among these levels is enabled by operators defined as UDFs. Providing platform independence is the first step towards realizing multi-platform task execution. RHEEM can receive a complex analytic task, seamlessly divide it into subtasks and choose the best platform on which each subtask should be executed.
RHEEM ARCHITECTURE
The three-layer separation allows applications to express a physical plan in terms of algorithmic needs only, without being tied to a particular processing platform. We detail these layers below.
Application Layer. A logical operator is an abstract UDF that acts as an application-specific unit of data processing. In other words, one can see a logical operator as a template whereby users provide the logic of their analytic tasks. Such abstraction enables both (i) ease-of-use by hiding all the implementation details from users, and (ii) high performance by allowing several optimizations, e.g., seamless distributed execution. A logical operator works on data quanta, which are the smallest units of data elements from the input datasets. For instance, a data quantum represents a tuple in the input dataset or a row in a matrix. This fine-grained data model allows RHEEM to apply a logical operator in a highly parallel fashion and thus achieve better scalability and performance.
Example 1: Consider a developer who wants to offer end users logical operators to implement various machine learning algorithms. The developer defines five such operators: (i) Transform, for normalizing input datasets, (ii) Stage, for initializing algorithm-specific parameters, e.g., initial cluster centroids, (iii) Compute, for computations required by the ML algorithm, e.g., finding the nearest centroid of a point, (iv) Update, for setting global values of an algorithm, e.g., centroids, for the next iteration, and (v) Loop, for specifying the stopping condition. Users implement algorithms such as SVM, K-means, and linear/logistic regression, using these operators.
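As an illustration of how these operators could be wired together, the sketch below expresses the five operators of Example 1 as plain C++ UDFs and drives a toy one-dimensional k-means with them. The signatures, names and the sequential driver are our own simplification for illustration; they are not RHEEM's actual API.

```cpp
// Illustrative sketch of the five logical operators from Example 1, composed into a
// tiny 1-D k-means. The operator signatures are our illustration, not RHEEM's API.
#include <cmath>
#include <functional>
#include <iostream>
#include <vector>

using Data = std::vector<double>;        // data quanta: one double per point
using Centroids = std::vector<double>;

struct KMeansOperators {
    std::function<Data(const Data&)> transform;                                     // normalize input
    std::function<Centroids(const Data&)> stage;                                    // init centroids
    std::function<std::vector<int>(const Data&, const Centroids&)> compute;         // nearest centroid
    std::function<Centroids(const Data&, const std::vector<int>&, size_t)> update;  // new centroids
    std::function<bool(const Centroids&, const Centroids&)> loop;                   // stopping condition
};

int main() {
    KMeansOperators ops;
    ops.transform = [](const Data& d) { return d; };   // identity normalization, for brevity
    ops.stage = [](const Data& d) { return Centroids{ d.front(), d.back() }; };
    ops.compute = [](const Data& d, const Centroids& c) {
        std::vector<int> assign(d.size());
        for (size_t i = 0; i < d.size(); ++i) {
            double best = 1e300;
            for (size_t k = 0; k < c.size(); ++k)
                if (std::abs(d[i] - c[k]) < best) { best = std::abs(d[i] - c[k]); assign[i] = (int)k; }
        }
        return assign;
    };
    ops.update = [](const Data& d, const std::vector<int>& a, size_t k) {
        Centroids c(k, 0.0); std::vector<size_t> cnt(k, 0);
        for (size_t i = 0; i < d.size(); ++i) { c[a[i]] += d[i]; ++cnt[a[i]]; }
        for (size_t j = 0; j < k; ++j) if (cnt[j]) c[j] /= cnt[j];
        return c;
    };
    ops.loop = [](const Centroids& prev, const Centroids& cur) {
        double delta = 0.0;
        for (size_t j = 0; j < prev.size(); ++j) delta += std::abs(prev[j] - cur[j]);
        return delta > 1e-6;   // keep iterating while the centroids still move
    };

    Data points = ops.transform({ 1.0, 1.2, 0.8, 7.9, 8.1, 8.0 });
    Centroids c = ops.stage(points);
    for (Centroids prev = c; ; prev = c) {
        c = ops.update(points, ops.compute(points, c), c.size());
        if (!ops.loop(prev, c)) break;
    }
    std::cout << "centroids: " << c[0] << ", " << c[1] << "\n";
    return 0;
}
```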
The application optimizer translates logical operators into physical operators that will form the physical plan at the core layer.
Core Layer. This layer exposes a pool of physical operators, each representing an algorithmic decision for executing an analytic task. A physical operator is a platform-independent implementation of a logical operator. These operators are available to the developer to deploy a new application on top of RHEEM. Developers can still define new operators as needed.
Example 2: In the above ML example, the application optimizer maps Transform to a Map physical operator and Compute to a GroupBy physical operator. RHEEM provides two different implementations for GroupBy: the SortGroupBy (sort-based) and HashGroupBy (hash-based) operators from which the optimizer of the core level will have to choose.
Once an application has produced a physical plan for a given task, RHEEM divides the physical plan into task atoms, i.e., sub-tasks, which are the units of execution. A task atom (a part of the execution plan) is a sub-task to be executed on a single data processing platform. RHEEM then translates the task atoms into an execution plan by optimizing each task atom for a target platform. Finally, it schedules each task atom to be executed on its corresponding processing platform. Therefore, in contrast to DBMSs, RHEEM produces execution plans that run on multiple data processing platforms.
Platform Layer. At this layer, execution operators define how a task is executed on the underlying processing platform. An execution operator is the platform-dependent implementation of a physical operator. RHEEM relies on existing data processing platforms to run input tasks. In contrast to a logical operator, an execution operator works on multiple data quanta rather than a single one. This enables the processing of multiple data quanta with a single function call, hence reducing overhead.
Example 3: Again in the above ML example, the MapPartitions and ReduceByKey execution operators for Spark are one way to perform Transform and Compute.
Defining mappings between execution and physical operators is the developers' responsibility whenever a new platform is plugged into the core. In the current prototype of RHEEM, the mappings are hard-coded. Our goal is to rely on a mapping structure to model the correspondences between operators together with context information. Such context is needed for the effective and efficient execution of each operator. For instance, the Compute logical operator maps to two different physical operators (SortGroupBy and HashGroupBy). In this case, a developer could use the context to provide hints to the optimizer for choosing the right physical operator at runtime. Developers will provide only a declarative specification of such mappings; the system will use them to translate physical operators into execution operators. A simple and extensible operator mapping is crucial, as it enables developers to easily provide extensions and optimizations via new operators.
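One possible shape for such a declarative mapping structure, with a context hint attached to each candidate operator, is sketched below. The struct names, the Context fields and the predicate style are our own illustration of the idea, not RHEEM's actual mapping format.

```cpp
// Illustrative sketch of a declarative operator mapping with context hints
// (our own illustration, not RHEEM's actual mapping structure).
#include <functional>
#include <iostream>
#include <map>
#include <string>

struct Context {
    bool input_sorted;        // hint: is the input already sorted?
    long input_cardinality;   // hint: estimated number of data quanta
};

struct Candidate {
    std::string target_operator;                    // candidate operator to map to
    std::function<bool(const Context&)> preferred;  // when this candidate should be chosen
};

int main() {
    // The Compute logical operator maps to two group-by variants; the context decides
    // which one the optimizer should pick at runtime.
    std::multimap<std::string, Candidate> mapping;
    mapping.insert({ "Compute", Candidate{ "SortGroupBy",
                     [](const Context& c) { return c.input_sorted; } } });
    mapping.insert({ "Compute", Candidate{ "HashGroupBy",
                     [](const Context& c) { return !c.input_sorted; } } });

    Context ctx{ false, 5000000 };
    auto range = mapping.equal_range("Compute");
    for (auto it = range.first; it != range.second; ++it)
        if (it->second.preferred(ctx))
            std::cout << "Compute -> " << it->second.target_operator << "\n";
    return 0;
}
```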
PRELIMINARY RESULTS
We have implemented two applications on top of RHEEM, one for data cleaning, BigDansing [1], and one for machine learning. The performance of both applications is encouraging and already demonstrates the advantages of our vision.
Our results show that, in both cases, RHEEM enables orders-of-magnitude better performance than the baseline systems. These improvements come from a series of optimizations done at the application layer as well as at the core layer. As an example of optimization at the core layer, we extended the set of physical operators with a new physical operator for joins, called IEJoin [2]. This new physical operator provides a fast algorithm for joins containing only inequality conditions.
REFERENCES:
[1] Z. Khayyat, I. F. Ilyas, A. Jindal, S. Madden, M. Ouzzani, P. Papotti, J.-A. Quian-Ruiz, N. Tang, and S. Yin. BigDansing: A System for Big Data Cleansing. In ACM SIGMOD, pages 1215-1230, 2015.
[2] Z. Khayyat, W. Lucia, M. Singh, M. Ouzzani, P. Papotti, J.-A. Quiane-Ruiz, N. Tang, and P. Kalnis. Lightning Fast and Space Efficient Inequality Joins. PVLDB 8(13): 2074-2085, 2015.
-
-
-
Secure Communications Using Directional Modulation
Authors: Mohammed Hafez, Tamer Khattab, Tarek El-Fouly and Hüseyin Arslan
Limitations on wireless communication resources (i.e., time and frequency) introduce the need for another domain that can help communication systems match the increasing demand for high data transfer rates and quality of service (QoS); multiple antennas provide such a domain. Besides, the widespread use of wireless technology and its ease of access make the privacy of the information transferred over the wireless network questionable. Along with the drawbacks of traditional ciphering algorithms, physical layer security arises as a solution to overcome this problem.
Multiple-antenna systems offer more resources (i.e., degrees of freedom) which can be used to achieve secure communication. One of the recently developed techniques that makes use of directive antenna arrays to provide secrecy is Directional Modulation (DM).
In DM, the antenna pattern is treated as a spatial complex constellation, but it is not used as a source of information. The antenna pattern's complex value at a certain desired direction is set to the complex value of the symbol to be transmitted. This scheme also randomizes the signal in the undesired directions, thus providing a source of directional security. Contrary to regular beamforming, which provides directional power scaling, the DM technique is applied in the transmitter by projecting digitally encoded information signals into a pre-specified spatial direction while simultaneously distorting the constellation formats of the same signals in all other directions.
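A compact way to see this construction is a baseband sketch for a uniform linear array: the weights carry the symbol exactly at the desired angle and add a random component in the orthogonal complement of the desired steering vector, which scrambles every other direction. This is a generic illustrative sketch, not the authors' MDDM scheme; the array size, spacing, angles and symbol are assumptions.

```cpp
// Illustrative baseband sketch of directional modulation for a half-wavelength-spaced
// uniform linear array (not the authors' MDDM scheme). The weights reproduce the QPSK
// symbol at the desired angle and are randomized elsewhere.
#include <cmath>
#include <complex>
#include <iostream>
#include <random>
#include <vector>

using cd = std::complex<double>;
const double PI = std::acos(-1.0);

// Steering vector of an N-element ULA at angle theta (radians).
std::vector<cd> steering(int N, double theta) {
    std::vector<cd> a(N);
    for (int n = 0; n < N; ++n) a[n] = std::exp(cd(0, PI * n * std::sin(theta)));
    return a;
}

// Array pattern a(theta)^H * w.
cd pattern(const std::vector<cd>& w, const std::vector<cd>& a) {
    cd p(0, 0);
    for (size_t n = 0; n < w.size(); ++n) p += std::conj(a[n]) * w[n];
    return p;
}

int main() {
    const int N = 8;
    const double theta0 = 20.0 * PI / 180.0;      // desired (legitimate) direction
    const cd symbol = cd(1, 1) / std::sqrt(2.0);  // QPSK symbol to convey at theta0

    std::mt19937 gen(7);
    std::normal_distribution<double> g(0.0, 1.0);
    auto a0 = steering(N, theta0);

    // Random vector projected onto the orthogonal complement of a0 (scrambling term).
    std::vector<cd> r(N);
    for (auto& v : r) v = cd(g(gen), g(gen));
    cd proj = pattern(r, a0);                              // a0^H r
    for (int n = 0; n < N; ++n) r[n] -= a0[n] * proj / double(N);

    // DM weights: symbol-bearing component plus the scrambling component.
    std::vector<cd> w(N);
    for (int n = 0; n < N; ++n) w[n] = a0[n] * symbol / double(N) + r[n];

    std::cout << "pattern at desired direction: " << pattern(w, a0) << "\n";          // ~= symbol
    std::cout << "pattern at 60 degrees:        "
              << pattern(w, steering(N, 60.0 * PI / 180.0)) << "\n";                  // randomized
    return 0;
}
```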
In our previous work, we introduced the Multi-Directional DM transmission scheme (MDDM). By using MDDM, we were able to provide multiple secure communication links for different directions. We showed that the scheme increases the transmission capacity of the system up to the number of the antenna elements. Also, the secrecy capacity increases with the increase of the number of transmitted streams. Moreover, MDDM has a low complexity structure compared to other DM implementations and it does not necessitate the implementation of special receiver algorithms.
Until now, DM has only been discussed from the algorithm construction perspective, and to the best of the authors' knowledge there has been no study of the employment of DM algorithms at the system level. Hereby, we introduce a multi-user access system-level design that uses MDDM as a transmission technique. The new design utilizes the dispersive nature of the channel to provide a location-based secure communication link to each of the legitimate users. The scheme shows the ability to strongly degrade the eavesdropper channel, even in worst-case scenarios. We also derive the achievable secrecy rate and secrecy outage probability for the scheme. The amount of degradation increases with the number of users in the system. Moreover, the secrecy analysis shows that the proposed system is always able to achieve a positive secrecy rate with high probability. Besides, we compare the performance of this scheme to the performance of Artificial Noise (AN) precoding, as they share the same assumption about channel knowledge. The results also show that the DM scheme outperforms the ordinary AN scheme, while having a simpler hardware and processing structure.
-
-
-
Building a Global Network of Web Observatories to Study the Web: A Case Study in Integrated Health Management
Authors: Wendy Hall, Thanassis Tiropanis, Ramine Tinati and Xin Wang
The Web is barely 25 years old, but in that time it has changed every aspect of our lives. Because of its sociotechnical rather than purely engineered nature, not only is the Web changing society but we also shape the way the technology evolves. The whole process is inherently co-constituted, and as such its evolution is unlike that of any other system. In order to understand how the Web might evolve in the future - for good or bad - we need to study how it has evolved since its inception and the associated emergent behaviours. We call this new research discipline Web Science [1,2], and it is important for all our futures that we urgently address its major research challenges.
We are fast becoming part of a world of digital inter-connectivity, where devices such as smartphones, watches, fitness trackers, and household goods are part of a growing network, capable of sharing data and information. Increasingly, the Web has become the ubiquitous interface to access this network of devices. From sensors, to mobile applications, to fitness devices, these devices are transmitting their data to - often - centralised pools of data, which then become available via Web services. The sheer scale of this data leads to a rich set of high-volume, real-time streams of human activity, which are often made publicly consumable (potentially at a cost) via some API. For academia, the combination of these sources is providing social scientists and digital ethnographers a far richer understanding of society and of how we as individuals operate.
These streams represent a global network of human and machine communication, interaction, and transaction, and with the right analytical methods, may contain valuable research and commercial insights. In domains such as health and fitness for example, the aggregation of data from mobile devices is supporting the transition towards the quantified-self, and offers rich insight into the health and well-being of individuals, with the potential of diagnosing or decreasing disease.
Why the need for Web Observatories?
Studying the Web provides us with critical insights about how we as individuals and as a society operate in the digital world. The actions, communications, interactions, and transactions produced by humans and machines have the potential to offer rich insight into our world, allowing us to better understand how we operate at a micro and macro scale. However, there are a number of barriers that prevent researchers from making the most of those data resources.
Herein lies a challenge, and a great opportunity. We are now in a position where the technologies used within the big data processing pipeline are maturing, as are the methods we use to analyse data to provide valuable insights. Yet, overshadowing these benefits are issues of data access, control, and ownership. Whilst the data being produced continue to grow, their limited availability beyond the walled gardens of the data holder - whether commercial or institutional - reduces the full potential of analysis envisaged in the big data era.
(a) Datasets are distributed across different domains.
(b) Metadata about those datasets are not available or are in different vocabularies/formats.
(c) Searching for or inside datasets is not possible.
(d) Applying analytics on one or more datasets requires copying them into a central location.
(e) Datasets are often provided in the context of specific disciplines, lacking the metadata and enrichment that could make them usable in other disciplines.
(f) The nature of some of the datasets often requires access control in the interest of privacy.
(g) There is a need for engines that lower the barrier of engagement with analytics for individuals, organisations and interdisciplinary research communities by supporting the easy application of analytics across datasets without requiring them to be copied into a central location.
(h) There is a need for services enabling the publication and sharing of analytical tools within and across interdisciplinary communities.
Addressing the challenges described above, we have introduced the Web Observatory [3], a globally distributed infrastructure that enables users to share data with each other whilst retaining control over who can view, access, query, and download their data. At its core, a Web Observatory comprises a set of architectural principles that describe a scalable solution for enabling controlled access to heterogeneous forms of historical and real-time data, visualisations, and analytics. In order to handle these new forms of big, and small, data, significant effort has gone into developing technologies capable of storing, querying, and analysing high-volume datasets - or streams - in a timely fashion, returning useful insights into social activity and behaviour.
A Global Network of Web Observatories
The Web Observatory (WO) project, developed under the auspices of the Web Science Trust, aims to develop the standards and services that will interlink a number of existing or emergent Web Observatories to enable the sharing, discoverability and use of public or private datasets and analytics across Web Observatories, on a large, distributed scale (http://online.liebertpub.com/doi/abs/10.1089/big.2014.0035). It involves the publication or sharing of both datasets and analytic or visualisation tools (http://webscience.org/web-observatory/). At the same time, it involves the development of appropriate standards to enable the discovery, use, combination and persistence of those resources; effort in the direction of standards is already underway in the W3C Web Observatory community group (http://www.w3.org/community/webobservatory/).
International research collaboration is one of the primary goals of creating a network of Web Observatories. There has already been significant effort in creating a number of Web Observatory nodes globally [4,5].
In this paper we describe an instance of the Web Observatory, the Southampton Web Observatory (SUWO), and how it is being applied both at Southampton and at other institutions in areas such as integrated health management, in particular in support of an ageing population.
We believe that the true potential of the Web Observatory vision will be realised when the different observatories become part of a global Wide Web of Observatories, allowing cross-observatory querying and analysis. By working through a set of initial application areas, we will show the immediate value that the Web Observatory platform will provide, from the sharing of datasets and resources to improving international collaboration and research opportunities as a result of the raised awareness of institutional resources.
References
[1] Berners-Lee, Tim, Hall, Wendy, Hendler, James, Shadbolt, Nigel and Weitzner, Danny Creating a Science of the Web. Science, 313, (5788), 769-771.
[2] Hendler, James, Shadbolt, Nigel, Hall, Wendy, Berners-Lee, Tim and Weitzner, Daniel. Web science: an interdisciplinary approach to understanding the Web. Communications of the ACM, 51, (7), 60-69.
[3] Tiropanis, Thanassis, Hall, Wendy, Shadbolt, Nigel, De Roure, David, Contractor, Noshir and Hendler, Jim, The Web Science Observatory. IEEE Intelligent Systems, 28, (2), 100-104.
[4] Tinati, Ramine, Wang, Xin, Tiropanis, Thanassis and Hall, Wendy, Building a real-time web observatory. IEEE Internet Computing (In Press).
[5] Wang, Xin, Tinati, Ramine, Mayer, Wolfgang, Rowland-Campbell, Anni, Tiropanis, Thanassis, Brown, Ian, Hall, Wendy and O'Hara, Kieron. Building a web observatory for south Australian government: supporting an age friendly population. In, 3rd International workshop on Building Web Observatories (BWOW), 10pp.
-
-
-
Internet of Things Security: We're Walking on Eggshells!
By Aref Meddeb
Since the Internet of Things (IoT) will be entwined with everything we use in our daily life, the consequences of security flaws escalate. Smart objects will govern most home appliances and car engines, yielding potential disaster scenarios. In this context, successful attacks could lead to chaos and scary scenarios (www.darkreading.com). Unprotected personal information may expose sensitive and embarrassing data to the public, and attacks may threaten not only our computers and smart devices but our intimacy and perhaps our lives too.
Because persons and objects will be bonded with each other, user consent becomes critical. Therefore, thing, object, and user identity will be the focus of future IoT security solutions, yielding a Trust, Security, and Privacy (TSP) paradigm, which may constitute the Achilles' heel of IoT.
While security issues are quite straightforward, largely covered by existing background knowledge, privacy issues are far more complex. Privacy constitutes a rather challenging task, even for the most skilled developer, and may impede the large-scale deployment of IoT. Vinton Cerf stated that "Privacy may actually be an anomaly", generating a whole lot of discussion among Internet users. And as Scott McNealy further pointed out nearly a decade ago: "You have zero privacy anyway. Get over it!"
From an industry and developer perspective, privacy is a matter of user conduct and responsibility. Consumers need to be trained to understand that by saving their personal data on various devices, they expose themselves to various types of attacks. Often, there is no means for users to know whether their personal data is being tracked or "stolen" by third parties.
In fact, technology seems to have evolved far beyond any expectations, and we seem unprepared to deal with it. As Vinton Cerf also pointed out, "figuring out how to make a security system work well that doesn't require the consumer to be an expert is a pretty big challenge." For instance, consumers often use easy-to-remember and similar passwords as well as the same USB flash drives in various systems, rendering the development of secure solutions akin to digging holes in water.
With the advent of IoT, manufacturers of "traditional" home appliances, construction, and industrial engines will be required to include communication components in their products. As these components will be subject to the same cyber threats as computers and smart phones, manufacturers will also need to integrate security into their manufacturing processes, from the design phase to packaging.
There are quite a number of IoT architectures emanating from mainstream standards bodies. In what follows, we describe and discuss some of the most promising architectures, namely from the IETF, ITU-T, ISO/IEC, IEEE, ETSI, and oneM2M.
IETF's Security Architectures
As for the common Internet, the IETF is playing a lead role in IoT standardization efforts. A variety of proposals are being made, ranging from application layer to network layer protocols; and from sensor networks to RFID communications.
IETF Core Architecture
According to the IETF, "a security architecture involves, beyond the basic protocols, many different aspects such as key management and the management of evolving security responsibilities of entities during the lifecycle of a thing." The proposed IoT security architecture aims to be flexible by incorporating the properties of a centralized architecture whilst at the same time allowing devices to be paired together initially, without the need for a trusted third party.
Some key new security features that go beyond current IT paradigms take into account the lifecycle of a thing. In this regard, a thing may need to go through various stages during its lifecycle: manufactured, installed, commissioned, running, updated, reconfigured, etc. In the manufacturing and installation phases, the thing is bootstrapped, while during the commissioning and running phases, the thing is operational. In each stage, security credentials and ownership information may need to be updated.
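One way to picture this lifecycle is as a small state machine whose transitions trigger a credential and ownership refresh. The sketch below is purely illustrative: the stage names follow the text above, but the refresh hook and the rest of the structure are our own assumption, not part of any IETF specification.

```cpp
// Illustrative sketch only: a thing's lifecycle stages modelled as a state machine
// whose transitions trigger a credential/ownership refresh. The stage names follow
// the text; the refresh hook is an assumption, not an IETF-specified mechanism.
#include <functional>
#include <iostream>
#include <string>

enum class Stage { Manufactured, Installed, Commissioned, Running, Updated, Reconfigured };

static const char* name(Stage s) {
    switch (s) {
        case Stage::Manufactured: return "Manufactured";
        case Stage::Installed:    return "Installed";
        case Stage::Commissioned: return "Commissioned";
        case Stage::Running:      return "Running";
        case Stage::Updated:      return "Updated";
        default:                  return "Reconfigured";
    }
}

struct Thing {
    Stage stage = Stage::Manufactured;
    std::function<void(Stage, Stage)> refresh_credentials;  // called on every stage change

    void transition(Stage next) {
        if (refresh_credentials) refresh_credentials(stage, next);
        stage = next;
    }
};

int main() {
    Thing t;
    t.refresh_credentials = [](Stage from, Stage to) {
        // Placeholder for re-keying / ownership update at each stage change.
        std::cout << "rotate credentials: " << name(from) << " -> " << name(to) << "\n";
    };
    t.transition(Stage::Installed);      // bootstrapping phase
    t.transition(Stage::Commissioned);   // becomes operational
    t.transition(Stage::Running);
    return 0;
}
```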
The architecture also takes into account the specific features of IoT devices namely, low processing power, low energy resources, and potential inaccessibility. Further, things may need to be protected for decades and need to be reset to rebuild security capabilities over time.
Further, the IETF proposes an architecture that describes implementation and operational challenges associated with securing the streamlined Constrained Application Protocol (CoAP, RFC 7252). The draft also proposes a security model for Machine-to-Machine (M2M) environments that requires minimal configuration. The architecture relies on self-generated secure identities, similar to Cryptographically Generated Addresses (CGAs, RFC 3972) or Host Identity Tags (HITs, RFC 5201).
DTLS-based Security Architecture
Datagram Transport Layer Security (DTLS, RFC 6347) is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security features, but uses datagram semantics. The IETF introduces a full two-way authentication security scheme for IoT based on DTLS, which is designed to work over standard protocol stacks namely UDP/IPv6 over Low power Wireless Personal Area Networks (6LoWPANs, RFC 4944).
HIP support for RFIDs
In order to enforce privacy, an architecture based on the Host Identity Protocol (HIP, RFC 5201) has been proposed for active RFID systems that support tamper-resistant computing resources.
The HIP-RFID architecture includes three functional entities: HIP RFIDs, RFID readers, and portals. The architecture defines a new HIP Encapsulation Protocol (HEP). The architecture also defines an identity layer for RFID systems that is logically independent from the transport facilities. HIP-RFID devices hide the identity (typically an EPC code) using a particular equation that can be solved only by the portal. Messages exchanged between HIP-RFIDs and portals are transported in IP packets.
ETSI M2M Architecture
The ETSI M2M architecture describes a range of variants that depend on the security characteristics of the underlying networks and on the relationships between the M2M service provider and the network operator.
The ETSI TS 102 690 Technical Specification (TS) describes a functional architecture, including the related reference points and the service capabilities, identifiers, information model, procedures for bootstrapping, security, management, charging, and M2M communications implementation guidance. The M2M functional architecture is designed to make use of IP based networks, typically provided by 3GPP as well as the Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN) environment.
Among other things, the ETSI TS 102 690 TS introduces an M2M security framework for underlying functions and the related key hierarchy. It is worth noting that the ETSI 103 104 specification describes an "Interoperability Test Specification for CoAP Binding of ETSI M2M Primitives", which is of particular importance in terms of interoperability with IETF standards.
OneM2M Security Architecture
The oneM2M standardization body emerged as a unified effort of standards organizations namely, ETSI, ATIS (Alliance for Telecommunications Industry Solutions), TIA (Telecommunications Industry Association), CCSA (China Communications Standards Association), TTA (Telecommunications Technology Association of Korea), ARIB (Association of Radio Industries and Businesses) and TTC (Telecommunication Technology Committee) from Japan. oneM2M aims to “unify the Global M2M Community, by enabling the federation and interoperability of M2M systems, across multiple networks and topologies”.
ITU-T Architectural Framework
The ITU-T is actively working on standardizing IoT security. For this purpose, a large number of recommendations have been published or are being considered. In particular, Recommendation ITU-T Y.2060 provides an overview of IoT and clarifies the concept and scope of IoT. It further identifies fundamental characteristics and high-level requirements of IoT. What is important to note is that security and privacy are assumed to be de facto features of IoT within ITU-T standards.
ITU-T SG17 is currently working on cybersecurity; security management, architectures, and frameworks; identity management; and protection of personal information. Further, security of applications and services for IoT, smart grid, Smartphones, web services, social networks, cloud computing, mobile financial systems, IPTV, and telebiometrics are also being studied.
In particular, ITU-T rec. X.1311 provides a security model for Ubiquitous Sensor Networks (USN). Note that this model is common with ISO/IEC 29180 and based on ISO/IEC 15408-1 (see below). In the presence of a threat, appropriate security policies will be selected and security techniques will be applied to achieve the security objective.
Further, rec. X.1312 provides USN middleware security guidelines, while security requirements for routing and ubiquitous networking are provided in Rec. X.1313 and X.1314, respectively.
In addition, ITU-T is actively working on tag based identification through a series of recommendations such as ITU-T rec. F.771, rec. X.672, and rec. X.660. In particular, rec. X.1171 deals with Threats and Requirements for Protection of Personally Identifiable Information in Applications using Tag-based Identification.
ISO/IEC Reference Architecture
The ISO/IEC NP 19654 IoT draft standard introduces a Reference Architecture (RA) as a "generalized system-level architecture of IoT Systems that share common domains". Developers may use parts or all of these domains and entities. The IoT RA also aims to provide rules, guidance, and policies for building a specific IoT system's architecture. The IoT RA includes three key enabling technology areas:
1. IoT system of interest;
2. communications technology; and
3. information technology.
The IoT RA standard describes a conceptual model where seven IoT System Domains are defined: 1) IoT System, 2) Sensing Device, 3) Things/Objects, 4) Control/Operation, 5) Service Provider, 6) Customers, and 7) Markets.
The ISO/IEC 29180 std. (which is common with ITU-T rec. X.1311 described above) describes security threats to and security requirements of USN. The std. also categorizes security technologies according to the security functions.
On the other hand, ISO/IEC 29167-1 deals with security services for RFID air interfaces. This standard defines a security service architecture for the ISO/IEC 18000 RFID standards. It provides a common technical specification of security services for RFID devices that may be used to develop secure RFID applications. In particular, the standard specifies an architecture for untraceability, security services, and file management.
Moreover, the ISO/IEC 24767 standard specifies a home network security protocol for equipment that cannot support standard Internet security protocols such as IPSec or SSL/TLS. This protocol is referred to as Secure Communication Protocol for Middleware (SCPM).
European Internet of Things Architecture (IoT-A)
A Reference Model and a Reference Architecture are introduced by IoT-A, both providing a description of greater abstraction than what is inherent to actual systems and applications. The RM is composed of several sub-models. The primary model is the IoT Domain Model, which describes all the concepts that are relevant to IoT. Other models include the IoT Communication model and the Trust, Security, and Privacy (TSP) Model.
The TSP model introduces interaction and interdependencies between these three components. The IoT-A model focuses on Trust at the application level providing data integrity, confidentiality, authentication, and non-repudiation.
In turn, the security RM proposed by IoT-A is composed of three layers: the Service Security layer, the Communication Security layer, and the Application Security layer. One of the key aspects is that, while taking heterogeneity into account, tradeoffs between security features, bandwidth, power consumption, and processing are of major concern in IoT-A.
Conclusion
The need for a Reference Model and a Reference Architecture seems to have reached a global consensus. However, because this concept is quite abstract, we need more pragmatic definitions that give developers straightforward guidelines in their endeavour to develop secure IoT services and applications.
Technologies like Zigbee, KNX, Z-Wave, and BACnet are quite mature and much more secure than 6LoWPAN. Users who invested in those mature technologies may not be willing to switch to another technology any time soon. This was also the case in other networking areas, where IP may be used for entertainment, education, and research but cannot be trusted for transactional applications, business-critical applications, and sensitive applications requiring high reliability and security.
Further, the advantages brought by 6LoWPAN over Zigbee are not significant: they both use the 802.15.4 PHY and MAC layers. Zigbee uses its own higher-layer stack while 6LoWPAN is based on compressed IPv6 headers. Further, 6LoWPAN requires an adaptation layer and supports fragmentation, a feature that may be cumbersome given the required simplicity of constrained low-resource environments.
ITU-T, IEEE, IETF, ETSI, and ISO/IEC seem to be heading towards a common security architecture although the picture is not clear yet. The oneM2M initiative is one step towards this goal. Other initiatives are needed where pragmatic definitions will be of a much greater help for developers.
-
-
-
Lattices are Good for Communication, Security, and Almost Everything
Authors: Joseph Jean Boutros, Nicola di Pietro, Costas N. Georghiades and Kuma P.R. Kumar
Mathematicians considered lattices as early as the first half of the nineteenth century, e.g. Johann Carl Friedrich Gauss and Joseph-Louis Lagrange explored point lattices in small dimensions. After the pioneering work of Hermann Minkowski around 1900, lattices were extensively studied in the twentieth century until engineering applications in the areas of digital communications, data compression, and cryptography were recently discovered. Nowadays it is admitted that lattices are good for almost everything, including many new fields such as physical-layer security, post-quantum cryptographic primitives, and coding for wireless mobile channels.
In this talk, after introducing the mathematical background for point lattices in multi-dimensional Euclidean spaces, we shall describe how lattices are used to guarantee reliable and secure communications. The talk will include strong technical material but it is intended for a large audience including engineers and scientists from all areas. The talk shall also present new results on lattices found in Qatar under projects NPRP 6-784-2-329 and NPRP 5-597-2-241 funded by QNRF.
Lattices are mathematical structures with specific algebraic and topological properties. We consider the simplest form of lattices, i.e. lattices in real Euclidean spaces equipped with the standard scalar product. In communication theory, lattices can play different roles in the processing and the transmission of information. They are suitable for vector quantization of analog sources, for channel coding as coded modulations, and also for joint source-channel coding. In the recent literature, lattices have been found to be good tools for network coding and secure coding at the physical layer. More information on lattices for communications can be found in [1] and references therein. A lattice is a Z-module of the Euclidean vector space R^N or, equivalently, a discrete additive subgroup of R^N. A new family of lattices, referred to as Generalized Low-Density (GLD) lattices, was built in Qatar. GLD lattices are obtained by Construction A from non-binary GLD codes. Their impressive performance and analysis can be found in [2].
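To make the Construction A recipe concrete, here is a minimal sketch in Python that lifts a small linear code over Z_p to the lattice C + pZ^N; the toy parity-check code and parameters are illustrative assumptions, not the non-binary GLD codes analysed in [2].

import itertools
import numpy as np

p = 5                      # code alphabet Z_p (toy value)
N = 3                      # lattice dimension (toy value)
H = np.array([[1, 1, 1]])  # hypothetical parity-check matrix over Z_p

# Codewords of C = {c in Z_p^N : H c = 0 mod p}
codewords = [np.array(c) for c in itertools.product(range(p), repeat=N)
             if not (H @ np.array(c) % p).any()]

def lattice_points(radius=1):
    """Enumerate a finite window of the Construction A lattice C + p*Z^N."""
    pts = []
    for c in codewords:
        for z in itertools.product(range(-radius, radius + 1), repeat=N):
            pts.append(c + p * np.array(z))
    return np.array(pts)

pts = lattice_points()
# Closure under addition is inherited from the linearity of the code,
# which is what makes C + p*Z^N a lattice rather than an arbitrary point set.
print(len(codewords), "codewords,", len(pts), "lattice points in the window")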
In his seminal paper [3], Miklos Ajtai showed how lattices could be used as cryptographic primitives. Since Ajtai's result, lattice-based cryptography has become very popular in the cryptography community. One important area is Learning With Errors (LWE), see [4]. LWE is a clear example of an elementary mathematical problem with an algorithmically complex solution. Thanks to its intrinsic connection with lattice-based problems that are known to be hard also for quantum algorithms, LWE has raised much interest in the last decade; it is part of so-called post-quantum cryptography. Another area in lattice-based cryptography is GGH, the first McEliece-like scheme for lattices, proposed by Goldreich, Goldwasser, and Halevi in 1997. GGH is a lattice equivalent of the McEliece cryptosystem based on error-correcting codes. Many other cryptosystems based on lattices have been investigated and proposed by several researchers. In this talk, after presenting how lattices are used for reliable communications, we also present how lattices are used to secure data communications. The attached slides constitute a draft mainly focusing on the communication aspects of lattices. These slides will be completed by a second part on security based on lattices.
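As a purely illustrative companion to the LWE discussion, the following Python sketch generates LWE samples (a, b = <a, s> + e mod q) for a random secret s; the dimensions, modulus and noise level are toy values chosen for readability, not parameters of any real cryptosystem.

import numpy as np

rng = np.random.default_rng(0)
n, q, m, sigma = 8, 97, 16, 1.0   # toy dimension, modulus, sample count, noise width

s = rng.integers(0, q, size=n)                          # secret vector in Z_q^n
A = rng.integers(0, q, size=(m, n))                     # public random matrix
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)   # small Gaussian errors
b = (A @ s + e) % q                                     # noisy inner products

# Recovering s from (A, b) is the (search) LWE problem; without the noise e
# it would reduce to simple Gaussian elimination over Z_q.
print(A[:2], b[:2])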
[1] R. Zamir, Lattice Coding for Signals and Networks, Cambridge, 2014.
[2] J.J. Boutros, N. di Pietro, and Y.C. Huang, “Spectral thinning in GLD lattices”, Information Theory and Applications Workshop (ITA), La Jolla, pp. 1-9, Feb. 2015. Visit http://www.ita.ucsd.edu/workshop/15/files/paper/paper_31.pdf
[3] M. Ajtai, “Generating Hard Instances of Lattice Problems,” Proc. of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pp. 99-108. doi:10.1145/237814.237838, 1996.
[4] V. Lyubashevsky, C. Peikert, and O. Regev, “A Toolkit for Ring-LWE Cryptography,” in EUROCRYPT, pp. 35-54, 2013.
-
-
-
South Asia's Cyber Insecurity: A Tale of Impending Doom
In the digital era, India's national security has become inextricably linked with its cyber security. However, although India has digitalized its governance, economy and daily life on an industrial scale, it has never paid adequate attention to adopting, side-by-side, a programme to securitize its digitalization plan. As a result, not only India's cyberspace but also its physical spheres have been exposed to and face constant attacks from its rivals and enemies. India is the single biggest supplier of cyber professionals around the world and successfully leads cyberspace across the globe. Yet India's army of cyber professionals falters when it comes to detecting the simplest of cyber crimes, which often leads to devastating consequences. Cyber security means ensuring the secure use of computers, smart phones and computer networks, including the internet, against security threats – physical or virtual (emphasis mine). There are two types of threat associated with cyber security. The first is the threat to digital equipment from unauthorized access, with the intention to change, destroy or misuse the information available on that system, which would wreak havoc on the intended service linked with that system. The second threat, which is still to be analysed by digital practitioners, analysts, security agencies and academics, is ‘the authorized use of cyber tools to aid, organize and orchestrate terror attacks and conduct or facilitate devastating physical damage to life, property and national assets (interpretation mine)’. In India, nearly all efforts, public and private, to prevent cyber threats fall within the description of the first type of threat. No endeavour is spared to either monitor or prevent the second type of threat. All cyber security related debates, government-commissioned reports, private initiatives and public discourses are confined to how to secure the information, data and secrets stored in computer networks and the seamless functioning of software-enabled services. In stark contrast, most of the damage suffered by India during the past decade has been caused by the second type of security threat, where enterprising terrorists and criminals have been exploiting the cyber world to inflict severe damage on personal as well as national security. Terrorists and criminals have been using telephone, email, the internet, instant messaging, VoIP and other methods of communication to execute terror plots and crimes. Therefore, it is essential for the security agencies to keep pace with the plotters. Western intelligence agencies like the British MI6 and the American Central Intelligence Agency (CIA) have been using eavesdropping technologies to chase, arrest, and pre-empt ominous attack plots as well as imminent crimes. India's Military Intelligence uses eavesdropping to intercept the instructions of rival armies to their corps commanders and cadres, while its Intelligence Bureau employs the method on a limited scale to unravel domestic disturbances and violence. Eavesdropping is the unauthorized real-time interception of a private communication, such as a phone call, email, instant message, internet search, videoconference or fax transmission. Owing to its robust cyber security programme and a pro-active interception policy, the United States has successfully prevented 25 terrorist attacks since 11 September 2001. In the contemporary era, potential recruits and cadres for various terror organizations and crime syndicates are found on social media.
Therefore, terrorist organizations are not looking for recruits on the campuses of orthodox madrassas or in poverty-stricken ghettos but on social sites, to enlist the participation of highly educated radicals with an ability to crack government security. Leading terrorist organizations and various terrorist leaders are openly visible in cyberspace, flaunting their idiosyncratic agendas to lure potential foot soldiers. For example, three Muslim youths from an upmarket Mumbai suburb have recently not only joined the Islamic State of Iraq and Syria (ISIS) through social media but also travelled to Mosul in Iraq and received training to become suicide bombers. In India, ISIS has not been soliciting cadres through mosque-madrassa sermons but through a self-motivated, cyber-savvy information technology professional from Bengaluru. The cyber world has provided an extensive and expansive platform for terror recruiters and recruits to meet, give and receive indoctrination, and orchestrate high-volume terror attacks. All nineteen 9/11 attackers, all four 7/7 London suicide bombers, and even the kingpin of the 26/11 Mumbai attacks, David Headley, were recruited by their respective terrorist organizations through the cyber world. Therefore, it is essential to install a proper mechanism to monitor and restrain users so that they do not fall into the trap of terror organizations. Indian security agencies have been functioning in a reactionary fashion in which prevention receives the least priority. The country's British-era security system is so archaic that the police officers on duty in the street, who form the first line of citizens' defence, do not understand what cyber crime is. Security is a state subject and, due to lack of evolution, provincial police departments have been using obsolete methodology to deal with modern-day crime. Because of the inertia of security agencies, citizens do not trust the state police. Added to the malaise is the fact that state police are neither capable nor trained to deal with cyber-related crimes. At the federal level, India has yet to develop a database of criminals, home-grown militants and international terrorists with recognizable information such as facial images, fingerprints, voice samples and biographical descriptions. In the absence of such a data bank, the system installed at India's entry-exit ports to screen individuals entering or departing the country is effectively worthless. It is time to correct these anomalies and the absence of a robust cyber security system in India. While the Modi government is spreading the digital web throughout the country as part of its ‘Digital India’ campaign, what India lacks is a definite monitoring mechanism. Misguided exuberance on the part of the government to digitalize India would prove counterproductive sooner rather than later. Ibn Khaldun, the all-time great Arab historian, explained in his seminal ‘Muqaddimah’ how simple court intrigues devastated and defeated mighty emperors who were otherwise invincible and matchless in open battle. On cyber security issues India is following the maxim of Ibn Khaldun. As per an estimate of the National Security Council, China, with its 1.25 lakh cyber security experts, is a potential challenge to India's cyber security. In humiliating contrast, India has a mere 556 cyber security experts. At stake are India's US$ 2.1 trillion GDP, power grids, telecommunication lines, air traffic control, the banking system and all computer-dependent enterprises.
India's and China's cyber security preparedness is a striking study in contrast. India is a reputed information technology-enabled nation while China struggles with its language handicap. India, with a massive 243 million internet users, has digitized its governance, economy and daily life on an industrial scale without paying adequate attention to securitizing the digitization plan. In the digital era, national security is inextricably linked with cyber security, but despite being the single biggest supplier of cyber workforce across the world India has failed to secure its bandwidth and falters in detecting the simplest of cyber crimes, which often leads to devastating consequences. India's Cyber Naiveté: India's inertia in inducting cyber security as an essential element of national security and growth is tremblingly palpable. Cyber security is little debated, sporadically written about, and rumoured at best in India. Because of this apathy, and despite India's grand stature in the cyber world, India is vulnerable to the cyber snarls of China and other countries. With its archaic governmental architecture, India is still in expansion mode with little time spared for digital security. One of the significant reasons for India's inertia is its lack of understanding and appreciation of the gravity of cyber security. Added to that, despite being a proclaimed land of young people, India's age-old lamentation about its youth is one of the vital stumbling blocks to adopting a strong cyber security policy. For example, the Narendra Modi government-appointed expert group ‘to prepare a roadmap on cyber security’ is comprised of aged professors and busy bureaucrats who cannot keep pace with the speed, agility and thought of modern-day hackers. The cyber security of China and all other countries, on the other hand, rests in the hands of their young cyber experts. Prime Minister Modi might be a cyber wizard but the country's political apathy to cyber security is blatant. While the Chinese President and Prime Minister have been involving themselves directly with the cyber security initiative, no political figure in India has ever shown the slightest interest in securing India's cyberspace. The Ground Zero Summit, which is considered the Mecca of India's cyber security debate and an earnest endeavour of cyber security professionals, failed to get a single political figure to deliberate on the issue. The lone reluctant political participant, former army-general-turned-politician Gen. V.K. Singh, addressed the gathering through video conferencing. Prime Minister Modi talks about Digital India, yet the next wave of internet growth will have to come from vernacular users who would be far more vulnerable to cyber-related deception than their city-based English-speaking counterparts. The apathy of aging politicians and bureaucrats stems from the fact that this new field is dominated by twentysomethings with cans of Diet Coke and a constant chat history with their girlfriends. India is denying its young cyber security professionals their rightful prestige at its own peril. China, the US, Israel and even war-torn Syria have long cherished the abilities of their young cyber professionals.
India's vulnerability to Chinese cyber attacks can be judged from the fact that a colonel-rank officer from the People's Liberation Army informed Swarajya contributing editor Ramanand Sengupta that India's cyber infrastructure protecting its stock markets, power supply, communications, traffic lights, and train and airport communications is so ‘primitive’ that it can be overwhelmed by the Chinese in less than six hours. So if there is a second India-China war, India's adversary does not need to send troops to the trenches of the Himalayas but only to ask its cyber warriors to cripple India's security infrastructure from their cool, air-conditioned computer rooms. India is nowhere in the cyber war that has engulfed the globe. India's response to such a critical situation is a timid National Cyber Security Policy that the government circulated in 2013. There is no national overhaul of cyber security, and the Indian Computer Emergency Response Team, the statutory body to look after cyber attacks, has little critical strength or capability. Its endeavour to recruit young talent and engage them meaningfully has yet to take off. After the 2013 National Security Council note that exposed India's cyber security unpreparedness, the government decided to augment infrastructure and hire more professionals. However, what is required is a strategic vision to ensure stealth in India's cyber security and a political conviction to plug strategic vulnerabilities. The National Technical Research Organization has regularly been alerting successive governments about the danger of Chinese cyber attacks. India cannot afford to be passive and unresponsive, because if it does not act now, by the time a sophisticated cyber-attack happens it will probably be too late to defend against it effectively. India's immediate requirement is to understand the impending cyber security threat from China, build better network filters and early warning devices, and add new firewalls around the computers that run the Indian economy and regulate vital civil and military installations. But in any battle the attackers always enjoy every advantage, from choosing the battlefield to deciding the time of war to the choice of instrumentalities. Poor defenders end up defending against an attack that they cannot even imagine.
-
-
-
Role of Training and Supporting Activities in Moderating the Relationship Between R & D and Introducing KM Systems Within SMEs
This paper presents an abstract of the final phase of an on-going research project aiming at investigating the antecedents and consequences of research and innovation within Lebanese small and medium-sized enterprises (SMEs). It examines the role of training personnel and introducing supporting activities in moderating the relationship between R&D and introducing knowledge management systems within Lebanese SMEs. Innovation in Lebanon still suffers from funding shortages, a shortage of IT personnel training and a lack of the ability to adequately use existing knowledge. Ashrafi and Murtaza suggest that “Large organizations have enough resources to adopt ICT while on the other hand SMEs have limited financial and human resources to adopt ICT” (Ashrafi and Murtaza, 2008, P. 126). Even though the Lebanese government is trying to create a digital economy, Lebanon ranked 94th out of 144 countries on the Network Readiness Index in 2012 and “In the Arab world, Lebanon ranked in 10th position, right behind Morocco (89th worldwide), but right ahead of Algeria (131st Worldwide)” (BankMed, 2014, P.19). What is imperative to note here, however, is that “SMEs have been recognized as an important source of innovative product and process” (HanGyeol et al 2015, P.319). It is widely believed that “Research and development (R&D) intensity is crucial for increasing the innovative capacity of small to medium-sized enterprises (SMEs)” (Nunes et al 2010, P.292). Most interestingly, “the Labour Market Survey (2001) showed a clear relationship between business failure and a lack of planning or training by SMEs” (Javawarna et al 2007, P.321). Based on the existing review of literature, it is found that business expansion obliges SMEs to adopt new and original information technology solutions. Simultaneously, it is suggested that “Lack of training and skills of IT in organizations will result in a limited use of IT and lack of success in reaping benefits from computer hardware and software in organizations” (Ghobakhloo et al 2012, P.44). Indeed, “Information technologies (IT) have become one of the most important infrastructural elements for SMEs” (Uwizeyemungu and Raymond, 2011, P.141). As a result, it is generally believed that information technology has an imperative role to play in gaining innovation and competitiveness for SMEs. What is more, it has always been recognized that investing in technology is necessary but insufficient by itself. An imperative need exists for businesses of all sizes to protect their customers by protecting themselves from cyber attack. This is to be accomplished by changing attitudes towards cyber security and developing a cyber culture. Valli and his associates note that “There is little literature available about the ability of SMEs to deploy, use and monitor cyber security countermeasures” (Valli et al 2014, P. 71). Borum and his colleagues believe that “Industries and commercial sectors must collaborate with the government to share and disseminate information, strengthen cyber intelligence capabilities and prevent future cyber incidents” (Borum et al 2015, P.329). Uwizeyemungu and Raymond suggest that “IT adoption and assimilation in these firms should be the product of an alignment between the strategic orientation and competencies that characterize their organizational context on one hand, and specific elements in their technological and environmental contexts on the other hand” (Uwizeyemungu and Raymond 2011, P.153).
A study by Ghobakhloo and his colleagues “suggested that through the passing of cyber laws by governments to regulate and secure online transaction activities, and also by providing appropriate anti-virus and/or firewall/security protocols for SMEs by vendors and service providers to reduce or prevent the attacks of hackers, viruses and spyware, the perceived risk of IT adoption by these businesses, should be alleviated” (Ghobakhloo et al 2012, P.57). To lead the way to successful innovation within SMEs, this study will be a significant effort in promoting the key sustainability issues affecting innovation within SMEs. The best way to start is by understanding innovation within Lebanese SMEs. A study such as the one conducted here is recommended by experts in this area. Armbruster and his associates noted that “There is still plenty of research to do before organizational innovation surveys achieve the degree of homogeneity and standardization that advanced R&D and technical innovation surveys possess” (Armbruster et al 2008, P.656). The purpose of this investigation is to determine the relative importance of introducing supporting activities, training personnel and R&D on the variation in introducing knowledge management systems within Lebanese SMEs. To this end, the aim of this project is to investigate the adoption of existing technologies for new applications in a concrete SME business case, in addition to what motivates innovation within Lebanese SMEs and the challenges and barriers facing SMEs in adopting innovation. The population of the study consists of all SMEs in Lebanon. Most SMEs are family businesses and, as is to be expected, “Family involvement in a firm has an impact on many aspects of organizational behaviour” (Cromile and O'Sullivan, 1999, p. 77). Morris and his colleagues argue that “family firms violate a tenet of contemporary models of organizations, namely, the separation of ownership from management” (Morris et al, 1996, p. 68). This leads to many complications, including succession problems, role conflict and role ambiguity, that may represent major barriers to adopting IT and innovation within SMEs. The sample for this study is relatively large and the instrument for collecting the primary data was a well-constructed questionnaire. Cronbach's alpha and factor analysis were used to establish the reliability and construct validity of the instrument. Findings of the study show that introducing new or significantly improved supporting activities, training personnel, having an employee who is fully in charge of the website and having an R&D department are the significant factors affecting the introduction of new or significantly improved knowledge management systems to better use or exchange information, knowledge and skills within Lebanese SMEs. Findings of this study are in line with previous findings. Schienstock and associates believe that firms have to develop their competence to learn and innovate by introducing new knowledge management practices and organizational restructuring. In fact, they criticized the traditional approach of the classical studies: in “the so-called linear model, traditional innovation policy focuses primarily on the creation of new scientific and technical knowledge, supposing some kind of automatic transformation of this new knowledge into new products” (Schienstock et al 2009, PP.49–50).
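As an aside on the reliability analysis mentioned above, Cronbach's alpha can be computed directly from the item responses; the short Python sketch below uses made-up questionnaire data purely for illustration.

import numpy as np

def cronbach_alpha(items):
    """items: respondents x questionnaire-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical Likert-scale answers from 5 respondents to 4 items
scores = [[4, 5, 4, 4],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 3))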
What is more, Molero and García believe that “the theory about factors affecting firms' innovation has still a long way to go because the analytical object is complex and difficult to set limits for” (Molero and García 2008, P.20). This project has implications for policy making, decision making and recommendations for further research.
References:
Armbruster, H. Bikfalvi, A. Kinkel, S. and Lay, G (2008) “Organizational innovation: The challenge of measuring non-technical innovation in large-scale surveys”, Technovation 28, 644–657. The full text is available at: www.sciencedirect.com
Ashrafi, R. and Murtaza, M. (2008), “Use and Impact of ICT on SMEs in Oman.” The Electronic Journal Information Systems Evaluation Volume 11 Issue 3, pp. 125–138. The full text is available at: www.ejise.com
BankMed (April 2014) “ANALYSIS OF LEBANON'S ICT SECTOR”, P.19. The full text is available at: http://www.bankmed.com.lb/LinkClick.aspx?fileticket=xGkIHVHVrM4%3D&portalid=0
Borum, R.; Felker, J.; Kern, S.; Dennesen, K; Feyes, T. (2015), “Strategic cyber intelligence”, Information & Computer Security, Vol. 23 Iss: 3, pp.317–332. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/10.1108/ICS-09-2014–0064
Cromile, S. and O'Sullivan, S. (1999), “Women as managers in family firms”, Women in Management Review, Vol. 14 No. 3, pp.76–88. The full text is available at: http://www.emeraldinsight.com/doi/abs/10.1108/09649429910269884
Ghobakhloo, M.; Hong, T.S.; Sabouri, M.S.; Zulkifli, N. (2012), “Strategies for successful information technology adoption in small and medium-sized enterprises”. Information 3, 36–67.
Hangyol, S. Yanghon, C. Dongphil, C and Chungwon, W. (2015), “Value capture mechanism: R&D productivity comparison of SMEs”, Management Decision, Vol. 53 Iss: 2, pp.318–337. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/abs/10.1108/MD-02-2014–0089
Javawarna, D., Macpherson, A. and Wilson, A. (2007), “Training commitment and performance in manufacturing SMEs: Incidence, intensity and approaches”, Journal of Small Business and Enterprise Development, Vol. 14 Iss: 2, pp.321–338. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/abs/10.1108/14626000710746736
Molero, J. and García, A. (2008), “Factors affecting innovation revisited”, WP05/08, PP:1–30. The full text is available at: https://www.ucm.es/data/cont/docs/430-2013-10-27-2008%20WP05-08.pdf
Morris, M. H. Williams, R. W. Nel, D. (1996), “Factors influencing family business succession”, International Entrepreneurial Behaviour & Research. Vol. 2 No. 3, pp.60–81. The full text is available at: http://www.emeraldinsight.com/doi/abs/10.1108/13552559610153261
Nunes, P.M. Serrasqueiro, Z. Mendes, L. Sequeira, T.N (2010), “Relationship between growth and R&D intensity in low-tech and high-tech Portuguese service SMEs”, Journal of Service Management, Vol. 21 Iss: 3, pp.291–320. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/pdfplus/10.1108/09564231011050779
Schienstock, G., Rantanen, E. and Tyni, P. (April 2009) “Organizational innovations and new management practices: Their diffusion and influence on firms' performance. Results from a Finnish firm survey”. IAREG Working Paper 1.2.d. PP. 1–64. The full text is available at: http://www.iareg.org/fileadmin/iareg/media/papers/WP_IAREG_1.2d.pdf
Uwizeyemungu, S. and Raymond, L. (2011), “Information Technology Adoption and Assimilation: Towards a Research Framework for Service Sector SMEs,” Journal of Service Science and Management, Vol. 4 No. 2, pp. 141-157, doi: 10.4236/jssm.2011.42018. The full text is available at: http://www.scirp.org/Journal/PaperInformation.aspx?PaperID5229
World Economic Forum (2015), Global Information Technology Report 2015. The full text is available at: http://www3.weforum.org/docs/WEF_Global_IT_Report_2015.pdf
Valli, C. Martinus, I. and Johnstone, M. (Aug 2, 2014), “Small to Medium Enterprise Cyber Security Awareness: an initial survey of Western Australian Business”, The 2014 International Conference on Security and Management, At Las Vegas, Nevada, PP:71-75. The full text is available at: https://www.researchgate.net/publication/264417744_Small_to_Medium_Enterprise_Cyber_Security_Awareness_an_initial_survey_of_Western_Australian_Business
-
-
-
A Survey on Sentiment Analysis and Visualization
Online Social Networks have become the medium for a plethora of applications such as targeted advertising and recommendation services, collaborative filtering, behavior modeling and prediction, analysis and identification of aggressive behavior, bullying and stalking, cultural trend monitoring, epidemic studies, crowd mood reading and tracking, revelation of terrorist networks, and even political deliberation. They mainly aim to promote human interaction on the Web, assist community creation, and facilitate the sharing of ideas, opinions and content. Social network analysis research has lately focused on major Online Social Networks like Facebook, Twitter and Digg [Chelmis and Prasanna, 2011]. However, research in Social Networks [Erétéo et al., 2008] has extracted underlying and often hidden social structures [Newman, 2010] from email communications [Tyler et al., 2003], structural link analysis of web blogs and personal home pages [Adamic and Adar, 2003] or, recently, explicit FOAF networks [Ding et al., 2005], structural link analysis of bookmarks, tags or resources in general [Mika, 2007], co-occurrence of names [Kautz et al., 1997] [Mika, 2007], co-authorship in scientific publication references [Wasserman and Faust, 1994], and co-appearance in movies or music productions [Yin et al., 2010]. Interactive visualization is employed by visual analytics in order to integrate users' knowledge and inference capability into numerical/algorithmic data analysis processes. It is an active research field with applications in many sectors, such as security, finance, and business. The growing popularity of visual analytics in recent years creates the need for a broad survey that reviews and assesses the recent developments in the field. This paper reviews the state of the art of the sentiment visualization field, which has become an active research area in recent years. We present a survey that reviews and assesses the recent visualization techniques and systems in the field, and classifies the recent approaches in visual analysis. The motivations for conducting this survey are twofold. First, we aim to review the most recent research trends and developments in sentiment visualization techniques and systems and provide a precise review of the field. Second, this survey aims to provide a critical assessment of the research, which can help enhance understanding of the field.
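To ground the survey topic, the sentiment scores that feed such visualizations are often produced by simple lexicon-based scoring; the following minimal Python sketch is purely illustrative, with a tiny hypothetical lexicon, and does not correspond to any specific system reviewed.

# Minimal lexicon-based sentiment scorer (illustrative only)
LEXICON = {"good": 1, "great": 2, "happy": 1, "bad": -1, "awful": -2, "sad": -1}

def sentiment_score(text):
    words = text.lower().split()
    return sum(LEXICON.get(w, 0) for w in words)

posts = ["Great service and happy users", "Awful delays, bad experience"]
for p in posts:
    print(p, "->", sentiment_score(p))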
-
-
-
Full-View Coverage Camera Sensor Network (CSN) for Offshore Oil Fields Surveillance
The United Arab Emirates (UAE) is the eighth largest oil producing country in the world. It has about 800 km of coastline. Beyond the coastline, its territorial waters and exclusive economic zone have very rich and extensive marine life and natural resources. Most of the oil and natural gas in the UAE is produced from offshore oil fields. Maritime oil exploration and transportation have increased steeply due to the expansion of world crude oil and natural gas production, and the trend of using larger and higher-speed container vessels. The probability of oil rig pollution, burning and explosion continues to rise. All these factors create a greater danger for vessels, oil operation safety and the maritime environment. Therefore, maritime security and environmental protection are of great interest, from both the academic and petroleum industry points of view. The continuous surveillance of offshore oil fields is essential to secure the production flow, avoid trespassing and prevent vandalism by intruders and pirates. With the emergence of new technologies such as the maritime wireless mesh network (MWMN) and the camera sensor network (CSN), maritime surveillance systems have been gradually improving due to the accuracy, reliability and efficiency of maritime data acquisition systems. However, in order to realize oil operation security, it is necessary to implement a dynamic system to monitor the maritime environment. The monitored objects include vessels, fishery boats, pollution, and navigational and sailing conditions. By the same token, legacy monitoring systems such as very-high frequency (VHF) communication, marine navigational radar, the vessel traffic service (VTS) and the automatic identification system (AIS) are still insufficient to satisfy the increasing demand for maritime surveillance. The objective of the paper is to provide a full-view coverage CSN for a triangular grid-based deployment and to reduce the total transmission power of the image to cope with the limited power available in the CSN. The rough and random movements of the sea surface can lead to a time-varying uncovered area by displacing the CSN from its initial location. Thus, it is important to investigate and analyze the effects of the sea waves on the CSN to provide full-view coverage in such complex environments. The main challenges in the deployment of the CSN are the dynamic maritime environment and the time-varying full-view coverage caused by the sea waves. Therefore, quasi-mobile platforms such as buoys are envisaged to hold the camera sensor nodes. The buoys will be anchored to the sea floor to limit their movement due to the sea waves. In addition, a cooperative transmission method has been proposed to reduce the total transmission power of the image in the CSN. A CSN is formed by autonomous, self-organized ad-hoc camera sensor nodes that are equipped with wireless communication devices, a processing unit and a power supply. The design, implementation and deployment of a CSN for maritime surveillance pose new challenges different from those that exist on land, as the maritime environment hinders the development of such a network.
The main differences are summarized as follows:
• Dynamic aspect of the maritime environment which requires rigorous levels of device protection.
• Deployment characteristic of a CSN in maritime environment which is highly affected by wind direction and speed, sea wave, and tide.
• Requirement of flotation and anchoring platforms for a CSN and the possible vandalism from intruders and pirates.
• Coverage problem of a CSN due to the random and rough sea movement.
• High energy consumption if battery-power based cameras are used continuously.
• Communication signals are highly attenuated by the constant sea movement.
In this context, CSNs with ubiquitous and substantive camera sensor nodes can be utilized to monitor offshore oil fields to secure the production flow and avoid trespassing and vandalism by intruders and pirates. However, camera sensor nodes can generate various views of the same target if they are captured at different viewpoints; if the image is taken near or at a frontal viewpoint, the target is more likely to be recognized by the recognition system. It is fundamental to understand how the coverage of a given camera depends on different network parameters in order to better design numerous application scenarios. The coverage of a particular CSN represents the quality of surveillance it provides. As the angle between the target's facing direction and the camera's viewing direction increases, the detection rate drops severely. Consequently, the camera's viewing direction has a considerable effect on the quality of surveillance in a CSN. Recently, a novel concept called full-view coverage has been introduced to characterize the intrinsic property of camera sensor nodes and assess the coverage in CSNs. A target is full-view covered if its facing direction is always within the scope of a camera, regardless of the target's actual facing direction. Simply put, full-view coverage addresses the challenge of capturing the target's face image. Consequently, designing a CSN with full-view coverage is of major importance, as the network provides not only the detection of a target but also its recognition. In many network configurations, camera sensor nodes are not mobile and remain stationary after the initial deployment. In a stationary CSN, once the deployment characteristic and sensing model for the CSN are defined, the coverage can be deduced and remains unchanged over time. In order to address the hostile maritime environment, there has been a strong desire to deploy sensors mounted on quasi-mobile platforms such as buoys. Such quasi-mobile CSNs are extremely beneficial for offshore oil field surveillance, where buoys move with the sea waves. Hence, the coverage of a quasi-mobile CSN depends not only on the initial network deployment, but also on the mobility pattern of the CSN. Nevertheless, full-view coverage under a quasi-mobile CSN in a maritime network has not been investigated. This problem is pivotal for the network design parameters and application scenarios of CSNs where conventional deployment characteristics such as air-drop fail or are not appropriate in a maritime environment. Since a priori knowledge of the terrain is available, a grid-based deployment can be utilized for the given terrain. The endeavour to design a practical mobility pattern for CSNs leads us to model the cable attached to the buoy as a spring. In this practical mobility pattern, buoys start from an initial coordinate assignment, then oscillate based on the spring force, sea wave, and wind direction and speed, and ultimately converge to a consistent solution. Specifically, this mobility pattern is based on two stages. The first stage is the effect of the sea wave and of wind direction and speed, which move a buoy. The second stage is the spring reaction, based on the previous effects. This design concept is then followed and extended to develop a mobility pattern for CSNs in the maritime environment.
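A rough Python sketch of the two-stage mobility pattern described above is given below: each buoy is displaced by a wave/wind term and pulled back towards its anchor by a spring-like cable force. The constants and the sinusoidal wave term are arbitrary placeholders, not the sea wave model used in the paper.

import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 43.3]])  # anchor points (m), hypothetical
pos = anchors.copy()                                          # initial buoy positions
k, dt, steps = 0.8, 1.0, 200                                  # spring constant, time step, iterations

def wave_displacement(t):
    # Stage 1: wave plus wind push (toy sinusoid with noise, not a realistic sea model)
    return np.array([2.0 * np.sin(0.3 * t), 1.0 * np.cos(0.2 * t)]) + rng.normal(0, 0.3, 2)

for t in range(steps):
    for i in range(len(pos)):
        drift = wave_displacement(t)
        # Stage 2: spring reaction of the cable pulls the buoy back toward its anchor
        spring = -k * (pos[i] - anchors[i])
        pos[i] = pos[i] + dt * (drift * 0.05 + spring * 0.1)

print(np.round(pos, 2))  # positions oscillate but stay close to the anchors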
A CSN is considered that consists of a small number of buoys whose locations are initially known; their subsequent locations are derived using a spring relaxation technique. With this technique, the coverage issues arising in a CSN are studied and a cooperative transmission method is designed to reduce the total transmission power in the CSN. One primary problem is how to design a realistic sea wave model for a given deployed CSN to achieve full-view coverage. Compared with the traditional sea wave model, which assumes a sine wave model for simplicity of analysis, two elements increase the complexity of the problem in a realistic sea wave model. The first is the force that acts on the surface of the sea, which is supposed to be the main driving force for the creation of waves in deep water. The second is the force from the interaction of the sea surface with the ground, which is the main contributing force near the shoreline; however, this type of force also becomes dominant in deep water when seaquakes occur. A realistic sea wave model should involve extruding a two-dimensional sine wave model into a three-dimensional sea wave model. However, there should be some order of variation along the wave propagation direction for waves of finite width. In conventional wireless sensor networks (WSNs), scalar phenomena can be traced using thermal or acoustic sensor nodes. In camera sensor networks (CSNs), images and videos can significantly enrich the information retrieved from the monitored environment, and hence provide more practicality and efficiency to WSNs. Recently, there has been enormous development of applications in surveillance, environment monitoring and biomedicine for CSNs, which has brought a new spectrum to the coverage problem. It is indispensable to understand how the coverage of a camera depends on various network parameters to better design numerous application scenarios. In many network configurations, cameras are not mobile and remain stationary after the initial deployment. However, unlike a stationary CSN, the maritime environment poses challenges for the deployment characteristic and mobility pattern of CSNs. In stationary CSNs, when the deployment characteristic and sensing model are defined, the coverage can be deduced and remains unchanged over time. In the maritime environment, camera sensors are mounted on quasi-mobile platforms such as buoys. This paper aims to provide a full-view coverage CSN for maritime surveillance using cameras mounted on buoys. It is important to provide full-view coverage because, in full-view coverage, the target's facing direction is taken into account to judge whether the target is guaranteed to be captured. An image shot at the frontal viewpoint of a given target considerably increases the possibility of detecting and recognizing the target. Full-view coverage has been achieved using an equilateral triangle grid-based deployment for the CSN. To accurately emulate the maritime environment, a mobility pattern has been developed for the buoy, which is attached to a cable that is anchored to the sea floor. The buoy movement follows the sea wave that is created by the wind and is limited by the cable. The average percentage of full-view coverage has been evaluated based on different parameters such as the equilateral triangle grid length, the sensing radius of the camera, the wind speed and the wave height.
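A minimal Python sketch of a full-view coverage test for a single target follows, using the usual angular-gap criterion: after sorting the directions from the target to the cameras that can see it, the target is full-view covered if no angular gap exceeds twice the effective angle theta. The geometry and parameters below are hypothetical and the field-of-view test is simplified.

import math

def covers(cam, target, radius, fov):
    """Camera sees the target if it is within range and inside the field of view."""
    dx, dy = target[0] - cam["x"], target[1] - cam["y"]
    dist = math.hypot(dx, dy)
    if dist > radius:
        return False
    ang_to_target = math.atan2(dy, dx)
    diff = abs((ang_to_target - cam["orient"] + math.pi) % (2 * math.pi) - math.pi)
    return diff <= fov / 2

def full_view_covered(cams, target, radius, fov, theta):
    """Angular-gap test: every facing direction must be within theta of some camera."""
    dirs = sorted(math.atan2(c["y"] - target[1], c["x"] - target[0])
                  for c in cams if covers(c, target, radius, fov))
    if not dirs:
        return False
    gaps = [dirs[i + 1] - dirs[i] for i in range(len(dirs) - 1)]
    gaps.append(2 * math.pi - (dirs[-1] - dirs[0]))   # wrap-around gap
    return max(gaps) <= 2 * theta

# Hypothetical triangular deployment of three cameras around a target at the origin
cams = [{"x": 10 * math.cos(a), "y": 10 * math.sin(a), "orient": a + math.pi}
        for a in (0, 2 * math.pi / 3, 4 * math.pi / 3)]
print(full_view_covered(cams, (0, 0), radius=15, fov=math.radians(90), theta=math.radians(65)))

With three cameras spaced 120 degrees apart around the target, the largest angular gap is 120 degrees, so in this simplified test the target is full-view covered whenever the effective angle is at least 60 degrees.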
Furthermore, a method to improve target detection and recognition in the presence of poor link quality has been proposed, using cooperative transmission with low power consumption. In some parameter scenarios, the cooperative transmission method has achieved around a 70% improvement in the average percentage of full-view coverage of a given target and a total reduction of around 13% in the total transmission power PTotal(Q).
-
-
-
Classification of Bisyllabic Lexical Stress Patterns Using Deep Neural Networks
Authors: Mostafa Shahin and Beena Ahmed
Background and Objectives: As English is a stress-timed language, lexical stress plays an important role in the perception and processing of speech by native speakers. Incorrect stress placement can reduce the intelligibility of the speaker and their ability to communicate effectively. The accurate identification of lexical stress patterns is thus a key assessment tool for the speaker's pronunciation in applications such as second language (L2) learning, language proficiency testing and speech therapy. With the increasing use of Computer-Aided Language Learning (CALL) and Computer-Aided Speech and Language Therapy (CASLT) tools, the automatic assessment of lexical stress has become an important component of measuring the quality of the speaker's pronunciation. In this work we propose a Deep Neural Network (DNN) classifier to discriminate between the unequal lexical stress patterns in English words, strong-weak (SW) and weak-strong (WS). The features used in training the deep neural network are derived from the duration, pitch and intensity of each of the two consecutive syllables, along with a set of energies in different frequency bands. The robustness of our proposed lexical stress detector has been validated by testing it on the standard TIMIT dataset collected from adult male and female speakers distributed over 8 different dialect regions. Method: Our lexical stress classifier is applied to the speech signal along with the prompted word. Figure 1 shows a block diagram of the overall system. The speech signal is first force-aligned with the predetermined phoneme sequence of the word to obtain the time boundaries of each phoneme. The alignment is performed using a Hidden Markov Model (HMM) Viterbi decoder along with a set of HMM acoustic models trained on the same corpus, to reduce the error caused by inaccurate phone-level segmentation. A set of features is then extracted from each syllable and the features of each pair of consecutive syllables are combined by concatenating them directly into one wide feature vector.
Lexical stress is identified by the variation in pitch, energy and duration produced between different syllables in a multi-syllabic word. The stressed syllable is characterized by increased energy and pitch as well as a longer duration compared to the other syllables within the same word. Therefore we extracted seven features f1–f7 related to these characteristics, as listed in Table 1. The energy-based features (f1, f2, f3) were extracted after applying the non-linear Teager energy operator (TEO) to the speech signal to obtain a better estimate of the speech signal energy and reduce the noise effect. These seven features are commonly used in the detection of the stressed syllable in a word. As the speech signal energy is distributed over different frequency bands, we also computed the energy in the Mel-scale frequency bands in each frame of the syllable nucleus. The speech signal was divided into 10 msec non-overlapping frames and the energy, pitch and frequency band energies were calculated for each frame.
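The Teager energy operator used for the energy features has a simple discrete form, Psi[x(n)] = x(n)^2 - x(n-1)*x(n+1); the short Python sketch below computes frame-level TEO energy on 10 ms frames of a synthetic signal (the sampling rate and test tone are illustrative assumptions).

import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def frame_teo_energy(signal, fs, frame_ms=10):
    """Mean TEO energy per non-overlapping frame (10 ms frames by default)."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = np.reshape(signal[:n_frames * frame_len], (n_frames, frame_len))
    return np.array([teager_energy(f).mean() for f in frames])

fs = 16000
t = np.arange(fs) / fs
sig = 0.5 * np.sin(2 * np.pi * 200 * t)   # 1 s synthetic vowel-like tone
print(frame_teo_energy(sig, fs)[:5])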
As seen in Figure 1, to input the raw extracted features directly to the DNN, we concatenate the extracted features into one wide feature vector. Each syllable has 7 scalar values f1–f7 and 27*n Mel-coefficients where n is the number of frames in each syllable's vowel.
To handle variable vowel lengths, we limit the number of input frames provided to the DNN to a maximum of N frames for each syllable. This provides the DNN with a fixed-length Mel-energy input vector and allows the DNN to use information about the distribution of the Mel-energy bands over the vowel. If the vowel length (n) is greater than N frames, only the middle N frames are used. If the length of the vowel (n) is smaller than N frames, the input frames are padded to N frames. The final size of the input vector to the DNN is 2*(7+27*N) for a pair of consecutive syllables, with N tuned empirically.
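A small NumPy sketch of this fixed-length input assembly is shown below: the seven scalar features and a per-frame Mel-energy matrix for each syllable are truncated to the middle N frames or zero-padded, then the two syllables' vectors are concatenated. The value of N and the zero padding are assumptions made for illustration.

import numpy as np

N_FRAMES, N_BANDS = 20, 27     # assumed maximum frames per vowel and Mel bands

def fix_length(mel, n_frames=N_FRAMES):
    """Truncate to the middle n_frames or zero-pad a (frames x bands) matrix."""
    n = mel.shape[0]
    if n >= n_frames:
        start = (n - n_frames) // 2
        return mel[start:start + n_frames]
    pad = np.zeros((n_frames - n, mel.shape[1]))
    return np.vstack([mel, pad])

def syllable_vector(scalars, mel):
    """scalars: the 7 features f1-f7; mel: per-frame Mel-band energies."""
    return np.concatenate([scalars, fix_length(mel).ravel()])

def dnn_input(syl1, syl2):
    """Concatenate two consecutive syllables into one 2*(7+27*N) vector."""
    return np.concatenate([syllable_vector(*syl1), syllable_vector(*syl2)])

s1 = (np.random.rand(7), np.random.rand(14, N_BANDS))   # short vowel (zero-padded)
s2 = (np.random.rand(7), np.random.rand(33, N_BANDS))   # long vowel (truncated)
print(dnn_input(s1, s2).shape)                           # (1094,) when N = 20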
The DNN is trained using the mini-batch stochastic gradient descent method (MSGD) with an adaptive learning rate. The learning rate starts with an initial value (typically 0.1) and after each epoch the reduction in the error on the validation data set is computed. If this reduction is greater than zero (i.e. the error decreases) the training continues with the same learning rate.
If the error continues to increase for 10 consecutive epochs, the learning rate is halved and the parameters of the classifier are returned to those that achieved the minimum error. Training is terminated when the learning rate reaches its minimum value (typically 0.0001) or after 200 epochs, whichever comes first. The performance of the DNN is then computed using a separate testing set. Experiments and Results: We extracted raw features from consecutive syllables belonging to the same word from the TIMIT speech corpus. With the TIMIT corpus, we achieved a minimum error rate of 12.6% using a DNN classifier with 6 hidden layers and 100 hidden units per layer. Due to the unavailability of sufficient male and female data, we were unable to build a separate model for each gender. In Fig. 2, we present the error rate for each gender using a model trained on both male and female data. The results show that the classification of the SW pattern is better for male speakers than for female speakers, while the WS error rate is lower for female speakers. However, the overall misclassification rate for both male and female speakers is almost the same.
To study the influence of dialect on the algorithm, we compared the error rate when testing each dialect using a model trained with the training data of all dialects and when the model was trained with training data from all dialects except the tested one, as shown in Fig. 3. As seen, the error rate of most of the dialects remains unchanged except for DR1, where the error rate increased significantly from 4.8% to 8%. This can be explained by the small number of test samples for this dialect (only 5% of the test samples). DR4 also shows a considerable increase in the error rate. Although the smallest amount of training samples was from the DR1 (New England) dialect, it produced the lowest error rate among the dialects. Further work is needed to explain this behavior. Conclusion: In this work we present a DNN classifier to detect bisyllabic lexical stress patterns in multi-syllabic English words. The DNN classifier is trained using a set of features extracted from pairs of consecutive syllables related to pitch, intensity and duration, along with energies in different frequency bands. The feature set of each pair of consecutive syllables is combined by concatenating the raw features into one wide vector. When applied to the standard TIMIT adult speech, the algorithm achieved a classification accuracy of 87.4%. The system performance shows high stability over different dialects and genders.
-
-
-
Mobile Sensing of Human Sunlight Exposure in Cities
Authors: Ahmad Al Shami, Weisi Guo and Yimin Wang
I. Abstract: Despite recent advances in sensor and mobile technology, there is still no accurate, scalable, and non-intrusive way of knowing how much sunlight we are exposed to. For the first time, we devise a mobile phone software application (SUN BATH) that utilizes a variety of on-board sensors and data sets to accurately predict each person's sunlight exposure. The algorithm takes into account the person's location, the local weather, the sun's position, and shadowing from buildings. It achieves this by using the mobile user's location and other sensors to determine whether the user is indoors or outdoors, and uses building data to calculate shadow effects and weather data to calculate diffuse light contributions. This will ultimately allow the user to be better informed about sunlight exposure and compare it with daily recommended levels to encourage positive behaviour change. In order to show the value added by the application, SUN BATH is distributed to a sample of the student population for benchmarking and user experience trials. The latest stable version of the application suggests a scalable and affordable alternative to survey or physical sensing methods. II. Introduction: In this particular proposal, we examine how to live healthily in cities using a data-driven mobile-sensing approach. Cities are partly defined by a high building concentration and a human lifestyle that is predominantly spent indoors or in the shadow of buildings. Some cities in particular also suffer from heavy pollution effects that significantly reduce the level of direct solar radiation. As a result, one area of concern is urban dwellers' lack of exposure to the ultra-violet (UV) band of sunlight and the wide range of associated health problems. The large-scale and chronic nature of these health problems can lead to a time bomb in the National Health Service and cause irreversible future damage to the economy. This article proposes using the ray-tracing SORAM model by Erdelyi et al. as an innovative and flexible technique for modelling and estimating the amount of solar irradiation that can be collected at a given time and location. The SORAM model has already been benchmarked against real measurement data; hence, our work will benefit from this by taking the calculated ray-tracing information as a primary filter. The aim is to devise an affordable and accurate way of continuously estimating each person's UV exposure. Primarily, this is achieved by developing an Android smartphone application that uses the SORAM advanced modelling techniques to estimate the level of UV exposure each person is subjected to at any given time and location. The research novelty is that the proposed solution does not require additional purpose-built hardware such as a photovoltaic sensor, but instead utilizes a combination of accurate wireless localization and weather-/terrain-informed sunlight propagation mapping. The challenges addressed include how to accurately locate a human and how to model the propagation of sunlight in complex urban environments. The latest stable version of the application suggests a novel and affordable alternative to traditional or physical methods for calculating the amount of sunshine we are exposed to. III. System Overview: We implemented and evaluated the SUN BATH application on the Android platform using different mobile phone models such as the Samsung S5, Asus Zen5 and an Archos tablet.
The application is developed using Android Studio as the IDE for Android application development. The application allows the user to create a profile using a user name and some information such as date of birth, height, weight, skin colour, country of origin, and level of income, to be used later for detailed reporting on the amount of sun exposure for different groups and ethnicities. SUN BATH relies only on lightweight “sensors to server” modelling, which allows continuous low-energy and low-cost tracking of the user's location and state transitions. In particular, we will present the process to show that we were able to use SORAM within the smartphone environment to accurately infer the amount of sunshine a user is exposed to, based on the accuracy of the GPS and other location modules of Android mobile phones. To meet stringent design requirements, SUN BATH utilizes a series of lightweight sensor-to-server exchanges for fault-tolerant location detection. SUN BATH primarily makes use of three types of location-aware detectors: WiFi, cellular-network, and GPS. These three wireless location detectors are used in conjunction to improve resolution and resilience. WiFi hub SSID identifiers are used to locate the hub in known open and commercial databases up to an accuracy of a few metres. In the absence of WiFi, a combination of cell tower location area and assisted GPS is used to obtain an accuracy of 10–15 m in urban areas with shadow effects. The WiFi detector uses the distributed IP address to capture the source location and determine the region the user is in. The cellular-network detector detects the source and attenuation of signals caused by objects in their path (e.g., trees, buildings); it normally helps to indicate the movement of the user as the mobile signal is handed over from one cell to another. The application utilizes the GPS sensor to pinpoint the exact coordinates of the user's location, i.e. latitude and longitude. The system clock is also used to assist the detection of the local time. The app caches these parameters and sends them to a remote server whenever there is an Internet connection. The server hosts the SORAM calculation algorithm, which generates a live estimate of the amount of sun exposure the user is experiencing. The results are then passed back to the application through an Open Database Connectivity (ODBC) middleware service and permanently stored in a secure database management system (DBMS). IV. SORAM Ray-tracing Methodology: A person positioned in an outdoor environment is surrounded by solar radiation, which consists of direct and diffuse rays. Direct and diffuse radiation data on a horizontal surface are usually collected at various locations and weather stations around the world. The raw datasets collected can be used to estimate the amount of global radiation at any point on earth for a given slope and azimuth. Due to the cost and scarcity of live data, the SORAM algorithm embeds and makes use of the Reindl model to estimate the direct and diffuse irradiation from hourly horizontal global radiation data. In addition, to lighten computation and avoid calculations for the nighttime hours, SORAM determines the sunrise time for each day of the year; solar radiation is then calculated from that point until sunset.
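Before continuing with the ray-tracing details, the cache-and-upload behaviour described above can be illustrated with a small sketch (written in Python for brevity, although the app itself is an Android application); the server URL and the JSON payload format are hypothetical.

import json, time, urllib.request

SERVER_URL = "https://example.org/sunbath/upload"   # hypothetical endpoint
cache = []                                          # readings kept until connectivity exists

def record_reading(lat, lon, source):
    """Cache one location fix (source: 'wifi', 'cell' or 'gps') with a timestamp."""
    cache.append({"lat": lat, "lon": lon, "source": source, "ts": time.time()})

def flush_cache():
    """Try to POST all cached readings; keep them for retry if the upload fails."""
    global cache
    if not cache:
        return
    req = urllib.request.Request(SERVER_URL,
                                 data=json.dumps(cache).encode(),
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
        cache = []                                  # the server stores results in its DBMS
    except OSError:
        pass                                        # no connectivity: retry later

record_reading(52.38, -1.56, "gps")
flush_cache()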
The algorithm also estimates, with high accuracy, direct and diffuse radiation on a surface of given slope and azimuth from their counterparts on a horizontal surface, taking the surrounding shading conditions into account. We tested SUN BATH in simulated and real locations for five continuous days, from sunrise to sunset, around the School of Engineering building complex at the University of Warwick campus. Simulated tests were carried out manually, using two fixed location parameters, i.e. longitude and latitude. A quick memory and CPU monitor view revealed that SUN BATH's energy consumption and resource demands on the smartphone devices used were moderate. A full memory and CPU profile could easily be produced, but is beyond the scope of this article. V. Conclusions and Future Work: This research presented the architecture of the SUN BATH mobile sensing application, which gathers information from a variety of lightweight sensors and utilises a ray-tracing algorithm to derive the level of human sun exposure in urban areas. The application has demonstrated that it can be an affordable and pervasive way of accurately measuring each person's level of sunlight exposure. Further work is required to scale the project to the global level, which requires big data sets of urban building maps and meteorological data for all cities.
-
-
-
Information Security in Artificial Intelligence: A Study of the possible intersection
Authors: Tagwa Warrag and Khawla Abd Elmajed
1. Introduction
Artificial Intelligence (A.I) attempts to understand intelligent entities and strives to build them. It is obvious that computers with human-level intelligence (or more) would have a huge impact on our everyday lives and on the future. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. Artificial Intelligence, on the other hand, still has openings for a full-time Einstein. [1] With the ever-increasing amounts of generated information-security data, smart data analysis will become even more pervasive as a necessary ingredient for more effective and efficient detection and categorization, providing intelligent solutions to security problems that go beyond typical automatic approaches. Moreover, the combination of Artificial Intelligence and Information Security focuses on analytics of security data rather than simple statistical reporting. In our research, we are conducting a survey of the different A.I methodologies that are being used by researchers and industry professionals to tackle security problems. We added our own analysis and observations on the findings and compared the different methods. We are working on providing more detail about which approaches suit which problems, and on how some A.I methodologies are not always a good choice for specific information security problems. With this work, we are trying to introduce the intersection of the two fields of Information Security and Artificial Intelligence, and hopefully to promote more use of intelligent methods in solving cyber security issues in the Middle East. The background is divided into two parts: the first part covers the different forms of information security data sets, and the second part briefly gives examples of major corporations that use A.I to address security issues. The background is followed by the results and discussion, in which we express our own opinions, observations and analysis. Our work is still in progress, so we conclude the paper by stating the future directions of this research.
2. Background
Artificial Intelligence has repeatedly proven its worth through successful application to various industrial problems related to medicine, finance, marketing, law and technology. We are living in the cognitive era, and according to IBM, augmented-intelligence systems like IBM Watson process information themselves and can also teach, which will lead to more cognitive learning platforms that reduce the need for manual work on industrial problems. [2] On the other side, the overwhelmingly large volumes of data generated by networking entities and information security elements are a rich and valuable resource for more promising security insights.
2.1 Information Security Data Sets
Data can take different forms when it comes to information security: logs, such as Windows security logs and server logs; outputs generated from networking tools such as Snort, TCPDump, and NMAP [3]; sandbox output [4] produced when malware is executed; sniffed network traffic in .pcap (packet capture) files; and features of installed Android mobile applications [5]. All of these are examples of information security data that can be treated as input to Artificial Intelligence techniques such as Machine Learning, Data Mining, Artificial Neural Networks, fuzzy logic systems and Artificial Immune Systems. For experimental and academic purposes, various online repositories provide information security data sets, such as the DARPA data sets [6].
2.2 Examples from the major corporations
2.2.1 Kaspersky Cyber Helper
Cyber Helper is a successful attempt in getting nearer to employing truly autonomous Artificial Intelligence in the battle against malware. The majority of Cyber Helper autonomous sub-systems synchronize, exchange data and work together as if they were a single unit. For the most part they operate using fuzzy logic and independently define their own behavior as they go about solving different security tasks. [7].
2.2.2 IBM – IBM Watson
IBM Watson is a technology platform that uses Natural Language Processing (NLP) and Machine Learning to reveal insights from large amounts of unstructured data. [8] IBM Watson can be trained on massive amounts of security data, from the Common Vulnerabilities and Exposures (CVE) threat database to articles on network security, plus deployment guides, how-to manuals, and all sorts of content that makes IBM Watson very knowledgeable about security. Because IBM Watson uses NLP technologies, users can pose security questions to Watson, and Watson will respond with all pertinent information. [9]
3. Results & Discussion
Through the survey that we conducted on a number of different research works and projects in the intersection area of A.I and Information Security, we found the following:
1- We noticed that the most commonly followed approach is Machine Learning, but not all Machine Learning algorithms are right for solving every information security problem. Some algorithms result in high rates of false alarms (both false negatives and false positives) for certain kinds of information security issues. So it turns out that deciding on the most suitable A.I approach depends on the nature of the information security problem we are trying to solve, what kind of data we have in hand, whether or not we have classes or labels for those data, and many other factors.
2- There is a preprocessing stage for unstructured data before it is ready to be fed into the selected Artificial Intelligence model or method. When the information security data was Android malware, it had to be executed inside a sandbox first, and a report was then generated about the execution of this malware. Each type of sandbox usually generates a different report format; the common formats are text, XML and MIST (a sequence-of-instructions representation). Text is more convenient for humans, but XML and MIST are more suitable for machines. [3] Another example is when the features of installed Android mobile applications were the input to the artificial intelligence process. Those features had to be extracted from the dex code (Android programs are compiled into .dex Dalvik Executable files, which are in turn zipped into a single .apk file) [5].
3- When Machine Learning is the selected approach for an information security problem, the dataset defines what actions need to be followed: clustering, classification, feature selection, or a combination of processes. When the security data set has labels (supervised learning), classification algorithms are applied. When there are no labels or classes on the rows of the security data set (unsupervised learning), we need to group similar entities together, and thus we use clustering algorithms (a minimal sketch of this decision follows this list). Some researchers used both clustering and classification techniques for security applications that detect malicious behaviour that has not previously been assigned to a certain malware class, and they see clustering and classification as two techniques that complement each other. [4] Feature selection was also conducted on security datasets to identify the features that help the most in the prediction process and in building the predictive models.
4- The use of linear algebra techniques such as the vector space model, combined with static analysis, was suggested by one group of researchers in order to obtain a better representation of the selected features of installed Android mobile applications suspected to be malicious; the sketch below also includes a simple vector-space embedding step.
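As a minimal illustration of points 2–4 above (the feature strings here are hypothetical and are not drawn from any of the cited datasets), features extracted from sandbox reports or .apk files can be embedded in a vector space and then either classified or clustered depending on whether labels are available:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Hypothetical preprocessed records: each string is the set of features
# extracted from one sample (e.g. API calls from a sandbox report, or
# permissions pulled out of an Android .apk), joined by spaces.
samples = [
    "SEND_SMS READ_CONTACTS HttpURLConnection.connect",
    "INTERNET ACCESS_FINE_LOCATION TelephonyManager.getDeviceId",
    "INTERNET READ_CONTACTS",
]
labels = [1, 0, 0]   # set to None when no ground truth is available

# Binary vector-space embedding of the feature sets (point 4)
X = CountVectorizer(binary=True, token_pattern=r"\S+").fit_transform(samples)

if labels is not None:
    # Labelled data: supervised learning, i.e. a classification algorithm
    model = RandomForestClassifier(n_estimators=100).fit(X, labels)
else:
    # Unlabelled data: unsupervised learning, i.e. a clustering algorithm
    model = KMeans(n_clusters=2, n_init=10).fit(X)
```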
4. Conclusion
The application of Artificial Intelligence methodologies to information security problems will play a major role in producing deeper insights that move security forward, by formulating more successful approaches and leading to security that "thinks". The huge amount of data generated by networking devices and security appliances will be of great use when combined with the intelligence of machine learning, going beyond the traditional and limited automatic techniques of information security. Our research on the intersection between Artificial Intelligence and Information Security is still ongoing. By the end of this work, we hope to help design a matrix with more precise criteria that help information security practitioners decide which A.I approach to follow. Security professionals will then no longer need to delve into the deep mathematical formulas of A.I and machine learning when they start considering more intelligent alternative solutions.
5. References
[1] Russell, Stuart J. and Norvig, Peter. Artificial Intelligence: A Modern Approach. A Simon & Schuster Company 1995.
[2] Powered by IBM Watson: Rethink! Publication - Our future in augmented intelligence. August 2015.
[3] Buczak, Anna L. and Guven, Erhan. A survey of data mining and machine learning methods for cyber security intrusion detection. 2015.
[4] Rieck, Konrad. Trinius, Philipp. Willems, Carsten, and Holz, Thorsten. Automatic Analysis of Malware Behavior using Machine Learning. 2011.
[5] Arp, Daniel. Spreitzenbarth, Michael. Hübner, Malte. Gascon, Hugo. Rieck, Konrad. DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket.
[6] DARPA Intrusion Detection Data Sets – MIT Lincoln Laboratory. Link: http://ll.mit.edu/ideval/data.
[7] Oleg Zaitsev. Cyber Expert. Artificial Intelligence in the realms of I.T security, link: http://securelist.com/analysis/publications/36325/cyber-expert-artificial-intelligence-in-the-realms-of-it-security October 25, 2010.
[8] IBM Watson official website, link: http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html.
[9] An interview with Amir Husain on how IBM Watson is helping to fight cyber-crime, link: http://www.forbes.com/sites/ibm/2015/05/29/how-ibm-watson-is-helping-to-fight-cyber-crime/ MAY 29, 2015.
-
-
-
Patient Center – A Mobile Based Patient Engagement Solution
This research on patient engagement has resulted in a mobile application, Patient Center, to manage appointments and personal health records and to access emergency online healthcare across all hospitals in the Patient Center network. It is a single mobile app to manage and actively engage patients in healthcare across Qatar.
Problem Statement: Healthcare trends are moving from a hospital-centric model towards a patient-centric healthcare model. Qatar has more than 350 health facilities, most of which are not connected to each other. Global healthcare trends are implementing interoperable systems in hospitals (an integrated health network), so that they can exchange health data, the patient's care can be continued, and data can be shared with partner networks and government entities. The biggest gap in the healthcare process is patient engagement: providers and patients working together to improve health. A patient's greater engagement in healthcare contributes to improved health outcomes, and information technologies can support engagement. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their care tend to be healthier and have better outcomes. The outpatient workflow is well defined in the healthcare process; with current technology, we need to reduce outpatients' visits to hospital by providing remote healthcare facilities, so that providers can deliver better health outcomes for inpatients. Study: Book: Engage! Transforming Healthcare Through Digital Patient Engagement, edited by Jan Oldenburg, Dave Chase, Kate T. Christensen, MD, and Brad Tritle, CIPP. This book explores the benefits of digital patient engagement, from the perspectives of physicians, providers, and others in the healthcare system, and discusses what is working well in this new, digitally-empowered collaborative environment. Chapters present the changing landscape of patient engagement, starting with the impact of new payment models and Meaningful Use requirements, and the effects of patient engagement on patient safety, quality and outcomes, effective communications, and self-service transactions. The book explores social media and mobile as tools, presents guidance on privacy and security challenges, and provides helpful advice on how providers can get started. Vignettes and 23 case studies showcase the impact of patient engagement in a wide variety of settings, from large providers to small practices, and from traditional medical clinics to eTherapy practices. Book: Applying Social Media Technologies in Healthcare Environments, edited by Jan Oldenburg, Dave Chase, Kate T. Christensen, MD, and Brad Tritle, CIPP. Applying Social Media Technologies in Healthcare Environments provides an indispensable overview of successful use of the latest innovations in the healthcare provider-patient relationship. As professionals realize that success in the business of healthcare requires incorporation of the tools of social media into all aspects of their worlds and recognize the value offered by the numerous media channels, this compendium of case studies from various voices in the field – caregivers, administrators, marketers, patients, lawyers, clinicians, and healthcare information specialists – will serve as a valuable resource for novices as well as experienced communicators. Written by experienced players in the healthcare social media space and edited with the eye of an administrator, chapters provide insight into the motivation, planning, execution, and evaluation of a range of innovative social media activities.
The book is complete with checklists, tips, and screenshots that demonstrate proven application of various social channels in a range of settings. Based on this research on patient engagement, I have designed a mobile-based patient engagement solution called Patient Center. Using the Patient Center mobile app, patients can search for the nearest specialties and doctors with review ratings, and can book appointments at health facilities that are part of the Patient Center network. Patients can securely access their health records on mobile and can exchange health records with any provider for better treatment. The Patient Center app also works as a personal health advisor by reminding patients of medications, exercises, discharge notes, etc. Patient Center is trying to solve problems in the following areas: reducing phone calls for booking appointments and reducing the time needed to fill in forms prior to an encounter; the Patient Center network contains several providers, payers, pharmacies and laboratories, all completely interoperable and exchanging health data in compliance with HIPAA; patients no longer need to carry bundles of paper health records when they meet a doctor, since all personal health records across all hospitals can be downloaded to the patient's mobile and the patient can securely exchange his/her health data with any provider; a single mobile app to manage and access appointments, lab orders, medications, diagnoses, care plans, immunization history, and more; in emergency cases, the patient can tap the emergency service menu in the app and the nearest ambulance services will automatically be alerted with the patient's GPS location; complex discharge notes are illustrated in the mobile app with images; medication consumption alarms follow the doctor's advice; and communication with the doctor from home using Patient Center, with healthcare wearables that can provide heart rate, blood pressure, ECG and diabetes readings to the doctor in real time, so that the doctor can diagnose the patient online. Patient Center aims to reduce outpatient visits by 60% and to deliver quality healthcare right from home. In addition to the resources above, Patient Center will focus on the following areas: social and behavioural, which covers social media, texting and gaming, wearables and mobile, and the social determinants of health; home health, which covers remote monitoring and telehealth, patient education, and smart homes; and financial health, which includes managing health insurance and expenses, transparency and consumerism, patient onboarding and financial options. Technology: We are going to build the entire application using IHE profiles to achieve interoperability in our networked hospitals (integrated health management). Integrating the Healthcare Enterprise (IHE) is an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information. IHE promotes the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. Systems developed in accordance with IHE communicate with one another better, are easier to implement, and enable care providers to use information more effectively. The Exchange of Personal Health Record Content (XPHR) profile provides a standards-based specification for managing the interchange of documents between a Personal Health Record used by a patient and systems used by other healthcare providers, enabling better interoperability between these systems.
The Exchange of Personal Health Record Content (XPHR) integration profile describes the content and format of summary information extracted from a PHR system used by a patient for import into healthcare provider information systems, and vice versa. The purpose of this profile is to support interoperability between PHR systems used by patients and the information systems used by healthcare providers. Patient Center leverages other IHE integration and content profiles for interoperability in addition to the XPHR content profile. For example, a PHR system may implement XDS-MS to import medical summaries produced by EHR systems, XDS-I to import imaging information, XDS-Lab to import laboratory reports, etc. The Patient Center mobile application is connected to the InterSystems HealthShare platform, a health informatics platform certified by IHE. The Patient Center app will be developed on the iOS and Android native application development platforms and connected to the HealthShare server using web services and REST. HealthShare will be connected to the network hospitals using IHE transactions. We use IHE XPHR for personal health record exchange.
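As a rough sketch of how the mobile app might pull a patient summary over REST, the snippet below uses a hypothetical gateway URL, endpoint and JSON fields chosen for illustration only; the actual deployment talks to HealthShare via web services and IHE XPHR transactions rather than a plain JSON endpoint like this one:

```python
import requests

# Hypothetical gateway URL, patient identifier, endpoint and JSON fields,
# used purely to illustrate the client-server exchange.
GATEWAY = "https://healthshare.example.org/api"
PATIENT_ID = "QA-0001"

resp = requests.get(
    f"{GATEWAY}/patients/{PATIENT_ID}/summary",
    headers={"Authorization": "Bearer <access-token>"},
    timeout=10,
)
resp.raise_for_status()
summary = resp.json()
for med in summary.get("medications", []):
    print(med.get("name"), med.get("dose", ""))
```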
Conclusion
These scenarios are not theoretical; early-adopting consumers across these various patient personae are already using patient engagement solutions. Consumers appreciate the idea of remote health monitoring, and 89% of physicians would prescribe a mobile health app to patients. Patients continue to trust their personal physicians above most other professionals, after nurses and pharmacists. Patient Center is designed for better health outcomes, clinical decision support, engaging patients actively in healthcare, and reducing outpatient visits and repetitive encounters.
-
-
-
Partition Clustering Techniques for Big LIDAR Dataset
I. Abstract: Smart cities collect and produce massive amounts of data from various sources such as local weather stations, LIDAR data, mobile phone sensors, the Internet of Things (IoT), etc. To use such large volumes of data for potential everyday computing benefits, it is important to store and analyse this urban data using handy computing resources and algorithms. However, this can be problematic due to many challenges. This article explores some of these challenges and tests the performance of two partitional algorithms for clustering such big LIDAR datasets. Two handy clustering algorithms, K-Means and Fuzzy c-Means (FCM), were put to the test to address their suitability for clustering such a large dataset. The purpose of clustering urban data is to categorize it into homogeneous groups according to specific attributes. Clustering big LIDAR data into a compact format represents the information of the whole dataset, and this can help researchers deal with the reorganised data much more efficiently. To this end, the two techniques were run against a large set of LIDAR data to show how they perform on the same hardware set-up. Our experiments conclude that FCM outperformed K-Means when presented with this type of dataset; however, the latter is lighter on hardware utilisation. II. Introduction: Much ongoing and recent research and development in computation and data storage technologies has contributed to the Big Data phenomenon. The challenges of Big Data are due to the 5 V's: Volume, Velocity, Variety, Veracity and the Value to be gained from the analysis of Big Data [1]. From the survey of the literature, there is an agreement among data scientists about the general attributes that characterise the Big Data 5 V's, which can be summed up as follows: very large data, mainly terabytes/petabytes/exabytes of data (Volume); data found in structured, unstructured and semi-structured forms (Variety); often incomplete and inaccessible data; data sets that should be extracted from reliable and verified sources; data streaming at very high speed (Velocity); data that can be very complex, with interrelationships and high dimensionality; and data that may contain complex interrelationships between different elements. The challenges of Big Data in general are ongoing, and the problem is growing every year. A report by Cisco estimated that by the end of 2017, annual global data traffic will reach 7.7 zettabytes, that global Internet traffic will triple over the next five years, and that overall global data traffic will grow at a compound annual growth rate (CAGR) of 25% up to the year 2017. It is essential to take steps towards tackling these challenges, because it can be predicted that a day will come when today's Big Data tools become obsolete in the face of such an enormous data flow. III. Clustering Methods: Researchers are dealing with many types of large datasets; the concern here is whether to introduce new algorithms or to make large datasets suit the existing algorithms by working on the data itself. Currently, two approaches are predominant. The first is known as "scaling up", which focuses efforts on enhancing the available algorithms. This approach risks the algorithms becoming useless tomorrow as the data continues to grow; hence, to deal with continuously growing datasets, it would be necessary to scale up algorithms repeatedly as time moves on.
The second approach is to "scale down" or skim the data itself, and to use existing algorithms on the skimmed version of the data after reducing its size. This article focuses on scaling down data sets by comparing clustering techniques. Clustering is defined as the process of grouping a set of items or objects that have the same attributes or characteristics into a group, called a cluster, which may differ from other groups. Clustering can be very useful for between-cluster separation, within-cluster homogeneity, and for a good representation of the data by its centroids. It can be applied in different fields, such as biology, to find groups of genes that have the same functions or similarities; in medicine, to find patterns in the symptoms of diseases; and in business, to find and target potential customers. IV. Compared Techniques K-Means vs. Fuzzy c-Means: To highlight the advantages for everyday computing with Big Data, this article focuses on comparing two popular and computationally attractive partitional techniques, which can be explained as follows: 1) K-Means clustering: this is a widely used clustering algorithm. It partitions a data set into K clusters (C1, C2, …, CK), each represented by its arithmetic mean, called the "centroid", which is calculated as the mean of all data points (records) belonging to that cluster. 2) Fuzzy c-Means clustering: FCM was introduced by Bezdek et al. and is derived from the K-Means concept for the purpose of clustering datasets, but it differs in that an object may belong to more than one cluster with degrees of belonging; the degree of membership is calculated on the basis of the distances (usually Euclidean) between the data points and the cluster centres. V. Experiments Set-up: The experiments were done to compare and illustrate how the candidate K-Means and FCM clustering techniques cope with clustering a big LIDAR data set using handy computer hardware. The experiments were performed using an AMD 8320 4.1 GHz 8-core processor with 8 GB of RAM, running a 64-bit Windows 8.1 OS. The algorithms were run against LIDAR data points taken for our campus location at Latitude 52:23–52:22 and Longitude 1:335–1:324. This location represents the International University of Sarajevo main campus, with an initialization of 1000000 × 1000 digital surface data points. Both clustering techniques were applied to the dataset starting with a small cluster number K = 5 and gradually increasing to K = 25 clusters. VI. Conclusions: The lowest time measured for FCM to regroup the data into 5 clusters was 42.18 seconds, while it took K-Means 161.17 seconds to form the same number of clusters. The highest time recorded for K-Means to converge was 484.01 seconds, while it took FCM 214.15 seconds to cluster the same dataset. Hence, there is a high positive correlation between the time and the number of clusters assigned: as the cluster count increases, so does the time complexity for both algorithms. On average, FCM used between 5 and 7 of the eight available cores, with 63.2 percent of the CPU processing power and 77 percent of the RAM. K-Means, on the other hand, utilised between 4 and 6 cores, with the rest remaining idle, and an average of 37.4 percent of the CPU processing power and 47.2 percent of the RAM.
Overall, both algorithms are scalable enough to deal with Big Data, but FCM is faster on this dataset and would make an excellent clustering algorithm for everyday computing. In addition, it offers some extra advantages, such as its ability to handle different data types. Also, due to its fuzzy partitioning capability, FCM can produce a better quality clustering output, which could benefit many data analysts.
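As a rough, self-contained illustration of this kind of timing comparison (synthetic data and a minimal NumPy Fuzzy c-Means, not the code, data, or hardware used in this work, so absolute timings will differ):

```python
import time
import numpy as np
from sklearn.cluster import KMeans

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-4, seed=0):
    """Minimal Fuzzy c-Means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                      # avoid division by zero
        U_new = d ** (-2.0 / (m - 1))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Synthetic stand-in for a LIDAR tile (x, y, elevation); the real dataset is far larger.
X = np.random.default_rng(1).random((100_000, 3))

for k in (5, 15, 25):
    t0 = time.perf_counter()
    KMeans(n_clusters=k, n_init=10).fit(X)
    t1 = time.perf_counter()
    fuzzy_c_means(X, k)
    t2 = time.perf_counter()
    print(f"k={k}: K-Means {t1 - t0:.1f}s, FCM {t2 - t1:.1f}s")
```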
-
-
-
Understanding Cross-modal Collaborative Information Seeking
Authors: Dena Ahmed Al-Thani and Tony Stockman
I. Introduction
Studies reveal that group members often collaborate when searching for information, even if they were not explicitly asked to collaborate [1]. The activity in which a group of people engage in a common information seeking task is called Collaborative Information Seeking (CIS). Over the past few years, CIS research has focused on providing solutions and frameworks to support the process [2]. However, work in this field to date has always assumed that information seekers engaged in CIS activity are using the same access modality, the visual modality. The attention on this modality has failed to address the needs of users who employ different access modalities, such as haptic and/or audio. Visually Impaired (VI) employees in a workplace may often have to collaborate with their sighted team members when searching the web. Given that VI individuals' search behaviour is challenged by poor web design and the shortcomings of current assistive technology [3][4], difficulty in collaboratively engaging in web search activity with peers can be a major barrier to workplace inclusion. This study explores the under-investigated area of cross-modal collaborative information seeking (CCIS), that is, the challenges and opportunities that exist in supporting VI users to take an effective part in collaborative web search tasks with sighted peers in the workplace.
II. Study Design
To develop our understanding of the issues, we investigated the CCIS activity of pairs of VI and sighted participants. The study consisted of an observational experiment in which 14 pairs of VI and sighted users completed two web search tasks, followed by scenario-based interviews conducted with seven of the 14 pairs from the experiment. We conducted the experiment to examine the patterns of CCIS behaviour and the challenges that occur. In the scenario-based interviews we examined the techniques used, the tools employed, and the ways information is organized both for individual and collaborative use. In the observational study, all VI participants used a speech-based screen reader. Each pair was given two search tasks: one task was performed in a co-located setting and the other in a distributed setting. For the co-located task, the participants were asked to work collaboratively to organize a trip to the United States, while for the distributed task they were asked to plan a trip in Australia. In the distributed condition, participants were seated in different locations and were told that they were free to use any method of communication they preferred; 5 pairs used email and 9 pairs used Skype. Seven pairs from the observational studies took part in the scenario-based interviews. The interviews involved the interviewer describing a CIS activity to the participants, followed by four scenarios containing questions relating to the management of the retrieved information.
III. Observational Study Findings
A. Division of Labour
In the co-located condition, discussion about the division of labour occurred at two levels: first in the initial discussion, and second as a result of one participant interrupting his/her partner in order to complete a certain action. Three reasons were identified for these interruptions: (1) VI users requested assistance from their sighted partner when viewing large amounts of information. (2) When browsing websites with inaccessible components, VI users asked sighted participants to perform the task or assist them in performing it. (3) The third reason is related to the context of the task. In contrast, in the distributed condition, the discussion about the division of labour occurred only at the beginning of the task. The pair divided the work and started working independently. The participants only updated each other about their progress through the communication tool. Unlike the co-located sessions, collaboration in the later stages was not observed. Additionally, VI participants' requests for assistance were fewer in this condition, as they seemed to be more reluctant to ask for support when distributed. When a VI participant encountered an accessibility issue, they would try on average 3 websites before asking their sighted partner to assist them. The majority (13 pairs in the co-located and 12 pairs in the distributed setting) divided the labour so that sighted participants performed booking-related activities and VI participants performed event organization activities. VI participants emphasised that they chose this approach to avoid any issues related to accessibility. Vigo and Harper [5] categorized this type of behaviour as "emotional coping", in which users' past experience of an inaccessible action on a similar webpage or task affects their judgment of website use or of the tasks conducted. It is clear from the results that VI users put some thought into either dividing the labour in a specific way or finding some other way to get around the issues encountered.
B. Awareness
In the co-located condition, the main method of maintaining awareness was verbal communication. In the distributed condition, the only methods were email and instant messaging. To maintain awareness of their partner's activities while performing the task, participants informed their partners about their actions. In the absence of a tool that supported awareness in either condition, participants tended to constantly provide their partner with information about their current activities to enrich group awareness. In fact, pairs who completed more of the task in both conditions communicated more information about their activities. It was observed that the more information communicated to avoid duplication of effort, the higher the performance in the distributed condition. This indicates that making this type of information available between distributed collaborators might enhance their ability to complete tasks efficiently. This was not the case in the co-located condition, where the sessions with the lowest and highest performance reported the same amount of communicated information relating to duplication of effort. However, it was observed that pairs who performed well in the co-located sessions communicated more information about the actions they were performing, even when it was not essential for their partner to know. This indicates that facilitating the appropriate type and amount of awareness information in each condition is crucial to team performance and can increase team productivity [6].
C. Search Results exploration and management
Collaboration occurred mainly in two stages of the information seeking process: the results exploration and results management. In the results exploration stage, collaboration was triggered when VI participants viewed large amounts of information with their sighted partner's assistance or by both partners deciding to explore search results together. The average number of search results viewed collaboratively was higher than the average number of search results viewed by VI participants alone. Screen readers' presentation of large volumes of data imposes a number of challenges such as short term memory overload and a lack of contextual information [3]. This stage is highlighted as one of the most challenging stages faced by VI users during the IS process [4]. The amount of retrieved information kept by sighted users is nearly double the amount of information kept by VI users. The reasons for this were twofold: (1) Sighted users viewed more results than their VI partners. (2) The cognitive overhead that VI users experience when switching between the web browser and an external application used to take notes. This increased cognitive load is likely to slow down the process. The effect of this is more apparent in the distributed condition where VI users are required to switch between three applications: the email client or instant chat application, the web browser and the note taking application.
IV. Scenario-based interviews findings
The scenario-based interview is a tool that allows exploration of the context of the task. This type of scenario narrative approach provides a natural way for people to describe their actions in a given task context. The scenario-based interviews revealed that collaborative searching is quite a common practice, as all the participants were able to relate the given scenario to similar activities they had undertaken in the past. It was found that ad hoc combinations of everyday technologies are often used to support this activity rather than dedicated solutions. There were clear instances of the use of social networks such as Twitter and Facebook to support the sharing of retrieved results by both VI and sighted interviewees. Individual and cross-modal challenges were also extensively mentioned by VI interviewees, as current screen readers fall short of conveying information relating to spatial layout and of helping users form a mental model of web pages congruent with that of their sighted partners. It is clear that the VI participants interviewed were fully aware of the drawbacks that the serial nature of screen readers imposes on their web search activities. In fact, these challenges have led them to choose to perform some web search activities collaboratively when that was an option. In the interviews, sighted users tended to use more complex structures for storing retrieved information, such as headings or multi-level lists, while VI users tended to use simpler flat or linear lists of information.
V. Implications
The studies we carried out highlighted the challenges encountered when VI and sighted users perform a collaborative web search activity. In this section we propose a number of implications for the design of CCIS systems. Due to space limitations, we present only three.
1. Overview of Search Results
Developing a mechanism that provides VI group members with an overview of search results and the ability to focus on particular pieces of information of interest could help increase VI participants' independence in CCIS activities. VI web searchers are likely to perform the results exploration stage more effectively and efficiently if they could first get the gist of the results retrieved and then drill down for more details as required. This would benefit both individual and collaborative information seeking activities.
2. Cross-modal Shared workspace
Having a common place to save and review retrieved information can enhance both awareness and the sense-making process, and reduce the overhead of using multiple tools, especially for VI users, who do not have sight of the whole screen at one time. The system should support a cross-modal representation of all changes made by collaborators in the shared workspace. Just as changes in a visual interface can be represented by colours, changes in the audio interface might be represented by a non-speech sound or by a modification to one or more properties of the speech output, for example timbre or pitch.
3. Cross-modal representation of collaborators' Search Query Terms and Search Results
Allowing collaborators to know their partner's query terms and the results viewed will inform them about their partner's progress during a task. Additionally, having a view of their partner's search results allows sighted users to collaborate with their VI partners while going through search results.
VI. Conclusion
This paper discussed CCIS, an area that has not previously been explored in research. The studies presented in this paper are part of a project that aims to provide support for the CCIS process. The next part of the project is to investigate the validity of the design implications in supporting CCIS and their effect on collaborators' performance and engagement.
References
[1] Morris, M. R. (2008). A survey of collaborative web search practices. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, New York, USA. ACM.
[2] Golovchinsky, G., Pickens, J., and Back, M. (2009). A taxonomy of collaboration in online information seeking. In JCDL Workshop on Collaborative Information Retrieval.
[3] Stockman, T., and Metatla, O. (2008). The influence of screen readers on web cognition. Proceedings of Accessible design in the digital world conference. York, United Kingdom.
[4] Sahib, N. G., Tombros, A., and Stockman, T. (2012). A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers. Journal of the American Society for Information Science and Technology.
[5] Vigo, M., and Harper, S. (2013). Coping tactics employed by visually disabled users on the web. International Journal of Human-Computer Studies.
[6] Shah, C., and Marchionini, G. (2010). Awareness in collaborative information seeking. Journal of the American Society for Information Science and Technology.
-
-
-
Geography of Solidarity: Spatial and Temporal Patterns
Authors: Noora Alemadi, Heather Marie Leson, Ji Kim Lucas and Javier Borge-Holthoefer
We would like to propose a panel which discusses this paper but includes special guests from the international and Qatari humanitarian community to talk about the future of humanitarian research in the MENA region. Abstract: In the new era of data abundance, procedures for crisis and disaster response have changed. As relief forces struggle with assistance on the ground, digital humanitarians step in to provide a collective response at unprecedentedly short time scales, curating valuable bits of information through simple tasks – mapping, tagging, and evaluating. This hybrid emergency response leaves behind detailed traces, which inform data scientists about how far and how fast the calls for action reach volunteers around the globe. Among the few consolidated platforms in the DH technology arena, we find MicroMappers. It was created at QCRI, partnering with UN-OCHA and the Standby Task Force, as part of a tool set that combines machine learning and human computing: Artificial Intelligence for Disaster Response (AIDR) [1]. MicroMappers is activated during natural disasters in order to tag short text messages and evaluate ground and aerial images. Thus, MicroMappers can also be viewed as a valuable data repository, containing historical data from past events in which it was activated. To perform our study, we rely on rich datasets from three natural disasters occurring in the Philippines (Typhoon Hagupit, 2014) [2], Vanuatu (Cyclone Pam, 2015) [3] and Nepal (earthquake, 2015) [4]. Each event rendered thousands of digital records from the labour inputs of the crowd. We focus particularly on IP addresses, which can be conveniently mapped to a specific location, and timestamps, which describe for us the unfolding of the collective response in time. The anonymity of each contributor is preserved at all times throughout the project.
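As a rough illustration of this kind of trace analysis (the file name, record format, activation time and country lookup below are hypothetical placeholders, not the actual MicroMappers pipeline):

```python
import csv
from datetime import datetime, timezone

# Hypothetical export with one row per crowd contribution: "ip_address" and an
# ISO-8601 "timestamp". A real study would use an IP-geolocation database.
ACTIVATION = datetime(2015, 4, 25, 12, 0, tzinfo=timezone.utc)

def country_of(ip: str) -> str:
    return "unknown"   # stand-in for a real IP-geolocation lookup

delays_h, contributors = [], set()
with open("contributions.csv") as f:
    for row in csv.DictReader(f):
        ts = datetime.fromisoformat(row["timestamp"])
        delays_h.append((ts - ACTIVATION).total_seconds() / 3600.0)
        contributors.add((row["ip_address"], country_of(row["ip_address"])))

delays_h.sort()
print(f"{len(delays_h)} contributions from {len(contributors)} distinct IPs")
print(f"median delay since activation: {delays_h[len(delays_h) // 2]:.1f} h")
```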
-
-
-
Detecting And Tracking Attacks in Mobile Edge Computing Platforms
Authors: Abderrahmen Mtibaa, Khaled A. Harras and Hussein Alnuweiri
Device-to-device (d2d) communication has emerged as a solution that promises high bit rates, low delay and low energy consumption, and it is key for novel technologies such as Google Glass, S Beam, and LTE-Direct. Such d2d communication has enabled computational offloading among collaborative mobile devices for a multitude of purposes, such as reducing overall energy consumption, balancing resources across devices, reducing execution time, or simply executing applications whose computing requirements transcend what can be accomplished on a single device. While this novel computation platform offers convenience and multiple other advantages, it obviously enables new security challenges and mobile network vulnerabilities. We anticipate challenging future security attacks resulting from the adoption of collaborative mobile edge cloud computing platforms, such as MDCs and FemtoClouds. In typical botnet attacks, "vertical communication" between a botmaster and infected bots enables attacks that originate from outside the network. Intrusion detection systems typically analyze network traffic to detect anomalies, honeypots are used to attract and detect attackers, and firewalls are placed at the network periphery to filter undesired traffic. However, these traditional security measures are not as effective in protecting networks from insider attacks such as MobiBots, a mobile-to-mobile distributed botnet. This shortcoming is due to the mobility of bots and the distributed coordination that takes place in MobiBot attacks. In contrast to classical network attacks, these attacks are difficult to detect because MobiBots adopt "horizontal communication" that leverages frequent contacts amongst entities capable of exchanging data/code. In addition, this architecture does not provide any pre-established command and control (C&C) channels between a botmaster and its bots. Overall, such mobile device infections will circumvent classical security measures, ultimately enabling more sophisticated and dangerous attacks from within the network. We propose HoneyBot, a defense technique that detects, tracks, and isolates malicious device-to-device insider attacks. HoneyBots operate in three phases: detection, tracking, and isolation. In the detection phase, the HoneyBot operates in a vulnerable mode in order to detect lower-layer and service-based malicious communication. We adopt a data-driven approach, using real-world indoor mobility traces, to evaluate the impact of the number of HoneyBots deployed and their placement on the detection delay. Our results show that utilizing only a few HoneyBot nodes helps detect malicious infection in no more than 15 minutes. Once the HoneyBot detects malicious communication, it initiates the tracking phase, which consists of disseminating control messages to help "cure" the infected nodes and trace back the infection paths used by the malicious nodes. We show that HoneyBots are able to accurately track the source(s) of the attack in less than 20 minutes. Once the source(s) of the attack is/are identified, the HoneyBot activates the isolation phase, which aims at locating the suspect node. We assume that the suspect node is not a cooperative device and that it aims at hiding its identity by ignoring all HoneyBot messages. Therefore, the HoneyBot requests wireless fingerprints from all nodes that have encountered this suspect node in a given time period.
These fingerprints are used to locate these nodes and narrow down the suspect's location. To evaluate our localization accuracy, we first deploy an experimental testbed in which we show that HoneyBots localize the suspect node to within 4 to 6 m². HoneyBots can operate efficiently in small numbers, as few as 2 or 3 nodes, while improving detection, tracking, and isolation by a factor of 2 to 3. We also assess the scalability of HoneyBots using a large-scale mobility trace with more than 500 nodes. We consider, in the attached figure, a scenario of a corporate network consisting of 9 vulnerable devices labelled 1 to 9. Such a network is attacked by one or more botmaster nodes using d2d MobiBot communication. We notice that attacks propagate horizontally, bypassing all firewall and intrusion detection techniques deployed by the corporate network administrators. In this scenario, we identify four main actors: the botmaster (red hexagon), the HoneyBot (green circle), the infected bot (red circle), and the cured or clear node (blue circle). We assume that the 9 nodes shown in the figure represent only the vulnerable d2d nodes in this corporate network. We propose detection, tracking and isolation techniques that aim to accurately and efficiently defend networks from insider d2d malicious communication.
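As a toy illustration of the detection and tracking phases (a contact-trace simulation sketch with made-up parameters, not the actual HoneyBot implementation or the real mobility traces):

```python
import random

def simulate(num_nodes=10, honeypots=(3,), botmaster=0, steps=500, seed=0):
    """Toy d2d infection over random pairwise contacts, with one honeypot node."""
    rng = random.Random(seed)
    infected_by = {botmaster: None}              # node -> (infector, time); None = source
    for t in range(1, steps + 1):
        a, b = rng.sample(range(num_nodes), 2)   # one opportunistic contact per step
        for src, dst in ((a, b), (b, a)):
            if src in infected_by and dst not in infected_by:
                infected_by[dst] = (src, t)      # horizontal (d2d) infection
                if dst in honeypots:             # detection phase: a honeypot got hit
                    path, cur = [dst], dst
                    while infected_by[cur] is not None:
                        cur = infected_by[cur][0]
                        path.append(cur)         # tracking phase: walk links back to source
                    return t, path[::-1]
    return None, None

print(simulate())   # e.g. (detection time, [0, ..., 3]) -- path from botmaster to honeypot
```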
-
-
-
Computational Calculation of Midpoint Potential of Quinones in the A1 Binding Site of the Photosystem I
Authors: Yasser H.A. Hussein, Velautham Sivakumar, Karuppasamy Ganesh and Gary Hastings
Quinones are oxidants, colorants and electrophiles, and are involved in the electron transfer processes of important biological functions such as photosynthesis, respiration, phosphorylation, etc. On Earth, photosynthesis is the main biological process that converts solar energy into chemical energy. By producing oxygen and assimilating carbon dioxide, it supports the existence of virtually all higher life forms. It is driven by two protein complexes, photosystems I and II (PSI and PSII). In PSI, light induces the transfer of an electron from P700, a pair of chlorophyll a molecules, via a series of protein-bound pigment acceptors (A0, A1, FeS) to ferredoxin. In PSI, phylloquinone (PhQ, 2-methyl-3-phytyl-1,4-naphthoquinone) acts as a secondary acceptor, termed A1. In menB, a mutant of PSI in which a gene coding for a protein involved in PhQ biosynthesis has been deleted, plastoquinone (PQ9, 2,3-dimethyl-5-prenyl-1,4-benzoquinone) occupies the site instead. Recent literature reveals that PQ9 is weakly bound in the A1 binding site of menB and can easily be replaced by different quinones, both in vitro and in vivo. The efficiency of light-induced electron transfer of a quinone is related to its midpoint potential (Em) in the A1 binding site. For native PSI, the estimated Em of PhQ is -682 mV. The estimated Em value of PQ9 in menB is -754 mV, and for the incorporated quinone 2-methyl-1,4-naphthoquinone it has been reported to be -718 mV. Interestingly, in the case of 2,3-dichloro-1,4-naphthoquinone (DCNQ) incorporated into menB, no forward electron transfer is observed; so far this is the quinone with the most positive redox potential incorporated into the A1 site in menB PSI. Keeping these reported Em values and the directionality of electron transfer in mind, we intend to find the Em of substituted 1,4-naphthoquinones that can be incorporated into the A1 binding site. Computational calculations were performed at the B3LYP/aug-cc-pVTZ level of theory using the Gaussian 09 software on a Linux platform. A high-performance computing cluster, VELA (512 GB RAM per node, 40 cores per node, with Turbo Boost up to 2.4 GHz), at Georgia State University, Atlanta, was used remotely from Qatar University. First, the electron affinities (EA) of substituted 1,4-naphthoquinones (NQs) were calculated. From the calculated EA of the NQs, we were able to calculate the redox potential of the NQs in a solvent and their Em in the A1 binding site. In order to understand the electronic and structural effects, electron-releasing (CH3, OCH3) and electron-withdrawing (Cl, Br) substituted NQs were used in these calculations. The results show that, of the seven NQs used, 2-methoxy-1,4-naphthoquinone has the most negative Em of -850 mV and DCNQ has the most positive Em of -530 mV in the A1 binding site. Our calculated Em for DCNQ is in line with the blocking of forward electron transfer reported previously. Our Em values can be used to explain the directionality of electron transfer reactions past A1 and to predict the forward electron transfer kinetics when these NQs are incorporated into the A1 binding site experimentally.
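One common route from a computed gas-phase electron affinity to a solution redox potential (a sketch of the standard thermodynamic cycle; the exact protocol used in this work may differ) is

```latex
\Delta G^{\mathrm{solv}}_{\mathrm{red}}
  = -\mathrm{EA}_{\mathrm{gas}}
    + \Delta G_{\mathrm{solv}}(\mathrm{Q}^{\bullet -})
    - \Delta G_{\mathrm{solv}}(\mathrm{Q}),
\qquad
E^{\circ}_{\mathrm{red}}
  = -\frac{\Delta G^{\mathrm{solv}}_{\mathrm{red}}}{nF}
    - E_{\mathrm{abs}}(\mathrm{reference})
```

where the in-protein Em is then typically obtained by adding a binding-site shift, for instance calibrated against the known PhQ value of -682 mV; whether this matches the exact calibration used here is an assumption on our part.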
-
-
-
Effective High-level Coordination Programming for Decentralized and Distributed Ensembles
Authors: Edmund Lam and Iliano Cervesato
Programming and coordinating decentralized ensembles of computing devices is extremely hard and error-prone. With cloud computing maturing and the emerging trend of embedding computation into mobile devices, the demand for building reliable distributed and decentralized systems is becoming increasingly common, and such systems are increasingly complex. Because of these growing technical challenges, solutions for effective programming and coordination of decentralized ensembles remain elusive. Most mainstream programming methodologies only offer a node-centric view of programming, where a programmer specifies distributed computations from the perspective of each individual computing node (e.g., MPI, transactional memory, the Actor model, Linda tuple spaces, graph processing frameworks). When programming distributed computations in this style, programmers experience minimal shifts in paradigm, but such concurrency primitives offer minimal support for the coordination problem. However, as systems grow in complexity and sophistication, maintaining code in this node-centric style often becomes costly, as the lack of concurrency abstraction means that programmers assume all the responsibility for avoiding concurrency pitfalls (e.g., deadlocks and race conditions). Because of this, ensemble-centric concurrency abstractions are now growing in popularity. In this style of programming, programmers are able to specify complex distributed computations from the perspective of entire collections of computing nodes as a whole (e.g., MapReduce, Google Web Toolkit, choreographic programming), making implementations of distributed computations more concise and even making large classes of concurrency pitfalls syntactically impossible. However, programming distributed computations in this style typically requires programmers to adopt a new perspective on computation, and these abstractions are at times overly restrictive and hence not applicable to a wider range of distributed coordination problems.
Our work centers on developing a concurrency abstraction to overcome the above challenges by (1) providing a high-level ensemble-centric model of coordinating distributed computations, and (2) offering a clean and intuitive integration with traditional mainstream imperative programming languages. This framework as a whole orthogonally combines a high-level concurrency abstraction with established lower-level mainstream programming methodologies, maintaining a clean separation between the ensemble-centric concurrency model and the underlying sequential computation model, yet allowing them to interact with each other in a symbiotic manner. The benefit of this separation is twofold: first, a clear distinction between the coordination model and the computation model helps lower the learning curve of this new programming framework. Hence, developers familiar with the underlying mainstream computation model can incrementally build their technical understanding of the framework by focusing solely on its coordination aspects. Second, by building the coordination model on top of an underlying mainstream computation model, we inherit all the existing libraries, optimizations, and programming expertise available to it. We have addressed several key challenges of developing such an ensemble-centric concurrency model. In particular, we have developed a choreographic transformation scheme that transforms our ensemble-centric programs into node-centric encodings, and we have also developed a compilation scheme that converts such node-centric encodings into lower-level imperative code that can be executed by individual computing nodes. Finally, we proved the soundness of this choreographic compilation scheme by showing a two-step correspondence from ensemble-centric specifications to node-centric encodings, and then to node-centric compilations. We have implemented an instance of this distributed programming framework for coordinating decentralized ensembles of Android mobile devices. This system is called CoMingle and is built to integrate with Java and the Android SDK. The ensemble-centric nature of this programming abstraction simplifies the coordination of multiple Android devices, and we demonstrate how the clean integration with Java and the Android SDK allows local computations within each device to be implemented in a traditional manner, leveraging an Android programmer's existing expertise rather than forcing him/her to work in an entirely new programming environment. As proof of concept, we have developed a number of distributed Android applications. CoMingle is open-source and available for download at https://github.com/sllam/comingle.
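As a rough, language-agnostic illustration of the node-centric versus ensemble-centric contrast (a toy rule-rewriting sketch of our own, not CoMingle's actual syntax or API):

```python
# Toy multiset-rewriting runtime, for illustration only: this is NOT CoMingle
# syntax, just a sketch of what an "ensemble-centric" rule looks like compared
# with hand-written per-node message-handling code.
state = {
    "A": {("has", "photo.jpg")},
    "B": {("wants", "photo.jpg")},
    "C": set(),
}

def transfer_rule(state):
    """One ensemble-centric rule: if node X has an item that node Y wants,
    move it from X to Y. The runtime, not the programmer, picks X and Y."""
    for x, facts_x in state.items():
        for kind, item in list(facts_x):
            if kind != "has":
                continue
            for y, facts_y in state.items():
                if ("wants", item) in facts_y:
                    facts_x.discard(("has", item))
                    facts_y.discard(("wants", item))
                    facts_y.add(("has", item))
                    return True        # one rewrite step applied
    return False

while transfer_rule(state):            # run to quiescence
    pass
print(state)   # {'A': set(), 'B': {('has', 'photo.jpg')}, 'C': set()}
```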
-