Qatar Foundation Annual Research Conference Proceedings Volume 2016 Issue 1
- Conference date: 22-23 Mar 2016
- Location: Qatar National Convention Center (QNCC), Doha, Qatar
- Volume number: 2016
- Published: 21 March 2016
Heme Oxygenase (HO)-1 Induction Prevents Endoplasmic Reticulum Stress-Mediated Endothelial Cell Death and Dysfunction
Authors: Abdelali Agouni, Hatem Maamoun, Matshediso Zachariah and Fiona Green

Diabetes is intimately associated with cardiovascular complications. Much evidence has highlighted the complex interplay between endoplasmic reticulum (ER) stress and oxidative stress in the pathogenesis of diabetes. Heme oxygenase-1 (HO-1) induction has been shown to protect against oxidative stress in diabetes; however, the underlying molecular mechanisms have not yet been fully elucidated. In this project we aim to test the hypothesis that HO-1 induction protects against high glucose-mediated ER stress and oxidative stress in endothelial cells and enhances cell survival.
Endothelial cells were cultured in physiological or high concentrations of glucose in the presence of cobalt protoporphyrin IX (CoPP, an HO-1 inducer), 4-phenylbutyrate (PBA, a chemical chaperone that inhibits ER stress) or vehicle. The ER stress response was then assessed (PCR, western blot). Production of reactive oxygen species (ROS; flow cytometry) and nitric oxide (NO; Griess assay) was analysed, and apoptosis and caspase 3/7 activity were measured. High glucose treatment increased protein and mRNA expression of several ER stress response markers (BiP, CHOP, ATF4) and enhanced ROS production, in addition to reducing NO release. Interestingly, pre-treatment of cells with PBA or CoPP significantly reduced high glucose-mediated ER stress and oxidative stress. Cells incubated with high glucose also showed enhanced apoptosis, increased protein expression of cleaved PARP and caspase-7, and enhanced caspase 3/7 activity, whereas cells pre-treated with either PBA or CoPP were fully protected. The mRNA expression of the inflammatory cytokine IL-6 was enhanced in cells incubated with high glucose, and this increase was prevented by pre-treatment with PBA or CoPP.
These results highlight the importance of oxidative stress both in initiating and maintaining the ER stress response and in mediating ER stress-induced damage and cell death in endothelial cells. This work also underscores the therapeutic potential of HO-1 induction against hyperglycaemia-mediated endothelial dysfunction.
High Selenium Intake is Associated with Endothelial Dysfunction: Critical Role for Endoplasmic Reticulum Stress
Authors: Abdelali Agouni, Matshediso Zachariah, Hatem Maamoun and Margaret Rayman

Selenium is associated with insulin resistance and may therefore affect endothelial function, increasing the risk of type II diabetes and its associated cardiovascular disease. However, the underlying molecular mechanisms are not clear. High selenium doses cause apoptosis in some cancer cells through induction of the endoplasmic reticulum (ER) stress response, a mechanism also involved in the pathogenesis of insulin resistance and endothelial dysfunction (ED). We therefore hypothesised that high selenium intake could cause ED through ER stress.
Endothelial cells were treated with selenite (0.5–20 μM) in the presence or absence of the ER chemical chaperone 4-phenylbutyric acid (PBA). High selenium concentrations (5–10 μM selenite), compared with the physiological concentration (0.5 μM), enhanced mRNA expression of several pro-apoptotic ER stress markers, such as activating transcription factor-4 (ATF4) and CCAAT/enhancer-binding protein homologous protein (CHOP). In addition, the Griess assay showed that high selenite treatment (5–20 μM) reduced NO production. Moreover, flow cytometry showed that high selenium enhanced ROS production and apoptosis. Finally, supra-nutritional concentrations of selenite increased caspase 3/7 activity in endothelial cells compared with the physiological concentration. Interestingly, pre-incubation of cells with PBA completely reversed all the effects of high selenium, indicating the involvement of the ER stress response.
Overall, we show here that high selenium treatment causes endothelial dysfunction and cell death through activation of the ER stress response. These results highlight the importance of a balanced selenium intake in order to achieve maximal health benefits. The findings also underscore the importance of monitoring cardiovascular risk in cancer patients supplemented with high amounts of selenium as part of their chemotherapeutic intervention.
Implementation of a New Genetic Screening Test for Recessive Genetic Diseases in a Program of Oocyte Donation
Background
Current screening for carriers of genetic diseases in oocyte donors includes assessment of the risk of transmission of inherited diseases based on personal and family history of genetic disorders. Most assisted reproduction centers also include karyotyping, mutational screening of the CFTR gene and directed study of the fragile X premutation. Next-generation sequencing (NGS) technologies have made it possible to expand genetic screening to a large number of diseases at a reasonable cost.
Objective
• Develop a new NGS-based genetic test (qCarrier) for extended recessive-disease carrier screening in the field of reproductive medicine.
• Implement the carrier screening test in our oocyte donation program.
Material and Methods
The test covers 200 genes (68 analysed by full-sequence analysis and 132 by targeted screening of known mutations) associated with 185 autosomal recessive (AR) diseases and 11 X-linked diseases. The test was developed with NGS technology and allows characterization of a broad spectrum of mutations (point mutations, indels, rearrangements and copy-number variants). Expanded carrier screening is performed on all oocyte donor candidates and on the male partner of each oocyte recipient. We exclude from the oocyte donor program all candidates who are carriers of an X-linked disease. Heterozygous carrier status for an autosomal recessive condition is not a reason for exclusion as a donor, but it requires selection of a recipient whose male partner does not carry disease mutations in the same gene.
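The exclusion and matching rules described above can be sketched in a few lines. This is a hypothetical illustration of the decision logic only, not the qCarrier pipeline itself; the gene names and data shapes are assumptions made for the example.

```python
# Hypothetical sketch of the screening rules described above: carriers of
# X-linked disease are excluded, and an AR carrier donor is only matched
# with a recipient whose male partner carries no mutation in the same gene.

def donor_eligible(x_linked_carrier_genes):
    """Carriers of any X-linked disease are excluded from the program."""
    return len(x_linked_carrier_genes) == 0

def compatible(donor_ar_genes, partner_ar_genes):
    """True if donor and the recipient's male partner share no gene
    in which both carry pathogenic mutations."""
    return not (set(donor_ar_genes) & set(partner_ar_genes))

# Example: donor carries a CFTR mutation; partner carries an HBB mutation.
print(donor_eligible([]))                     # True: no X-linked carrier status
print(compatible(["CFTR"], ["HBB"]))          # True: no shared gene
print(compatible(["CFTR"], ["CFTR", "HBB"]))  # False: both carry CFTR mutations
```

The same set-intersection check extends naturally to donors or partners carrying mutations in several genes.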
Results
Validation of the test showed high sensitivity (>99%). Extended carrier screening has been performed on a total of 445 oocyte donor candidates and 587 male partners of oocyte recipients. Implementation of the test in our OD program has identified that 57% of patients/donors carry at least one pathogenic mutation.
Conclusions
• Implementation of the test in the oocyte donor program identified 3% of allocations as being at high risk of an autosomal recessive disease. Two percent of candidates entering the oocyte donor program are carriers of pathogenic mutations linked to the X chromosome.
• Expanded carrier screening is a useful tool to reduce the rate of newborns affected by genetic diseases among children born through the oocyte donation program.
• Implementation of the test in clinical settings requires pre- and post-test genetic counseling to ensure adequate information and patient consent and to safeguard the fundamental ethical interests of patients and oocyte donors.
Computer-Aided Design and Synthesis of N-aryl and Heteroarylpiperazine Derivatives as Dual Serotonergic Antagonists for Autism Treatment
Authors: Raed Shalaby, Ola Ghoneim and Ashraf Khalil

Background and Objective
Autism Spectrum Disorders (ASD) are characterized by abnormalities in social interaction and communication skills, in addition to stereotypic behaviors and restricted activities and interests. Autism prevalence has dramatically increased from 1 case per 5000 children in the early 1980s to 1 case per 68 children as of 2015. A recent pilot study on the demographic distribution of children with autism in Qatar showed a preliminary ratio of 1 child with autism per 500 school children. It is currently widely accepted that abnormalities in serotonin (5-HT) neurotransmission are among the most important causes of ASD. Consequently, Selective Serotonin Reuptake Inhibitors (SSRIs) have been utilized to target various symptoms of the disorders through their ability to increase 5-HT in synaptic clefts. Unfortunately, there is a delay in the therapeutic effect of about 4–6 weeks, which may be attributed to the time needed for 5-HT autoreceptor (5-HT1B/1D) desensitization. This delay adversely affects child compliance. Accordingly, co-administration of SSRIs with 5-HT1B/1D antagonists would increase serotonin levels in the brain more rapidly. It was then proposed that a “hybrid” drug acting on both targets would have the advantages of lower cost and better compliance. In the present study, we report the design of a dual pharmacophore model for binding to the serotonin transporter and 5-HT1B/1D receptors, followed by the microwave-assisted synthesis of structurally diverse N-aryl and heteroarylpiperazines as dual antagonists at the reuptake transporter and 5-HT autoreceptors.
Method
Molecular Modeling
All compounds with an IC50 value < 10 nM against the serotonin transporter were retrieved from the ChEMBL database (version 20; 1,463,270 molecules) and filtered using Lipinski's rule of five, which led to 367 inhibitors. The retrieved compounds were then clustered into 15 cohorts using FCFP_6 fingerprints implemented in the Accelrys Discovery Studio software. The most active compound of each cohort was picked and used to build the pharmacophore model with the Common Feature Pharmacophore Generation module of Discovery Studio. The pharmacophore hypotheses were evaluated, ranked and validated using a set of mixed actives and decoys.
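The rule-of-five filtering step can be sketched as follows. This is a minimal illustration of the filter only; the descriptor values below are invented placeholders, not ChEMBL data, and the study itself performed this step in Discovery Studio rather than in code.

```python
# Sketch of the Lipinski rule-of-five filter used to pre-screen serotonin
# transporter inhibitors. Descriptor values are illustrative placeholders.

def passes_lipinski(mw, logp, h_donors, h_acceptors):
    """Rule of five: MW <= 500 Da, logP <= 5, at most 5 H-bond donors
    and at most 10 H-bond acceptors."""
    return mw <= 500 and logp <= 5 and h_donors <= 5 and h_acceptors <= 10

inhibitors = [
    {"id": "cpd-1", "mw": 345.2, "logp": 3.9, "hbd": 1, "hba": 4},
    {"id": "cpd-2", "mw": 612.7, "logp": 5.8, "hbd": 3, "hba": 9},  # fails MW and logP
]
kept = [c["id"] for c in inhibitors
        if passes_lipinski(c["mw"], c["logp"], c["hbd"], c["hba"])]
print(kept)  # ['cpd-1']
```

In practice, descriptors such as molecular weight and logP would be computed from structures by cheminformatics software rather than entered by hand.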
Chemistry and Biology
The first reaction of the scheme was a modified Buchwald–Hartwig amination, in which different aryl and heteroaryl bromo-derivatives were coupled with 1-Boc-piperazine using microwave energy. Deprotection was then performed to remove the Boc group from the synthesized compounds and liberate the free amine, which was coupled with an activated acid that mimics the SSRI fluoxetine. The synthesized compounds obey Lipinski's rule of five, and their pharmacophoric features were mapped using molecular modeling studies.
Results
The proposed compounds were successfully synthesized and purified using flash chromatography. The chemical structures were confirmed by mass spectrometry and NMR. Binding affinity was then tested at 5-HT1B/1D receptors and the 5-HT reuptake transporter. Some of the compounds showed promising activity, and further in vivo assays will be performed.
Conclusion
A molecular modeling study was successfully performed to generate a hypothetical pharmacophore model. A microwave-assisted synthetic scheme was accomplished and all final compounds were purified. Chemical structures were confirmed using different spectroscopic techniques. Biological in vitro testing was conducted and all relevant data will be presented.
Antifolate Drug Resistance: Novel Mutations and Therapeutic Efficacy Study from Arunachal Pradesh, NE India
Malaria is a major public health concern in north-east India, with a preponderance of drug-resistant strains. Until recently, the partner drug for artemisinin combination therapy was sulphadoxine–pyrimethamine (SP). Antifolate drug resistance has been associated with mutations in the dihydropteroate synthase (dhps) and dihydrofolate reductase (dhfr) genes. This study aimed to investigate antifolate drug resistance at the molecular level, together with therapeutic efficacy, in 35 patients. Three novel point mutations were found in the dhps gene, with 10 haplotypes, along with the already reported mutations. A single haplotype carrying a quadruple mutation was found in the dhfr gene. The study reports a high degree of antifolate drug resistance, as evidenced by the presence of multiple point mutations in the dhps and dhfr genes. The therapeutic efficacy study revealed one early treatment failure and three late clinical failures. The findings of the study support stopping the use of SP as a partner drug in malaria treatment for NE India.
Keywords
Plasmodium falciparum, North East India, drug resistance, therapeutic efficacy
Medication Risks Communication in Middle East Cancer Patients
Authors: Kerry Wilbur, Sumaya Al Saadi, Maha Al Okka, Ebaa Jumat, Alya Babiker, Marwa Al Bashir and Nesma Eissa

Background
Cancer treatments are frequently associated with adverse effects, but there may be a cultural reluctance among care providers to be forthcoming with patients regarding these risks for fear of promoting non-adherence. Conversely, research in a number of countries indicates high levels of patient desire for this information. We sought to explore pharmacist and nurse views and experiences in educating patients regarding treatment safety and tolerability, as well as the roles of other professions in this regard, and to explore cancer patient experiences, satisfaction, and preferences for medication risk communication in this Middle East care setting.
Design
In this mixed-methods study, six focus group discussions with nurses and pharmacists were conducted at the National Center for Cancer Care and Research (NCCCR) in Qatar during 2015. Additionally, a 10-item questionnaire (Arabic, English) was developed and administered to a convenience sample of consenting adult patients receiving treatment at NCCCR. Ethics approval was obtained from both the Hamad Medical Corporation and Qatar University Institutional Review Boards.
Results
Focus group
Eleven pharmacists and 22 nurses providing direct patient care participated. Concepts related to three key themes were drawn from the seeding questions and included the factors determining the level of risk they communicated: the specific treatment regimen in question; the patient; and their assessment of the patient. Patient-related considerations arose from additional subthemes; both nurses and pharmacists described aspects related to the perceived psychological health status of the patient, as well as anticipated comprehension, as ascertained from demonstrated education and language abilities. In all discussions, it was noted that physician and family non-disclosure of the cancer diagnosis to the patient profoundly influenced the nature of the information they provided. While a high level of cohesion in safety communication prioritisation between the two health disciplines was found, a number of pharmacists asserted a more formal role compared with the informal and repeated teaching by nurses.
Survey
One hundred and forty-three patients were interviewed (15 of whom were Qatari). Most (88%) stated that the level of side effect information they received was sufficient, with physicians (86%) followed by pharmacists (39%) as the preferred sources. The majority (97%) agreed that knowing about possible side effects would help them recognize and manage the reaction, and 92% agreed it would help them understand how to minimize or prevent the risks. Overall, eighteen percent indicated this information would make them not want to take treatment, but some regional differences among patients emerged (37.5% Gulf Coast Country-origin vs 15.8% Middle East North Africa-origin, p = 0.029, vs 12.1% Philippines, p = 0.030). Two-thirds (65%) had previously experienced intolerance to their cancer treatment regimen.
Conclusions
Nurses and pharmacists in this Middle East healthcare environment were not reluctant to discuss treatment side effects with patients and draw on similar professional judgements in prioritising treatment risk information. We found that they did not always recognise each other's informal educational encounters and that there are opportunities to explore increased collaboration in this regard to enhance the patient care experience.
Most patients surveyed expressed a preference for details of the possible side effects they may encounter during treatment. However, one in five considered such information a factor for non-adherence, indicating the need for patient-specific approaches when communicating medication risks.
Acknowledgment
This research was made possible by a UREP grant from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Diabetes Mellitus: Unwanted Visitor in the Tertiary Heart Hospitals
Authors: Ayman El-Menyar, Jassim Al Suwaidi, Hajar Albinali and Hassan Al-Thani

Diabetes mellitus (DM) and cardiovascular diseases (CVD) constitute a major health care challenge worldwide. We evaluated the trends and outcomes of DM in patients presenting with CVD over a 22-year period in the State of Qatar.
Methods
We performed a descriptive retrospective chart review of all admitted CVD patients, including those with DM, from the Cardiology and Cardiovascular Surgery database at the Heart Hospital (HH) of Hamad Medical Corporation (HMC) in Qatar over a 22-year period.
Results
During the study period between 1991 and 2012, a total of 48,803 patients (77% males; 40.3% diabetics) were admitted to the HH, an average admission rate of 2218 CVD patients per year. Two out of five CVD patients (40%) were known diabetics. Thus, it was estimated that 14.2 per 10,000 people of the general population in Qatar have both DM and CVD. On average, 895 CVD patients admitted to the HH each year are diabetics. The overall proportion of admissions for diabetic patients with CVD increased over the study duration. Diabetic males were 6 years younger than females. DM was more prevalent in Arabs (68 vs. 32%), but its burden showed a decreasing trend over time compared with South Asians. Diabetics who presented with ST-elevation myocardial infarction (47.5 vs. 22.7%) tended to be 8 years younger than diabetics who presented with heart failure. Over the study period, beta-blocker use increased substantially (from 10 to 71%). However, angiotensin-converting enzyme inhibitors/angiotensin receptor blockers (ACEI/ARBs) were underutilized, although their use increased from 30 to 56%. There were 4.4 deaths per 100 CVD admissions, equivalent to 97 deaths per year; of these, 52% had DM (2.3 deaths per 100 CVD admissions). The overall case fatality rate (CFR) of DM was 5.6%. Diabetic Asian patients died 9 years earlier than diabetic Arabs at the HH. Age-adjusted predictors of mortality in DM patients at the HH included lack of beta-blocker use (OR 4.35), lack of ACEI/ARB use (OR 3.58), myocardial infarction (OR 3.20), lack of aspirin use (OR 2.56), and congestive heart failure (OR 1.75) (P = 0.001 for all).
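The reported rates are mutually consistent and can be cross-checked from the headline figures, allowing for rounding in the published numbers:

```python
# Cross-check of the rates reported above; small discrepancies are due to
# rounding in the published figures.

total_patients = 48_803
years = 22
dm_fraction = 0.403            # 40.3% known diabetics
deaths_per_100 = 4.4           # deaths per 100 CVD admissions
dm_share_of_deaths = 0.52      # 52% of deaths had DM

admissions_per_year = total_patients / years
assert round(admissions_per_year) == 2218           # "2218 CVD patients per year"

dm_admissions_per_year = dm_fraction * admissions_per_year
assert abs(dm_admissions_per_year - 895) < 2        # "895 CVD patients ... are diabetics"

deaths_per_year = deaths_per_100 / 100 * admissions_per_year
assert abs(deaths_per_year - 97) < 1                # "97 deaths per year"

dm_deaths_per_100 = deaths_per_100 * dm_share_of_deaths
assert round(dm_deaths_per_100, 1) == 2.3           # "2.3 deaths per 100 CVD admissions"

cfr_dm = (dm_deaths_per_100 / 100 * admissions_per_year) / dm_admissions_per_year
assert abs(cfr_dm - 0.056) < 0.002                  # "overall CFR of DM was 5.6%"
```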
Conclusions
In Qatar, DM remains a healthcare challenge. Although the admission rate of diabetic patients at the HH is increasing, the mortality rate is decreasing. Evidence-based medication use remains far from guideline recommendations, although it shows substantial improvement. Lack of evidence-based CVD medications in diabetic patients is associated with a significant increase in mortality at the HH. Efforts should be directed toward public awareness of CVD risk factors and DM through education programs and efficient primary and secondary prevention strategies.
MiSeq-Next Generation Sequencing Approach in Investigating the Evolutionary Dynamics of Viral Infection in Children with Type I Diabetes
Background
There is no longer a question as to whether viruses contribute to the pathogenesis of type 1 diabetes (T1D), as we recently reviewed, but rather how they contribute and, in particular, what role viral diversity and evolution play in the disease process. The recent finding of enterovirus (EV) capsid protein VP1 in pancreatic autopsy samples from the JDRF Network for Pancreatic Organ Donors with Diabetes (nPOD) supports earlier case series in which EVs (Coxsackievirus B, CVB) were isolated from pancreatic tissue and inoculated into human islets, causing functional impairment and β-cell death. The most interesting observation from the nPOD data is the patchy distribution of insulitis, with MHC class II hyper-expression on β-cells co-located with viral protein. Indeed, it is well established that specific EV strains demonstrate β-cell tropism; we and others have shown that EVs infect and replicate in β-cells (Fig. 1), inducing inflammation, cytokine production and functional damage. There is also substantial epidemiological evidence that EVs have more than an occasional role in the disease; in our meta-analysis of >4000 cases, the odds ratio (OR) was ∼10 for EV infection at T1D onset vs controls, and OR ∼4 for EV infection and islet autoimmunity (IA). While the genetic and immunological components of the disease are not in question, the capacity for EVs to evolve is completely unexplored in the pathogenesis of human T1D. This information is critical for the development of EV vaccines to prevent T1D, which is currently underway.
HYPOTHESIS-1: Variation in the capsid and non-structural regions of the EV genome determines β-cell tropism
AIM-1: To characterize human EV isolates in cases of IA and T1D using NGS – to identify regions in the EV genome associated with β-cell tropism
HYPOTHESIS-2: Increased genetic diversity of EVs at the full genome level is associated with seroconversion to IA and T1D
AIM-2: To examine the evolutionary dynamics and genetic diversity of EVs at the full genome level from children with IA and T1D, and to quantify the extent of intra-host evolution of EVs within an infection and the kinetics of intra-host virus evolution between infections.
Research Plan and Methods
Samples: EV prototype strains and clinical isolates from children with IA and T1D that infect and replicate in β-cells. Cohorts: Viruses in Genetically at Risk (VIGR), Environmental Determinants of Islet Autoimmunity (ENDIA), and children at onset of T1D (EET1DPP2). Samples were collected at the study visit or at diagnosis of T1D. RNA was extracted with the QIAamp viral RNA kit, and quantitative RT-PCR was performed on the Roche LC-480 platform. NGS: full-length EV genomes were amplified as a single 7.4 kb fragment by RT-PCR, and NGS was performed using the Illumina MiSeq sequencer. Phylogenetic analysis of the full-length viral consensus sequences was performed using the neighbour-joining and maximum-likelihood methods. Statistical analysis was carried out with R software. Trees were constructed from alignments of complete genome sequences using best-fit models and visualised using FigTree (Figure 2). Comparisons were performed with Viral Epidemiology Signature Pattern Analysis (VESPA).
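The distance step that feeds a neighbour-joining tree can be illustrated with simple pairwise p-distances over an alignment. This is a simplified stand-in only: the study used best-fit substitution models rather than raw p-distances, and the toy sequences below are invented for the example.

```python
# Simplified stand-in for the distance step preceding neighbour joining:
# pairwise p-distances (fraction of differing aligned sites) over a toy
# alignment. Real analyses use model-corrected distances.

from itertools import combinations

def p_distance(a, b):
    """Fraction of aligned sites at which two sequences differ."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    return sum(x != y for x, y in zip(a, b)) / len(a)

alignment = {
    "isolate-1": "ATGGCGTACA",
    "isolate-2": "ATGACGTACA",
    "isolate-3": "ATGACGTTCA",
}
dist = {(i, j): p_distance(alignment[i], alignment[j])
        for i, j in combinations(alignment, 2)}
for pair, d in dist.items():
    print(pair, round(d, 2))
```

The resulting distance matrix is the input a neighbour-joining implementation would cluster into a tree.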
Results and Discussion
Two of the EVs from IA+ cases had an N-to-S amino acid (AA) substitution within the 2C protein, which became dominant after 10 days of passage in the islets. The 2C protein encodes the viral helicase and lies just upstream of the viral region that shares significant homology with human GAD65. An EV isolate from another IA+ case had 5 AA differences within the capsid protein VP4, at residues 3, 16, 18, 50 and 61 (Figure 3). VP4 is an internal capsid protein linked to the genome. VP4 has been shown in vitro to be a target of human antibodies that enhance CVB-induced synthesis of interferon α (IFN-α). CVB-induced IFN-α plays a role in the initiation and/or maintenance of chronic CVB infection in human islets. Antibodies directed towards region 11–30 of the VP4 capsid enhance infection of peripheral blood cells with CVB4 in vitro. Therefore, our preliminary data suggest VP4 may be a determinant of ‘diabetogenicity’. Our novel NGS data will contribute to vaccine development from a global perspective. Our ultimate goal is to reduce the future burden of T1D.
On the way to the Optimal Design of an Aortic Heart Valve -or- Discovering the Obvious?
Authors: Albert Ryszard Liberski and Radoslaw Kot

The first task of a tissue engineer trying to make a scaffold for a heart valve is to adopt some model of a heart valve in order to establish the target geometries and properties that should be recreated in the artificial scaffold. The natural way to do so is to conduct a literature search and find the current scientific consensus on the topic. Here the problems start: each researcher seems to have an individual opinion about the optimal geometry of the valve. What makes the situation more complex is that each researcher has carefully chosen arguments to explain why that particular design is better than the others. Since the consensus is not there yet, we chose to contribute to this discussion. Analysis of the available reports enables us to “cook out” two distinguishable and, to some extent, contrary hypotheses.
1st: the optimal architecture of an artificial valve is the architecture of the native one.
2nd: there is no such thing as an optimal architecture of a heart valve, and there never will be.
The first option is very tempting due to its simplicity. Of course, in most human beings the valves work fine for an entire lifetime, so let's make the same structure for the sick patient. But which one? Literally mine, or yours? Maybe we should take an average of 1000, or a million, healthy examples? Hypothetically, if you needed a new face, would you like to have the “median” appearance, or would you rather go for the most beautiful one? Let's take it further: what is the optimal design of a human face? Perhaps one person would wish to obtain the most beautiful face possible, while another would opt for a younger variant of their old one. When it comes to faces, it is easy to distinguish beautiful from average and ugly; using our internal standards, we do this hundreds of times every day. But valves are usually not visible, so do we need to work out a standard of “beauty” for heart valves? Frequently, clinicians and researchers seem to use the same face-related internal standards to propose scaffold geometries: “I like this more than that, because it follows my internal visual standards of HV beauty better.” But this is not the way to go. We need hard evidence to distinguish a structure that is better than another.
The second option is more radical: each patient needs an individually chosen heart valve design that reflects that patient's specific needs and conditions. This is obviously a very elegant but horribly expensive solution, since an individualized scaffold not only needs to be designed but also produced in only a few copies.
The synthesis of both strategies is, in our opinion, to establish the most “beautiful” HV, but according to objective physical parameters. What additionally supports this logic is that, under physiological conditions, the scaffold will adjust to the patient by changing its geometry.
To test our observations and conclusions, we contacted 30 prominent clinicians (n = 11) and tissue engineers (n = 19) with the question: what is the optimal design of an aortic heart valve? In this report we present their responses and comments.
Social Hypertension Awareness System in Gulf Countries (SHAMS)
Authors: Mohammed Alotaibi and Zaid Bassfar

Hypertension prevalence around the world has risen over the last few years, and the Gulf countries are now among the most affected in the world. In Saudi Arabia alone, an estimated 24% of the current population suffers from hypertension. This has been attributed to factors such as the lack of proper management systems for hypertension and inadequate education systems in the region. Alongside this health problem, smartphone use in the Gulf countries has increased, as has access to the internet.
SHAMS therefore aims to fill the gap left by the lack of a dedicated hypertension education program in the Gulf countries by using mobile health technology, specifically a private social network for hypertensive patients. The system consists of two units: (1) medical staff, who may act as hypertension educators; and (2) hypertensive patients.
The SHAMS system allows medical staff to teach hypertensive patients through the private social network. The hypertension educator can post information, videos and pictures and also answer patients' inquiries through SHAMS. In turn, hypertensive patients can share their thoughts with their doctor/educator and with other hypertensive patients in the SHAMS network.
In conclusion, there is substantial support for the adoption of SHAMS to improve the health awareness of hypertensive patients in Saudi Arabia. The research is ongoing, with a focus on the design and implementation of the integrated SHAMS architecture, which will be further evaluated and developed in the KSA.
Prevalence and Associated Factors of Physical Activity Among Mothers in the Gaza Strip-Palestine
Authors: Rima El Kishawi, Kah Leng Soo, Yehia Abed and Wan Abdul Manan Wan Muda

Background
A high prevalence of obesity has been observed in numerous developed and developing countries. A reduction in energy expenditure due to low physical activity is one factor contributing to the increase in obesity. Physical inactivity is one of the ten leading risk factors for global mortality and is associated with all-cause mortality. Regular physical activity lowers the risk of various types of non-communicable diseases. The prevalence of obesity is high among women in the Gaza Strip, yet there is a lack of studies on the pattern of physical activity among adults there.
Objective
The aim of this study was to determine the prevalence of physical activity among mothers aged 18–50 years in the Gaza Strip and its associated factors, and additionally to explore mothers' perceptions and practices of physical activity.
Methodology
A mixed-methods design was used, combining quantitative and qualitative methods. A total of 357 mothers were recruited from the Gaza Strip using a multistage sampling method covering three different geographical areas: Jabalia refugee camp in the north of the Gaza Strip, the El Remal urban area in Gaza City, and the Al Qarara rural area in the south of the Gaza Strip. A structured questionnaire was used in face-to-face interviews with mothers to obtain information on their sociodemographic characteristics and nutrition knowledge. The short form of the International Physical Activity Questionnaire (IPAQ) was used to assess physical activity patterns. In this study, sitting time was used as an independent proxy measure of sedentary behavior. For the qualitative component, three focus group discussions (one in each area) were conducted, involving 24 of the surveyed mothers, to explore their perceptions and practices of physical activity. Binary logistic regression analyses were applied to identify the determinants of physical activity, adjusted for various factors.
Results
The prevalence of physical inactivity was 21.6%; about 78% of mothers were classified as moderately active, while vigorous activity was not observed. Mean sitting time was 2.74 ± 1.32 hours/day. Results revealed that physical activity decreased among mothers who lived in low-income households (OR: 2.30; 95% CI: 1.20–4.45; p = 0.013), and those with high nutrition knowledge were more likely to be physically inactive (OR: 1.15; 95% CI: 1.0–1.314; p = 0.040), while mothers with a low or medium education level were more active (OR: 0.31; 95% CI: 0.15–0.62; p = 0.001 and OR: 0.47; 95% CI: 0.23–0.96; p = 0.039, respectively). There was no significant association between physical activity levels and geographical area. The qualitative results showed that most of the mothers believed home chores were a kind of exercise and could substitute for practicing sports. The main constraints on physical activity were attributed to sociocultural factors, owing to the limited availability of exercise facilities for Palestinian women and restrictions on their freedom.
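Odds ratios with 95% confidence intervals, like those reported above, are derived from 2×2 counts of outcome by exposure. The sketch below shows the standard calculation with a Wald confidence interval; the counts are hypothetical, not the study's data.

```python
# How an odds ratio and Wald 95% CI are computed from a 2x2 table.
# Counts below are hypothetical, not taken from this study.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = inactive/active in the exposed group;
    c, d = inactive/active in the reference group.
    Returns (OR, CI lower bound, CI upper bound)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: 30/70 inactive vs active among low-income mothers,
# 20/120 among the reference group.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 120)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.57 (95% CI 1.36-4.87)
```

In the study itself these estimates came from binary logistic regression, which additionally adjusts for covariates; the unadjusted 2×2 calculation above is the simplest case.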
Conclusions
The results of this study are important for monitoring physical activity levels among mothers in the Gaza Strip. Despite a high level of nutrition knowledge among mothers, the prevalence of physical inactivity is high. Policy makers must give more attention to raising awareness of the importance of physical activity for the overall health of the community. Understanding cultural attitudes is also required in order to implement effective community-based intervention programs to improve physical activity levels among mothers in the Gaza Strip.
-
-
-
Designing Customized Peptide-Linkers to Functionalize Scaffolds and Nanoparticles for Tissue Engineering Applications
Authors: Navaneethakrishnan Krishnamoorthy, Yuan-Tsan Tseng and Magdi Yacoub
Introduction and Objectives
Engineering living tissues or organs critically depends on the use of scaffolds to attract, house and instruct host cells. To achieve this, the scaffolds need to be functionalised using different strategies, one of which relies on the use of designer peptides to decorate scaffolds. Peptide linkers are of increasing importance in the production of bioactive materials for various biological applications. In particular, they act as natural linkers for merging multiple functional domains and for attaching active motifs to the surface to recruit cells and enhance the function of biomaterials. However, intensive structural customization of the linkers is required before examining them under experimental conditions for challenging tissue engineering applications such as heart valve repair. Here we apply computer-aided molecular design to construct linkers with essential properties such as multiple motif presentation and binding on scaffolds/nanoparticles.
Materials and Methods
The 3D structures of known functional motifs (collagen- and fibronectin-inducing) with linkers were used in interactive molecular dynamics simulations, carried out under physiological conditions after parameterization of all atomic properties. The simulations solve Newton's equations of motion for 100 nanoseconds on a parallel supercomputer using the algorithms of the Groningen Machine for Chemical Simulations (GROMACS). Simulation trajectories were collected at regular intervals for analysing the molecular behaviour, molecular interactions and structural properties of the linkers.
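To make the integration step concrete: MD engines such as GROMACS advance Newton's equations with a symplectic scheme (leap-frog or velocity Verlet). The minimal one-dimensional sketch below illustrates the velocity Verlet update on a hypothetical harmonic "bond" force; the parameters are illustrative, not real force-field terms:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Integrate m * d2x/dt2 = F(x) with the velocity Verlet scheme."""
    a = force(x) / mass
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# Hypothetical harmonic force F = -k*x (illustrative parameters only).
k, m = 1.0, 1.0
x_end, v_end = velocity_verlet(1.0, 0.0, lambda x: -k * x, m, dt=0.01, steps=1000)
energy = 0.5 * m * v_end**2 + 0.5 * k * x_end**2  # should stay near 0.5
```

The scheme's near-exact energy conservation over long runs is the reason MD packages use it rather than a naive Euler step.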
Results
Our recent modeling shows that linkers based on valine and alanine can be used to merge dual bioactive motifs, which enhance the stimulation of collagen and fibronectin in human adipose-derived stem cells under experimental conditions. By further applying this modeling strategy, we are now developing linkers in specific conformations with surface-attachment properties. The computer-aided design was used to analyse the structural role of key residues such as proline, serine, alanine, glycine, glutamic acid, lysine and cysteine, in different lengths and combinations, to probe favourable linkers. Structural rigidity and self-assembly are the major molecular features used to create efficient linkers for the decoration and functionalization of biomaterials.
Conclusion
The design of customized linkers may offer many advantages for the production of intelligent biomaterials with multi-functionality and enhanced bioactivity, and for targeting specific sites/shapes in tissue engineering applications.
-
-
-
A Prospective Study Regarding Factors Related to Unscheduled Revisit Within 72 Hours in Adult Emergency Department. Al Khor Hospital, State of Qatar
Background
Crowding in hospital Emergency Departments (EDs) is a commonly observed problem all over the world. Although the reasons and mechanisms differ, the major factors are the increasing volume of patients seeking medical care in ED services, the lack of inpatient beds, care for non-urgent conditions among patients who identify the ED as their easiest and usual site of care (alongside the typical treatment of patients with serious illnesses or injuries), and the unavailability and inaccessibility of other medical services in the community.
Objective
To identify the factors related to patients' unscheduled ED revisits, to determine the average length of stay for patients of different triage priorities, and to identify the reasons for ED stays longer than 6 hours.
Methods
Al Khor Hospital is a 110-bed hospital in the northern part of Qatar with an annual ED volume of 158,000 patients.
A prospective study was conducted in the ED over a two-month period, from 15/09/2014 to 14/11/2014. A census sample of patients who revisited within 72 hours of discharge from the ED was recruited into the study. The Cerner system was used to collect data on all revisit patients, including demographics and first-visit information. Two expert ED consultants reviewed the data independently. Further data, including average length of stay and reasons for ED stays longer than 6 hours, were collected by the research team, and all data were analyzed by the author. The factors were categorized into four types: physician related (missed diagnosis, medications not prescribed, treatment error); patient related (perception of not having improved); illness related (complications or prognosis of the disease process); and system related (unavailability of health care, or no health center available locally).
Results
During the study period, 24,933 patients visited the ED; 849 (3.4%) revisited within 72 hours of discharge. Of these, 165 were excluded from the study, comprising patients who left without being seen (LWBS), were discharged against medical advice (DAMA), or absconded. Revisiting patients were more likely to be young adults aged 20–40 years (59.79%), male (78.94%) and expatriate (69.73%); 30.27% of patients had three ED visits while 69.73% visited twice. Most of the patients (538) self-presented to the ED at their second visit. The vast majority of patients (542) agreed that they had received discharge instructions.
Physician related: missed diagnosis (1.6%), adverse drug reaction (1.3%) and discharge without home medication (8.4%).
Patient related: 60.26% (331) of the patients perceived that they had not improved with the initial treatment. However, only 8 of these patients were admitted to Al Khor Hospital and one was transferred to another health care facility for expert management; the vast majority (97.2%) were discharged from the ED.
Illness related: this was the most common reason for ED revisits. 52.9% (362) returned with the same complaints and 21.3% (146) with related complaints, of whom 97.6% were discharged and 1.3% (7) admitted to hospital; 22.8% (156) presented to the ED with a new complaint.
System related: 23.49% of revisiting patients lived in Al Khor or a nearby area, with their primary health center 70 km from their residence; 30% of patients had no health center available for further follow-up. This causes a high financial burden for low-income workers.
During the study period, 718 patients stayed more than 6 hours in the ED, accounting for 2.87% of all ED patients, with a mean age of 35.43 years. Most of these patients were male (93.3%). The vast majority were priority 3 and 4 (26.6% and 63.6% respectively). The mean times to triage and physician assessment were 2.538 and 2.571 hours respectively, and the mean length of stay in the ED was 8.365 hours. The top five reasons for delay were waiting for physician assessment (26.3%), waiting for physician reassessment (20.2%), observation (11.3%), waiting for nurse triage (8.6%) and repeat laboratory work (8.2%).
Revisits were distributed unequally across the three duty shifts. The morning (07:00–15:00) and evening (15:00–23:00) shifts received the highest proportions, 43.12% (295) and 38.45% (263), compared with the night shift (23:00–07:00) at 15.05% (103), whereas the daily distribution of revisits was almost equal.
In general, the average length of stay was 2 hours 14 minutes for priority 2 patients and 2 hours 17 minutes for priority 3, whereas priority 4 and 5 patients stayed 2 hours 27 minutes and 1 hour 54 minutes respectively.
Conclusion
A patient's decision to revisit the ED is complex and involves several factors, such as perceived poor quality of service, missed diagnosis, financial factors and the disease process itself. Our study found that the majority of revisits were due to illness- or system-related factors, such as patients' perception of disease progression, the lack of a local health center for workers, and financial burden. An effective educational program and the initiation of a tele-nursing service for discharged patients could avoid unnecessary ED visits.
-
-
-
Development of an Automated, Real-Time Health Monitor and Emergency Alert System for the Elderly
Authors: Francis Enejo Idachaba and Ejura Mercy Idachaba
The state of health of the elderly members of society, and the fact that most of this group either live alone or with family, creates a need for constant monitoring, as they are often left alone for the greater part of the day when their hosts or family members go to work. The most common cause of death among the aged is heart related. Health-related medical emergencies range from heart attacks to strokes, which happen suddenly and leave victims with little or no ability to call for help. This system provides a means of monitoring the heart rate, temperature and blood pressure of the individual. When the alert function is activated in an emergency, it sends a pre-stored message to the control center, which passes it on to the nearest medical or emergency response team, indicating the location of the user, the vital signs and some medical history. These messages are sent using GSM-SMS technology and are delivered within seconds of activation. The system can be provided by health care providers, emergency response service providers, HMOs and hospitals. The device is to be worn at all times so that individuals are monitored in real time. The system also has a panic button which can be activated by the user for medical challenges other than heart-related cases where the individual is unable to reach a phone, and it incorporates a GPS receiver capable of transmitting the location of the user in the event that the emergency occurs outdoors. The system provides real-time monitoring of the health conditions of the aged and enables faster deployment of paramedics in the event of a medical emergency.
Introduction
A recent study of over 800 elderly citizens over 60 years of age in a Middle Eastern country showed that the prevalent medical challenges, in order of occurrence, are hypertension (59.1%), followed by diabetes mellitus (57.3%), stroke (34.9%), dementia (28.5%), osteoarthritis (24.2%) and Alzheimer's disease (21.4%). Females were observed to have a higher risk than males for obesity (OR = 9.1; 95% CI = 3.51–12.8), followed by osteoporosis (OR = 8.7; 95% CI = 15.10–9.13) and fracture of the neck of femur (OR = 3.9; 95% CI = 2.11–6.91). The results also showed that males are more susceptible to hypertension (OR = 1.4; 95% CI = 1.07–1.85), stroke (OR = 1.3; 95% CI = 1.08–1.89) and renal diseases (OR = 2.4; 95% CI = 1.25–4.54). Hypertension, diabetes and stroke can be monitored using heartbeat and blood pressure sensors. This work uses sensors that monitor the heartbeat, blood pressure and body temperature to detect the occurrence of a health emergency and thus trigger the appropriate response messages.
System design
The system design comprises sensors for real-time monitoring of the patient and a microcontroller which integrates all the data from the sensors and determines when the patient is in an emergency condition. The system is registered to individual owners, and the home address of the users and their medical history are stored in a database of the healthcare provider. The city to be covered by the system is divided into service areas, with ambulance and paramedic teams located in each service area. In the event of a health crisis or medical emergency, the microcontroller interprets the sensor output and sends a message, depending on the sensor reading received from the user, to the central control center. The control center, working together with mobile operators, generates a coarse location of the user and compares it with the user's stored address. If the user is at home, the control center sends the user's name and phone number to the response team, together with the address, basic medical details, the nature of the emergency and the phone number of the next of kin, to the nearest ambulance and paramedic team covering the service area within which the user is located.
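The abstract does not specify the firmware logic, so the following is a hypothetical sketch of the microcontroller's decision rule: compare each vital sign against a configured range and compose the pre-stored GSM-SMS alert when any reading (or the panic button) is out of bounds. The threshold values are placeholders for illustration, not clinically validated limits:

```python
# Hypothetical vital-sign thresholds -- illustrative only, not clinical values.
THRESHOLDS = {
    "heart_rate":  (40, 120),    # beats per minute
    "systolic_bp": (90, 180),    # mmHg
    "temperature": (35.0, 39.0), # degrees Celsius
}

def check_vitals(readings, panic_button=False):
    """Return the list of out-of-range vitals; any entry (or the panic
    button) should trigger the SMS alert to the control center."""
    alerts = []
    if panic_button:
        alerts.append("panic_button")
    for name, (low, high) in THRESHOLDS.items():
        value = readings.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append(name)
    return alerts

def build_sms(user_id, alerts, readings):
    """Compose the pre-stored alert message to be sent over GSM-SMS."""
    body = ";".join(f"{k}={readings[k]}" for k in readings)
    return f"ALERT user={user_id} causes={','.join(alerts)} vitals={body}"
```

In a deployment, the message string would be handed to the GSM modem, and the GPS coordinates appended when the user is outdoors.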
-
-
-
Epidemiological and Clinical Features of Newly Diagnosed Childhood Immune Thrombocytopenic Purpura in Qatar
Background
ITP (immune thrombocytopenic purpura) is the most common bleeding disorder in childhood. It is usually a self-limiting disorder, and most patients recover spontaneously without serious complications. The clinical features of ITP have remained unchanged over the past few decades, but there have been many recent changes in management strategies, as evidenced by the new international guidelines on the management of childhood ITP. We wanted to study the pediatric inpatient population in Qatar admitted with acute ITP over the last 5 years, examining their presenting features and the management strategies adopted by our pediatricians; most of these decisions in our institution are made in conjunction with the pediatric hematology team. This study would also serve as baseline data, which could be used to compare changes in trends of management in the future.
Methods
This was designed as a retrospective descriptive study. We included previously healthy children aged 0 to 14 years with newly diagnosed ITP, admitted to the pediatric inpatient unit at Hamad General Hospital, Doha, Qatar from January 2008 to January 2014. Patients with a pre-existing chronic medical illness, including hematological conditions, and those later diagnosed with conditions other than ITP, were excluded. Data were collected from the patients' medical records using a specially designed data collection sheet gathering all the relevant patient details, including demographics, presenting signs and symptoms, investigations, treatment and outcome. The study was approved by the Medical Research Center at Hamad Medical Corporation.
Results
80 patients fulfilled the inclusion criteria, with a male to female ratio of 1.1:1. 38.3% of the patients were in the 2–5 year age group, which reflects the peak incidence of ITP in children reported in the literature; 13.8% of the patients were below the age of 12 months and 16.3% were above the age of 10. 34.3% of the patients were Qatari nationals, while the rest were expatriates residing in Qatar.
40.5% reported flu-like symptoms before the presentation of the illness. Most of the parents (>80%) denied any history of known hematological disease in their families.
The platelet counts at admission to the pediatric ward were as follows: 80% of patients had a platelet count of 20,000 or less, 11.3% from 21,000 to 35,000, while the rest had platelet counts between 36,000 and 100,000. The most common clinical features reported were spontaneous subcutaneous bruising in 77.5% of patients, followed by oral mucosal petechial rash in 38.8%, fever in 28.8%, epistaxis in 16.3% and gum bleeding in 7.5%. Three patients needed observation in the pediatric intensive care unit due to life-threatening bleeding, one of whom had intracranial bleeding.
The treatment modalities used in our patient population were as follows: intravenous immunoglobulin (IVIG) alone in 86.3%, a combination of IVIG and steroids in 6.3%, and steroids alone in 1.1%. Only 5 patients (6.3%) were managed by observation alone; 4 of these had platelet counts above 20,000 and none had any clinically significant bleeding. The most common IVIG dosing used was 1 gram/kg/day for 2 days. 28.7% of our study population were prescribed a second dose of IVIG by their treating physician. The vast majority (74.3%) did not experience any adverse reaction to IVIG treatment; however, 10.8% had fever and 8% had vomiting and headache. Two patients were clinically suspected of aseptic meningitis post-IVIG therapy, based on the treating physician's assessment. In terms of length of hospital stay, the majority of the children (83.8%) were hospitalized for 1 to 5 days.
At follow-up, 66% of children had recovered with normal platelet counts within 1 year from the date of diagnosis, while 34% had progressed to chronic ITP, defined as persistent thrombocytopenia lasting beyond 1 year.
Conclusion
Our study showed that the clinical features of Acute ITP in Qatar were similar to those reported from various parts of the world. However, the percentage of chronic ITP was higher in our study population than that quoted in literature. Management decisions in our center, like in many centers around the world, were often based on the physician's clinical judgement, rather than the current established international guidelines.
-
-
-
Factors to Increase Influenza Vaccination Acceptance and Coverage Rate Among Pediatricians
Background
Influenza is a highly infectious but preventable viral illness. Influenza vaccination remains the cornerstone of prevention; the World Health Organization (WHO) encourages annual influenza vaccination for all children and youth ≥ 6 months of age and for those with chronic illnesses at risk of developing complications. Vaccinating pediatricians will reduce their risk of getting the flu and could potentially prevent illness in patients; their positive attitudes play a central role in educating parents and supporting decision-making to increase vaccine coverage in children. The immunization schedule in Qatar matches recent WHO recommendations, and vaccination programs are provided to the public accordingly.
A free vaccination campaign started at HMC, Qatar in 2006 for all health care providers in the hospital. The influenza target was set at ≥ 70% vaccination rates. Data from the infection control department show consistently low compliance with the seasonal influenza vaccine among health care workers: in 2011 the rate was 37%, compared with 68% in 2012.
Although a safe and effective vaccine is available, there are few local data on the percentage of vaccinated hospital-based pediatricians and their attitudes toward the seasonal flu vaccine.
Objective
To assess the vaccination coverage rate and attitudes, and to identify factors that could enhance seasonal influenza vaccine acceptance among pediatricians in Qatar.
Methods
A cross-sectional survey was conducted among pediatricians working at different locations in the pediatric department, including the pediatric inpatient ward, pediatric intensive care unit, neonatal intensive care unit and pediatric emergency department, at Hamad Medical Corporation, the main tertiary teaching hospital in Qatar. The survey covered detailed demographics, attitudes, uptake of the influenza vaccine in the current year, and factors influencing vaccine acceptance. The study protocol and questionnaire were reviewed and approved by the Medical Research Centre, Hamad Medical Corporation, Doha, Qatar. All statistical analyses were done using SPSS 22.0 (SPSS Inc., Chicago, IL).
Result
A total of 63 pediatricians from different departments participated in this survey. Our study showed that 78% of participants had received seasonal flu vaccination. Flu vaccine uptake was 58% among physicians working in high-risk areas such as the PICU, NICU and pediatric emergency department, compared with 42% on the inpatient ward. As measures to promote immunization acceptance and coverage among pediatricians, the use of evidence-based statements supporting vaccine effectiveness ranked highest (42%), followed by provision of free on-site vaccination (23%), participation in multidisciplinary educational campaigns (20%), leadership support and role modeling (10%), and lastly increased access to the vaccine (5%).
Conclusion
Personal experience of seasonal influenza vaccination and evidence-based knowledge of the vaccine's benefit and safety play an important role in physicians' attitudes towards immunization.
Our findings showed that vaccine coverage among pediatricians working in a hospital setting is close to the international target of 80% for healthcare facilities. Good compliance with and high acceptance of influenza vaccination by pediatricians will have a positive impact on childhood immunization rates in Qatar. Our study described several practical interventions to enhance flu vaccine acceptance and achieve a higher coverage rate.
-
-
-
Protein Engineering of Glucarpidase to Improve Cancer Therapy Strategies
Authors: Sayed K Goda, Alanod Alqahtani, Mathew Groves and Alex Domling
Antibody Directed Enzyme Prodrug Therapy (ADEPT) is a technique used in cancer treatment which converts a prodrug into a powerful cytotoxic drug only in the vicinity of the tumor. The technique relies on a bacterial enzyme, glucarpidase (formerly carboxypeptidase G2, CPG2). Glucarpidase is also a very effective enzyme for the detoxification of methotrexate (MTX), an important component of various chemotherapeutic regimens for the treatment of cancer patients.
Repeated cycles of ADEPT and the use of wild-type glucarpidase in detoxification are essential but are hampered by the human antibody response to the enzyme. Additionally, glucarpidase acts relatively slowly in detoxification.
The aim of this work is to provide solutions that overcome the pitfalls of these techniques through the application of different strategies.
In our work we successfully isolated new glucarpidase producers from soil. We cloned and overexpressed the novel gene in E. coli, and the new recombinant glucarpidase has been characterized. We also produced different variants of our new gene using different strategies. These variants will be investigated to isolate glucarpidase with higher activity and glucarpidase that evades the immune system.
-
-
-
Mast Cell Proteases as Key Clinical Markers and New Targets for Drug Development in Allergic Disease: Implications for Anti-Doping Policy
Authors: Sayed K Goda, Afrah Al-Yafei, Haya Al Sulaiti, Araf Kyyaly, Mohammed Alsayrafi and Andrew Walls
There have been dramatic increases in the prevalence of allergic conditions throughout the world. Until recently, conditions such as allergic asthma and rhinitis, and life-threatening anaphylaxis, were relatively rare in Qatar and the Gulf states, but the proportion of the population affected now seems to be approaching the high levels of many Western countries. There is a pressing need for better means of effective diagnosis, for the prediction of those at risk of serious reactions, and for new treatments.
Our studies focus on the mast cell, a cell type of pivotal importance in mediating allergic disease. Mast cells release a range of potent proteases and other mediators of inflammation. Three unique mast cell proteases, carboxypeptidase, tryptase and chymase may be valuable as markers for anaphylaxis; and even in asymptomatic subjects serum concentrations may be related to susceptibility to severe reactions. We propose to evaluate specific immunoassays for these enzymes as new laboratory tests.
These three genes encoding the three proteases have been codon optimized and synthesized for maximum expression in either E. coli or Pichia pastoris.
The carboxypeptidase A was subcloned and overexpressed in E. coli using the pET28a vector. Two variants of human mast cell tryptase and two variants of human mast cell chymase were subcloned into the Pichia pastoris vectors pPIC9 or pPICZ alpha for expression.
Native proteases have also been produced from human lung mast cells.
Molecular characterization of the three recombinant proteases is underway.
The potential of these three proteases as novel targets for therapeutic intervention in allergic disease will be carefully assessed. As drugs taken for allergic conditions have attracted attention for their potential to enhance performance in athletes, these studies should also be relevant to the formulation of anti-doping policy.
-
-
-
Building of a Large Scale De-Identified Biomedical Database in Qatar: Principles and Challenges
Authors: Fida K. Dankar and Rashid Al-Ali
Background
Electronic Medical Records (EMRs) hold diverse clinical information about large populations. When this information is coupled with genetic data, it has the potential to make unprecedented associations between genes and diseases. The incorporation of these discoveries into healthcare practice offers the hope to improve healthcare through personalized treatments. The Qatar National Genome project aims to achieve this vision by building a warehouse of genome sequencing information linked to de-identified EMR data. The warehouse should facilitate accessibility to research data, but also protect patients’ privacy and confidentiality by employing responsible data de-identification and data sharing mechanisms.
This abstract discusses the privacy and governance challenges encountered during the construction and deployment of the data warehouse. To simplify the presentation, we divide the data management lifecycle into four stages and discuss the challenges at each stage separately: 1) Initial data collection, 2) data storage, 3) data sharing (utilization) and 4) Dissemination of research findings to the community.
Data collection
The data for the Qatari genome project is sought from the community. Thus it is important to consult with the population to establish the basic principles for data collection and research oversight. To achieve that, a community engagement model should be defined. The model should establish:
1. An advocating technique for advertising the project to the community and raising the number of individuals who are aware of the project. The technique should strive to reach different elements within the society, provide clear dissemination of risks and benefits and establish methods for recurrent evaluation of the community attitudes and understanding of the Project.
2. A recruitment strategy for establishing the enrollment criteria and enrollment process:
a. The enrollment criteria define the basis for enrollment (disease based or volunteer based) and the acceptable age for volunteers, and
b. The enrollment process defines the scope of subjects’ consent (opt in/out or informed consent) and warrants a clear boundary between research and clinical practice.
3. The extent of institutional review board (IRB) and community oversight. Given the potential impact of the project on the community, oversight of the program by the community and the IRB should be discussed and established. The scope includes oversight of data repositories, of research studies, and of any changes to the protocol (data use agreements, communications, etc.)
Data storage
Foundational documents in modern research ethics stress the importance of reducing harm to participants and maximizing benefits to the society. Re-identification of participants’ identity is one form of harm that can be involuntarily or deliberately inflicted. Personal information derived from EMR records and/or genomic data can be used against the participants to limit insurance coverage, to guide employment decisions, or to apply social stigma. To minimize the risk of harm, the research platform should store de-identified clinical and biobank data while retaining the link between both data sources (the de-identified EMR data and the biobank data). This can be achieved by applying the following two operations:
1. The first operation (known as pseudonymization) identifies a stable and unique identifier(s) (such as Qatari IDs) that is included in both data sources and replaces it with a unique random ID (or pseudonym).
2. The second removes all uniquely identifying information (such as names, record number, and emails) from the structured data and masks all unique identifiers from the unstructured data (such as doctors’ notes). To perform this step properly, we need to determine the uniquely identifying information proper to the Qatari setting. Due to the relatively small population size in Qatar, some regular attributes might prove to be very informative. For example an age of 87 or above and certain professions, such as lawyer, might uniquely identify a participant.
Multiple aspects need to be considered when designing the pseudonymization operation; these include:
1. Ensuring that each subject is assigned the same random ID (pseudonym) across the different data sources. This consistency will ensure that data belonging to a particular subject will be mapped to one record.
2. The pseudonymization process could be reversible or not. Reversible systems allow reverting back to the identity of the subjects through a process called de-pseudonymization. They are used when communication with patients is a foreseen possibility.
3. In case communication with participants is forecasted, then a secure de-pseudonymization mechanism should be specified. The mechanism should define (i) the cases for which re-identification can occur, (ii) the bodies that can initiate re-identification requests, (iii) those that rule and regulate these requests, and (iv) the actual re-identification mechanism.
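As a sketch of how the two operations above might be realized (the key handling here is a placeholder; a real deployment would use managed key storage and a separately protected mapping store, and this is not the warehouse's actual implementation): a keyed HMAC gives every data source the same pseudonym for the same national ID, and an optional lookup table supports the governed de-pseudonymization path.

```python
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder key management

def pseudonymize(national_id: str) -> str:
    """Deterministic pseudonym: the same ID maps to the same pseudonym
    across all data sources (operation 1 above)."""
    digest = hmac.new(SECRET_KEY, national_id.encode(), hashlib.sha256)
    return "P" + digest.hexdigest()[:16]

# Reversible variant: keep a protected lookup table so that a governance
# body can revert a pseudonym when re-identification is authorized.
_lookup = {}

def pseudonymize_reversible(national_id: str) -> str:
    p = pseudonymize(national_id)
    _lookup[p] = national_id
    return p

def de_pseudonymize(pseudonym: str) -> str:
    """De-pseudonymization: only to be invoked under the rules defined
    by the bodies that regulate re-identification requests."""
    return _lookup[pseudonym]
```

Using HMAC rather than a plain hash means an attacker who knows the ID format cannot recompute pseudonyms without the key.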
Data sharing
After the removal of uniquely identifying information, the resulting data is said to be de-identified, but not anonymized. Access to (non-anonymized) biomedical data collected in Qatar is governed by the QSCH “guidelines, regulations and policies for research involving human subjects”. A critical part of defining a data access protocol is therefore to:
1. Identify and understand data access procedures and requirements set by QSCH, and
2. Identify and understand data access desires of the Qatari community (through surveys, meetings with community representatives, etc.)
3. And finally deploy the gathered policies and requirements along with the collected consents into the design of the data-access platform.
Note that access to the research data platform has to be provided to all research institutes within Qatar, such as Hamad Medical Corporation, Qatar Biomedical Research Institute, Qatar Computing Research Institute, Weill Cornell Medical College in Qatar, Qatar University, Hamad bin Khalifa University, Sidra Medical and Research Center and other research institutions. Moreover, the data warehouse is viewed as a platform for worldwide collaborative research projects. With such a massive mandate, a principal requirement is the capacity to foster timely research and discoveries. Data application processes and approvals should be smooth and should not delay project initiation significantly. This cannot be realized using traditional “IRB-based” data-sharing systems; thus, there will eventually be a need to (fully or partially) automate the data access process. In other words, we need to design a system to automatically match data access requests with access decisions. In general, access decisions can be provided at multiple access levels: in some cases the requested data can be exported to the investigator's premises, while in other cases secure remote access can be imposed. The granted access level should counter the risk posed by the data request. For example, a request for highly sensitive data (such as HIV data) from an investigator affiliated with a well-established Qatari research institute is inherently less risky than a request for the same dataset by an investigator affiliated with an institution outside Qatar; thus, the second request should receive more access limitations than the first.
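The risk-tiered matching described above can be sketched as a simple rule table; the tiers and rules below are hypothetical illustrations of the principle, not QSCH policy or the platform's actual rules:

```python
def access_level(requester_in_qatar: bool, data_sensitivity: str) -> str:
    """Match a data access request to an access level.
    data_sensitivity: "low" or "high" (e.g. HIV data would be "high").
    Higher-risk requests receive more restrictive access."""
    if data_sensitivity == "high":
        # Highly sensitive data never leaves the platform; external
        # requesters additionally go through manual review.
        return "secure_remote_access" if requester_in_qatar else "manual_review"
    return "export_to_premises" if requester_in_qatar else "secure_remote_access"
```

In practice the rule table would also weigh the requester's institutional track record and the consent scope attached to each dataset.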
Dissemination of findings
Prior work has demonstrated that, in order to affirm the value of research participation and contribute to public education, it is important to have a mechanism for disseminating research findings to the public. This keeps the community aware of how their participation is facilitating research and improving knowledge in the biomedical field.
The mechanism should also tackle the issue of disseminating specific research findings to specific participants. One of the main challenges in that regard is to define when a finding is considered scientifically valid and when it is considered valuable information for the recipient.
-
-
-
A Genome Wide Association Search for Type 2 Diabetes Genes in Arabic Populations
Authors: Mohamed Chikri, Younes Elachhab, Loic Yengo, Audry Leloire, Martine Vaxilaire and Philippe Froguel
Type 2 diabetes (T2D) is a chronic condition that has emerged as a serious medical, social and economic problem worldwide. Qatar was ranked by the IDF among the top 10 countries in the world with the highest prevalence of T2D, which exceeds 20%. The causes of T2D are multiple, but the contribution of genetics is well recognized. So far, large-scale genome-wide association studies (GWAS) have identified more than 80 susceptibility loci. However, these investigations were carried out mainly on European populations. The main goal of the present study was to conduct a T2D GWAS analysis in an Arab population. A case-control study using 870 T2D cases versus 666 controls was performed to compare allele frequencies across the genome of Moroccan samples. The Illumina Human Core BeadChip was used to genotype 298,930 SNPs in these samples. Genotype calls were assigned using the GenCall algorithm as implemented in Illumina GenomeStudio (version 2010.3; Illumina Inc.). Stringent quality control (QC) criteria for filtering SNPs and samples were applied. All statistical analyses were performed using PLINK version 1.07 (http://pngu.mgh.harvard.edu/ ∼ purcell/plink/). Associations of SNPs with T2D were tested using logistic regression (--logistic command in PLINK), assuming an additive genetic model, either with no adjustment or adjusting for the first five principal components. Correcting for potential confounders was necessary, since a large inflation factor (genomic control; GC = 1.176) was detected in the unadjusted analysis. Increasing the number of principal components adjusted for to 10 reduced the genomic control factor to below 1.07 (GC = 1.06). Imputation of non-observed genotypes is possible using linkage disequilibrium at genetic loci and a database of haplotypes from diverse populations (reference panel).
This approach was implemented in all Moroccan samples and allowed up to 30,071,165 variants (SNPs + short INDELs) to be imputed. We selected the best SNP candidates for replication using the following criteria: P-value of association below 10–5, quality of imputation I2 ≥ 0.7, and minor allele frequency ≥ 5%. These three criteria led to a shortlist of 154 variants. Some of these variants were redundant because of linkage disequilibrium (r2 > 0.6). When two variants were in LD, we selected the one showing the strongest association. By focusing on independent signals, we selected a list of 26 SNPs for a further replication genotyping study. The replication study was performed in an additional 1,500 Moroccan T2D cases and controls using the Fluidigm genotyping platform, following the vendor's protocol. In this replication analysis, only 2 of the 26 SNPs (7.7%) showed nominal evidence of replication with a P-value adjusted
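The shortlisting step above (association P < 10⁻⁵, imputation quality I² ≥ 0.7, MAF ≥ 5%) amounts to a simple filter over per-variant statistics. A sketch of the selection logic over made-up variant records (not the study's actual pipeline or data):

```python
# Sketch of the variant shortlisting step, using hypothetical records.
# Fields: (variant id, association P-value, imputation quality I2, minor allele freq.)
variants = [
    ("rs0001", 5e-6, 0.92, 0.12),  # passes all three criteria
    ("rs0002", 2e-4, 0.95, 0.30),  # fails the P-value threshold
    ("rs0003", 8e-6, 0.55, 0.22),  # fails imputation quality
    ("rs0004", 3e-7, 0.81, 0.02),  # fails minor allele frequency
]

shortlist = [
    vid for vid, p, info, maf in variants
    if p < 1e-5 and info >= 0.7 and maf >= 0.05
]
print(shortlist)  # ['rs0001']
```

In the study itself this filter was followed by LD pruning (keeping the strongest signal among variants with r² > 0.6), which is omitted here.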
-
-
-
Modified Silk with Cell-Adhesive and Non-Thrombogenic Properties as a Tissue Engineering Substrate
Authors: Matthias Gabriel, Marc Becker and Christian Friedrich Vahl
Introduction
Replacement of damaged tissue is a central aim of tissue engineering. This technique involves the use of porous or fibrous structures – the so-called scaffolds – that support colonization with the desired cell type and which are degraded after fulfilling their temporary supporting function. Basic requirements for the materials used in this field are non-toxicity, low immunogenicity and cell-adhesiveness. Furthermore, blood-contacting devices should exhibit low thrombogenicity.
The biopolymer silk, mainly consisting of the protein silk fibroin, matches some of these criteria, but bare silk does not facilitate cellular adhesion and growth, and unfortunately the material is prone to platelet attachment.
In our approach, the chemical surface immobilization of a cell-adhesive peptide on silk samples reduces thrombocyte adhesion to a large extent while simultaneously promoting specific adhesion and colonization by endothelial cells (ECs).
The specific interaction of the modification was further demonstrated by fibroblast cell culture.
Materials & Methods
The EC-specific adhesive peptide Arg-Glu-Asp-Val (REDV), derived from the extracellular matrix protein fibronectin, was used in our experiments. The peptide was chemically immobilized onto the silk fabric scaffold using hexamethylene diisocyanate (HMDI) as an activator for the substrate. Subsequent hydrolysis of pendant isocyanate moieties yielded primary amino functionalities. REDV was then conjugated either directly, using a short-chain amino-reactive crosslinker, or via a bifunctional polyethylene glycol (PEG) spacer. Additionally, silk was modified with amino-functional PEG only. Modified as well as untreated specimens were subjected to cell culture using ECs and fibroblasts. In addition, samples were challenged with platelet-rich plasma in order to evaluate thrombocyte adhesion. Potential changes in material bulk properties and morphology were checked by scanning electron microscopy, gel permeation chromatography and mechanical testing.
Results & Discussion
Coverage of silk fabric with ECs was greatly promoted by REDV modification, by a factor of 17 (directly coupled peptide) and 20 (PEG-mediated coupling) respectively after 2 weeks of growth, in comparison to cell colonization of untreated material. Substrate modification also inhibited initial (24 h) fibroblast adhesion. Thrombocyte attachment was strongly reduced (5-fold) as a result of PEG modification, independent of an additionally conjugated peptide. Mechanical (tensile testing) as well as morphological (SEM) properties were not significantly altered by the chemical treatment. The initial activation of silk showed no detectable influence on the composition (GPC).
Conclusion
Taken together, the feasibility of improving the biological performance of silk, an established biomaterial, was shown. We were able to show that the chemical modification left the basic material properties largely unaffected. These findings may contribute to novel tissue engineering approaches that facilitate the endothelialization of cardiovascular implants such as vascular grafts and heart valves.
Figure 1: Covalent immobilization of the REDV peptide – either directly coupled or PEG-mediated – renders silk an excellent substrate for endothelialization. In addition, the presence of PEG alone inhibits the attachment of platelets to a large extent.
-
-
-
An Organic Field Effect Transistor Based Nano Biosensor for the Early Detection of Cardio Vascular Disease – The Most Common Death Causing Disease in Qatar
Qatar has one of the world's fastest growing populations. The lifestyle and socio-economic situation of Qatar, like that of other Arab countries, is also changing rapidly with these growing trends. These changes are reflected in life expectancy and have led to a rise in “non-communicable diseases”, otherwise known as the diseases of longevity, such as cancer, cardiovascular disease (CVD), diabetes mellitus (DM), asthma and liver cirrhosis. According to the Qatar Health Report of 2012, coronary heart disease is one of the most common causes of death in Qatar after road traffic accidents [1]. It is the foremost cause of death universally, accounting for 30% of deaths worldwide and 13.65% in Qatar. Cardiovascular disease is not a solitary condition but a collection of diverse conditions that affect both the heart and blood vessels. There are many biomarkers currently used for the early detection of CVD. C-reactive protein (CRP), which is found in blood plasma and produced in the hepatocytes of the liver, is one of them. The synthesis of CRP is initiated by the inflammatory response from fat cells and macrophages. The clinical significance of CRP is that it is considered one of the best-validated biomarkers for cardiovascular disease.
Organic electronics has developed into a thriving area of technology and research aiming to substitute conventional inorganic semiconductors. Within it, organic field-effect transistors (OFETs) have found new uses in the area of biosensors. OFET biosensors mainly use π-conjugated organic semiconductors as the electronic material, functionalized with a biological component which can be antibodies, DNA, enzymes, proteins or bacteria. Their appeal lies mainly in their low manufacturing cost compared to traditional diagnostic techniques and in their faster response time. Another advantage of OFET biosensors over other sensing techniques is the possibility of miniaturization, with the output delivered in simple electronic form [2]. The biological component can be incorporated into the active layer; when a source-drain voltage is applied, the antibody/antigen binding behaves as a resistor in the circuit, modulating the current that flows as a result of charge-carrier transport in response to the corresponding biological or chemical reaction, which can then be measured and displayed. A linear rise in the drain current of the OFET device can be observed in proportion to the concentration of the biological component [3]. The response time may vary from 10 to 20 seconds.
In this article we demonstrate an OFET biosensor for the detection of C-reactive protein using the antigen-antibody reaction. The biological component is encapsulated inside the buffer layer of the OFET, which is spin-coated using poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). The entrapment is done by electrochemical polymerization. The device is fabricated in such a way that the antigen-antibody reaction alters the electron flow of the OFET in proportion to the concentration of C-reactive protein. This in turn changes the characteristic current-voltage response, which is measured using a Keithley electrical measurement system. A linear rise in the OFET drain current is expected in proportion to the amount of CRP-anti-CRP complex, which is then calculated and displayed in digital form.
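Since the drain current is stated to rise linearly with analyte concentration, converting a measured current back into a CRP concentration reduces to a linear calibration. A sketch with made-up calibration points (the numbers are illustrative, not measured device values):

```python
# Sketch of linear calibration for a current-output biosensor: fit drain
# current vs. analyte concentration, then invert the line to read off an
# unknown sample. All numbers below are illustrative.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

conc = [0.0, 1.0, 2.0, 4.0]        # CRP concentration (arbitrary units)
current = [1.0, 3.0, 5.0, 9.0]     # drain current (arbitrary units)

a, b = fit_line(conc, current)     # slope and intercept of the calibration line
unknown_current = 7.0
estimated_conc = (unknown_current - b) / a
print(round(estimated_conc, 2))    # 3.0
```

In practice the calibration would be built from repeated measurements per concentration, and the linear range of the device would bound the usable concentrations.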
References
1. Qatar Health Report of the year 2012.
2. Danesh J, et al. C-Reactive Protein and Other Circulating Markers of Inflammation in the Prediction of Coronary Heart Disease. New England Journal of Medicine, 2004;350:1387–1397.
3. Maddalena F. Organic field-effect transistors for sensing applications. Groningen: s.n., 2011. 110 p.
-
-
-
Improving Insulin Therapy Related Knowledge in Type 2 DM Among Physicians in West-Bay Health Center
Authors: Islam Ahmed Noureldin and Amal Al Ali
Background
Studies show that diabetic patients are at high risk of complications, hospitalization, and death from uncontrolled disease.
Good glycemic control is an important part of preventive care against diabetic complications. One of the most important treatment options for diabetic patients is insulin therapy; however, there is evidence of poor physician knowledge about the types and modalities of insulin therapy. Lack of knowledge increases the likelihood of medication errors, contributes to poor glycemic control in diabetic patients, and increases morbidity and mortality.
- Our aim is to improve insulin therapy-related knowledge in type 2 DM among physicians in West Bay health center by 20% above baseline by January 2013, and to raise awareness of the problem among our physicians.
Subjects, Materials & Methods
- The project began in November 2012 with a baseline questionnaire survey as a pilot for our study. A pre-intervention insulin knowledge questionnaire was then conducted to identify deficient areas in insulin-related knowledge; the results showed a mean percentage of incorrect answers of 62%.
- A brainstorming session was then held to identify the root causes of the problem, depicted using a fishbone diagram.
- A survey questionnaire was designed to assess the main causes of the problem and distributed to nurses and physicians working in NCD clinics.
Reasons for insufficient insulin therapy knowledge were prioritized in a Pareto chart.
According to the analysis, we found three main causes of the problem:
1. No training program for physicians.
2. No specific form, or ineffective auditing.
3. Limited exposure to patients with T2DM.
- Our intervention was in the form of:
1- Training program: conducted in December 2012, including a PowerPoint presentation, problem-solving cases, and group discussion covering insulin types and pharmacokinetics, concerns about insulin therapy, initiation, titration and monitoring of insulin therapy, and insulin regimens in the PHC setting.
2- Post-intervention insulin knowledge questionnaire: distributed after the training program.
- Two audits were done retrospectively before the intervention and one after, to show the average percentage improvement in knowledge and assess the effect of the intervention.
- All data were displayed using a run chart.
Results
Improvement of insulin therapy-related knowledge was reflected by the mean percentage of correct answers in the pre- and post-questionnaires conducted before and after the training program: mean correct answers improved from 38% to 78% among physicians in West Bay H.C.
Conclusions
The results of the pre and post questionnaire analyses clearly showed an improvement in the level of insulin therapy knowledge among physicians.
The study indicates the areas that need to be addressed with greater emphasis.
The post-test analysis shows that the training program was successful in significantly improving knowledge of insulin therapy, which may help to improve patient safety in PHC.
Next Steps
Share results with training and development dept.
Expand the insulin therapy training program to all PHC centers to transfer the knowledge.
Continue process of education (twice annually).
Analyse the segmented data to see where improvement can occur.
-
-
-
Time Course of Platelet Activation Markers as a Potential Prognostic Indicator after Primary Percutaneous Coronary Angioplasty in Qatar
Rationale
Myocardial infarction (MI) is one of the leading causes of death and disability worldwide. In Qatar specifically, cardiovascular diseases account for 20% of the main causes of death in the country, and cases of MI are rising rapidly. Recent developments in the management of MI, particularly the emergent opening of the culprit artery by primary percutaneous coronary intervention (PPCI), have resulted in significant improvement of patient outcomes. In spite of this, the early and long-term outcome following MI varies considerably between patients, and a significant number develop major adverse cardiac events (MACE) in the first year after successful PPCI. The clinical complications of MI, such as ventricular remodelling and heart failure, are thrombo-inflammatory processes that involve platelet activation and interactions with blood cells, the endothelium, and the myocardium. Increased platelet activation has been reported in numerous cardiovascular diseases including hypertension, atherosclerosis, stroke, acute coronary syndromes, and myocardial infarction. However, the time course of platelet activation after MI and its role in adverse remodelling has not yet been assessed.
Objective
The aim of this study is to evaluate the time course of platelet activation markers in patients diagnosed with acute ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PPCI) in Qatar. Findings will be correlated with patients’ clinical outcomes to evaluate the role of platelet activation markers as prognostic markers of disease progression in adverse remodelling after PPCI.
Methods
Platelet activation was assessed by expression of inflammatory markers platelet P-selectin (CD62P), and lysosome-associated membrane protein (CD63), and formation of platelet-neutrophil aggregates (PNA) using flow cytometry. Measurements were done in peripheral blood samples obtained from healthy subjects (n = 25) and from patients at admission to the cath lab at day 0 (n = 55) before PPCI, and 48 hours (n = 51) and, 1 month (n = 48) after PPCI. P-selectin and CD63 expression is defined as the percentage of antibody-positive platelets (P-sel+ and CD63+). PNA are gated by their characteristic forward and side scatter properties and identified as dual-labelled cells in the gate of neutrophils exhibiting leukocyte CD45 and platelet CD42b fluorescence (CD45+/CD42b+).
Results
Platelet P-selectin and CD63 expression were higher in patients at day 0 than in healthy control subjects (% of expression: 24.4 ± 3.0 vs 9.0 ± 2.7; p
-
-
-
Synthesis, Characterization, Crystal Structures, and in vitro Antitumor Activity of Palladium and Platinum (II) Complexes with 2-Acetyl-4-Methylthiazole Thiosemicarbazone and 2-Acetylpyrazine Thiosemicarbazone
Authors: Hassan Nimir, Norah Al Mohaideb, Mariem Hamad, Awadelkareem Ali, Cenk Aktas, Volker Huch, Michael Veith and Uli Rauch
The novel Schiff bases I HAMTTSC (2-acetyl-4-methylthiazole thiosemicarbazone) and II HAPTSC (2-acetylpyrazine thiosemicarbazone) and their complexes with Pt(II) and Pd(II): 1 [Pt(AMTTSC)Cl], 2 [Pt(AMTTSC)2], 3 [Pd(AMTTSC)Cl], 4 [Pd(AMTTSC)2], 5 [Pt(APTSC)Cl], 6 [Pt(APTSC)2], 7 [Pd(APTSC)Cl], and 8 [Pd(APTSC)2] have been synthesized and characterized by elemental analysis and spectroscopic studies. The crystal structures of the Schiff bases I and II and the complex 1 [Pt(AMTTSC)Cl] have been solved by single-crystal X-ray diffraction. The electronic, IR, UV/Vis, and NMR spectroscopic data of I and II and their complexes are reported. The in vitro antitumor activity of the Schiff bases and complexes 1, 2, 4, 5 and 6 against two different human tumor cell lines (HT-29 and HuTu-80) reveals that the complexes are more cytotoxic than their corresponding ligands, with IC50 values in the range of 0.1–10 μM. These compounds can therefore be considered as agents with potential antitumor activity.
Molecular structure of 1 [Pt(AMTTSC)Cl]
Introduction
Thiosemicarbazones (TSCNs) are very promising molecules in coordination chemistry because of the pharmacological properties of both the ligands and their complexes (1–3), which notably include antiparasitic (4), antibacterial (5, 6) and antitumor (7) activities, depending on the parent aldehyde or ketone and, of course, the metal ion. The thiosemicarbazone ligand usually coordinates with a metal through the imine nitrogen and the sulphur atom, forming a five-membered chelate ring. Since cisplatin emerged as the most important antitumor drug (8), thousands of metal complexes have been synthesized and characterized in order to study the effect of the metal and the attached groups on the structural and kinetic properties involved in biological activity (9). However, significant problems remain, including side effects, toxicity, cancer specificity and acquired resistance. Consequently, the development of new compounds outside the usual coordination sphere or with different structural properties is a key challenge in cancer research.
Synthesis of the ligands
The ligands 2-Acetyl-4-methylthiazole thiosemicarbazone and 2-acetylpyrazine thiosemicarbazone, were prepared according to the literature (10).
Synthesis of Complexes Pt (AMTTSC)Cl Complex
A solution of K2PtCl4 (0.208 g, 0.5 mmol) in methanol, was added dropwise to a stirred solution of HAMTTSC (0.5 mmol) in 20 mL of methanol. The solution was refluxed for 2 hours and stirred for 24 hours at room temperature. The dark red precipitate was collected by filtration and dried in vacuo. Crystals suitable for X-Ray diffraction were obtained through slow evaporation of the DMF solvent.
Solid, yield: 70.59%, m.p. 236–237°C. Anal. Calc. For Pt(C7H9N4S2)Cl (443.84 g/mol): C, 18.94%; H, 2.04%; N, 12.62%. Found: C, 18.74%; H, 2.18%; N, 12.85%. I.R. (solid state, cm− 1): ν(NH2) 3395, 3267; ν(C = N) 1520.38; ν(C = S) 873.31; ν(N-N)1065.89. 1H-N.M.R (DMSO-d6): δ 2.21, 2.41 (s, 6H, 2CH3), 7.75 (s, 1H); 8.07(b, 2H, NH2).). 13C-N.M.R. (DMSO-d6): δ 13.93, 16.39 (2CH3); 148.59,-154.62(3C ring); 171.92 (HC = N); 183.20 (C = S). Electronic spectra (λmax nm): 270, 391, 531.
Pt (AMTTSC) 2 Complex
A solution of K2PtCl4 (0.208 g, 0.5 mmol) in methanol, was added dropwise to a stirred solution of HAMTTSC (1.0 mmol) in 30 mL of methanol. The solution was refluxed for 2 hours and stirred for 24 hours at room temperature. The pinkish red precipitate was collected by filtration and dried in vacuo.
Solid, yield: 74.36%, m.p. dec.>245°C.Anal. Calc. For Pt(C7H9N4S2)2 (621.69 g/mol): C, 27.05%; H, 2.92%; N, 18.02%. Found: C, 27.01%; H, 3.028%; N, 18.97%. IR (solid state, cm–1): ν(NH2) 3354.48, 3265.78; ν(C = N) 1535.94; ν(C = S) 873.25; ν(N-N) 1075.01. 1H-N.M.R (DMSO-d6): δ 2.21, 2.39 (s, 6H,2CH3), 7.74, 7.34(s, 1H); 8.07 (b, 2H, NH2); 8.51, (b, 2H, NH2). 13C-N.M.R. (DMSO-d6): δ 13.51, 16.39 &13.91, 16.76 (4CH3); 144.02-152.49 &148.57-154.64 (3C ring); 166.07&171.93 (HC = N); 183.22 (C = S). Electronic spectra (λmax nm): 270, 363, 389, 53.
Pd (AMTTSC)Cl Complex
A solution of K2PdCl4 (0.163 g, 0.5 mmol) in methanol, was added dropwise to a stirred solution of HAMTTSC (0.5 mmol) in 20 mL of methanol. The solution was refluxed for 2 hours and stirred for 14 hours at room temperature. The orange precipitate was collected by filtration, washed with ethanol and ether, and dried in vacuo.
Solid, yield: 92.39%. m.p. 236–237°C. Anal. Calc. For Pd(C7H9N4S2)Cl (355.18 g/mol): C, 23.67%; H, 2.55%; N, 15.77%. Found: C, 22.94%; H, 2.68%; N, 15.09%. IR (solid state, cm− 1): ν(NH2) 3426.47, 3304.62; ν(C = N) 1552.37; ν(C = S) 867.85; ν(N-N) 1118.21. 1H-N.M.R. (DMSO-d6): δ 2.25, (s, 6H, 2CH3), 7.64 (s, 1H); 7.93 (d, 2H, NH2).13C-N.M.R. (DMSO-d6): δ 13.83, 16.36 (2CH3); 145.69, 147.79–154.36(3C ring); 169.58 (HC = N); 180.71 (C = S). Electronic spectra (λmax nm): 274, 313, 386,493.
Pd(AMTTSC)2 Complex
A solution of Pd(acac)2 (0.152 g, 0.5 mmol) in CH2Cl2/ CH3OH (30 mL, 2:1 v/v) was added dropwise to a stirred solution of HAMTTSC (1.0 mmol) in 30 mL of methanol. The solution was refluxed for 2 hours and stirred for 24 hours at room temperature. The red precipitate was collected by filtration, washed with ethanol and ether, and dried in vacuo.
Solid, yield: 79.54%. m.p. dec.>174°C. Anal. Calc. For Pd(C7H9N4S2)2, (533.03 g/mol): C, 31.55%; H, 3.4%; N, 21.02%. Found: C, 30.77%; H, 3.62%; N, 19.84%. IR (solid state, cm− 1): ν(NH2) 3308.98, 3257.77; ν(C = N) 1557.07; ν(C = S) 871.65; ν(N-N)1080.62. 1H-N.M.R. (DMSO-d6): δ 1.46, 1.62 (s, 6H, 2CH3), 7.04 (s, 1H); 8.04, 6.74 (d, 2H, NH2). 13C-N.M.R. (DMSO-d6): δ 13.66, 16.18 (2CH3); 147.49, 152.70& 148.44, 155.45 (3C ring); 169.62&171.19 (HC = N);, 182.38 (C = S). Electronic spectra (λmax nm): 289, 348, 448.
Conclusion
New potential anti-cancer Pt(II) and Pd(II) complexes were synthesized through the reaction of the heterocyclic thiosemicarbazone ligands with Pt(II) and Pd(II) ions in 1:1 and 1:2 ratio reactions.
The structures of the synthesized compounds were elucidated on the basis of spectroscopic data (IR, 1H and 13C NMR, UV-Vis and XRD).
As the experimental results show, the synthesized Schiff bases react with the Pt(II) ion in different modes of bonding; they act as tridentate ligands through the mercaptide sulfur ion, the azomethine nitrogen atom and the nitrogen of the ring.
All ligands and complexes tested show a concentration-dependent reduction of cell proliferation. The test results show that changing the ligand:metal ratio has significant effects on the antiproliferative activities of the platinum(II) complexes. In general, the complexes were found to be more active than the corresponding ligands. The complex with the formula PtLCl was found to be slightly more active than the complexes with formula PtL2 against the HT-29 and HuTu cancer cell lines.
References
(1) Kovala-Demertzi D., Boccarelli A., Demertzis M. A., and Coluccia M., In vitro antitumor activity of 2-acetyl pyridine 4N-ethyl thiosemicarbazone and its platinum(II) and palladium(II) complexes, Chemotherapy, (2007), 53,2, 148.
(2) Kovala-Demertzi D., Varadinova T., Genova P., Souza P., and Demertzis M. A., Platinum(II) and palladium(II) complexes of pyridine-2-carbaldehyde thiosemicarbazone as alternative antiherpes simplex virus agents, Bioinorganic Chem. and App, (2007), 56165.
(3) Scovill, J.P. Klayman D.L., Franchino C.F., Acetylpyridine Thiosemicarbazones Complexes with Transition Metals as Antimalarial and Antileukemic Agents. J. Med. Chem., (1982); 25, 1261.
(4) Duffy K. J., Shaw A. N., Delmore E., Dillon S. B., Erickson-Miller C., Giampa L., Huang Y., Keenan R. M., Lamb P., Liu N., Miller S. G., Price A. T., Rosen J., Simth H., Wiggal K. J., Zhang L, Luengo J. I., J. Med. Chem., (2002), 45, 3573.
(5) Agarwal R. K., Singh L., and Sharma D. K., Synthesis, spectral, and biological properties of copper(II) complexes of thiosemicarbazones of Schiff bases derived from 4-aminoantipyrine and aromatic aldehydes, Bioinorg. Chem. and Appl., (2006), 59509, 2006.
(6) Pandey O. P., Sengupta S. K., Mishra M. K., and Tripathi C. M., Synthesis, spectral and antibacterial studies of binuclear titanium(IV)/zirconium(IV) complexes of piperazine dithiosemicarbazones, Bioinorg. Chem. and Appl, (2003), 1., 1, 35.
(7) Quiroga A. G., Pérez J. M., López-Solera I., et al., Novel tetranuclear orthometalated complexes of Pd(II) and Pt(II) derived from p isopropylbenzaldehyde thiosemicarbazone with cytotoxic activity in cis-DDP resistant tumor cell lines. Interaction of these complexes with DNA, J. of Med. Chem, (1998), 41, 9, 1399.
(8) Smith J. E., Talbot D. C., Ber. J. Cancer, (1991), 65, 787.
(9) Hacker M. P., Khokar A. R., Brown D. B., McCormack J. J., Krakoff J. M., Cancer Res, (1985), 45, 4748.
(10) De Lima G.M., Neto J.L., Beraldo H., Seibald H.G.L, Duncalf D.J.,J.of Molec.Struc.(2001), 604, 287.
-
-
-
3D Alginate Scaffold for Anatomical Aortic Valve Tissue Engineering
Authors: Albert Ryszard Liberski and Magdi H Yacoub
Background
Within the field of biomedicine, alginate applications are numerous, from wound healing and cell transplantation to delivery of bioactive molecules. Recently, alginate-based biomaterials have entered clinical trials for the treatment of myocardial infarction (1). Due to its non-thrombogenic nature, this polymer is very promising for cardiac applications, including as a scaffold for heart valve tissue engineering. One pivotal property of alginates in this respect is the possibility to form virtually any shape (films, fibers, beads) in a variety of sizes. Alginate solutions can form gels under mild conditions in the presence of calcium, through displacement of sodium ions and the resulting attraction between alginate molecules. Our aim is therefore to fabricate three-dimensional (3D) alginate scaffolds precisely mimicking the anatomical shape of human aortic valves, as a substrate for valve tissue engineering and repair (see Fig. 1).
Methods
We used the gelling properties of alginate solutions to obtain scaffolds reproducing the complex geometry of aortic heart valves in a few easy steps. Briefly, the geometrical and structural design of a typical aortic heart valve (2–4) was obtained using Blender software (5). The generated 3D file was converted into stereolithography (STL) format and 3D printing was performed on an Objet Eden260VS 3D printer (Stratasys, Edina, Minnesota, USA) using light-curable polyacrylate monomers. After printing, the support material was removed manually, yielding a flexible valve-like structure with sinuses of Valsalva and 3 coapting leaflets. Subsequently, agarose moulds were obtained by casting agarose saturated with CaCl2 solution (2% w/w) into the 3D printed form. Finally, alginate scaffold preparation was carried out by immersing the CaCl2-saturated agarose moulds into alginate solutions.
Results
Calcium ions diffused from the agarose mould and effectively cross-linked the alginate solution in its close vicinity, resulting in an alginate gel layer. The agarose mould could be easily removed in a subsequent step. The resulting alginate structure closely matched the agarose mould geometry and hence the 3D printed replica of a human aortic valve. Moreover, by extending the duration of mould immersion in the sodium alginate solution, scaffold thickness and composition could be controlled. Such control suggests further improvements to facilitate cellularisation and tissue formation and to improve mechanical properties.
Conclusion
Alginate can form versatile and tunable hydrogels which can be cast in 3D configurations that mimic the shape of a human aortic valve. As preparation steps can be freely adjusted to incorporate viable cells, such structures could serve as basis for in vitro tissue formation, which would further improve mechanical properties of the hydrogel. In addition, the ease of chemical modification and functionalization of alginate with cell ligands provides rational tools to increase cell interactions and attract cells in situ, which are important steps in the formation of functional valves in vivo.
Overall, this novel and flexible technique that can be readily integrated with other strategies presents an important potential to create the “ideal” scaffold for producing a living valve substitute.
Figure 1. Alginate shaped in tricuspid valve, ventricular view (a), side view (b), hinge - atrial view (C), and open valve view (D). (Scale bars 1 cm).
References
1. Anker SD, Coats AJS, Cristian G, Dragomir D, Pusineri E, Piredda M, et al. A prospective comparison of alginate-hydrogel with standard medical therapy to determine impact on functional capacity and clinical outcomes in patients with advanced heart failure (AUGMENT-HF trial). Eur Heart J. 2015 Sep 7;36(34):2297–309.
2. Chester AH, El-Hamamsy I, Butcher JT, Latif N, Bertazzo S, Yacoub MH. The living aortic valve: From molecules to function. Glob Cardiol Sci Pract. 2014 Jan 1;2014(1):11.
3. Yacoub MH, Kilner PJ, Birks EJ, Misfeld M. The aortic outflow and root: a tale of dynamism and crosstalk. Ann Thorac Surg. 1999 Sep;68(3 Suppl):S37–43.
4. Yacoub MH. In Search of Living Valve Substitutes. J Am Coll Cardiol. 2015 Aug 25;66(8):889–91.
5. Introduction — Blender Reference Manual [Internet]. [cited 2015 Nov 23]. Available from: http://www.blender.org/manual/getting_started/about_blender/introduction.html
-
-
-
Design an Expert System for the Diagnosis of Pulmonary Tuberculosis
By Rasha Badi
Pulmonary tuberculosis (PTB) is a common worldwide infection and a medical and social problem causing high mortality and morbidity, especially in developing countries. An expert system for the diagnosis of this disease was designed based on experts' knowledge, providing a decision-support platform to assist freshly graduated (inexperienced) physicians and other healthcare practitioners in arriving at a final diagnosis of TB more quickly and efficiently, especially in rural areas. Information about pulmonary tuberculosis, its symptoms, and its treatment was collected from doctors specializing in the diagnosis and treatment of tuberculosis.
The system was built using the C# language; this artificial-intelligence-based expert system helps in the diagnosis of tuberculosis, assists in giving the necessary treatment, and in addition gives advice to patients.
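The rule-based inference at the heart of such an expert system can be illustrated with a minimal sketch (shown in Python here, although the original system was built in C#). The symptoms, weights, and threshold below are hypothetical placeholders, not the system's actual knowledge base:

```python
# Minimal sketch of a rule-based diagnostic aid for pulmonary TB.
# Symptom names, rule weights, and the decision threshold are hypothetical.

RULES = {
    "cough_over_2_weeks": 3,
    "night_sweats": 2,
    "weight_loss": 2,
    "hemoptysis": 3,
    "fever": 1,
}
THRESHOLD = 5  # total weight at or above which TB is flagged as likely

def assess(symptoms):
    """Score the reported symptoms against the rule base and return a triage suggestion."""
    score = sum(RULES.get(s, 0) for s in symptoms)
    if score >= THRESHOLD:
        return "likely TB - refer for sputum smear and chest X-ray"
    return "TB unlikely - consider other diagnoses"

print(assess(["cough_over_2_weeks", "night_sweats", "fever"]))
# likely TB - refer for sputum smear and chest X-ray
```

A real knowledge base elicited from TB specialists would include many more findings (risk factors, exam signs, test results) and chained rules rather than a single weighted sum; the sketch only conveys the shape of the inference.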
-
-
-
Comparative Expression Profile of Organic Cation Transporters in Diabetes and Cancer: Effects of Metformin
Authors: Rohit Upadhyay, Christopher R Triggle and Hong Ding
Background and Aim
Organic cation transporters play a critical role in the absorption, distribution, metabolism, and elimination of many endogenous small organic cations as well as a wide array of drugs. These transporters act as uptake transporters (OCT1, OCT2, OCT3 and PMAT) or efflux transporters (MATE1 and MATE2) for cationic drugs including metformin. PMAT, OCT1 and OCT3 are expressed in the intestine and may be involved in the intestinal transport of metformin. OCT1, OCT3 and MATE1 expression in the liver may facilitate hepatic uptake of metformin. In the kidney, OCT1, OCT2 and PMAT may act as influx transporters while MATE1 and MATE2 may act as efflux transporters. Metformin is the first-line drug for diabetes and may have beneficial effects in cancer treatment. The expression profile of metformin transporters may play a crucial role in the pharmacokinetics of the drug. At present there are very limited data on the expression of these transporters in different cell lines and in db/db mice. No in vitro data exist on the comparative expression of these transporters in primary endothelial cells versus cancerous cells, and the effect of high glucose/metformin treatment on the expression of drug transporters is still unknown. Therefore, we aimed to investigate the expression levels of OCT1, OCT2, OCT3, MATE1, MATE2 and PMAT in normal/cancerous cell lines as well as mouse organs (intestine, liver and kidney) under normo/hyperglycemic conditions and low/high dosages of metformin treatment.
Material and methods
Different cancerous/non-cancerous cell lines (HUVECs, MCF7, PA1, Huh7, HEK293T and MMECs) were cultured in normal/high-glucose media and treated with low/high dosages of metformin for 7 days. Cells in early passages (P3 to P6) were used, and experiments were replicated five times. Mouse samples (liver, kidney and small intestine) were collected from wild-type (C57BL/6J) and db/db mice after treatment with metformin for 6–8 weeks. Total RNA and proteins were isolated from the cell-line/mouse-organ samples, and gene/protein expression was estimated using real-time PCR and western blotting. Gene expression was normalized to the endogenous controls (beta-actin and GAPDH), and comparative CT values were estimated. Relative gene expression was calculated by the 2^(−ΔΔCT) method. Western blot analysis was done after normalizing densitometry data of transporter proteins to an endogenous control protein (β-actin or GAPDH).
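The 2^(−ΔΔCT) calculation used above can be sketched as follows; this is a minimal illustration, and the CT values in the example are made up rather than taken from the study:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change by the 2^(-ΔΔCT) method:
    ΔCT = CT(target) - CT(endogenous control), for sample and calibrator;
    ΔΔCT = ΔCT(sample) - ΔCT(calibrator); fold change = 2^(-ΔΔCT)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-dd_ct)

# Hypothetical CT values: the target amplifies one cycle earlier in the
# treated sample than in the calibrator -> 2-fold up-regulation.
fold = relative_expression(24.0, 18.0, 25.0, 18.0)  # -> 2.0
```

Because CT scales logarithmically with template amount, each cycle of difference corresponds to a factor of two in relative expression, which is why the fold change is expressed as a power of 2.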
Results
We detected expression of all selected metformin transporters in endothelial cells and in the majority of cancer cell lines. Comparative gene expression of the metformin transporters in all of the selected cell lines was estimated. The levels of OCT1/OCT2 expression differed significantly between non-cancerous and cancerous cell lines (P
Anti-Neoplastic Effects of Annonacin against Renal Cell Carcinoma
Authors: Shankar Munusamy, Akila Gopalakrishnan, Sreenithya Ravindran, Feras Alali and Ali H. Eid
Background and Objectives
Renal cell carcinoma is the most common and lethal form of all renal cancers and accounts for 4.1% of all cancer cases in Qatar. Mutations of the Von Hippel-Lindau (VHL) gene in renal cells activate the hypoxia-inducible factor-1 alpha (HIF-1α) response pathway and contribute to increased proliferation and progression to renal cell carcinoma. Hence, the chemotherapeutic modalities available to treat renal cell carcinoma are targeted toward modulation of the VHL-HIF response pathway. Annonacin, a potent cytotoxic mono-tetrahydrofuran acetogenin found in Annonaceae plants, has been demonstrated to exert anticancer activity against breast cancer; however, its therapeutic potential against renal cell carcinoma is yet to be determined. Hence, the objective of this study is to investigate the anti-neoplastic potential of annonacin in renal carcinoma cells.
Methods
We investigated the effect of annonacin, at concentrations ranging from 0.5 to 2 μM, on cell viability (using the MTT and Alamar blue assays) and on the protein expression of markers of the HIF signaling pathway (HIF-1α), the mTOR pathway (Thr-389 phosphorylation of p70S6 kinase), cell cycle progression (p21 levels), and apoptosis (caspase-3 expression) in CaKi-2 cells, a human renal carcinoma cell line. The cells were treated with annonacin for 24 or 48 hours and assessed for the aforementioned parameters.
Results
Annonacin treatment caused a significant and dose-dependent decrease in the viability of CaKi-2 cells, i.e., to 42% in the 0.5 μM, 36% in the 1 μM and 29% in the 2 μM annonacin treatment groups, compared to the control set at 100%. This was further confirmed by the Alamar blue assay, which revealed a significant decrease in the viability of CaKi-2 cells upon treatment with annonacin for 48 h. The expression of HIF-1α was reduced by 68% at 24 h in CaKi-2 cells treated with 2 μM annonacin. In addition, the expression of p21 (a key molecule that inhibits the transition of cells from G1 to S phase of the cell cycle) was induced 1.34-fold in 0.5 μM annonacin-treated cells, indicating an arrest in the G1 phase of the cell cycle. This was further confirmed by cell cycle analysis using a Tali cytometer, in which the annonacin-treated groups (0.5 μM and 1 μM) showed cell cycle arrest at G1 phase, i.e., 57% of cells in G1 phase with 0.5 μM annonacin versus 7% of cells in G1 phase in the control group. In addition, a dose-dependent decrease in the phosphorylation of p70S6 kinase (a downstream target of mTOR) was observed with annonacin treatment at both the 24 and 48 h end-points. This suggests that annonacin treatment has possibly led to the inhibition of mTOR, in addition to the suppression of HIF-1α activation, and underscores the cross-talk between the HIF and mTOR signaling pathways in renal cell carcinoma.
Conclusions
Our findings demonstrate that annonacin treatment (at concentrations ranging from 0.5 to 2 μM) inhibits HIF-1α and mTOR activation and causes cell cycle arrest at G1 phase and induces apoptosis in renal cell carcinoma. These findings indicate that annonacin exerts anti-cancer effects via modulation of HIF and mTOR signaling pathways, resulting in alterations in the cell cycle and activation of apoptosis in renal cell carcinoma. In conclusion, our study for the first time unveils the therapeutic potential of annonacin to inhibit the progression of renal cell carcinoma. Further studies in vivo are required to establish its efficacy to treat patients with renal cell carcinoma.
Funding Source
This study is supported by an intramural grant (#QUUG-CPH-CPH-14/15/7) funded by the Office of Academic Research, Qatar University, Doha, Qatar.
Design of a Time-Frequency Algorithm for Automatic EEG Artifact Removal
Authors: Boualem Boashash, Samir Ouelha and Sadiq Ali Maqsood
1) The method
The injuries suffered by newborns during birth are a major health issue. To improve the health outcomes of sick newborns using EEG measurements, a number of recent studies have focused on the use of high-resolution time-frequency distributions to extract critical information from the collected signals [1], and several algorithms have been proposed. A major problem in implementing such algorithms in fully automated EEG signal classification systems is caused by artifacts. In particular, previous studies have shown that the respiratory artifact looks like a seizure signal and can be misinterpreted by an automatic abnormality detection system, resulting in false alarms. Hence, the successful removal of artifacts is important, as shown in several previous studies [2], and there are two basic approaches: (1) use machine learning techniques to detect and reject EEG segments corrupted by artifacts; this, however, results in the loss of EEG data [2]. (2) Correct EEG segments corrupted by artifacts; some artifacts can be corrected by a simple filter in the frequency domain, e.g. a notch filter can be used to remove 50 Hz noise. This approach does not require any reference signals. For more complicated cases, when the spectrum of the artifacts overlaps with the spectrum of the EEG signals, blind source separation (BSS) algorithms can be used. Typically, a multicomponent EEG signal is transformed into a linear combination of independent components (ICs), which can be interpreted as channels, by blind source separation techniques such as independent component analysis (ICA) or canonical correlation analysis. The independent channels corrupted by artifacts are identified either manually or automatically using correlation information from a reference signal. The artifact-free signal is then reconstructed by combining only the artifact-free ICs.
The abovementioned artifact correction approach has two problems:
1) Sometimes, multicomponent or multi-channel BSS methods fail to separate artifacts from sources, i.e. some artifactual independent components may still contain useful EEG information.
2) In some cases, only single-component or single-channel recordings are available.
The empirical mode decomposition (EMD), a time-frequency (TF) filtering algorithm, has been used to remove artifacts from single-channel multicomponent recordings as well as from ICs obtained from the ICA algorithm [3]. The EMD splits a single-channel multicomponent EEG recording, or a given IC, into a number of intrinsic mode functions (IMFs), thus converting a single-channel multicomponent recording into several monocomponent signals that can be interpreted as a multi-channel EEG signal. One way to remove artifacts is to simply discard the artifactual IMFs during signal reconstruction [4]. Another approach is to treat the IMFs as separate components (or channels) and then apply multi-component (or multi-channel) BSS algorithms to remove artifacts [5].
From a signal processing perspective, the EMD cannot resolve close signal components in the time-frequency (t,f) domain. So, if some artifacts are close to EEG signal components in the (t,f) domain, the EMD will fail to separate them.
In this study, the aim is to design a new EEG artifact removal algorithm that uses TF filtering and high-resolution time–frequency distributions (TFDs) to extract close signal components.
The key steps of the proposed method are given below:
1. Analyze EEG signal using a high resolution TFD;
2. Localize the signal components in the (t,f) domain by estimating their instantaneous frequencies (IFs) using a component linking method [6];
3. Once the signal components are located in the (t,f) domain, they can be extracted by TF filtering. In this study, the fractional Fourier transform is used as a TF filter to separate signal components [6].
4. Identify signal components corrupted by artifacts using prior information or correlation from reference signals.
5. Once the artifactual components are identified, they can be removed during the inverse blind source separation (BSS) transformation by simply subtracting them from the EEG signal.
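Steps 4–5 above can be illustrated with a minimal sketch, assuming the components have already been extracted (steps 1–3). The stand-in components, reference signal, and correlation threshold below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def remove_artifact_components(components, reference, corr_threshold=0.8):
    """Steps 4-5 sketch: flag any extracted component whose correlation with a
    reference signal exceeds the threshold, then subtract it from the mixture.
    Assumes the components were already extracted by TF filtering (steps 1-3);
    the 0.8 threshold is an arbitrary illustrative choice."""
    signal = np.sum(components, axis=0)   # the observed mixture
    cleaned = signal.copy()
    for comp in components:
        r = abs(np.corrcoef(comp, reference)[0, 1])
        if r > corr_threshold:
            cleaned -= comp               # remove the artifactual component
    return cleaned

# Hypothetical well-separated stand-ins for a seizure-like component and a
# respiratory-like artifact (NOT the models of Eqs. (2)-(3)):
t = np.arange(0, 8, 1 / 32.0)             # 8 s at 32 Hz
comp_a = np.sin(2 * np.pi * 4.0 * t)      # "EEG" component
comp_b = np.sin(2 * np.pi * 0.5 * t)      # "artifact" component
clean = remove_artifact_components([comp_a, comp_b], reference=comp_b)
```

In practice the reference would come from a simultaneously recorded respiration channel, and the subtraction corresponds to the inverse BSS transformation described in step 5.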
2) Results and discussions
The proposed TF filtering algorithm can be used to remove respiratory artifacts, which are a major problem in the automated implementation of EEG signal classification algorithms, as their morphology is similar to that of seizures.
Let us consider a simulated EEG seizure signal corrupted by the respiratory artifact. The EEG signal is then given by
s(t) = Seiz(t) + artif(t)   (1)
Previous studies have shown that an EEG seizure signal can be modeled by a non-linear FM signal, which generalizes the simpler piecewise-linear FM models used in earlier studies, i.e.:
Seiz(t) = cos(2π[10^(−6)t^3 + 0.075t])   (2)
The respiratory artifact which appears as a quasi-regular rhythmic activity is modeled as a pure sinusoid.
artif(t) = cos(2π × 0.052t)   (3)
This signal is sampled at 32 Hz. The simulated signal, s(t), is analyzed using the adaptive directional TFD (ADTFD) as shown in Figure 1 [6]. The proposed TF filtering algorithm is then applied to extract signal components. The extracted components are shown in Figure 2. The EMD is also applied to separate signal components. The EMD algorithm decomposed the given signal into 6 IMFs. The IMFs closest to the desired seizure and artifact signals are plotted.
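For reference, the simulated signal of Eqs. (1)–(3) can be generated as follows; this is a sketch, and the 8 s duration is an assumption inferred from the reported results rather than stated in the text:

```python
import numpy as np

fs = 32.0                      # sampling frequency (Hz), as stated in the text
t = np.arange(0, 8, 1 / fs)    # 8 s duration (assumption based on the results)

# Eq. (2): nonlinear FM model of the seizure signal
seiz = np.cos(2 * np.pi * (1e-6 * t**3 + 0.075 * t))
# Eq. (3): pure sinusoid modeling the quasi-regular respiratory artifact
artif = np.cos(2 * np.pi * 0.052 * t)
# Eq. (1): the corrupted EEG signal
s = seiz + artif
```

The two components have closely spaced instantaneous frequencies early in the record, which is exactly the regime where the EMD struggles and a high-resolution TFD is needed.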
Experimental results show that the EMD fails to correctly extract the sources from 0 to 4 s, as the two sources are close to each other in the (t,f) domain during this interval. When the signal components become well separated in the (t,f) domain, i.e. from 4 s to 8 s, the EMD algorithm accurately extracts them. The proposed TF filtering method yields superior performance, as it correctly extracts the sources even when they are close to each other in the (t,f) domain. This superior performance is due to the selection of a high-resolution TFD that results in accurate IF estimation for close signal components.
3) Conclusion and Future Works
A time-frequency filter was designed for the removal of EEG artifacts. The approach was applied to the removal of simulated respiratory artifacts from simulated signals. The algorithm assumes that the artifacts and the EEG background have non-overlapping TF signatures in the (t,f) domain. The algorithm can be extended to more realistic situations by using methods that can estimate the IFs of intersecting signal components, for example using directional information. Moreover, the proposed TF filtering algorithm (just like the EMD) decomposes a signal into a number of components, thus turning a single-channel multicomponent signal into several monocomponents that can be interpreted as a multi-channel signal.
Bibliography
[1] B. Boashash, G. Azemi and J. M. O'Toole, “Time-frequency processing of nonstationary signals: Advanced TFD design to aid diagnosis with highlights from medical applications,” IEEE Signal Processing Magazine, vol. 30, no. 6, pp. 108–119, 2013.
[2] B. Boashash, G. Azemi and N. A. Khan, “Principles of time–frequency feature extraction for change detection in non-stationary signals: Applications to newborn EEG abnormality detection,” Pattern Recognition, p. 616–627, 2015.
[3] M. De Vos, W. Deburchgraeve, P. Cherian, M. Vladimir, R. Swarte, P. Govaert, G. H. Visser and S. Van Huffel, “Automated artifact removal as preprocessing refines neonatal seizure detection,” Clinical Neurophysiology, vol. 122, no. 22, pp. 2345–2354, 2011.
[4] H. Zeng, S. Aiguo, Y. Ruqiang and H. Qin, “EOG artifact correction from EEG recording using stationary subspace analysis and empirical mode decomposition,” Sensors, vol. 13, no. 11, pp. 14839–14859, 2013.
[5] B. Mijovic, M. De Vos, I. Gligorijevic, J. Taelman and S. Van Huffel, “Source separation from single-channel recordings by combining empirical-mode decomposition and independent component analysis,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 9, pp. 2188–2196, 2010.
[6] B. Boashash, A. N. Khan and T. Ben-Jabeur, “Time–frequency features for pattern recognition using high-resolution TFDs: A tutorial review,” Digital Signal Processing, vol. 40, pp. 1–30, 2015.
Reversal of Right Ventricular Hypertrophy and Dysfunction by Treprostinil in a Rat Model of Severe Angioproliferative Pulmonary Arterial Hypertension
Purpose
Pulmonary arterial hypertension (PAH) is a devastating cardiovascular disease of the pulmonary vasculature that remains poorly understood. Despite a number of available FDA-approved drugs, survival remains low, with an estimated 5-year survival as low as 27%. This severe disease is characterized by dysfunction and eventual failure of the right ventricle (RV) of the heart. The main focus of therapeutic strategies thus far has been to target the pathways of pulmonary vascular remodeling that lead to the hypertensive phenotype. It is not known, however, whether there is an added therapeutic benefit in targeting the RV directly. Prostacyclin analogues are among the most widely used therapies for PAH. However, it is unknown whether they confer protection exclusively via attenuating pulmonary vascular remodeling and constriction, or whether RV myocardium-specific mechanisms are also involved. Moreover, their use in severe models of PAH has not been adequately tested. Insight into these two major unknowns could not only blaze the trail for new effective therapies for PAH to improve survival, but would also open new avenues for targeting other major forms of heart failure. To address these gaps in knowledge of the underlying responses to prostacyclin, the analogue treprostinil was used in a pre-clinical rat Sugen-hypoxia (SuHx) model of angioproliferative severe PAH that closely resembles the human disease.
Methods
Male Sprague-Dawley rats (300 g) were implanted with ALZET osmotic pumps containing vehicle or treprostinil (900 ng/kg/min), injected concurrently with a bolus of Sugen (SU5416; 20 mg/kg), and exposed to 3 wk of hypoxia (10% O2) followed by 3 wk of normoxia (21% O2). RV function was assessed using pressure-volume loops measured with an admittance catheter, and hypertrophy was assessed by the Fulton Index (FI; RV/(LV + septum) wet weight).
Results
Treprostinil significantly reduced SuHx-associated RV hypertrophy and rise in systolic pressure (FI: 0.26 ± 0.02, 0.58 ± 0.04 & 0.37 ± 0.05, P
Body Mass Index and Pattern of Diabetes in Qatar – A Retrospective Study of 529 Patients with Obesity
Authors: Manik Sharma, Saad Al Kaabi and Rajvir Singh
Background
Qatar ranks among the countries with the highest prevalence of diabetes and obesity. Obesity is generally measured by body mass index (BMI), which has been found to be an independent risk factor for the development of diabetes. Moreover, when diabetes is associated with obesity, it is not only more poorly controlled but also causes more long-term complications.
Aim
To delineate the pattern of obesity among residents of Qatar and to classify them as per World Health Organization criteria. Secondary objectives included assessing the pattern of diabetes with increasing body mass index.
Method
All consecutive obese adult patients attending a pre-surgical screening endoscopy clinic over an 18-month period were included. Patients under 14 years of age and those who had previous surgical treatment were excluded. Body mass index (BMI) was calculated as per standard criteria [weight (kg)/height (m)²] and was then classified as per World Health Organization criteria. Overweight and Type 1, Type 2 and Type 3 obesity were defined as BMI >25, >30, >35 and >40, respectively. Diabetes was defined as fasting plasma glucose ≥ 7 mmol/l. All patients underwent gastroduodenoscopy to assess the presence of Helicobacter infection and evidence of mucosal inflammation prior to surgical treatment of obesity.
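The BMI computation and the study's WHO-based cut-offs can be sketched as follows (category labels follow the abstract's wording; the worked example uses the cohort means reported in the results):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(b):
    """Cut-offs as defined in the study: >25 overweight; >30, >35 and >40
    for Type I, II and III obesity, respectively."""
    if b > 40:
        return "Type III obesity"
    if b > 35:
        return "Type II obesity"
    if b > 30:
        return "Type I obesity"
    if b > 25:
        return "Overweight"
    return "Normal or underweight"

# Cohort mean weight and height (123.6 kg, 1.65 m):
mean_bmi = bmi(123.6, 1.65)   # ~45.4 -> Type III obesity
```

Note that the BMI of the mean weight and height (~45.4) need not equal the reported mean BMI (45.2), since the mean of per-patient ratios differs from the ratio of means.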
Results
A total of 529 patients with a mean age of 36.8 years were included; 31.4% of obese patients were in the 15–30-year age group. The mean weight, height and BMI were 123.6 kg, 1.65 m and 45.2, respectively. Overweight and Type I, II and III obesity were seen in 3 (0.6%), 30 (5.6%), 95 (17.8%) and 401 (76%) patients, respectively. Overall, 34.5% had associated comorbid diseases. Type 2 diabetes was seen in 11.1% of the patients: 0%, 26.7%, 8.4% and 6.7% in the overweight, Type I, Type II and Type III obesity groups, respectively. Diabetes was significantly less common among very severely obese patients (Type III) than among overweight, moderately and severely obese patients (overweight, Type I and II) (27/401, 6.7% versus 16/84, 19%; p = 0.03). No correlation was found with age, sex or Helicobacter infection.
Conclusions
Among the residents of Qatar, 11% of obese patients were found to have diabetes. The highest prevalence of diabetes was seen in patients with Type I obesity (BMI 30–35). Diabetes prevalence did not increase with increasing obesity.
How MERS-CoV Helped Overcome Communication Barriers in Qatar
Background
As a conventional type of communication, health education usually faces several barriers that make its outcomes fall short. Among many, lack of interest, distraction, and rejection are well-documented barriers to engaging the audience in a communication process leading to behavioral change. Although the novel coronavirus responsible for the Middle East Respiratory Syndrome (MERS-CoV) created public concern, it also paved the road to effective health education by raising receivers' attention. With the aim of highlighting the context and factors contributing to educating the public during epidemics, this study documented how the outbreak of MERS-CoV offered valuable opportunities to communicate critical educational messages on the recommended preventive behaviors and practices.
Methods
In this retrospective study, we documented the timeline of MERS-CoV key events in Qatar, along with the disseminated health education messages that were captured by the print media during the period Sep 2012 through Nov 2013.
Results
The media documented that one of the first two reported cases worldwide was a Qatari national. A significant shift in the public's risk perception of MERS-CoV took place when studies suggested that camels play a critical role in transmitting the virus to humans. Six months after the identification of the first case, this relationship was confirmed when it was declared that MERS-CoV had been isolated from camels in Qatar, a finding of particular significance given that raising camels is a social norm and an embedded cultural practice in the country and across the region. Nevertheless, MERS-CoV cases and deaths continued to be reported.
Out of 153 news stories reported on MERS-CoV, 12 major developments, each reporting confirmed cases or deaths, were identified in Qatar. Two press conferences, sixteen press releases, and two interviews were counted, all from competent authorities. As the novel virus captured the media's attention, all aspects of the new virus were extensively reported, ranging from basic information about the virus's traits, the clinical signs and symptoms, and the treatment outcomes of cases, to ongoing research, epidemiological findings on the most vulnerable persons, the zoonotic nature of the disease, and the recommended course of action. The public's pressing demand for updates and information drove the media to arrange talk shows and interviews with senior health officials, who gave firsthand accounts of the virus and the prevention and control efforts.
While fresh MERS-CoV cases were reported from the Kingdom of Saudi Arabia, fears were growing that the Hajj season might allow a large-scale spread of the virus. The publicized health education messages at that time called upon the most-at-risk group to postpone going to Hajj and Umrah, to be assessed for medical fitness, to get vaccinated against seasonal flu, and to avoid crowded and badly ventilated areas. This group comprised the elderly and patients with chronic illnesses or impaired immunity. Afterwards, frequent hand washing and drinking pasteurized camel milk or consuming well-cooked camel meat were advised, along with minimizing close contact with symptomatic persons.
Discussion
The timeline of MERS-CoV events, along with the communication activities in response to them, strongly indicates a correlation between media interest and public concern about a particular subject on the one hand, and, on the other, the opportunities this momentum creates for the competent authorities to communicate key information and the recommended course of action to satisfy the public's needs.
Three main factors influenced how MERS-CoV was perceived in Qatar: its unfamiliarity, the epidemiological link to camels, and the way the media portrayed it. Like any other exotic risk, MERS-CoV's acknowledged unfamiliarity, even to health officials, led the media to fill the uncertainty vacuum by persistently focusing on similarities with the deadly SARS epidemic that erupted in 2002, allowing scary scenarios to take root in the public's imagination. It was then announced that a relationship had been established between the infected persons and camels, before a Qatari scientific team declared that live MERS-CoV had been isolated from an infected camel. The immediate result of this perceived risk was heightened public attention and interest. Moreover, the repeatedly announced symptoms of suspected cases allowed for better identification and induced voluntary reporting of cases to healthcare facilities.
A substantial proportion of the communication process is usually devoted to the preparatory steps of seizing the audience's attention and making sure that the content matters to them. While the public's need for information was being satisfied through news releases and press conferences, health education messages constituted a prime ingredient of the communication content.
Despite the denial and stigma linked to the unfamiliar disease, the communicated health messages had a tangible influence, giving the target audience the information necessary to take decisions at the personal and community levels. According to records of the medical Hajj committee, uptake of the pre-travel medical assessment and vaccination was remarkable. Patients complied with the isolation requirements. However, Information, Education, and Communication (IEC) materials were not prepared prior to the press conferences, indicating missed opportunities.
Overall, no significant rejection of the recommended course of action was identified.
Study limitations
As this study was based on reviewing the content of print media, other types of mass media were excluded. Moreover, the extent to which the target communities relied on official press releases and press conferences for MERS-CoV-related information remains to be determined.
Conclusion
Several factors contributed to successfully engaging the target communities in adopting the recommended course of action: the perceived risk of the novel virus, which made the public highly attentive; the timing of the health education messages, which usually coincided with critical disease developments; and the assignment of credible, well-known resource officials from the competent authorities. As the public uptake of behaviors recommended by trusted authorities tends to be very high during epidemics, efforts should be made to design health education messages for injection into media products such as press conferences and press releases during the early preparedness phases.
A Hexokinase II Derived-Cell Penetrating Peptide Targets the Mitochondria and Triggers Apoptosis in Cancer Cells
Most cancers are characterized by a high rate of glycolysis and overexpression of mitochondria-bound isoforms of hexokinase, an enzyme that phosphorylates glucose in an ATP-dependent manner, the first committed step in glucose metabolism. Type II hexokinase (HK II) plays a paramount role in metabolic reprogramming in tumors, and its association with the voltage-dependent anion channel (VDAC), a major channel for the transport of metabolites and ions across the mitochondrial membrane, inhibits apoptosis in cancer cells and is therefore an important therapeutic target. A peptide corresponding to the mitochondrial membrane-binding N-terminal domain of HK II (pHK II) can potentially compete with the endogenous protein for binding to mitochondria and trigger apoptosis. In vitro studies in HeLa cells showed that coupling pHK II to a short penetration-accelerating sequence (Pas: FFLIPKG) enhances the peptide's intracellular delivery and cytosolic release, followed by localization to the mitochondria. Cell viability assays revealed that pHK II-pas was considerably more effective in inhibiting cell growth than pHK II alone. Moreover, pHK II-pas displayed an enhanced ability to deplete cellular ATP levels and induce apoptosis. Mitochondrial function analysis showed that exposure to the pHK II-pas peptide resulted in a significant decrease in glycolytic capacity and glycolytic reserve, as well as in basal oxygen consumption rate (OCR), spare respiratory capacity, and ATP turnover. Importantly, these effects correlated with HK II release from the mitochondria. Thus, the mode of action of pHK II-pas involves release of the HK II protein from the mitochondrial membrane, resulting in loss of mitochondrial membrane potential, decreased cellular ATP levels and, finally, apoptosis. Our results underline the potential of the pHK II-pas cell-penetrating peptide (CPP) as an innovative and effective anti-tumor therapeutic strategy.
How Common is Bacterial Meningitis in Patients with Urinary Tract Infection below the Age of 60 Days?
Introduction
Urinary tract infection (UTI) is one of the most common pediatric infections. UTI may be associated with bacteremia and even meningitis in small babies, warranting a full septic work-up, including cerebrospinal fluid (CSF) analysis, especially in infants below the age of 60 days. The literature regarding the co-existence of meningitis in infants diagnosed with UTI is conflicting. It is critical to correctly identify and treat any co-existing meningitis, as both the choice and the duration of the antibiotics used for UTI are often insufficient to effectively treat meningitis.
Objective
The primary objective of this study was to determine the rate of co-existing bacterial meningitis in infants below the age of 60 days with a diagnosed urinary tract infection (UTI), and to determine whether age, sex, prematurity and bacteremia were risk factors.
Method
A retrospective observational study was conducted at Hamad General Hospital, a tertiary medical institution in the State of Qatar. Patients under the age of 2 months hospitalized with a first episode of UTI from January 1, 2008 to December 31, 2013 were included in the study. UTI was defined as a urine culture growing a single organism with a colony count greater than 10³, where the urine sample was obtained by either catheterization or supra-pubic aspiration. Infants with pre-existing clinical conditions like spina bifida or meningomyelocele, and those diagnosed with congenital renal anomalies, were excluded from the study because of their higher likelihood of developing UTI. Infants with a questionable diagnosis of UTI (not in accordance with the American Academy of Pediatrics definition of UTI) were also excluded. The study was approved by the Medical Research Center at Hamad Medical Corporation.
Results
113 patients met the inclusion criteria. 51 patients (44.3%) were neonates (0–28 days old) and 64 patients (55.7%) were between 29 and 60 days old. 43.5% of the infants were male, and most (86.1%) were term. All 113 patients had culture-proven UTI. The commonest pathogens causing UTI were Escherichia coli (38%), Klebsiella pneumoniae (15%), Enterococcus faecalis (13%), Group B Streptococcus sp. (7%), and Citrobacter (6%). As per routine practice in our institution, blood culture was ordered in all but one patient. Among these 112 patients, 3 (2.6%) had bacteremia. All 3 were term female babies; 2 were neonates and 1 was between 29 and 60 days of age. Of the three patients with a positive blood culture, a CSF study was done in two babies, and was negative in both; one patient's family refused the CSF study. A cerebrospinal fluid tap was done in 78 patients, i.e. 69% of the study population. None of these patients had a positive CSF culture. Physicians were more likely to order a CSF study in neonates (80% had a CSF study, as opposed to 60.3% of babies in the 29-to-60-day age group).
Conclusion
Our study demonstrated that of the 78 patients with culture-proven UTI who had a CSF study, none had co-existing bacterial meningitis. Our results reflect several other studies that also show a low risk of meningitis in patients with UTI. We tried to overcome some of the limitations of these studies by maintaining very strict criteria for diagnosing UTI. A CSF study is part of the septic work-up in neonates; in contrast, for patients between 29 and 60 days of age, a more selective approach to lumbar puncture is warranted.
Double Network Hybrid Hydrogels with Nanocomposite Structures for Cartilage Tissue Engineering Applications
Authors: Ali Mohammed, Julian R Jones and Theoni Georgiou
Hydrogels have become a popular focus of research for cartilage tissue engineering but have been limited by their brittle nature at high water contents. Double network hydrogels (DNHG) are innovative materials that can hold a high water content whilst maintaining high mechanical strength, but they require more accurate control over these properties. This study aims to achieve this goal by introducing a new concept: incorporating functionalized sol-gel nanoparticles (xSNP) as macro cross-linkers rather than conventional chemical cross-linkers. DNHG are formed by a 1st network (1NW) polyelectrolyte and a 2nd network (2NW) neutral polymer. This study investigates two separate DNHGs: polyacrylic acid (PAAc) and poly(2-acrylamido-2-methylpropane sulfonic acid) (PAMPS) were the 1NW, chosen for their biocompatibility and hydrophilic nature. They were cross-linked with amino-SNP (ASNP) and vinyl-SNP (VSNP), respectively. Polyacrylamide was chosen as the 2NW for both gels for its intrinsically strong mechanical properties. The aim of this study is to understand the effects of the size and concentration of xSNP as a novel cross-linking agent in DNHG for precisely controlling the properties of the gels. SNPs of 20, 50 and 100 nm were synthesized by the Stöber process and functionalized in situ with 3-aminopropyl triethoxysilane and vinyl TEOS. The xSNP concentrations in the DNHGs were 0–50 wt.% of the 1NW. The SNPs were studied by TEM, SEM, FTIR, DLS, zeta potential and confocal microscopy to confirm size and functionalization. The 1NW polymers were polymerized and cross-linked in situ with xSNP under UV light; ASNP used carbodiimide chemistry to cross-link with PAAc, and VSNP was cross-linked using a UV initiator. The 1NW gels were soaked in 2NW solution and UV-polymerized to form the DNHG. FTIR, swelling and water-uptake studies were performed on heat/vacuum-dried DNHGs.
Compressive and dynamic mechanical properties were studied under cyclic loading to fracture. DNHG cross sections were used for SEM and TEM imaging. Increasing the size and concentration of xSNP caused a reduction in both water uptake and swelling, providing evidence for a higher degree of cross-linking in the DNHG. Water uptake ranged from 1230% for the control (0 wt.% xSNP) to 750% for 50 wt.% with 100 nm VSNP. Water content reduced from 93% for the control to 76% for 50 wt.% with 100 nm VSNP, in the range of natural cartilage water content. Compressive strengths of the DNHGs increased with increasing ASNP concentration and size up to a fracture stress of 15 MPa
with 75% water content, providing evidence that the SNPs act as cross-linkers in the 1NW rather than as fillers. Cross sections of the DNHGs under SEM and TEM show homogeneous dispersion of xSNP within the structure, indicating successful incorporation. FTIR data of the DNHG after 3 drying and saturation cycles show Si-O-Si bands, supporting the evidence of xSNP incorporation into the DNHG. These results show potential for further research and application of sol-gel nanoparticles in hydrogels. The best hydrogels from this research were chosen to be optimised. As the photopolymerisations were done under open atmosphere, atmospheric O2 will interact with the monomer solution and inhibit the polymerisation from completing. This leads to shorter polymer chains and a lower degree of monomer-to-polymer conversion, hence less polymer entanglement and lower mechanical integrity. Oxygen can be depleted from the monomer system by introducing glucose oxidase (GOX): the enzyme consumes oxygen in the presence of glucose to produce hydrogen peroxide. Full oxygen depletion is reached at 200 nM GOX and 100 mM glucose. The enzyme works best at pH 5-6, therefore it was not possible to optimise the first network polymer AMPS.
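The water uptake and water content figures above follow from the standard gravimetric definitions; a minimal sketch (these are the conventional formulas, assumed here since the abstract does not spell them out):

```python
# Standard gravimetric definitions (an assumption; the abstract does not
# state its exact formulas): uptake is relative to the dry gel mass,
# equilibrium water content is relative to the swollen (wet) gel mass.

def water_uptake_pct(m_wet: float, m_dry: float) -> float:
    """Water uptake as a percentage of the dry gel mass."""
    return (m_wet - m_dry) / m_dry * 100.0

def water_content_pct(m_wet: float, m_dry: float) -> float:
    """Equilibrium water content as a percentage of the swollen gel mass."""
    return (m_wet - m_dry) / m_wet * 100.0

# Example: a swollen gel of 10.0 g that dries to 0.70 g
print(round(water_content_pct(10.0, 0.70), 1))  # 93.0
print(round(water_uptake_pct(10.0, 0.70), 1))   # 1328.6
```

Note that the two measures are related but not interchangeable: a drop in water content from 93% to 76% corresponds to a much larger relative drop in uptake, consistent with the wide uptake range reported above.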
-
-
-
Using the Transtheoretical Model to Enhance Self-management Activities in Type 2 Diabetic Patients: A Systematic Review
By Yara Arafat
Background: Many health organizations highlight the importance of health promotion and disease prevention, due to the high and continuously increasing incidence of chronic diseases worldwide. One of the most prevalent chronic diseases is diabetes mellitus (DM). Many studies conducted in developed countries have shown that lifestyle changes in patients resulted in a reduction in the prevalence of diabetes, and that there is a link between DM and behavioral, clinical, and economic outcomes. Furthermore, there was an association between knowledge, attitude, and practice (KAP) and DM. Even though self-management of type 2 diabetes is necessary in order to improve quality of life, many patients still struggle to self-manage their diabetes. Many models and interventions have been tested to enhance self-management, but none have been fully successful so far. Self-management is a socio-behavioral problem, and the use of a model such as the transtheoretical model (TTM) could improve it. TTM is one of the most commonly used behavioral models. It was first introduced in the 1980s by Prochaska and DiClemente to explain how people change their behavior, but not why they change. It is a model of choice that focuses on the decision-making capabilities of individuals. This model differs from alternative approaches to health promotion in that its primary focus is not on social and biological behavioral influences; it is a psychological health promotion model about the intention to change. It first uses baseline information, with the aim of altering self-efficacy, cues, or other psychosocial factors across five TTM stages: Precontemplation, Contemplation, Preparation, Action, and Maintenance.
Objective: The objective of this study is to collect sufficient evidence through a systematic review to assess the use of TTM in improving self-management activities in type 2 diabetic patients. Self-management activities include following a healthier diet, exercising more regularly, and enhanced medication adherence. Methods: The systematic review was conducted between February and May 2015. PubMed (n = 83), Medline (n = 126), ScienceDirect (n = 985), and Cochrane (n = 62) were the databases searched with predefined terms relating to TTM interventions for type 2 diabetic patients. A second extensive search was conducted using Google and Google Scholar (n = 2) to retrieve articles relevant to the research. The search strategy aimed to identify articles in which the transtheoretical model had been applied and which had been published in English between 2000 and March 2015. In order to ensure that all potentially relevant articles had been identified, the search terms included “Transtheoretical model”, “Sociobehavioral”, “social changes”, “diabetes”, and “self-management”. All study designs were included and no restrictions were placed on articles comparing the behavioral model to other approaches. The methods used for this review followed the PRISMA statement (Preferred Reporting Items for Systematic Reviews and Meta-Analyses). The systematic search was conducted in March 2015. The initial search of the above strategy yielded 1,153 articles. These articles were reviewed by the primary author for relevance to the aims of the review. Retained articles were then assessed for relevance based on the title and the abstract using the inclusion criteria. Articles identified as potential candidates for inclusion were then retrieved. Each step of the selection process was conducted by two researchers, and in case of disagreement, consensus was reached with the aid of a third researcher.
Results: There was consensus in the review team that 10 papers met the inclusion criteria. The 10 included studies were published between 2003 and 2011, and were conducted in the US (n = 6), Canada (n = 1), Trinidad and Tobago (n = 1), Scotland (n = 1), and one was unspecified. In all 10 studies, the majority of participants at baseline were at the Precontemplation/Contemplation or Preparation stage, and after the TTM intervention the majority of patients were at the Action or Maintenance stage. Four studies did not specify which stage had the highest number of participants at baseline and post TTM intervention. In one study, the highest number of participants was at the Preparation stage (39.1%) at baseline, and after the TTM intervention the highest number of patients was in the Action phase (45.7%), indicating an advancement through the stages of change. Moreover, in 4 studies most of the patients at baseline were in the pre-action stage, but at follow-up after the TTM interventions most of the participants had moved to the Action or Maintenance stage. In one study, the greatest number of participants was at the Action/Maintenance stage both pre- and post-intervention. All studies demonstrated some positive self-management outcomes from implementing TTM. Four studies reported a significant reduction in glycosylated hemoglobin (HbA1c), 5 studies reported improvements in diet after TTM, and participants exercised more in 2 studies. In one study there was progress towards reaching participants' goals, whether better adherence, diet, or more exercise. However, using the TTM produced no change in medication use in any of the included studies. Moreover, different study designs were used across the studies. Two studies were pre-test/post-test.
In addition, there were 3 randomized controlled trials (1 was a standard RCT, 1 was a randomized split-plot design with one group receiving usual care and another receiving the intervention, and 1 was a cohort randomized controlled prospective trial). One study was a quasi-experimental study. Three studies were reviews: one was a preliminary study comprising an economic evaluation of a theoretical cohort of patients; another described how resources and supports for self-management (RSSM) and strategies of the transtheoretical model intersect to produce a comprehensive approach resulting in a cutting-edge diabetes program; and the last determined the impact of TTM in changing the unhealthy dietary habits of type 2 diabetic patients. Moreover, one article followed a cross-sectional study design consisting of questionnaires. Conclusion: Ten articles using TTM to self-manage type 2 diabetes were identified and critically reviewed. The narrative findings from this systematic review provide evidence that TTM interventions are effective in promoting exercise and encouraging participants to pursue a healthier diet. However, the effect of TTM on medication adherence has not been clearly identified yet, and it should be studied in future research.
-
-
-
Development and Validation of an Allelic Frequency Database for the Qatari Population Using a 13 Rapidly Mutating Y-STR Multiplex Assay
Differentiating male lineages using non-recombining Y-chromosomal genetic markers is highly informative for tracing human migration and for forensic studies. Recently, it has been shown that the level of male lineage resolution can be enhanced by analysing Rapidly Mutating (RM) Y-STRs. The aim of this study was to develop an allelic frequency database for the Qatari population and to evaluate the resolution power of 13 RM Y-STRs. The overall haplotype diversity (HD) was 100%. It was found that the markers which contributed the most toward high HD were DYF399S1 and DYF403S1a/b. Together with their value for differentiating paternal male relatives, these RM Y-STRs will be a valuable asset for forensic casework. An AMOVA test was performed comparing the Qatari population to Gulf countries, the Middle East, and several worldwide population data sets. FST values were also calculated. Geography was found to account considerably for the pattern of population substructuring. The RM Y-STR markers showed remarkable haplotype resolution power in the Qatari population, high gene diversity and sufficient robustness for a diverse range of applications.
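The haplotype diversity quoted above is conventionally computed with Nei's estimator, h = n(1 − Σp_i²)/(n − 1), where p_i are the haplotype frequencies; a short sketch (the formula is the standard one and is assumed here, since the abstract does not state it):

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Nei's haplotype diversity: h = n/(n-1) * (1 - sum(p_i^2))."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_p2 = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_p2)

# When every sampled male carries a unique RM Y-STR haplotype,
# diversity reaches its maximum of 1.0 (i.e. 100%):
unique = [f"hap{i}" for i in range(50)]
print(round(haplotype_diversity(unique), 6))  # 1.0
```

A 100% HD therefore means the 13-marker multiplex resolved every individual in the sample into a distinct haplotype.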
-
-
-
Prevalence of Gastrointestinal Protozoa in Feral Cat Population in Qatar
Introduction: Doha city has a high feral cat population that is estimated to outnumber its human inhabitants by 2-3:1, with a total population of 2-3 million cats according to the Qatar Cat Control Unit (QCCU). Doha had a significant rodent problem for decades, as in many cities throughout the world, and this huge rodent population was difficult to control. Therefore, cats were introduced in the 1960s, but without any consideration of the possible knock-on effects on human health. The introduced cats colonized and reproduced rapidly around food and water resources in both urban and rural areas. Cats are natural hosts for a wide range of helminths and protozoa. Since there were no plans to eliminate the cats after their introduction to the country, their density increased in an uncontrollable manner. This high cat population poses an obvious risk to humans, and diseases such as toxoplasmosis would be expected. Hospital records show that human toxoplasmosis is quite widespread in the city, with up to 35% of women of childbearing age and 41% of elderly persons of both sexes reported to be seropositive. These findings highlight the role cats might be playing in the transmission of protozoa in society. Cats are also hosts to other closely related species of intestinal protozoa; for example, cats can harbour Isospora spp., Cryptosporidium felis, Giardia intestinalis and Blastocystis spp. Given the high density of cats in the city, it is clearly important to assess the prevalence of protozoal infections among these animals as a first step towards a better understanding of their role in the transmission of human infectious diseases of feline origin. Objectives: Since Doha has a high feral cat population, there is a need to understand the role of cats as vectors of human protozoal infections. Our preliminary data indicate that Blastocystis spp.
and Toxoplasma gondii are highly prevalent among the residents of Doha. In this project, it was proposed to estimate the prevalence of gastrointestinal protozoa, including Giardia intestinalis, Cryptosporidium parvum and Blastocystis hominis, in the feral cat population in Doha. A total of 264 fecal samples were collected from the feral cat population in different geographical locations of Doha, and advanced technologies, including DNA extraction, RT-PCR and sequencing, were used to provide an accurate assessment of the prevalence. Methodology: Study area and population: Fresh stool samples were collected from cats in different areas of Qatar. In this study, 37 areas were divided into two geographical regions, outside Doha and inside Doha, based on the occupation of people. Cats were trapped during the winter (November-April) and summer (May-October) seasons of 2015. Traps were baited with fish heads or canned cat food. Cats were retrieved from traps and assessed for sterilization status. Pregnant and lactating female cats and cats estimated to be less than 6 months old were immediately released; cats older than 6 months were eligible for the study. In order to prevent repetition and re-sampling of cats already trapped and treated earlier, each cat's ear was tagged with a small metal tag. All project ethical approvals were obtained before the beginning of the project. Sample collection: Fresh stool samples were collected from sterilized cats during the period from February-September 2015 and stored at -20 °C by the veterinary laboratory of the stray cat control unit in the Ministry of Environment (Department of Animal Resources). A total of 264 samples were processed in order to achieve the aim of this project. Samples were collected in sterile containers labeled with the site where the cat was found, gender and date of collection. The samples were kept and transported on ice and frozen directly after collection.
Fecal examination: In order to extract the DNA of the enteric pathogens, samples were thawed at 4 °C and approximately 200 mg of each stool sample was used for examination. The Qiagen QIAamp Stool Mini Kit was used to extract DNA from the samples following the manufacturer's instructions with minor modifications. Lysis buffer ASL was added and mixed with each stool sample. Since cat stool is hard to break up, a tissue homogenizer was used to ensure homogenization of the sample and increase DNA recovery. This was followed by vortexing the samples and incubating them at 95 °C for 10 minutes to ensure complete lysis. After lysis, samples were centrifuged for 10 minutes at 4,500 rpm in order to pellet the stool particles, and the supernatant was transferred to new microcentrifuge tubes. Using InhibitEX binding reagent, DNA-degrading substances and PCR inhibitors were separated and removed from the sample. The InhibitEX matrix was centrifuged twice at 14,000 rpm for 3 minutes to pellet the stool and any impurities. 15 μL of proteinase K, 200 μL of the supernatant and 200 μL of buffer AL were added to a new microcentrifuge tube and incubated at 70 °C for 10 minutes. Proteinase K digests protein, removes contamination and inactivates the nucleases that would otherwise degrade the DNA during purification; since proteinase K requires a high temperature to denature proteins, the samples were incubated at 70 °C for 10 minutes. The supernatant containing the DNA was then transferred to a Qiagen spin column. Two different washing buffers with optimized pH and salt concentrations were added to eliminate the digested proteins and any other impurities, and the samples were centrifuged at 14,000 rpm before the addition of each buffer. Finally, the DNA was eluted using buffer AE. DNA concentration was measured using a NanoDrop (Thermo Fisher Scientific, USA).
Primers and probes: Using primer design software, the primer and probe sets used for detecting parasitic pathogens were designed based on data available in the National Center for Biotechnology Information (NCBI) databases. Target genes were chosen based on published data and studies describing their sequences, uniqueness, and conservation. Real-time PCR: Samples were analyzed by uniplex real-time PCR using an Applied Biosystems 7500 cycler. Protocols were finalized after adjusting the respective concentrations of primers and probes and evaluating several cycling protocols, with a protocol from the available literature as a starting point. Two different fluorescence reporter dyes were used in the real-time PCR: SYBR Green for Blastocystis hominis and TaqMan probes for the other target parasites. For both reporter dyes, amplification reactions were performed in a total volume of 20 μL per well, with 17.5 μL master mix and 2.5 μL DNA template. For each plate, positive controls consisted of internal controls provided by Hamad Medical Corporation (HMC). Both positive and negative controls were run for each sample, and sample PCR results were compared with both controls and analyzed using 7500 software v2.3. Results: A total of 264 stray cat samples were examined for enteric parasites. The samples were classified according to gender, area and season. Table 1 summarizes the cat population examined. Three protozoal parasites (Giardia intestinalis, Cryptosporidium parvum and Blastocystis hominis) were screened for by real-time PCR. According to the PCR results, Giardia intestinalis was the only protozoan detected. Table 2 shows the prevalence of the examined protozoa in the cat samples. Figures 1 and 2 show the interaction between Giardia intestinalis infection and the independent variables (gender, season and area). Table 1.
Number of stray cats examined by season, gender and study site in Qatar during 2015

Site | Winter, Male | Winter, Female | Summer, Male | Summer, Female
Outside Doha | 44 (16.67%) | 39 (14.77%) | 25 (9.47%) | 23 (8.71%)
Inside Doha | 34 (12.88%) | 20 (7.58%) | 37 (14.01%) | 42 (15.91%)
Total | 78 | 59 | 62 | 65

*N, number of samples; the number in brackets indicates the percentage of the total sample.

Table 2. Number of subjects in each category and the prevalence (%) of the three species of protozoa by gender, season and area

Category | Class | Number of subjects | Giardia intestinalis | Cryptosporidium parvum | Blastocystis hominis
Gender | Male | 140 | 5 | 0 | 0
Gender | Female | 124 | 7.2 | 0 | 0
Gender | P | | 0.443 (NS) | NS | NS
Season | Winter | 137 | 6.5 | 0 | 0
Season | Summer | 127 | 5.5 | 0 | 0
Season | P | | 0.719 (NS) | NS | NS
Area | Outside Doha | 131 | 5.34 | 0 | 0
Area | Inside Doha | 133 | 6.77 | 0 | 0
Area | P | | 0.628 (NS) | NS | NS

*NS: not significant. **The highest prevalence in each category is in bold italics for emphasis.

Benefits to Qatar: This study will provide important data for public healthcare authorities, which they can use to determine the role feral cats might be playing in zoonotic diseases in Doha. The training in research methodologies will also foster undergraduate students' interest in research and thereby add to the pool of qualified researchers in Qatar.
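The prevalence figures in Table 2 are point estimates from modest sample sizes, so a confidence interval makes the sampling uncertainty explicit. A Wilson score interval is sketched below; the method and the count of 7 positives among 140 males (reconstructed from the reported 5% prevalence) are assumptions, since the abstract does not state how its estimates were obtained.

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a proportion with k successes in n trials."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Giardia intestinalis in male cats: 5% of 140 ~ 7 positives (reconstructed)
lo, hi = wilson_ci(7, 140)
print(f"{lo:.3f}-{hi:.3f}")
```

The interval spans roughly 2% to 10%, which is why the male/female difference (5% vs 7.2%) is reported as not significant.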
-
-
-
Enteric Protozoa Associated with Acute Diarrhea in Hospitalized Children in Qatar
Authors: Amal Ibrahim, Shaikha Al-Abduljabbar and Marawan Abou Madi
Introduction: Diarrhea is the passage of three or more watery stools in a period of 24 hours (WHO, 2013). Types of diarrhea include acute watery diarrhea, acute bloody diarrhea (known as dysentery) and persistent diarrhea (WHO, 2013). It is caused by infection with different pathogens, including bacteria, viruses and parasites, through fecal-oral transmission (WHO, 2013). Moreover, it can also be caused by intolerance to certain food substances and as a side effect of certain medications such as laxatives (Burton & Ludwig, 2015). Diarrhea occurrence is most frequently associated with conditions of poor environmental sanitation and hygiene, poverty, inadequate water supply and limited education (Nelson & Masters, 2014). Worldwide, acute diarrheal disease is considered the second leading cause of mortality and morbidity in children according to the World Health Organization (WHO, 2013). In 2012, WHO reported 1.9 million diarrheal cases in children under the age of five, accounting for 18% of all deaths. The clinical manifestations of diarrhea in pediatric patients include abdominal pain, nausea, vomiting and fever (WGO, 2012; Maas et al., 2014). Diarrhea in children can lead to many consequences such as malnutrition, diminished growth and impaired cognitive development (WGO, 2012). Severe diarrhea can also result in life-threatening dehydration (Galvao et al., 2013); thus it is important to replace fluids and electrolytes with oral rehydration solution. Diarrhea is usually self-limiting; however, in cases of diarrhea persisting for longer than 1 week, broad-spectrum antimicrobial agents are administered to treat bacterial and parasitic infection (Koletzko & Osterrieder, 2009). The intestinal protozoa most commonly associated with diarrhea in children include Blastocystis, Dientamoeba fragilis, Giardia lamblia, Cryptosporidium species and Entamoeba species (Maas et al., 2014).
Having updated information about the prevalence of these protozoan parasitic infections will aid faster diagnosis and thus treatment. - Research question and objectives: Research question: What are the most common protozoa and the risk factors for diarrhea in children under the age of 15 admitted to Hamad Medical Corporation (HMC)? Objectives: To identify the prevalence of protozoan pathogens and the risk factors, such as gender, age, season and geographical region, associated with diarrhea in children. Materials and methods: Study subjects and sample collection: A total of 391 diarrheal stool samples were collected from March-July 2015 in sterile containers from pediatric patients (0-15 years) admitted to HMC with diarrhea. The samples were transported on ice by Dr. Abu Madi's research group and frozen immediately at -70 °C. All required ethical approvals for the project were obtained from the Medical Research Centre. - Stool examination: To recover the DNA of the enteric pathogens, samples were thawed at +4 °C and 200 mg of each stool sample was weighed into a sterile 14 ml Falcon tube (BD Falcon). DNA was extracted using the Qiagen QIAamp Stool Mini Kit (Qiagen, Germany) following the manufacturer's instructions with minor modifications. The extracted DNA samples were analyzed by uniplex real-time PCR using an Applied Biosystems 7500 cycler. A protocol from the available literature was used as a starting point and finalized by optimizing the concentrations of primers and probes and evaluating several cycling protocols. Two different fluorescence reporters were used: SYBR Green for Blastocystis, and TaqMan probes for D. fragilis, G. lamblia, Cryptosporidium and Entamoeba. For both reporters, amplification reactions were performed in a 20 μL volume per well, with 17.5 μL master mix and 2.5 μL DNA template.
The SYBR Green master mix consisted of 10 μL SYBR Green Mastermix reagent (Qiagen, Germany), 2.2 μL of primer mix, and 5 μL of PCR-grade water (Sigma, Germany), whereas the TaqMan reaction consisted of 10 μL HotStar Taq Mastermix reagent (Qiagen, Germany), 1.3 μL of primer mix, 0.07 μL of probe and 6.2 μL of PCR-grade water (Sigma, Germany). The initial incubation step was carried out at 95 °C for 15 min to activate the HotStar Taq DNA polymerase, followed by a 40-cycle amplification program consisting of 15 s at 94 °C, 30 s at 57 °C and 30 s at 72 °C, with a final extension step at 72 °C for 30 s. For each plate, internal positive controls were run, consisting of positive samples provided by Hamad Medical Corporation (HMC). Definition of variables: All birth dates and collection dates were recorded and the ages of the subjects were categorized into five classes by years: 0-1.0, 1.1-1.9, 2.0-4.9, 5.0-9.9 and 10.0-14.9. The collection dates were classified by season into summer (May-October) and winter (November-April). The subjects in this study came from 34 different countries. For the purpose of analysis, the subjects were grouped into six geographical groups, as follows: Qatar (N = 97); three countries in the Arabian Peninsula (N = 16: Yemen, Saudi Arabia, Bahrain); five countries in the Eastern Mediterranean (N = 41: Jordan, Lebanon, Syria, Iraq, Iran); seven countries in Asia (N = 131: India, Pakistan, Sri Lanka, Bangladesh, Nepal, Mauritania, Philippines); seven countries in Africa (N = 86: Nigeria, Egypt, Tunisia, Sudan, Djibouti, Eritrea, Morocco); and ten countries in Europe (N = 20: Canada, Poland, UK, Greece, US, Netherlands, Spain, Italy, Venezuela, France). Statistical analysis: Prevalence data are shown with 95% confidence limits calculated using an online calculator (https://www.mccallum-layton.co.uk).
To determine the significance of the different classes in each category, chi-square tests were conducted using crosstabs descriptive statistics in IBM SPSS software. A p-value less than 0.05 was considered statistically significant. Results: Screening for gastrointestinal pathogens using real-time PCR: A total of 391 pediatric patients participated in this study during the period of March-July 2015. Of the 391 diarrheal patients (173 females and 218 males), 41 (10.7%) were positive for at least one protozoan (Tables 1 and 2). Blastocystis was detected most frequently, in 4.1% (16/391), followed by D. fragilis in 3.3% (13/391), Cryptosporidium in 2.8% (11/391), G. lamblia in 2.0% (8/391) and Entamoeba histolytica in 0.3% (1/391) (Table 1). Most of the diarrheal samples in the study came from the age group of 0-1 year (119/391), followed by 2-4.9 years (108/391), 1.1-1.9 years (105/391), 5-9.9 years (45/391) and 10-14.9 years (14/391) (Table 2). However, protozoan infections were highest in the age group of 5-9.9 years, with a prevalence of 21.1% (Table 2 and Fig. 1). Blastocystis and Cryptosporidium showed the same pattern of infection among the age groups, with the highest prevalence in the 5-9.9 years age group (Table 2), whereas G. lamblia and D. fragilis showed the highest prevalence in the 10-14.9 years age group (Table 2). Females had a higher prevalence than males of infection with Blastocystis (6.4%), Cryptosporidium (4.0%) and G. lamblia (2.9%) (Table 2 and Fig. 2), whereas males had a higher prevalence than females of infection with D. fragilis (4.1%) and Entamoeba histolytica (0.5%). A total of 34 countries categorized into six geographical regions were sampled in this study, with most subjects coming from the Asia, Qatar and Africa regions (Table 2). However, the prevalence of protozoan infections was highest in the Europe group (15%), followed by Qatar (14.1%), the Arabian Peninsula (12.5%), Asia (9.9%), Africa (8.1%) and the Eastern Mediterranean (7.3%).
Most of the diarrheal samples were collected during the summer season, from May to July (Table 2 and Fig. 3). However, protozoan infections had an overall higher prevalence during the winter season, that is, March and April (12.5%) (Table 2 and Fig. 4). - Association of protozoan infections with age, gender, geographical distribution and season: Although most of the variables (i.e. gender, age, season) showed a higher value in one of the categories, the differences were not statistically significant (p > 0.05) (Table 2). The only significant variables were age for combined, Blastocystis and Cryptosporidium infections, and gender for Blastocystis (p < 0.05) (Table 2). Blastocystis and Cryptosporidium infections drive the combined protozoan infections, and both had the highest prevalence in the age group of 5-9.9 years, with prevalences of 15.6% and 11.1% respectively (Table 2). Blastocystis infections had a higher prevalence in females than in males, at 6.4% (Table 1). Conclusion: This study has demonstrated that protozoan parasitic infections are still a public health problem in pediatric patients, with Blastocystis, Dientamoeba fragilis and Cryptosporidium being the most common, in that order. Therefore, protozoan parasitic infections should be tested for in children presenting with diarrhea. The study also highlights the value of molecular techniques in the diagnosis of protozoan parasitic infections.
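The significance tests in this study were run in SPSS; the underlying Pearson chi-square test of independence for a 2×2 table can be sketched in a few lines. The counts below are reconstructed from the reported prevalences and totals (about 11/173 females vs 5/218 males positive for Blastocystis), so they are approximations, not the authors' exact data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for
    the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    stat = 0.0
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n  # expected count under independence
        stat += (obs - exp) ** 2 / exp
    return stat

# Blastocystis positives/negatives: males (5, 213) vs females (11, 162)
stat = chi2_2x2(5, 213, 11, 162)
print(stat > 3.841)  # critical value for df = 1 at alpha = 0.05
```

With these reconstructed counts the statistic exceeds the df = 1 critical value, consistent with the reported significant gender association for Blastocystis.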
-
-
-
Does the Transmission of Viral Infectious Diseases Depend on Social Network Contacts, Weather Conditions and Animal Ownership? A Look at Common Cold in Qatar
The common cold is a viral infection of the upper respiratory tract that brings discomfort to people for a few days to a few weeks. Although the seasonal common cold is widespread during wintertime, the disease can be contracted all year round. Initially the common cold was believed to be linked to exposure to cold air because of its prevalence during that period; the disease has since been categorized as an infectious disease (William and Sheldon, 2009). The common cold can cause serious economic losses to the workforce because it often results in absenteeism from work or school (Babak et al., 2009). The disease is the most commonly encountered infectious disease in humans and the most frequent illness managed by general practitioners; it has been reported that about 25 million people visit doctors with the common cold yearly in the USA alone (Heikkinen and Jarvinen, 2003). The common cold can be caused by a variety of viruses, depending on a number of factors such as season and age, with rhinoviruses being the most common cause. Heikkinen and Jarvinen (2003) provide an extensive literature review on the common cold covering causes, epidemiology, clinical diagnosis, treatment and prevention. The effect of weather conditions is crucial in infectious disease modeling, and social network contacts are vital when seeking to understand and predict the spread of infectious diseases in human populations. The transmission of infectious disease has been linked to social contact behavior (Willem et al., 2012; Wallinga et al., 2006) and animal contacts (Kifle et al., 2015; Jones et al., 2008), and the frequency of infection may depend on the number of social and animal contacts. The spread of infectious disease can be controlled through understanding the dynamics of social contact behavior and animal contact.
It is reported that adults get the illness two to three times a year while children are infected five to seven times a year (Babak Amra et al., 2009; Heikkinen and Jarvinen, 2003). The objectives of this project are: a) To explore the dynamics of transmission of the common cold between different age groups. b) To investigate the effect of social contact patterns on the disease incidence. c) To estimate the effects of other associated risk factors such as climatic and environmental variables. d) To investigate the effect of socioeconomic background, such as family size, education, contact type and so on. e) To explore the influence of animal ownership on the frequency of infection. Conceptual framework and data collection: This study will look at the relationship between the common cold and climatic changes, together with the effects of social contact, through the development of flexible predictive statistical models. We shall combine flexible statistical models for network data to study these relationships. The data sets will be collected by undergraduate students over a period of 3-4 months within Qatar through the use of surveys. The social network contact survey, which includes illness status and some demographic variables, will be conducted within and outside Qatar University. Recruited participants will cut across different age groups, nationalities and genders. An adapted version of the POLYMOD social contact survey (Improving Public Health Policy in Europe through the Modelling and Economic Evaluation of Interventions for the Control of Infectious Diseases) will be used for the contact diaries. Participants will be asked if they can be contacted again via email or telephone (in a month or so) for completion of a second diary. Firstly, they will be asked to answer a few demographic questions, such as the number of family members, age, gender, country and educational attainment.
Second, for each participant, the daily number of social contacts will be recorded, along with the contact type. Participants will be asked whether they engaged in a direct conversation with someone at most three meters away, or touched someone else (e.g. shaking hands or kisses on the cheek); the latter is considered a "physical" contact, even if not a word was spoken. In addition to social network contacts, participants will be asked about their interactions with animals. Animal ownership is defined as having at least one live animal in the household in which the participant spends the majority of his/her time. Animals will be categorized into four classes: pets (cat, dog, fish), livestock (horse, sheep, camel, cow), poultry (chicken, turkey, pigeon) and "other". Lastly, each participant will be asked about his/her illness status (such as onset date and severity). The climatic data sets to be used are mean daily temperature, humidity and dust aerosol. The collection of daily climatic data will commence at least a week before the survey. A few weeks later (between 2-4 weeks), participants will be contacted again via email or telephone with a similar follow-up questionnaire for the second social contact diary. Modeling techniques: Logistic regression is a technique used for making predictions when the dependent variable is a dichotomy and the independent variables are continuous and/or discrete. For the analysis of (clustered) social network data, a random effect term will be added to the regression model to account for the correlation in the data. The resulting model is a mixed model, including the usual fixed effects for the regressors plus random effects in the predictor. The development of generalized linear mixed models (GLMMs) for dichotomous data has been an active area of statistical research.
Several approaches have been developed, usually adopting a logistic or probit regression model together with various methods for incorporating and estimating the influence of the random effects. The mixed-effects logistic regression model is a common choice for the analysis of correlated dichotomous data and is arguably the most popular GLMM.
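To make the model structure concrete: a random-intercept logistic model for repeated illness reports takes the form logit(p_ij) = b0 + b1*x_ij + u_i, where u_i ~ N(0, sigma_u^2) is a participant-level random effect capturing within-person correlation. A minimal simulation sketch of this structure follows; all parameter values and the Poisson contact-count covariate are illustrative assumptions, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_participants, n_days = 200, 10

b0, b1 = -2.0, 0.8   # illustrative fixed effects (intercept, contact-count slope)
sigma_u = 1.0        # SD of participant-level random intercepts (assumed)

u = rng.normal(0.0, sigma_u, size=n_participants)         # random intercepts u_i
contacts = rng.poisson(5, size=(n_participants, n_days))  # daily contact counts x_ij
x = (contacts - contacts.mean()) / contacts.std()         # standardized covariate

logit_p = b0 + b1 * x + u[:, None]   # linear predictor with shared per-person effect
p = 1.0 / (1.0 + np.exp(-logit_p))   # inverse-logit link
illness = rng.binomial(1, p)         # simulated daily illness indicators

# With b1 > 0, more contacts should co-occur with more illness reports
print(np.corrcoef(x.ravel(), illness.ravel())[0, 1])
```

Fitting such a model to the collected diaries would require a GLMM routine (e.g. penalized quasi-likelihood or adaptive quadrature in standard statistical software); the sketch above only illustrates the data-generating structure the text describes.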
References
Babak A, Hamid S, Shahin S, Mohammad G. (2006) Prevalence of the Common Cold Symptoms and Associated Risk Factors in a Large Population Study. Tanaffos 5(3): 13-17.
Heikkinen T, Jarvinen A. (2003) The common cold. The Lancet, 361 (9351): 51-59.
Jones K, Patel N, Levy M, Storeygard A, Balk D, Gittleman J, et al. (2008) Global trends in emerging infectious diseases. Nature 451: 990-993. doi: 10.1038/nature06536 PMID: 18288193.
Kifle YW, Goeyvaerts N, Van Kerckhove K, Willem L, Faes C, Leirs H, et al. (2015) Animal Ownership and Touching Enrich the Context of Social Contacts Relevant to the Spread of Human Infectious Diseases. PLoS ONE 10(7): doi:10.1371/journal.pone.0133461.
Wallinga J, Teunis P, Kretzschmar M. (2006) Using data on social contacts to estimate age-specific transmission parameters for respiratory-spread infectious agents. American Journal of Epidemiology 164: 936-944.
Willem L, Van Kerckhove K, Chao DL, Hens N, Beutels P. (2012) A Nice Day for an Infection? Weather Conditions and Social Contact Patterns Relevant to Influenza Transmission. PLoS ONE 7(11): doi:10.1371/journal.pone.0048695.
-
-
-
AMPK Activation Attenuates Albumin-induced Alterations in Renal Tubular Cells In Vitro
Authors: Soumaya Allouch and Shankar Munusamy
Background: Chronic kidney disease (CKD) is characterized by a progressive decline in renal function; if left untreated, it ultimately results in end-stage renal disease (ESRD), a condition that demands either dialysis or a kidney transplant for survival. CKD and ESRD are associated with a multitude of complications, ranging from increased hospitalization to accelerated cardiovascular events and mortality. Currently, type-2 diabetes and hypertension are the two major risk factors for CKD. With the increasing incidence and prevalence of these conditions globally, the patient population with CKD is expanding worldwide. According to local sources, CKD affects about 13% of Qatar's population, and the prevalence of ESRD, the advanced phase of CKD, in Qatar was found to be 212 per million population. The increased risk of complications associated with CKD, in conjunction with its high prevalence in Qatar and in the rest of the world, makes its prevention and management a high national and international priority. Elevated urinary albumin excretion (commonly referred to as proteinuria) is not only a hallmark of renal disease, but is also strongly associated with the development and progression of CKD. Albuminuria is thought to induce endoplasmic reticulum (ER) stress, consequently triggering the AKT pathway and resulting in inhibition of AMP-activated protein kinase (AMPK). AMPK, a cellular fuel sensor, is primarily involved in the regulation of fatty acid oxidation and ATP synthesis. Inactivation of AMPK was found to trigger the mTOR (mammalian target of rapamycin) pathway, and subsequently to inhibit autophagy (a defense mechanism) and induce epithelial-to-mesenchymal transition (EMT). These signaling changes eventually accelerate renal cell apoptosis and manifest as CKD.
Thus, the objectives of this study were: 1) to standardize and characterize an in vitro model of albumin-induced renal cell injury using normal rat kidney proximal tubular (NRK-52E) cells, and 2) to explore the effect of AMPK activation on the ER stress, AKT, mTOR, EMT, autophagy and apoptosis pathways thought to mediate renal cell injury during proteinuria, using the developed in vitro model. Methods: NRK-52E cells were grown to 60% confluency and then serum-starved for 24 hours to arrest cell proliferation. Cells were then exposed to albumin, at concentrations ranging from 1 to 30 mg/ml, for 24 to 72 hours. At specific endpoints, cells were assessed for induction of ER stress, alterations in the status of AKT, AMPK, mTOR and autophagy, and changes in cellular senescence via β-galactosidase (an enzyme expressed in senescent cells) staining. Following standardization of the albumin-induced renal cell injury model, studies were performed in the presence and absence of the AMPK activator metformin (1 mM) for 24 to 72 hours. Cells were then assessed for alterations in the status of AMPK, AKT and mTOR, and in markers of ER stress, EMT, autophagy and apoptosis. Results: Exposure to albumin for 72 hours caused a dose-dependent increase in cellular senescence in NRK-52E cells. In contrast, cells exposed to albumin for 24 and 48 hours did not show any marked changes in cellular senescence. A 4-fold induction of the ER stress marker CHOP and of the EMT marker α-SMA was noted. Moreover, higher concentrations of albumin, particularly 30 mg/ml, caused severe induction of ER stress and EMT, marked by a 20-fold increase in CHOP and a 6-fold increase in α-SMA, respectively. Similarly, the phosphorylation of AKT and P70S6K (a downstream target of mTOR) was increased by more than 1.5-fold in cells subjected to albumin treatment.
In addition, albumin treatment caused a dose-dependent reduction in AMPK phosphorylation and about a 66% decrease in the expression of the autophagy marker LC3-II. These changes were observed in conjunction with a prominent dose-dependent induction of the apoptotic markers caspase-3 and caspase-12, ranging from 1.5- to 3.5-fold and 3- to 5-fold, respectively, in cells exposed to albumin. In contrast, metformin co-treatment restored the levels of phosphorylated AMPK and suppressed activation of AKT and P70S6K in NRK-52E cells exposed to albumin. Notably, metformin also prevented albumin-induced EMT; this was marked by a 50% decrease in α-SMA and a 60% increase in E-cadherin expression. In addition, a 2.5-fold increase in LC3-II expression was noted. Intriguingly, the pro-apoptotic protein CHOP was induced following treatment with metformin; nonetheless, the expression of the apoptotic markers caspase-12 and caspase-3 was reduced by 80% and 70%, respectively, indicating that metformin protected the cells against albumin-induced apoptosis. Conclusion: Albumin treatment induces ER stress and activates AKT, EMT and apoptosis, with concomitant decreases in autophagy and inactivation of AMPK, in renal tubular cells. Activation of AMPK via metformin treatment suppresses AKT and mTOR activation and prevents EMT and apoptosis, but increases autophagy and ER stress in renal tubular cells. Further studies are required to understand the mechanisms by which metformin differentially modulates ER stress and apoptosis in renal cells under proteinuria. Together, our findings suggest that AMPK activation via metformin could serve as a potential therapeutic strategy to prevent and/or treat the development of CKD in patients with established proteinuria.
-
-
-
Antimicrobial Modification of LDPE Using Non-thermal Plasma
Authors: Salma Habib, Mariam Ali S A Al-Maadeed and Anton Popelka
Low-density polyethylene (LDPE) is a polymer with good chemical and physical characteristics, for which it is widely used in many applications, such as the biomedical and food packaging industries. The polymer offers good transparency, flexibility, low weight and low cost, which make it a suitable material compared to non-polymer packaging materials. However, its hydrophobicity imposes limitations on antimicrobial activity, which can result in the absence of some characteristics required in food packaging applications. For that reason, researchers have carried out experiments to modify the polymer surface to increase its surface free energy (hydrophilicity). This can be done by introducing polar functional groups onto the LDPE surface, which increases its surface free energy and thus its wettability and adhesion without any disruption of its bulk properties [1]. One of the most suitable modification techniques is non-thermal radio-frequency discharge plasma; it is preferred because it modifies only a thin surface layer, leading to a noticeable improvement of the surface properties [2]. Moreover, it is an environmentally friendly technique, since it does not require the use of any hazardous chemicals or dangerous radiation, and therefore non-thermal plasma is highly recommended for food packaging applications [1]. In addition, the surface modification of LDPE can enhance antimicrobial activity, which was the main purpose of this research. Food packaging materials must prevent any growth of bacteria, fungi or other microbial organisms for health and food safety. Some approved preservatives are commonly added directly to foods to protect them from microbial growth and spoilage.
Nowadays, innovative approaches are applied to graft acrylic acid onto polymer surfaces [3] for biomedical applications, creating an effective layer for the immobilization of antibacterial agents and thereby preventing bacterial growth on the LDPE surface. In this research, we focused on grafting sorbic acid, one of the preservatives most commonly used in food and beverages for being safe and effective in inhibiting bacteria (both pathogenic and spoilage strains), molds and yeasts [4]. It is also used in the cosmetics industry, since it has good compatibility with skin and is easy to use [5]. For a potential enhancement of the antimicrobial efficiency, chitosan, an antimicrobial agent, was immobilized on the sorbic acid layer. Chitosan (a derivative of the polysaccharide chitin) was chosen as a naturally occurring antimicrobial agent (obtained from crabs, shrimps and other shellfish [5]) that has strong and effective antimicrobial activity along with nontoxicity, biofunctionality, biodegradability and biocompatibility [6]. In this study, the LDPE surface was modified in several steps. The first step involved treatment of the LDPE surface with non-thermal radio-frequency discharge plasma as a radical graft initiator for the subsequent polymerization of sorbic acid, which contains double bonds. In the next step, grafting of sorbic acid was carried out immediately after plasma treatment, allowing the interaction of plasma-created radicals on the LDPE surface with sorbic acid. The final step was the immobilization of chitosan on the grafted sorbic acid platform. Each modification step was analyzed by different analytical techniques and methods to obtain detailed information about the modification process.
The changes in surface parameters after modification of the LDPE surface, such as surface free energy (contact angle measurements), graft yield (gravimetric measurements), surface morphology (scanning electron microscopy and atomic force microscopy) and chemistry (Fourier transform infrared spectroscopy with attenuated total reflectance), were obtained, allowing an understanding of the modification process.
Acknowledgement
The authors gratefully acknowledge use of the services and facilities of the Center for Advanced Materials (CAM) and Central Laboratory Unit of Qatar University, Qatar.
References
[1] S.K. Pankaj, C. Bueno-Ferrer, N.N. Misra, V. Milosavljević, C.P. O'Donnell, P. Bourke, et al., Applications of cold plasma technology in food packaging, Trends Food Sci. Technol. 35 (2014) 5-17. doi:10.1016/j.tifs.2013.10.009.
[2] T.D. Martins, R.A. Bataglioli, T.B. Taketa, F.D.C. Vasconcellos, M.M. Beppu, Surface modification of polyelectrolyte multilayers by high radio frequency air plasma treatment, Appl. Surf. Sci. 329 (2015) 287-291. doi:10.1016/j.apsusc.2014.12.010.
[3] A. Popelka, I. Novák, M. Lehocký, I. Junkar, M. Mozetič, A. Kleinová, et al., A new route for chitosan immobilization onto polyethylene surface., Carbohydr. Polym. 90 (2012) 1501-8. doi:10.1016/j.carbpol.2012.07.021.
[4] A.K. Roberts, The Effect of Sorbic Acid on the Survival of Staphylococcus aureus on Shredded Cheddar and Mozzarella Cheese, M.S. thesis (advisors: S.S. Sumner, J.E. Marcy), Virginia Polytechnic Institute and State University. http://scholar.lib.vt.edu/theses/available/etd-03102003-151240/unrestricted/ALISONTHESIS.pdf.
[5] F. Devlieghere, A. Vermeulen, J. Debevere, Chitosan: antimicrobial activity, interactions with food components and applicability as a coating on fruit and vegetables, Food Microbiol. 21 (2004) 703-714. doi:10.1016/j.fm.2004.02.008.
[6] M. Aider, Chitosan application for active bio-based films production and potential in the food industry: Review, LWT-Food Sci. Technol. 43 (2010) 837-842. doi:10.1016/j.lwt.2010.01.021.
-
-
-
The Impact of Long-term Medicines Use: Linguistic Validation of the Living with Medicines Questionnaire
By Amani Zidan
Introduction: Polypharmacy (the use of multiple medications at the same time by the same patient) can expose patients to health risks and add an extra burden on their lives, over and above the burden of illness. The Living with Medicines Questionnaire (LMQ) was developed to assess the burden of polypharmacy from the patient's perspective. This tool includes items relating to medication use, expressed as statements with which the respondent indicates his/her agreement on a five-point Likert-type scale. There is a need to make such a measure available, in order to contribute information generated from the Arabic-speaking world and to share research findings through an Arabic version of the LMQ that is culturally equivalent to the original English tool. Objectives: We aimed to translate and culturally adapt the Living with Medicines Questionnaire to the Arabic context through a structured process utilizing best practices in translation and cultural adaptation. Methods: As a means of adhering to best practice, permission to use the LMQ was sought from its developers, and a protocol for its translation and cultural adaptation was developed using the guidelines of the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) for the translation and cultural adaptation of patient-reported outcome measures. Two forward translations were produced, compared and reconciled into a first reconciled version. This version was then back-translated into English and compared with the original tool, leading to a second reconciled version. The emerging Arabic version of the LMQ was cognitively tested among purposively selected individuals to assess linguistic and cultural equivalence and to produce the final Arabic translation. The results were documented and shared with the developers of the LMQ.
Results: A comprehensive protocol, with the potential to inform future similar studies elsewhere, was developed and used as a guide to produce an Arabic version of the LMQ that is representative of the original tool and suitable for the Arabic culture. No major issues were found in the demographics section of the questionnaire or in the instructions for answering the questions. Issues related to the cultural and linguistic equivalence of some terms were resolved by re-wording some items in the tool. A total of seven people were purposively selected to be interviewed in order to assess the Arabic version of the LMQ in areas related to comprehension, time burden and acceptability. Individuals were selected with consideration of balanced gender distribution, age, ethnicity/nationality and education, all with Arabic as their mother tongue. The cognitive debriefing exercise generated comments regarding the original tool's constructs and their Arabic equivalents, which were communicated to the developers of the LMQ for their consideration in further comparative studies. Conclusion: By following methods based on best practice, we have joined the international effort to develop the first questionnaire aiming to measure medication burden in the Arabic-speaking region. We now make available a culturally equivalent Arabic translation of the Living with Medicines Questionnaire for use in Arabic-speaking countries in research and/or clinical practice. However, further validation tests need to be conducted among Arabic-speaking populations.
-
-
-
Anti-proliferative and Anti-metastatic Effect of Aqueous Extract of Origanum Syriacum on Aggressive Human Breast Cancer Cells
Authors: Amal Shahada Alkahlout, Ali Eid, Ipek Goktepe and Alaeldin Saleh
Around the world as well as in Qatar, breast cancer accounts for some of the highest rates of cancer-related mortality. Alarmingly, statistics have shown that the incidence of breast cancer is slightly higher in Qatari women than in other Arab countries in the region. This evidence underlines the importance of focusing research on the understanding and treatment of breast cancer in Qatar. Current treatment options for breast cancer include chemotherapy, surgery and radiotherapy. Chemotherapy and radiotherapy are associated with undesired side effects; thus, many people tend to look for alternative treatments. Herbal treatments have been used as an alternative to traditional cancer therapies in recent years. Several studies have shown that herbs contain bioactive compounds, including flavonoids and steroids among others, which exert anti-oxidative, anti-proliferative and anti-inflammatory properties. One of the most commonly used herbs in the Arabian Gulf region is Origanum syriacum, which is known to have an anti-oxidative effect. Unfortunately, studies of its anti-carcinogenic effect are extremely limited. Therefore, this study was carried out to determine the effect of O. syriacum aqueous extract (OSE) on an aggressive breast cancer cell line (MDA-MB-231). The O. syriacum extract was prepared by dissolving the ground dried leaves in water and then drying the solution using a rotary evaporator. The viability of MDA-MB-231 cells in the presence or absence of increasing concentrations of OSE was examined by MTT assay. Flow cytometry was used to assess cell cycle progression in the presence of OSE. Moreover, the migratory capacity of MDA-MB-231 cells was determined by Boyden chamber and scratch assays. The invasiveness of MDA-MB-231 cells in the absence and presence of OSE was investigated using Matrigel-coated wells.
Furthermore, the adherence of MDA-MB-231 cells to fibronectin was tested with and without OSE. The oxidative stress induced by different concentrations of OSE in MDA-MB-231 cells was determined using the ROS-Glo assay. Finally, western blot analysis was performed to assess metastatic ability (occludin expression) as well as an autophagy marker (LC3A/B expression). The results indicated that OSE decreased the proliferation of MDA-MB-231 cells in a time- and concentration-dependent manner. The highest anti-proliferative effects of OSE were observed at concentrations of 0.8 mg/ml and above after 24, 48 and 72 hrs of exposure. Furthermore, OSE arrested cells in the G1 phase of the cell cycle. The migratory capacity of MDA-MB-231 cells also declined in the presence of OSE at a concentration of 1.2 mg/ml. Moreover, OSE inhibited the adhesive properties as well as the invasiveness of this aggressive breast cancer cell line. Supporting the above results, an increase in occludin expression was observed in cells treated with OSE, indicating that O. syriacum extract has anti-metastatic capacity. Additionally, the production of ROS as well as the expression of LC3A/B proteins increased in MDA-MB-231 cells treated with OSE at a concentration of 1.2 mg/ml. Our results demonstrate that Origanum syriacum may have potential as a supplemental therapy for patients suffering from malignant breast cancer. Further insight into the underlying molecular mechanisms and the safety of OSE, using in vivo studies, should be sought to fully understand its activity at the molecular level and to determine its safe use in the treatment/prevention of breast cancer.
-
-
-
Antibiotic Susceptibility and Plasmid Profile of Vibrio Vulnificus Isolated from Mussels in Qatar
Authors: Mohammed Aldulaimi, Sahila Abd.Mutalib, Màaruf Abd.Ghani and Noura Alhashmi
Vibrio vulnificus infections are a worldwide public health problem, associated with illnesses resulting from the consumption of raw or partially cooked seafood and from exposure to contaminated sea water. Infections with V. vulnificus have been reported in many countries in America, Europe and Asia, notably the USA, South Korea, Taiwan, Malaysia and Saudi Arabia. The aim of this study was to isolate and identify V. vulnificus from mussels in Qatar and to determine antibiotic susceptibility and plasmid profiles. A total of 87 mussels, 50 from Doha and 37 from Alkhor, were examined for the presence of V. vulnificus using thiosulfate-citrate-bile salts-sucrose agar (TCBS) and CHROMagar Vibrio (CV); 18% of Doha samples and 13.5% of Alkhor samples were positive, yielding 9 of 14 isolates from Doha and 5 of 14 isolates from Alkhor. Antibiotic susceptibility tests were performed for 12 antibiotics by the disc diffusion method. For molecular identification of the isolates, the 16S ribosomal RNA gene fragment was amplified by polymerase chain reaction (PCR) and the nucleotide data were analyzed with the Basic Local Alignment Search Tool (BLAST). Sequence comparison with public databases showed 96-100% similarity with V. vulnificus in 15 isolates (60%) and 99-100% similarity with V. parahaemolyticus in 5 (20%); the remaining 5 (20%) were identified as non-Vibrio species. The analysis of evolutionary relationships among the isolates in this study yielded seven cluster groups. Cluster A comprised the non-Vibrio isolates, whereas clusters B, C, E, F and G included V. vulnificus.
V. parahaemolyticus was represented only in cluster D. The 16S rDNA-based identification confirmed the conventional identification of the isolates. Morphological and biochemical tests detected V. vulnificus in 32% of the mussel samples, all of which showed resistance to two to eight antibiotics. Plasmids were found in 88% of the isolates, with 10 different profiles carrying two to four plasmids. Based on 16S rRNA gene sequence homology, 15 isolates were identified as V. vulnificus, five as other Vibrio species and five as non-Vibrio species. We observed a marked difference between morphological and molecular methods in the identification of V. vulnificus, indicating the inadequacy of the morphological technique in discriminating Vibrio species. The occurrence of V. vulnificus in the mussel samples is quite high, so consumption of uncooked and semi-cooked mussels should be avoided in order to prevent food-borne infection by this pathogenic bacterium. Regarding antibiotic susceptibility, bacterial isolates showing resistance to more than six antibiotics contained more than three plasmids; plasmids are known to carry antibiotic resistance genes. However, V. vulnificus isolates in which no plasmids could be detected were also resistant to some antibiotics, which suggests that antibiotic resistance genes may also be carried on the bacterial chromosome.
-
-
-
Predicting Weight Loss in Online Social Media
Authors: Tiago Oliveira Cunha, Ingmar Weber and Hamed Haddadi
Obesity is a major public health problem that adversely impacts mortality and quality of life and is associated with a significantly increased risk of more than 20 chronic diseases and health conditions [Thiese et al., 2015]. According to the World Health Organization, the prevalence of obesity has nearly doubled within the last 30 years, leading to an estimated 402 million obese people worldwide. The etiology of obesity is complex and encompasses a wide range of genetic, physiological, behavioral, cultural, social and environmental factors. Before the appearance of online social media, factors associated with obesity could only be measured in the real world. However, with social environments moving online, the escalating number of interactions in online communities has created a great opportunity to study the huge amount of user-generated content covering topics related to obesity. These topics include people's experiences, recommendations and feedback about certain medications, medical procedures, diets or exercises, and emotional support in the form of encouragement, sympathy and success stories. Analyzing these topics can give health practitioners insight into community dynamics, such as the effects of online social support on community members and the profile of influential members, as well as provide important information for designing effective online health intervention strategies [Bennett and Glasgow, 2009]. Online communities can be used to understand and promote health behavior as well as to disseminate health innovations, but little is still known about how these communities can help improve health outcomes such as weight loss.
Advantages of online communities include access to many peers with the same health concerns, convenient communication spanning geographic distances, and anonymity (if desired) for the discussion of sensitive issues [Hwang et al., 2010]. In this study, we are interested in answering the research question of whether online user-generated content can be used to predict success or failure in weight loss and weight maintenance. Concretely, this work investigates whether there is a relation between users' online behavior and the likelihood of them losing weight in an online Reddit weight loss community, namely "loseit". The data collected include posts, comments and other metadata (i.e., timestamp, user name, number of upvotes) from August 2010 to November 2014.
In total, we obtained 70,949 posts and 922,245 comments, generated by 107,886 unique users. The community encourages users to post their weight and progress over time, sharing experiences. Our aim is to show that social media can be exploited to help health practitioners understand obesity dynamics, deliver more personalized treatments and improve patient-centered care. In this direction, our findings can aid health practitioners in designing early warning systems or effective online health intervention strategies that can be incorporated into social media platforms and lead to more effective treatment. Such systems may provide great benefit to patients, for example by integrating recommendation systems that help users make important decisions, such as choosing the right type of diet or exercise for their obesity condition. We believe that exploring novel approaches to understand and address obesity is crucial to realizing Qatar's National Vision 2030 of a healthy population.
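One simple way to frame this prediction task is as binary classification over per-user activity features. The sketch below trains a logistic classifier by gradient descent on synthetic data; the feature names (posting rate, comments received, weeks active), coefficients and labels are all illustrative assumptions, not the study's actual features or results:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users = 500

# Hypothetical per-user engagement features (NOT the study's real data)
posts_per_week = rng.gamma(2.0, 1.5, n_users)
comments_received = rng.poisson(8, n_users).astype(float)
weeks_active = rng.integers(1, 52, n_users).astype(float)

X = np.column_stack([posts_per_week, comments_received, weeks_active])
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize features
X = np.hstack([np.ones((n_users, 1)), X])      # intercept column

# Synthetic labels under the assumption that engagement predicts success
true_w = np.array([-0.5, 1.0, 0.8, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_w)))

# Logistic regression fitted by batch gradient descent on the log-loss
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - y)) / n_users       # average log-loss gradient

accuracy = float(np.mean((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == y))
print(accuracy)
```

In practice, a study like this would evaluate on held-out users and draw features from the real post/comment metadata; the sketch only illustrates the modeling pattern.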
-
-
-
Inhibition of the Akt Kinase Down-regulates ERK, Bcl-2 and Survivin and Suppresses Proliferation and Survival of Murine VEGF-dependent Angiosarcoma Cells
Introduction: Angiosarcomas are rare, malignant neoplasms that involve abnormal proliferation and migration of cancerous endothelial cells. Angiosarcomas can arise in any region of the body, but tend to be found in the skin, soft tissue and liver. They are associated with a high mortality rate due to their aggressiveness and high rate of metastasis. Therapeutic strategies for treating angiosarcomas involve the use of cytotoxic drugs and radiotherapy; however, patient resistance to these approaches is commonly reported. Hence, there is a need to further understand the molecular mechanisms underlying angiosarcomas and to explore novel therapeutic targets. Furthermore, because abnormal angiogenesis underlies nearly all types of cancer, a better understanding of the cellular pathways regulating cancer endothelial cell function may also lead to the development of novel anti-angiogenic therapies. The PI3K/AKT/mTOR signaling pathway plays an important role in regulating cell proliferation and is regarded as one of the most commonly dysregulated pathways in cancer. Inhibitors targeting these signaling proteins have been developed and their therapeutic potential has been evaluated in different studies. However, little is known about the relevance of this signaling pathway to angiosarcomas. Therefore, in this study, we explored the anticancer therapeutic potential of targeting the PI3K/AKT/mTOR signaling pathway using inhibitors of PI3K, AKT or mTOR in a murine VEGF-dependent angiosarcoma cell line. Methods: Cell culture: MS1-VEGF cells (mouse endothelial cells capable of inducing angiosarcomas) were used. Cells were maintained in Dulbecco's Modified Eagle Medium supplemented with 5% fetal bovine serum, 1% penicillin/streptomycin and 11 mM glucose. Concentration-response experiments: The inhibitors used were: for PI3K, LY294002 (10 μM); for Akt, the Akt-i 1/2 inhibitor (Akt-i 1/2, 10 μM); and for mTOR, Temsirolimus (4 μM).
LY294002 is a selective inhibitor of PI3K, but is not yet in clinical use. Akt-i 1/2 is a non-ATP-competitive inhibitor of Akt isoforms 1 and 2. Temsirolimus is a specific inhibitor of mTOR and has been used clinically for the treatment of advanced renal cell carcinoma. The treatments were conducted over a 48 h period, with DMSO serving as the solvent control. Experiments were repeated at least three times using cells cultured in 12- or 6-well sterile tissue culture plates (Falcon). Cells were incubated in a 5% CO2 incubator at 37 °C. At the end of the treatment period, cell counts were performed and trypan blue dye exclusion was used to assay cell viability. This was done by pipetting a small volume of cell suspension mixed with trypan blue onto a dual-chamber counting slide (Bio-Rad TC20 Automated Cell Counter). Proliferation assay: To further assess the effectiveness of the inhibitors, proliferation assays were performed. Cells plated in 96-well plates with the inhibitors for 48 h were treated with the CellTiter 96 Aqueous One Solution reagent (Promega) and incubated in the CO2 incubator at 37 °C for 3 h. A PerkinElmer 2104 plate reader (EnVision software) was used to measure absorbance at 492 nm. Western blots: Western blots were used to detect the expression of several proteins linked to the PI3K/Akt/mTOR pathway and to study changes resulting from treatment with the inhibitors. Primary antibodies recognizing mTOR, AKT, PI3K, eNOS, LC3B, Beclin-1, ERK, survivin and Bcl-2 were used. All proteins probed have known roles in cell proliferation, apoptosis or autophagy. Images were captured using GeneSnap software on a PerkinElmer Geliance 600 imaging system. Quantity One (Bio-Rad) was used to analyze the western blot data. Flow cytometry: Cell cycle analysis was performed on an LSR Fortessa analyzer (BD Biosciences) by staining fixed and permeabilized cells with propidium iodide. Data were processed with FACSDiva 8.0 software (BD Biosciences).
Data analysis: All data were analyzed using the statistical software GraphPad Prism 5.0 (GraphPad Software, Inc., CA, USA). Data are presented as mean ± SEM. Statistical analysis was performed using Student's t-test or one-way analysis of variance (ANOVA). Post-hoc comparisons between groups after ANOVA were performed with Tukey's multiple comparison test. p values less than 0.05 were considered statistically significant. Results: Incubation of MS1 VEGF cells with LY294002, Akt-i 1/2 or Temsirolimus caused a reduction in cell number, indicating reduced cell proliferation. Akt-i 1/2 was the most effective of the three and, unlike LY294002 and Temsirolimus, also caused a strong reduction in cell viability (<25% viability). Proliferation assays performed in 96-well plates indicated a reduction in MS1 VEGF cell proliferation with all three inhibitors, and Akt-i 1/2 was again the most effective in these assays. Cell cycle analysis revealed a robust increase in the sub-G0/G1 population after treatment with Akt-i 1/2 (control - 8% vs. Akt-i 1/2 - 43%), suggesting an increase in cell death. To investigate the mechanisms underlying the actions of the inhibitors, western blot experiments were performed. The data (n = 4) demonstrated down-regulation of the anti-apoptotic Bcl family protein Bcl-2 and the inhibitor of apoptosis protein survivin after treatment with Akt-i 1/2 (but not LY294002 or Temsirolimus). Treatment with Akt-i 1/2 also reduced the phosphorylation of Akt and ERK proteins. Furthermore, there was a strong increase in the expression of the autophagy marker LC3B-II after treatment with Akt-i 1/2. Immunostaining experiments confirmed aggregation of LC3B-II after Akt-i 1/2 treatment, suggesting an induction of autophagy. Inhibition of autophagy by 3-methyladenine (3-MA) reversed Akt-i 1/2-induced LC3B-II puncta formation and also significantly enhanced Akt-i 1/2-induced cell toxicity.
This suggests that autophagy induction acts as a cell survival mechanism after Akt inhibition. To investigate whether Bcl-2 and survivin could be downstream effectors of Akt-i 1/2, experiments were performed using specific inhibitors: YM155 for survivin and TW37 for Bcl-2. A combination of YM155 and TW37 induced robust changes in the cell cycle, increased the sub-G0/G1 population (control - 9% vs. TW37 + YM155 - 38%) and reduced the proportion of cells in the G0/G1, S and G2/M phases. Discussion: The data revealed promising anti-proliferative actions for LY294002, Temsirolimus and Akt-i 1/2 in MS1 VEGF cells. Akt-i 1/2 was particularly effective and also substantially reduced cell viability. The data also suggest that Bcl-2 and survivin may be critical components of the anti-proliferative action of Akt-i 1/2, making it a highly effective agent. The data also revealed that the cells induced autophagy as a survival mechanism when Akt was inhibited. In conclusion, multiple signaling pathways and proteins regulate MS1 VEGF cell proliferation and survival, and these are targeted by Akt-i 1/2. Future studies: Future investigations will focus primarily on direct evaluation of apoptosis by flow cytometry and, if possible, experiments in nude mice to evaluate in vivo drug efficacy.
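The group comparison used in the data analysis above (one-way ANOVA) can be sketched with standard-library code. The cell-count values and the critical value F(2, 9) ≈ 4.26 at α = 0.05 are illustrative assumptions, not the study's actual data or analysis:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square."""
    grand = mean(v for g in groups for v in g)
    k = len(groups)
    n = sum(len(g) for g in groups)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical cell counts: control vs. two inhibitor treatments
control = [100, 98, 102, 101]
ly294002 = [80, 82, 79, 81]
akti = [40, 42, 39, 41]
f_stat = one_way_anova_f([control, ly294002, akti])
# f_stat far exceeds 4.26 (critical F for df = 2, 9 at alpha = 0.05),
# so the group means differ; Tukey's test would then locate which pairs differ
```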
-
-
-
Evaluation of a Cumulative Performance-based Assessment for Pharmacy Students in Qatar
Authors: Ahmed Sobh, Kyle John Wilby, Mohamed Izham, Mohammad Diab and Zubin Austin
Background: Objective structured clinical examinations (OSCEs) are considered the most psychometrically robust form of clinical skills assessment in the health professions. In 2014, the College of Pharmacy at Qatar University (CoP-QU) piloted the first cumulative OSCE for graduating students in collaboration with the Supreme Council of Health and the University of Toronto. Since then, interest has grown in measuring the psychometric properties of this examination to ensure adequate reliability, validity and defensibility. Objectives: This study aimed to evaluate the psychometric properties of the OSCE conducted in 2015 at the CoP-QU. A secondary objective was to identify quality improvement opportunities for the design, implementation, and evaluation of the OSCE. Methods: The psychometric analysis occurred as follows: We calculated cut scores and pass rates for the 10 stations used in the OSCE assessment using the borderline regression method. The predictive validity of undergraduate course grades for OSCE grades was assessed using correlation and regression statistics. Concurrent validity with similar cumulative exams was evaluated using Pearson correlation. Risk of bias was calculated using Spearman correlation between assessors' analytical scoring (a checklist of required tasks to be performed in a station) and global scoring (the score for the whole performance, including communication skills, on a scale from 1 to 5). Content validity was assessed quantitatively using 18 student feedback forms and qualitatively through focus groups with OSCE participants and contributors (a total of 5 assessors, 3 students, 3 administrators and 3 standardized patients). Interrater reliability was assessed using intra-class correlation coefficients (ICCs). Construct validity was evaluated by comparing interrater reliability between the first and second OSCE cycles.
Cronbach's alpha was used to determine the internal consistency of students' performance across all stations in terms of global and total scores. Correlation statistics were conducted at an α level of 0.05. Results: With 50% of each station's mark allocated to the global score and 50% to the analytical score, and based on the cut scores calculated for every station, the average pass rate for analytical checklist grades across all stations was 70.4%, while the average pass rate for total scores was 79.2%. Four courses simulating the professional skills of the OSCE, two adapted undergraduate formative OSCEs, and a Medicinal Chemistry course (the control) correlated with the OSCE grades as follows: 0.72 (P < 0.01), 0.47 (P < 0.05), 0.43 (P > 0.05), 0.65 (P < 0.01), 0.78 (P < 0.01), 0.61 (P < 0.01), and 0.36 (P > 0.05), respectively. OSCE grades could be moderately predicted by Professional Skills course grades (52.3%) and their practical assessment (61.2%). The average correlation between analytical and global grades for all assessors was 0.52. A total of 90% of the stations were deemed to reflect practice, according to student perceptions. The average ICCs of analytical checklist scores, global scores, and total scores were 0.88 (0.71-0.95), 0.61 (0.19-0.82), and 0.75 (0.45-0.88), respectively. Cronbach's alpha of students' performance in global scores across stations was 0.87, and 0.93 in terms of total scores. Conclusion: The cumulative OSCE conducted in 2015 showed acceptable validity and reliability as a high-stakes examination and is therefore suitable for implementation as a mandatory core curriculum component for student pharmacist assessment in Qatar.
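The internal-consistency measure used above, Cronbach's alpha, can be computed directly from a students × stations score matrix. A minimal standard-library sketch with hypothetical scores (not the actual examination data):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(scores):
    """scores: one row per student, one column per OSCE station."""
    k = len(scores[0])  # number of stations
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# hypothetical global scores of five students across three stations
scores = [[6, 7, 6], [8, 9, 8], [5, 5, 6], [9, 9, 9], [7, 8, 7]]
alpha = cronbach_alpha(scores)  # ≈ 0.97, high internal consistency
```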
-
-
-
Metal Organic Framework as a Potential Drug Carrier for Pulmonary Arterial Hypertension
Pulmonary arterial hypertension (PAH) is a progressive, debilitating and fatal condition [1]. Current PAH therapy relies on vasodilator drugs, which are seriously limited by their systemic side effects. We suggest that advances made in the field of nanomedicine could be used to improve the utility of drugs to treat PAH. Whilst not currently used clinically, metal organic frameworks (MOFs) such as the Material from Institute Lavoisier (MIL) class are good candidates since (i) they can be tailored to accommodate different types of drugs, including those with the molecular weights of PAH medications (MW 300-500) [2]; (ii) they are biocompatible and biodegradable [3]; (iii) they have a large internal surface area and low density, with a commensurately high drug loading capacity; (iv) they are thermally and mechanically stable; and (v) they have a long drug release period, with the ability to incorporate different functional groups [2, 4-6]. However, the idea that nanomedicines can be used to treat PAH is very new and the use of MOFs in this regard is untested. Thus, we must first: (i) validate their chemical structure/stability, (ii) establish MOF cytotoxicity and effects on inflammatory responses in relevant cell types and (iii) investigate their behaviour in an in vivo model. In PAH, endothelial cells are critical cells to target. This is because the role of endothelial cells in releasing a delicate balance of vasoactive hormones is disrupted in PAH, where cardioprotective mediators such as nitric oxide and prostacyclin are reduced whilst release of the constrictor peptide endothelin (ET)-1 is increased. Indeed, current PAH drugs work to boost the nitric oxide and prostacyclin pathways and to block ET-1 receptor signalling. Aim: The aims of this work are to (i) synthesise and characterise MOFs designed to accommodate PAH drugs; and (ii) investigate the cytotoxic effects of MOFs in a comprehensive range of cell models relevant to PAH in vitro, and the toxicity and distribution of MOFs in vivo.
Methods: The nanoporous iron MOF (MIL-89) and a polyethylene glycol formulation (MIL-89 PEG) were prepared as previously described [2], then characterized using infrared spectroscopy (IR), powder X-ray diffraction (PXRD), thermogravimetric analysis (TGA) and scanning electron microscopy (SEM). Endothelial cells grown from human blood progenitors of control subjects and PAH patients were cultured as we have done previously [7], and the effect of MOFs on viability was determined using the AlamarBlue® assay. In addition, the effect of MOFs on endothelial cell inflammatory function was determined by measuring release of the cytokine CXCL8, and on markers of PAH disease by measuring ET-1, using specific ELISAs. To investigate the effects of MOFs in vivo, rats were injected with the MOF MIL-89 (50 mg/kg) in glucose solution twice a week for various times up to two weeks, while the control group was injected with glucose solution only. Animals were killed and tissues including blood, heart, lung, brain, thymus, liver and kidney, as well as urine and faeces, were collected at days 1, 3, 7, 10 and 14. Results: MIL-89 and MIL-89 PEG retained their functional groups, and were crystalline, spherical and stable in air up to 200 °C. Neither preparation caused toxicity in cells grown from control donors or patients with PAH at concentrations up to 10 μg/ml (Fig. 1). Interestingly, both preparations of MOFs displayed anti-inflammatory effects, inhibiting CXCL8 and ET-1 release from endothelial cells from healthy donors as well as from PAH patients (Fig. 1). MIL-89 had no effect on the body weight of the rats and did not cause any gross changes in their lungs (Fig. 2). Conclusion: Both MIL-89 and MIL-89 PEG represent non-toxic potential drug carriers with the predicted molecular capacity for the current PAH medications, which include treprostinil sodium, bosentan and sildenafil.
Furthermore, they both display some evidence of anti-inflammatory properties in vitro that may be of therapeutic benefit in the treatment of PAH. MIL-89 had no overt toxic effects in vivo, although these will need to be explored in more detail in future studies.
References
1. Archer, S.L., E.K. Weir, and M.R. Wilkins, Basic science of pulmonary arterial hypertension for clinicians: new concepts and experimental therapies. Circulation, 2010. 121(18): p. 2045-2066.
2. Horcajada, P., et al., Porous metal-organic-framework nanoscale carriers as a potential platform for drug delivery and imaging. Nat Mater, 2010. 9(2): p. 172-178.
3. Huxford, R.C., J. Della Rocca, and W. Lin, Metal-organic frameworks as potential drug carriers. Curr Opin Chem Biol, 2010. 14(2): p. 262-268.
4. Keskin, S. and Kızılel, S., Biomedical Applications of Metal Organic Frameworks, 2010, Department of Chemical and Biological Engineering, Koç University, Rumelifeneri Yolu, 34450, Sarıyer, Istanbul, Turkey.
5. Ferey, G., et al., A chromium terephthalate-based solid with unusually large pore volumes and surface area. Science, 2005. 309(5743): p. 2040-2042.
6. Horcajada, P., et al., Metal-organic frameworks as efficient materials for drug delivery. Angew Chem Int Ed Engl, 2006. 45(36): p. 5974-5978.
7. Reed, D.M., et al., Morphology and vasoactive hormone profiles from endothelial cells derived from stem cells of different sources. Biochem Biophys Res Commun, 2014. 455(3-4): p. 172-177.
-
-
-
Identification and Structural Analysis of Natural Product Inhibitors of Human Alpha-Amylase to Target Diabetes Mellitus
Introduction: Noninsulin-dependent diabetes mellitus (NIDDM), or Type 2 diabetes, is a major health challenge worldwide. In 2014, the World Health Organization (WHO) revealed that adults with fasting glucose ≥ 7.0 mmol/L account for 9% of the adult population globally. Moreover, the highest percentage was in the Eastern Mediterranean Region, at 26.8 ± 0.4%. Type 2 diabetes is a complex metabolic disorder associated with high levels of glucose in the blood (hyperglycemia), which leads to long-term pathogenic conditions such as neuropathy, retinopathy and nephropathy, and a consequent decrease in quality of life and increased mortality rate.
Starch is the main source of energy for most living organisms; in humans it is digested over several stages that involve different amylolytic enzymes, such as α-Amylase and α-Glucosidase. Alpha-Amylase is the major secretory product (about 5-6%) of the pancreas and salivary glands, playing a core role in starch and glycogen digestion. The control of postprandial hyperglycemia is an important strategy in the management of Type 2 diabetes; lifestyle modification and/or the use of medications such as insulin and α-Glucosidase inhibitors are the available treatments to date. Acarbose is a prominently used α-Glucosidase inhibitor for diabetes and obesity control; however, it has many side effects and limitations. Numerous in vivo studies (REFS) have shown that many plant extracts inhibit the key enzymes of digestion (α-Amylase and α-Glucosidase), and the use of naturally occurring inhibitors is potentially among the most effective and safest approaches to treating diabetes. Aims: Short-term aim: To clone, express and purify human α-Amylase protein using different yeast expression systems, followed by protein (co)crystallization and structural analysis.
Long-term aim: Screening natural products (plant extracts) based on their traditional use, followed by co-crystallisation of selected inhibitors. This will be complemented by inhibitors designed in silico based on the ANCHOR.QUERY approach (REF). Finally, identified compounds will be characterised by biophysical and kinetic studies. Methodology: Human α-Amylase has been cloned into two vectors, pHIPZ-4 and pPIC9k, each with its own set of primers, restriction enzymes and dedicated expression host (pHIPZ-4: Hansenula polymorpha; pPIC9k: Pichia pastoris). After transformation, the resulting colonies were tested for the presence of the target gene by colony PCR and digestion with the cloning enzymes. Positive colonies were re-inoculated in growth media and the recombinant plasmid was recovered. The plasmids were then transformed into competent yeast cells by electroporation. For H. polymorpha, cells were grown in minimal media with glucose (MM/G) for two days, followed by induction of expression with 0.5% (v/v) MeOH. Protein was purified by lysing the cells and passing the lysate through Ni-NTA beads. Finally, protein was identified by Western blot using a HisProbe-HRP antibody. Results: The human α-Amylase gene was successfully cloned into both vectors, pHIPZ-4 and pPIC9k, as judged by colony PCR and restriction digestion on agarose gels. However, expression of the α-Amylase protein in the H. polymorpha system was insufficient to support the downstream work. Analysis of secreted expression using Pichia pastoris is currently underway and the results will be reported at the meeting. Conclusion: Human α-Amylase was successfully cloned into both vectors and transformed into competent E. coli cells. The positive colonies were confirmed for the presence of the target gene and then transformed into yeast cells.
-
-
-
Elucidating T7-like Bacteriophage Isolated from Qatar's Sand
By Aya Abdelaal
A bacteriophage is a virus that infects bacteria; it uses the bacterium as a host in which to replicate by taking control of the cell's replication and protein synthesis machinery. Bacteriophages are composed of a protein capsid and a DNA or RNA genome. Bacteriophages are used in the treatment of bacterial infections, known as phage therapy. This works because phages invading the bacteria undergo a lytic cycle, in which the replication and protein synthesis machinery is used to produce virions that later cause the cell to lyse, thus killing the bacteria. This technique has higher efficiency compared to the use of antibiotics, as bacteria can develop resistance against antibiotics while remaining susceptible to infection by the phage. In addition, phages can evolve and adapt to new mutations that might arise in the bacteria. Lastly, phages might be engineered to contain a survival gene needed by the bacteria, to ensure that the bacteria will replicate and synthesize the new virions, leading to lysis and death.
This research explores the potential use of bacteriophages in the treatment of water, to rid the water of microbial contaminants, including bacteria that were used to detoxify the water. Phages previously extracted from Qatar's sand were used as the infecting agents. The model host chosen was Arthrobacter, a denitrifying bacterium similar to the bacteria used in water treatment. Denitrifying bacteria reduce nitrates to nitrogen-containing gases, allowing nitrogen to be recycled back to the atmosphere.
The goal of my research is to analyze the genome of a phage extracted from Qatar's sand. A culture of Arthrobacter grown in Smeg media was used as the host. Although the normal optimal growth temperature of Arthrobacter is 32 °C, a higher phage titer was produced when the culture was grown to saturation in Smeg media at 37 °C. In addition, the culture was initially grown in Luria broth, which is rich in nutrients including tryptone, yeast extract and NaCl. However, this produced a lower phage titer than Smeg media, which contains Middlebrook 7H9 broth base with supplements of albumin, dextrose and salts.
After manipulating the growth conditions of the host culture to obtain the highest titer, the optimum conditions were found to be a saturated culture of Arthrobacter grown in Smeg media at 37 °C. Next, the lysis conditions of the phage were optimized by varying several factors. First, the types of top agar and plates used for pour plating were varied. Several top agars were tested, including LB top agar, LB top agar with 1 mM CaCl2 and PYCA top agar. The top agar that resulted in the highest titer was LB top agar with 1 mM CaCl2. Several plates were used for pour plating, including LB plates and PYCA plates. LB plates produced unclear plaques; this unclearness is an indication of persistence of the lysogenic state rather than the lytic state. In the lysogenic state, the phage genome is incorporated into the host genome and replicates along with the cell cycle, remaining dormant. In the lytic cycle, by contrast, the phage uses the replication and protein synthesis machinery to produce more phages, which later leads to lysis. PYCA plates containing CaCl2 formed clearer plaques, indicating dominance of the lytic state. Therefore, the addition of calcium to the top agar and plates aided the phage's shift from the lysogenic cycle to the lytic cycle. This correlates with previous work, where calcium was shown to be essential for penetration of the phage's genome into the host.
Finally, the incubation temperature of the plates containing Arthrobacter infected by the Qatar sand phage was varied. Initially, the plates were incubated at 32 °C; however, no plaques formed. When the temperature was increased to 37 °C, lysis was observed and a high phage titer was obtained.
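The phage titers referred to above are calculated from the plaque count, the dilution plated and the volume plated. A minimal sketch of that calculation (all numbers here are hypothetical, not results from this work):

```python
def pfu_per_ml(plaque_count, dilution, volume_ml):
    """Titer of the undiluted lysate in plaque-forming units per mL."""
    return plaque_count / (dilution * volume_ml)

# e.g. 42 plaques from plating 0.1 mL of a 1e-6 dilution
titer = pfu_per_ml(42, 1e-6, 0.1)  # 4.2e8 PFU/mL
```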
After obtaining a high-titer lysate, the DNA was isolated and a sample of the phage was sent for electron micrograph imaging. A restriction enzyme digest was performed on the isolated DNA and the resulting profile was compared to that of lambda phage. Both phages are lytic and lyse at similar temperatures, which allows for comparison.
Lastly, motifs shared between the Qatar sand bacteriophage and bacteriophage T7 were identified using BLAST tools on a previously extracted and sequenced Qatar sand phage; primers will then be designed to verify whether these motifs exist in the actual phage.
Future work will focus on analyzing the sequence of the phage to identify potential lysis genes. In addition, the lytic ability of the phage will be strengthened by cloning the lysozyme gene into a T7-based vector system and testing its adaptability to temperature using Arthrobacter as the host system.
-
-
-
A Digitally Controlled Pseudo-Hysteretic Buck Converter for Low Power Biomedical Implants
Authors: Paul Jung-Ho Lee, Amine Bermak and Man Kay Law
Background: Low power biomedical implants usually harvest energy from a small inductor coil or optical energy sources. These sources can supply only a very limited amount of energy to the target system because of poor power transfer efficiency and size limitations. Thus, we have to deliver as much energy as possible directly to the load, without wasting energy in auxiliary driving circuitry. To save the energy dissipated as heat when the power supply voltage is excessively large compared to the voltage at the load, we can choose a class-H amplifier-like strategy, where the supply voltage tracks the voltage waveform at the load. Among the many power conversion topologies that can modulate the supply voltage, the SMPS (Switching Mode Power Supply) is the most promising, because reverse energy recovery can be exploited by taking back the charge accumulated on the load capacitor. The CCM buck converter, shown in Fig. 1, can in principle work as a voltage-tracking power supply modulator. However, it requires complicated auxiliary circuit components, such as a Type-III compensator, which greatly hampers its use in biomedical implants because of the several external passive components involved. Proposed power converter: We therefore propose a digitally controlled hysteretic buck converter composed of three parts: power conversion, digital control, and pulse generation. Its controller can be implemented without bulky external passive components, yet can quickly adapt to fast transients with a simple digital controller that incorporates just one comparator.
Figure 2 shows the power conversion part of the proposed buck converter. It is composed of a power PMOS (W/L = 2 mm/0.5 μm), an NMOS (W/L = 1 mm/0.5 μm), an active diode amplifier driving the NMOS, a 1 μH inductor, and a 1 μF capacitor. The power supply is 3.3 V. It adopts a typical buck converter configuration, with a target switching frequency of 10 MHz. The NMOS active diode circuit is employed to minimize the conduction loss across the NMOS body diode when the energy stored in the inductor is released. The first stage of the active diode is a common-gate differential amplifier, whose positive terminal is connected to GND and whose negative input terminal is connected to the switching side of the inductor. The second stage of the active diode is a common-source amplifier stage, which serves to boost the gain and increase the slew rate. Because the circuit uses negative feedback, its stability should be carefully checked. The simulated gain of the active diode was around 60 dB, with a 3 dB bandwidth of about 100 kHz.
In biomedical implant applications, a fast transient response is important because the required power supply voltage can change abruptly, e.g. when an electrical stimulator changes phase from an anodic pulse to a cathodic pulse. Thus, we propose a digital controller that supports such a fast transient response, able to make a voltage excursion of 1 V in less than 1 μs. Figure 3 shows the proposed pseudo-hysteretic controller, which drives the power PMOS of the proposed power converter. It receives the reference voltage and the current output voltage as inputs and compares them. It asserts '1' to the digital pulse generator (fsm_pulse_gen) when the reference voltage is higher than the output voltage. The digital pulse generator increases the duty ratio when the comparator output is '1', and decreases it when the comparator output is '0'. The key to the control mechanism is binary-weighted duty control. In this scheme, the duty cycle can initially jump by a predetermined maximum, 16; then, as the SMPS output approaches the initial target voltage, the increment is halved to 8, then to 4, and so on. The simulation result of the proposed power converter with the pseudo-hysteretic controller tracking the reference voltage is shown in Fig. 4. Initially the converter operates in CCM, while the output voltage rapidly catches up with the reference voltage; this is achieved by the digital controller cranking the duty cycle up to its maximum in a short period. Once the output supply voltage stabilizes after crossing the reference voltage line, the converter changes its operating mode from CCM to DCM. Conclusion: We introduce a digitally controlled hysteretic buck converter with an active diode, intended for biomedical implant applications, featuring low power consumption and fast transient response.
To achieve these features, the associated digital controller employs the binary-weighted duty update scheme, with only the overhead of a simple input comparator. The NMOS active diode further reduces wasted energy by removing the loss that would otherwise be incurred by body diode conduction.
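The binary-weighted duty update described above can be sketched in software. The 8-bit duty register, the first-order output model and all numeric values below are illustrative assumptions for the sketch, not parameters of the actual design:

```python
VDD = 3.3       # supply voltage (V)
DUTY_MAX = 255  # 8-bit duty register assumed for illustration

class PseudoHystereticController:
    """Comparator-driven duty update: the step starts at a maximum and
    is halved each time the comparator output flips, down to 1 LSB."""
    def __init__(self, max_step=16):
        self.duty = 0
        self.step = max_step
        self.last_comp = None

    def update(self, vref, vout):
        comp = vref > vout                      # single comparator decision
        if self.last_comp is not None and comp != self.last_comp:
            self.step = max(1, self.step // 2)  # halve increment on crossing
        self.duty += self.step if comp else -self.step
        self.duty = min(DUTY_MAX, max(0, self.duty))
        self.last_comp = comp
        return self.duty

# Crude first-order model of the buck output stage (illustrative only):
# the output voltage lags behind the duty-commanded level each cycle.
ctrl = PseudoHystereticController()
vout = 0.0
for _ in range(60):
    duty = ctrl.update(1.8, vout)
    vout += 0.5 * (duty * VDD / (DUTY_MAX + 1) - vout)
```

After the initial maximum-step ramp, the comparator flips repeatedly, the step shrinks to 1 LSB, and the output settles into a small limit cycle around the 1.8 V reference.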
-
-
-
Diabetes Awareness Among High School Students in Qatar
By Sara Amani
Diabetes is a disease that occurs when there is an abundance of glucose in the bloodstream and the pancreas cannot produce enough insulin to transfer the sugar from the blood to other areas of the body for energy. Type 1 and Type 2 diabetes are both prevalent in Qatar; however, Type 2 diabetes is more common and is the main driver of the epidemic Qatar is facing. Type 1 diabetes is inherited, is not related to eating habits or lifestyle, and is typically diagnosed in juvenile years. Type 2 diabetes, however, can be caused by obesity, unhealthy eating habits, and lack of exercise and overall fitness. It is treatable but not curable, and can be managed by maintaining a healthy diet, regular exercise, insulin injections, and oral medication.
Diabetes is the 3rd most common cause of death in Qatar after road accidents and heart disease (http://www.worldlifeexpectancy.com/country-health-profile/qatar). In fact, in 2011, Qatar ranked 6th in the world for the percentage of people with diabetes. The number of overweight adolescents in Qatar is very large, resulting in higher diabetes rates among children and young adults. The percentage of overweight children doubled from 17% in 1997 to 35% in 2007, according to The Peninsula newspaper. Awareness is crucial at this age because much of the body's fat is gained during the teenage years, when the body is constantly growing, and it is very important to maintain a healthy weight at this time.
Students at the secondary school level should understand whether or not they are at elevated risk of developing this disease. Each student should know their level of risk based on family relations, history, and habits. In Qatar, the parents of many students our age have diabetes, often because of habits passed down from their own parents, so all of us are at higher risk. The fact that fast food is so available and convenient keeps people, especially the young, constantly consuming unhealthy products. With obesity statistics so alarmingly high, education and awareness are the key to solving the problem. We plan to increase diabetes awareness at our school by conducting surveys, presenting posters and interviewing students, as well as working hand-in-hand with the Qatar Diabetes Association (QDA) to be as successful as possible.
Although many efforts are being made by the government, many students do not know enough about the danger they may be putting themselves in and the consequences of their actions. Many students at this age are allowed to choose what they eat, and their diets are no longer regulated by parents, so they need to learn to make proper choices. The high temperatures in Qatar also make it inconvenient to exercise outside; however, there are numerous alternatives. The goal of this research is to enlighten our peers with this information and compare their knowledge of diabetes before and after. Awareness is the first step to prevention, and that needs to begin at a young age. Research methods: The first step in our project on spreading awareness about diabetes is to find out how aware the students already are. Therefore, using Google Forms, we will create a survey with questions about how much the students already know about diabetes, whether they know of family history, whether they have been tested, whether they know the symptoms, etc. Once we have created the survey, we will email the link to all of the high school students at our school, asking each to fill out the survey with the appropriate responses. To ensure that we gather accurate data, we will each interview an equal number of randomly selected students and record their answers to work from there.
After receiving 189 responses from the high school students, we began to analyze the results. Only 22% of the students had been tested for diabetes. Of those who had been checked, 48% had been checked more than a year ago. 58% of the students knew of family members with diabetes. Most had checked the correct symptoms of diabetes, along with many unrelated ones such as coughing or vomiting. As for things one could do to prevent diabetes, 8% responded with “washing your hands”, 30% responded with “not smoking”, and surprisingly even 3% responded with “dressing modestly”. After witnessing the high percentage of wrong responses, it was clear that we had to increase our effort in educating our peers.
When analyzing the open-ended question “What do you know about diabetes?”, we ranked each student's response out of 10 (based on the scale below). Then, we performed a chi-squared statistical test to compare our expected value of 7/10 with the average student rank of 3.323/10. This showed a significant difference between the level of awareness we expected from the students and what we actually observed.
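A chi-squared comparison like the one described above can be sketched as a goodness-of-fit test on binned scores. The bin counts below are hypothetical (the raw survey responses are not reproduced here), and 5.991 is the standard critical value for 2 degrees of freedom at α = 0.05:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-squared goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# hypothetical counts of 80 students binned by awareness score
observed = [40, 30, 10]   # low / medium / high, as surveyed
expected = [10, 25, 45]   # counts expected under the 7/10 assumption
chi2 = chi_square_stat(observed, expected)
CRITICAL_2DF_05 = 5.991   # chi-squared critical value, df = 2, alpha = 0.05
# chi2 far exceeds the critical value, so the gap between expected and
# observed awareness is statistically significant
```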
The data from the survey confirmed the lack of awareness among students at the high school level. We created fun awareness posters and posted them on the glass doors of the cafeteria, in the science wing and on specific lockers where they were most visible to the student population. Results and discussion: Upon completion of our presentation to the 10th grade health class, we conducted a second survey on what the students had learned and the quality of our presentation. The table below shows the responses received from the students after the presentation. Conclusions: At the beginning of this research, we were shocked by the extreme lack of awareness among our student population. Considering the high rates of diabetes in the State of Qatar, we expected a basic understanding of the disease, especially since many people here are prone to it. It is very alarming that, on current trends, 73% of men and 69% of women in the country are expected to be obese by 2030. Indeed, a lot of work needs to be done.
One of the major purposes of this research was to gain insight into this issue, and it is clear that these high rates are becoming a huge problem for the nation. Fortunately, Qatar has been working hard to increase awareness through events like National Sports Day and by hosting events such as the 2022 World Cup to promote fitness in the country. As mentioned previously, type 2 diabetes is the major concern because it develops later in life and is preventable, so young students are the target for spreading information to. Habits develop during the adolescent years, so spreading the word among students at this age is critical.
This research was very successful in promoting awareness in our own school, but from what we have learned, there is a long way to go to reduce the high rates of diabetes. Benefit to students: The Qatar National Research Strategy (QNRS) principles were designed to raise the quality of research in Qatar. Health is the third most important research area in Qatar, and under the Addressing National Health Priorities column, Type II diabetes is first in importance. We feel that by following the research plan laid out by H.H. Sheikha Moza, our research can have a strong impact on the research culture in Qatar.
Some of the specific benefits to the students participating in this project include the following:
- How to conduct a literature survey
- An increased knowledge and understanding of diabetes, its causes, prevention and symptoms
- The seriousness of the issue worldwide and in Qatar
- The realization that their own awareness increased so much that they can now give a lecture on the topic to their peers
- Presentation skills, video, poster, lecture
- Survey design
- How to conduct statistical analysis using statistical software
- Incorporating creative ways in educating their peers
- Communicating with professional societies such as Qatar Diabetes Association
- Taking part in the joint Qatar Diabetes Association Sports Day event for the Action on Diabetes Campaign
- Interviewing skills, as they asked questions of students and health care professionals working with diabetes
- Time management with creating a Letter of Intent and Proposal Report on time
- How to work as a team and how to divide the tasks to cover all requirements efficiently and effectively
- It is noteworthy that the entire research methodology and the final report were prepared and conducted by the two students involved; the teacher's role was solely advisory
The Qatar National Research Strategy (QNRS) identifies diabetes as one of its most important areas of research.
Presenting the information about diabetes to the high school student body will not only benefit the students and teachers themselves by ensuring their health and making them aware of what may happen if they do not control certain unhealthy desires, but will also inform them about the SSREP program in Qatar. As students who took part in a research project during our secondary school years, presenting to our peers will allow them to realize the benefits that come from conducting research projects and inspire them to contribute in the following years.
Acknowledgement
Qatar National Research Fund – SSREP Program
Is it Time for Hepatitis E Virus (HEV) Testing for Blood Donors in Qatar?
Authors: Gheyath Nasrallah, Laila Hedaya, Fatima Ali, Abdellatif Alhusaini, Enas Al Absi, Mariam Sami and Sara Taleb. Background: HEV is the etiologic agent of acute hepatitis E. Although HEV usually causes a self-limiting infection, the disease may develop into a chronic or fulminant form of hepatitis. Sporadic HEV infections occur in several developed countries; however, outbreaks usually occur in regions where sanitation is poor, in particular in developing countries where water flooding is frequent. In addition, religious background, lifestyle, hygienic practices, and economic status have been linked to HEV infection. The fecal-oral route is the established route of transmission; however, infections through blood transfusion have recently been documented in many developed and developing countries. This finding raises the question: is there a need for HEV screening prior to transfusion or transplantation? Studies on this issue in the Middle East are scarce. Although the CDC HEV epidemiological map classifies the Arabian Gulf countries, including Qatar, as endemic or highly endemic, to the best of our knowledge no HEV population-based epidemiological study has been conducted in Qatar. HEV infection is usually detected using IgM and IgG serological tests and confirmed by molecular tests for viral RNA. Yet commercially available HEV serological kits are not validated and need further investigation. Aim and Methods: Qatar has a diverse population due to its large number of expatriate workers. The majority of these workers come from low-income countries that are highly endemic for HEV, such as Egypt, Sudan, India and other South and Southeast Asian countries. This highlights the need for an epidemiological study of HEV prevalence in Qatar. Accordingly, we hypothesize that HEV seroprevalence in Qatar is elevated, and that there is therefore a risk of transfusion-transmitted HEV infection through Qatar's blood bank.
The goals of this study are (i) to investigate the seroprevalence of HEV (anti-HEV IgM/IgG) among healthy blood donors in Qatar and (ii) to evaluate the performance of five common commercially available anti-HEV IgG and IgM kits (manufactured by Wantai Biological Pharmacy, China; MP Biomedicals and Diagnostic Automation, USA; and Euroimmun and Mikrogen Diagnostik, Germany). All of these kits are solid-phase ELISA based, except the Mikrogen kit, which is immunoblot based. A total of 4056 blood samples were collected from healthy blood donors who visited the Blood Donation Center at Hamad Medical Corporation (HMC) over a period of three years (2013-2015). For the seroprevalence study, plasma was separated and tested for HEV IgG and IgM using the Wantai ELISA kit, the most commonly used serological kit according to the literature. For statistical analysis, the chi-square test was performed and results were considered statistically significant when the p-value was less than 0.05. Results: Of the 4056 analyzed samples, approximately one fifth of blood donors, 829 (20.45%), tested positive for anti-HEV IgG and only 21 (0.52%) tested positive for anti-HEV IgM. As shown in Fig. 1, HEV seroprevalence was associated with age group (P
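The headline seroprevalence figures can be reproduced from the counts in the abstract (829 IgG-positive and 21 IgM-positive of 4056 donors). A confidence interval is added as a sketch; the abstract does not state which interval, if any, the authors computed, and the age-group counts behind the chi-squared association are not given, so that test is omitted.

```python
import math

def seroprevalence(positive, total):
    """Seroprevalence as a percentage of donors tested."""
    return 100.0 * positive / total

def wilson_ci(positive, total, z=1.96):
    """95% Wilson score interval for a proportion (a standard choice,
    assumed here; not stated in the abstract)."""
    p = positive / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - half, centre + half

print(round(seroprevalence(829, 4056), 2))   # anti-HEV IgG
print(round(seroprevalence(21, 4056), 2))    # anti-HEV IgM
lo, hi = wilson_ci(829, 4056)
print(round(100 * lo, 2), "-", round(100 * hi, 2))
```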
Antimicrobial and Cytotoxic Activity of Streptomyces sp. Isolated from Desert Soils, Qatar
Streptomyces are Gram-positive aerobic bacteria of the phylum Actinobacteria, with close to 570 known species. They are noted for producing a variety of compounds with medicinal properties, including antibiotic, antifungal and antitumor agents. Various past studies have tested these properties of Streptomyces sp., and species including Streptomyces avermitilis, Streptomyces venezuelae, Streptomyces aureofaciens, Streptomyces clavuligerus and Streptomyces erythreus have been found effective in producing these varied compounds. For instance, S. avermitilis produces avermectins, which are used to treat river blindness, while S. venezuelae secretes chloramphenicol. Additionally, S. venezuelae has been suggested as an ideal test organism for studies of physiology and for analysis of differentiation on a biochemical basis (Chater, 2013). Although a high number of Streptomyces metabolites are now available in the health care industry as effective drugs for a variety of diseases, the increasing number of cases of antibiotic resistance is threatening global public health. The emergence of resistance has made drugs ineffective, and there is a wide search for ways to suppress these strains. Such resistance has been found to arise from both phenotypic and genotypic modifications (Suzuki, Horinouchi, & Furusawa, 2015). The properties of Streptomyces and the increasing cases of antibiotic resistance have fuelled research to identify more species of Streptomyces and to look for novel metabolites released by them. Other reasons for the need to identify new compounds include the outbreak of new diseases in the second half of the last century, the inability to fight naturally resistant bacteria such as P. aeruginosa, which causes fatal infections, and the toxic effects of currently available antibiotic drugs (Sanchez & Demain, 2011).
Hence this research studied the antimicrobial properties of three Streptomyces species isolated from the desert soil of Qatar. Antimicrobial activity was first assessed by direct testing against five test organisms: Escherichia coli and Pseudomonas sp. as Gram-negative bacteria, Candida albicans as a fungus, and Staphylococcus aureus and Streptococcus faecalis as Gram-positive bacteria. The three strains, designated sp. A, sp. B and sp. D, exhibited good inhibition of the test organisms. Acetone, ethanol, ethyl acetate and methanol extracts of the three species were prepared and used to re-assess antibacterial activity and to determine anticancer and antifungal activity. Antimicrobial activity was re-tested using the disc-diffusion and puncture methods, while anticancer activity was studied by exposing HCT-116 cancer cells to two concentrations of extract, 0.05% (v/v) and 0.5% (v/v). Acetone extracts showed an inhibitory pattern, so a third concentration of 5% (v/v) was tested. Antifungal activity was examined by testing all extracts at 10% (v/v) against Aspergillus niger and Penicillium sp. Acetone extracts of species A, B and D strongly inhibited Aspergillus niger, with inhibition percentages of 99.07% ± 0.12, 99.2% ± 0.01 and 99.19% ± 0.00 respectively, and also inhibited growth of Penicillium sp. by 82.62% ± 1.62, 79.63% ± 0.11 and 87.44% ± 0.2 respectively (relative to the acetone control). These extracts were then re-tested at two other concentrations, 2.5% (v/v) and 5% (v/v). While the extracts at these concentrations were effective against Aspergillus niger, they did not inhibit growth of Penicillium sp.
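The percent-inhibition figures above are computed relative to the solvent (acetone) control. A minimal sketch of that calculation, with hypothetical growth measurements (the abstract does not report raw colony sizes):

```python
# Percent inhibition of fungal growth relative to a solvent control.
# The growth values below are hypothetical colony areas (mm^2).

def percent_inhibition(control_growth, treated_growth):
    """Inhibition relative to control: (C - T) / C * 100."""
    if control_growth <= 0:
        raise ValueError("control growth must be positive")
    return 100.0 * (control_growth - treated_growth) / control_growth

# Hypothetical: acetone control vs. growth under a sp. A acetone extract.
print(round(percent_inhibition(430.0, 4.0), 2))
```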
References
Chater, K. F. (2013). Streptomyces. In S. M. Hughes (Ed.), Brenner's Encyclopedia of Genetics (Second Edition) (pp. 565–567). San Diego: Academic Press.
Sanchez, S., & Demain, A. L. (2011). 1.12 – Secondary Metabolites. In M. Moo-Young (Ed.), Comprehensive Biotechnology (Second Edition) (pp. 155–167). Burlington: Academic Press.
Suzuki, S., Horinouchi, T., & Furusawa, C. (2015). Suppression of antibiotic resistance acquisition by combined use of antibiotics. Journal of Bioscience and Bioengineering. doi: 10.1016/j.jbiosc.2015.02.003
A Close Look at the Genome and the Unbalanced Stratification of the Qatari Population
Microsatellites are segments of DNA comprised of repeated sequences of 4- to 8-base-pair units found throughout the genomes of eukaryotes. Most microsatellites are located in non-coding regions of the genome, and consequently mutations in microsatellite regions are often not causative of disease. This allows these regions to be highly polymorphic in a population and provides a signature DNA marker for each individual. At the same time, a wide diversity of alleles is typically present in populations. In humans, microsatellites, or short tandem repeats (STRs), are standard genetic markers used for human identification in forensic cases and parentage determination.
Databases of allele frequencies from various ethnic groups have been established in many parts of the world. In Qatar, where close-kin marriages are customary, homozygosity and possibly reduced genetic variability have been a concern. A previous study, however, concluded that the standard forensic markers are a valid tool for human identification, because no substantive reduction of genetic variation was observed as a result of consanguinity in the Qatari community.
In a more recent study, the Qatari population was found to be subdivided into three main ethnic groups, of Bedouin, African or Persian ancestry. This segregation has been shown to be genetically significant through single nucleotide polymorphism (SNP) studies. Since SNPs and microsatellite DNA are inherited in a similar fashion, different allele frequencies are expected at the assessed microsatellite loci for each of the populations. Moreover, allelic heterogeneity in a population is closely linked to interbreeding. Since the Qatari Bedouin population has been closely associated with the practice of consanguinity, as evidenced through SNP studies, higher homozygosity is also expected in the Bedouin subpopulation compared with the other two subpopulations.
In recent years, the occurrence of diabetes in the Qatari population has reached epidemic levels. As in many other diseases where both lifestyle and genetics may play a role in onset, microsatellite loci may serve as markers genetically linked to non-communicable diseases such as diabetes.
The main aim of this study is to understand the genetic variability across the subpopulations of Qatari nationals. The results can be used to develop new forms of personalized health care specific to members of the stratified Qatari subpopulations, allowing more efficient treatment and better management of the growing Qatari population. To accomplish these goals, blood samples were collected from 300 individuals, 100 from each subpopulation. The AmpFlSTR® Identifiler® Plus PCR Amplification Kit was used for multiplex analysis of 15 tetranucleotide loci. The resulting data were analyzed to produce allele frequencies at each locus for the corresponding subpopulations. Gene diversity within and among the subpopulations is analyzed, and the detection of consanguinity through application of the Hardy-Weinberg principle is discussed. The sub-profiles for each of the three Qatari subpopulations (Bedouin, African and Persian) are presented. Finally, the concept of personalized health care with respect to diabetes is introduced and clinical applications relevant to these populations are discussed.
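The allele-frequency and Hardy-Weinberg bookkeeping described above can be sketched for one locus in one subpopulation. The genotypes below are hypothetical repeat numbers at a single tetranucleotide locus; real Identifiler data would cover 15 such loci per individual.

```python
# Allele frequencies and Hardy-Weinberg expected heterozygosity for one
# hypothetical STR locus. Genotypes are (allele, allele) pairs.

from collections import Counter

def allele_frequencies(genotypes):
    """Relative frequency of each allele across all genotypes."""
    counts = Counter(a for g in genotypes for a in g)
    n = sum(counts.values())
    return {allele: c / n for allele, c in counts.items()}

def expected_heterozygosity(freqs):
    """Hardy-Weinberg expected heterozygosity: 1 - sum(p_i^2)."""
    return 1 - sum(p * p for p in freqs.values())

def observed_heterozygosity(genotypes):
    """Fraction of individuals with two different alleles."""
    return sum(1 for a, b in genotypes if a != b) / len(genotypes)

# Hypothetical genotypes (repeat numbers) for six individuals.
genotypes = [(11, 12), (11, 11), (12, 12), (11, 13), (13, 13), (11, 11)]
freqs = allele_frequencies(genotypes)
# Observed heterozygosity below the Hardy-Weinberg expectation is the
# pattern consanguinity would produce.
print(round(expected_heterozygosity(freqs), 3))
print(round(observed_heterozygosity(genotypes), 3))
```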
Reflex System for Intelligent Robotics
Authors: Ahmad Yaser Alhaddad and John-John Cabibihan. Background and Purpose: Great advances have occurred in the field of robotics in the past few years. The integration of robotics into our daily life is no longer limited to manufacturing or industrial use, but extends to health care delivery, aerospace, humanitarian aid and other fields. Most existing robotic systems rely on a programmer to set the rules they follow within the working environment, or on a trainer to teach the system what should be done and where it ought to move. Other robotic systems involve more intelligence to explore and handle tasks within their environment. Most of these systems are designed to work within well-organized and planned environments; modifying any parameter of the environment may produce unpredictable consequences. Depending on the complexity of the system and how intelligent it is, the consequences may be unfavorable for achieving the intended goals and avoiding self-damage.
Species in nature represent a rich source of innovative ideas and creative concepts for researchers. Nature has inspired scientists to develop new ways of looking at things through observing the behaviors of living organisms in their own habitats. Behavior-based roboticists develop robots based on observation and study of the neuroscience, psychology and ethology of animals in nature. The physiology of humans, animals and plants is yet another rich source of research potential (Fig. 1). For example, reflexes in living organisms are a means of survival in the outer environment and of regulating internal body operations. If we could observe and mimic some of these reflex behaviors, we could end up with a machine (e.g., a robot) that has the ability to avoid dangerous situations and keep its outer structure intact.
Figure 1: The potential of reflex systems in intelligent robotics. Objective: Adopting an intelligent reflex system in a robot, similar to that found in humans, animals and plants, can have significant advantages for the overall behavior of the system. A reflex system can improve risk-avoidance capabilities in unfavorable scenarios. Design: The approach toward a reflex-based robotic system involves intensive investigation and review of the fundamental concepts found in the reflex systems of humans, animals and plants. Attention to details, such as the behavior of the organism when subjected to a certain stimulus and the latency of the reflex arc in executing the right response, is among the most important considerations when trying to mimic the behavior of a living organism. A conceptual model should be deduced from the distinguishing components of the reflex arc. An actual design based on this model will include basic components realized with electronic and mechanical parts that are analogous in function to those found in the reflex arc. For example, to mimic the temperature-sensing capability of a human hand, a simple one-point temperature sensor will not be sufficient to give a realistic result; instead, a flexible array capable of sensing the temperature at any point must be used. Another design consideration is the control method: will it be centralized, decentralized, or a mix of both? Regardless of the answer, the control mechanism should be independent of a central controller (i.e., the brain) and must be localized to achieve a response as fast as that found in the reflex arc. Conclusion: The reflex-based robotic system will be unique and innovative for the intended applications.
The system can be incorporated into pre-existing systems to add value, especially in the field of medical robotics and more specifically in prosthetics. Artificial reflex systems will add great value, protective features and life-like sensation to smarter prosthetic artefacts. With the implementation of the reflex arc at the right latencies and in the right order, the gap between an artificial hand and the actual hand should narrow.
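The decentralized reflex loop described above can be sketched as a unit that couples a sensor threshold directly to an actuator response, bypassing any central controller. All names and thresholds here are illustrative assumptions, not part of the authors' design.

```python
# Minimal sketch of a localized reflex arc: stimulus in, fast local
# response out, with no central controller consulted.

class ReflexUnit:
    """One reflex arc coupling a sensor threshold to an actuator action."""

    def __init__(self, name, threshold, response):
        self.name = name
        self.threshold = threshold   # stimulus level that triggers the reflex
        self.response = response     # callable that drives the actuator

    def sense(self, stimulus):
        # Local decision, mimicking the short latency of a biological
        # reflex arc; returns None when the stimulus is sub-threshold.
        if stimulus >= self.threshold:
            return self.response()
        return None

# Hypothetical example: a thermal reflex that retracts a gripper above 60 °C.
retract = ReflexUnit("thermal", threshold=60.0,
                     response=lambda: "retract gripper")

print(retract.sense(25.0))   # sub-threshold: no reflex
print(retract.sense(75.0))   # supra-threshold: reflex fires
```

Several such units could run in parallel, one per sensor region, which is one way to realize the localized, decentralized control the design discussion calls for.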
Acknowledgment
This publication was made possible by the support of an NPRP grant from the Qatar National Research Fund (NPRP 7-673-2-251). The statements made herein are solely the responsibility of the authors.
Keywords
Intelligent Robotics, Reflex System, Prosthetics
Preparation and Characterization of Letrozole-Loaded Poly (D,L-Lactide) Nanoparticles for Breast Cancer Therapy
Authors: Bayan Alemrayat, Abdelbary Elhissi and Husam Younes. Introduction: Breast cancer is the most prevalent type of cancer and the leading cause of cancer-related mortality among women worldwide. Letrozole (LTZ), an aromatase inhibitor, has been shown to be an effective and relatively safe agent for the treatment of hormone-positive breast cancer in postmenopausal women. However, the drug suffers from poor water solubility and rapid metabolism, leading to low oral bioavailability and thus weaker anticancer effects at target sites. Polymer-based nanoparticles (NPs) have been reported to be effective drug delivery systems, as integrating drugs into these carriers has substantially improved tissue distribution and selectivity with superior pharmacokinetic profiles. Therefore, this study was designed to incorporate LTZ into nanoparticles of an FDA-approved polymer, poly(D,L-lactide) (PDLLA), to improve its physicochemical properties and bioavailability. Methods: An emulsion-solvent evaporation technique was used to produce LTZ-PDLLA NPs. Briefly, 250 mg PDLLA was mixed with different w/w ratios of LTZ (10-30%) in 20 ml dichloromethane. The solution was slowly poured via a syringe into an aqueous phase (140 ml) to form an emulsion, followed by two-step sonication: the emulsion was sonicated in a Branson® B5510 ultrasonic cleaner at 40 kHz for 30 minutes, vortexed for 2 minutes, then sonicated again for another 30 minutes. The solvent was allowed to evaporate completely by stirring for 2 hours at room temperature. The resultant dispersion was centrifuged at 8500 rpm and 5 °C for 2 hours. The supernatant was discarded and the pellet comprising the NPs was dried under vacuum for 48 hours.
The obtained NPs were characterized using scanning electron microscopy (SEM), a Zetasizer, differential scanning calorimetry (DSC), X-ray diffraction (XRD) and ultra-performance liquid chromatography (UPLC). Results: LTZ-PDLLA nanoparticles were prepared with a high yield, reaching 85%. The NPs were spherical with smooth surfaces across all LTZ loadings. Particle size increased from 242 nm to 365 nm as the LTZ concentration increased from 0% to 30% w/w. This was expected, since a larger LTZ content contributes to the diameter of the enclosing polymer. Particles were generally polydisperse, with a polydispersity index (PDI) ranging from 0.38 to 0.44, mainly because a non-uniform force was applied to each droplet injected into the aqueous medium while producing the emulsion. DSC and XRD analyses confirmed that the crystalline nature of LTZ was lost after incorporation into the amorphous polymer PDLLA. This should strongly affect the dissolution and subsequent release rate from PDLLA, since amorphous drug tends to be released more easily and in a more controlled fashion than its crystalline counterpart. The actual content of LTZ loaded inside PDLLA, expressed as entrapment efficiency, was calculated via UPLC analysis by subtracting the amount of LTZ present in the supernatant from the initial amount of LTZ added to each formulation. Very high entrapment efficiency was obtained with all formulations, ranging from 87.9% with 10% LTZ-PDLLA up to 96.7% with 30% LTZ-PDLLA. As such, high concentrations of LTZ can be delivered to target sites with minimal drug loading. Conclusion: LTZ-PDLLA nanoparticles were successfully prepared with high entrapment efficiency using an emulsion-solvent evaporation technique. The physicochemical properties and entrapment efficiency were dependent on LTZ concentration.
Future work should focus on narrowing the wide size distribution by formulating monodisperse particles, which would allow uniform tissue distribution and longer sustained release upon administration. Additionally, in-vitro testing is needed to evaluate the efficacy and safety of the new formulations.
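The entrapment-efficiency calculation described above (initial drug minus free drug in the supernatant, divided by initial drug) can be sketched as follows. The masses are hypothetical; the abstract reports only the resulting percentages.

```python
# Entrapment efficiency: fraction of drug retained in the nanoparticles,
# computed indirectly from the free drug measured in the supernatant.

def entrapment_efficiency(initial_mg, supernatant_mg):
    """Percent entrapped = (initial - free) / initial * 100."""
    if initial_mg <= 0:
        raise ValueError("initial drug mass must be positive")
    return 100.0 * (initial_mg - supernatant_mg) / initial_mg

# Hypothetical: 25 mg LTZ added (10% w/w of 250 mg PDLLA), 3.0 mg free.
print(round(entrapment_efficiency(25.0, 3.0), 1))  # -> 88.0
```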
Favoring Inhibitory Synaptic Drive Mediated by GABAA Receptors in the Basolateral Nucleus of the Amygdala Efficiently Reduces Pain Symptoms in Neuropathic Mice
Pain is an emotion, and neuropathic pain symptoms are modulated by supraspinal structures such as the amygdala. While the central nucleus of the amygdala is often called the "nociceptive amygdala", little is known about the role of the basolateral amygdala (BLA). Here, we monitored mechanical nociceptive thresholds in a mouse model of neuropathic pain and infused modulators of glutamate/GABAergic transmission into the BLA via chronically implanted cannulas. First, we found that an NMDA-type glutamate receptor antagonist (MK-801) exerted a potent anti-allodynic effect, whereas transient allodynia was induced after perfusion of bicuculline, a GABAA receptor antagonist. Potentiating GABAA receptor function using diazepam (DZP) or etifoxine (EFX, a non-benzodiazepine anxiolytic) fully but transiently alleviated mechanical allodynia. Interestingly, the anti-allodynic effect of EFX disappeared in animals incapable of producing 3alpha-steroids. DZP had a similar effect but of shorter duration. As indicated by patch-clamp recordings of BLA neurons, these effects were mediated by potentiation of GABAA receptor-mediated synaptic transmission. Together with a presynaptic elevation of miniature IPSC frequency, the duration and amplitude of GABAA mIPSCs were also increased (a postsynaptic effect). The analgesic contribution of endogenous neurosteroids appeared to be exclusively postsynaptic. This study highlights the importance of the BLA and of local inhibitory/excitatory network activity in setting the mechanical nociceptive threshold. Furthermore, it appears that promoting inhibition in this specific nucleus can fully alleviate pain symptoms. Therefore, the BLA could be an interesting novel target for developing pharmacological and non-pharmacological therapies.
Cathepsin B-Induced Cardiomyocyte Hypertrophy Requires Activation of the Na+/H+ Exchanger Isoform-1
By Sadaf Riaz. Background: Progression of the heart to failure is primarily caused by significant remodeling of both the extracellular matrix (ECM) and subcellular organelles, a hallmark of cardiac hypertrophy (CH). Uncontrolled ECM remodeling occurs as a result of the activation and increased proteolytic activity of proteases such as cathepsin B (Cat B) and matrix metalloproteinase-9 (MMP-9) (1, 2). Previous studies have suggested that activation of Cat B is induced by acidification of the peri- and extracellular space (3-5). In various forms of carcinoma, this pericellular acidification coincides with activation of the cardiac-specific pH regulator, the Na+/H+ exchanger isoform-1 (NHE1) (5, 6). Increased activation of NHE1, like that of Cat B, is involved in the pathogenesis of various cardiac diseases including CH (7-10). Moreover, activation of NHE1 has been shown to activate Cat B in several reports. CD44 was shown to interact with NHE1, creating an acidic microenvironment that led to Cat B activation in a breast cancer model (5). NHE1 and Cat B have also been shown to interact directly with each other and cause ECM degradation in another breast cancer model (4). Taken together, the evidence suggests that NHE1, through its pH-regulating property, might mediate the activity of Cat B in pathological states. A previous report demonstrated that pericellular acidification redistributed Cat B-containing lysosomes to the cell surface and caused secretion of Cat B into the extracellular compartment (3). Interestingly, NHEs have also been shown to produce an acidic extracellular pH that induced lysosome trafficking and subsequent release of Cat B into the ECM in prostate cancer cells (11). Moreover, several broad and specific NHE inhibitors were able to inhibit this effect (11).
Once in the extracellular compartment, Cat B can degrade the ECM (12) and facilitate further ECM degradation by activating other proteases such as MMP-9 (13, 14). MMP-9 activity has been shown to increase in various models of heart failure (15-18). Previous studies have also shown that MMP-9 activity increased in CCL39 cells upon stimulation of NHE1 with phenylephrine (19). Interestingly, Cat B and MMP-9 were shown to interact directly with NHE1 and cause ECM degradation in breast cancer (4). Whether NHE1 induces activation of Cat B, which in turn activates MMP-9 and contributes to cardiomyocyte hypertrophy, remains unclear. Methods: H9c2 cardiomyocytes were treated with 10 μM angiotensin (Ang) II for 24 hours to stimulate NHE1 and induce cardiomyocyte hypertrophy. Cells were further treated with or without 10 μM EMD, an NHE1 inhibitor, or 10 μM CA-074 methyl ester (CA-074Me), a Cat B inhibitor, for 24 hours. After treatment, Cat B messenger RNA (mRNA) levels were measured by reverse transcription-polymerase chain reaction (RT-PCR). Changes in the cardiomyocyte hypertrophic marker ANP mRNA were also assessed by RT-PCR. Localization of Cat B in lysosomes was measured using LysoTracker Red dye. Autophagy was measured through analysis of the autophagic marker, microtubule-associated light chain 3-II (LC3-II). Secretion of Cat B from the intracellular to the extracellular space was assessed by measuring Cat B protein expression in the media. MMP-9 activity was also measured in the media by gelatin zymography and assessed for its contribution to the Cat B hypertrophic response. Results: Immunoblot analysis revealed that Cat B protein expression, in both pro and active forms, was significantly elevated at 10 μM Ang II (136.56 ± 9.4% Ang II vs. 100% control, 37 kDa, and 169.84 ± 14.24% Ang II vs. 100% control, 25 kDa; P
The Social and Spiritual Factors Affecting Chronic Renal Dialysis Patients in Gaza Strip
Background: End-stage renal disease (ESRD) is a progressive worsening of kidney function over a period of months or years. It is a complex, debilitating disease that requires lifelong treatment. Because patients with ESRD cannot be cured of their underlying conditions and mostly undergo a hemodialysis program, the disease usually leads to many physical and medical consequences and complications; beyond these, there are many concealed social and spiritual factors that can affect people who have this disease or are on renal dialysis. Some studies on the medical and clinical consequences of ESRD and renal dialysis have been conducted, but this study is the first to determine the factors affecting the social and spiritual wellbeing of patients on renal dialysis in the Gaza Strip. Objectives: It is important to give a detailed picture of the social and spiritual wellbeing of patients on renal dialysis to help medical professionals recognize the social and spiritual variables, so that early intensive intervention can be performed when necessary. Methods: A total of 120 patients with ESRD treated with hemodialysis completed face-to-face questionnaires. A self-designed questionnaire was used, consisting of 6 sections covering demographic data; physical, social, psychological and spiritual wellbeing; degree of coping with the current condition; uncertainties about future health; self-esteem and dependency; and the impact on marital relationships. Results: Among the 120 participants, 55% were female and the mean age was 48.5 years (SD: 16.7).
Eighty-one point seven percent were unemployed and 81.7% of the participants had a low educational level. Thirty percent of the patients had a family history of hemodialysis; 55.6% of these were first-degree relatives and 44.4% second- or third-degree relatives. Seventy-two point two percent of patients had co-morbidities, mostly hypertension (49.4%). Fatigue (93.8%) and insomnia (56.2%) were the two major physical complaints after the process of hemodialysis; however, 53.3% of the patients felt more comfortable after it.
Seventy-seven percent of the patients suffered a financial impact and 60.3% had weak social relationships. Sixty percent considered that the process of hemodialysis makes their life restless, and 73.8% reported that their daily activities were negatively affected.
Among the 85 married patients, sexual performance and sexual desire were negatively affected in 54.2% and 52.2% respectively. Only 50% of the patients stated that they have a goal they want to achieve in life. Seventy-eight percent of the patients were uncertain about their health and 67.3% were worried about the future. However, 70% of the participants stated that spiritual devotion and stronger faith had made them more able to accept their disease and to engage positively with the hemodialysis program. Conclusion: Social and spiritual wellbeing should be considered important predictive factors for a better quality of life in hemodialysis patients. The results also suggest that assessing and addressing social and spiritual wellbeing among hemodialysis patients may help in providing holistic medical care.
-
-
-
Origanum Syriacum Inhibits Proliferation, Migration, Invasion and Induces Differentiation of Human Aortic Smooth Muscle Cells
Authors: Sara AlDisi and Ali Eid
Cardiovascular diseases (CVDs) are still the number one cause of morbidity and mortality, both in Qatar and worldwide. A major risk factor for CVDs is atherosclerosis, the hardening of blood vessels caused by decreased diameter and plaque formation. A key player in atherosclerosis prognosis is the switch of vascular smooth muscle cells (VSMCs) from their differentiated state to a synthetic one. The synthetic state of VSMCs is characterized by increased proliferation, migration and invasion into the lumen of blood vessels, contributing to the atherosclerotic plaque. The ineffectiveness of current treatments has led to increasing interest in herbal medicine, possibly because herbal remedies are cheap and produce few side effects. Origanum syriacum, commonly known as Zataar, is an important constituent of the Mediterranean diet, a diet correlated with a lower risk of CVDs. O. syriacum is also reported to have antioxidant and anti-inflammatory activities, an indication of possible anti-atherosclerotic activity. However, the effect of O. syriacum on atherosclerosis or CVDs is not well studied. We therefore chose to study the effect of the ethanolic extract of O. syriacum (OSEE) on the proliferation, migration, invasion and differentiation of human aortic smooth muscle cells (HASMCs). The CellTiter-Glo assay was used to study the effect of OSEE on HASMC viability. Cells were incubated with OSEE (0, 0.5, 0.1 and 0.2 mg/ml) for 24, 48 and 72 hours. OSEE exerted a significant anti-proliferative effect on HASMCs. This effect was concentration-dependent but not time-dependent. The optimum concentration, 0.2 mg/ml, significantly decreased HASMC viability at 24 and 72 hours, by 52.5 ± 10.39% and 47.6 ± 9.83% compared to control, respectively. A scratch-wound assay was used to determine the effect of OSEE on HASMC migration. A monolayer of cells was scratched and wound size was measured every 2 hours for 24 hours.
OSEE significantly inhibited the migratory capacity of HASMCs compared to untreated cells. Cells incubated with 0.2 mg/ml of OSEE for 24 hours showed 65.07 ± 12.58% less migration than the control. To measure the invasive capacity of HASMCs, Matrigel-coated BD BioCoat™ filter inserts were used. Cells were incubated in serum-free media with or without 0.2 mg/ml of OSEE, and the number of invasive cells was counted after 24 hours. OSEE significantly decreased the invasive capacity of HASMCs, by 79.82 ± 5.69% compared to control. To study the effect of OSEE on HASMC differentiation, western blotting was used to measure calponin-h1 expression. Cells were incubated with or without 0.2 mg/ml OSEE for 24 hours and the lysate was analyzed. OSEE increased the expression of calponin-h1 by 147.19 ± 72.33% compared to control. These results indicate that OSEE possesses anti-atherosclerotic abilities by modulating the phenotype of HASMCs. This modulation returns HASMCs to their differentiated state, as shown by the increase in calponin-h1, and manifests as inhibition of the synthetic-state phenotypes of proliferation, migration and invasion of HASMCs. This anti-atherosclerotic effect should be studied further, possibly by investigating the effect of OSEE on specific pathways that lead to migration and invasion of HASMCs, such as the ERK1/2 and MAPK pathways, as well as on MMP expression.
-
-
-
Self-assessed Attitude, Perception, Preparedness and Perceived Barriers to Provide Pharmaceutical Care in Practice Among Final Year Pharmacy Students: A Comparative Study between Qatar and Kuwait
Authors: Rasha Abdullkader Mousa Bacha and Alaa Talal El-Gergawi
Background: Pharmaceutical care (PC) is changing pharmacy practice into a patient-centered care and personalized medicine approach. It focuses on maximizing drug therapy outcomes and improving patients' quality of life (QOL). Pharmacy students, the future pharmacy practitioners, need to have adequate knowledge, skills and positive attitudes to apply PC when they graduate. However, comparative studies among pharmacy students from different pharmacy schools within the Middle East region are limited regarding the PC teaching received, preparedness to deliver the service in practice, and expected barriers. Objectives: This study aimed to explore the attitudes and perceptions towards PC of final-year pharmacy students at the College of Pharmacy at Qatar University (QU-CPH) and the Faculty of Pharmacy at Kuwait University (KU-FoP), to assess students' preparedness to provide PC when they graduate, and to investigate the perceived barriers to the application of PC. Methods: A descriptive, cross-sectional, web-based survey was used to collect data. The study instrument was developed based on validated tools: the Pharmaceutical Care Attitude Survey (PCAS) and Preparedness to Provide Pharmaceutical Care (PREP). The data were analyzed using the IBM Statistical Package for Social Sciences (SPSS®) Version 22. Chi-square tests (categorical data) and independent t-tests were used to compare the two universities. P ≤ 0.05 was considered statistically significant. The results were summarized using tables and figures generated in Excel. The final questionnaire included five sections: demographics of the students (6 items); perception of PC (7 items); attitudes towards PC (13 items); preparedness to deliver PC (25 items); and barriers that may affect applying PC (5 items). The survey was administered using SurveyMonkey®.
Results: Of a total of 77 students, 63 completed the questionnaire (21 from QU and 42 from KU), an overall response rate of 82%. The mean age of the students was similar between the two universities. The majority of the respondents (95.2%) from both universities were female (QU-CPH is a female-only college). KU-FoP had a significantly higher proportion of national students than QU-CPH. Both QU-CPH and KU-FoP students preferred to work in the hospital setting in the future (57.1% and 64.3% respectively). There were no significant differences between the two universities in students' confidence and perception in applying PC (P ≥ 0.05). There were no significant differences between the students' attitudes in the two programs towards the provision of PC (P ≥ 0.05), and all respondents believed that PC services will improve health outcomes. There was a statistically significant difference in documenting information related to detecting, resolving and preventing drug-related problems (p = 0.044). Barriers identified by students from both institutions included: lack of a private counseling area, limited pharmacist time, lack of patient records, and lack of a policy for pharmacists' patient care role. There were no differences in opinion between QU and KU students regarding which of the above barriers most affects PC provision. There was, however, a statistically significant difference between students' opinions on whether the poor image of the pharmacist's role in society is a barrier. Conclusion: Final-year pharmacy students from Qatar and Kuwait demonstrated positive attitudes towards PC and its potential application in practice when they graduate. They did, however, identify some potential barriers. Students at KU-FoP regarded the low expectations of the pharmacist's role held by society and within the healthcare team as an important barrier, while students at QU-CPH thought that documentation and communication between pharmacists and healthcare providers can have an impact on PC services.
More efforts should be directed to resolve the perceived barriers in order to optimize PC provision and ultimately patient care outcomes.
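The chi-square comparisons reported above reduce to a short calculation. As an illustration only, the 2×2 table below is hypothetical, reconstructed from the reported preference for hospital practice (57.1% of 21 QU students ≈ 12; 64.3% of 42 KU students ≈ 27); the study itself used SPSS:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (two [count_yes, count_no] rows), without continuity correction."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n  # under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts derived from the reported percentages:
# QU: 12 of 21 prefer hospital practice; KU: 27 of 42.
stat = chi_square_2x2([[12, 9], [27, 15]])
```

With 1 degree of freedom, the statistic would then be compared against the chi-square distribution to obtain the p value SPSS reports.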
-
-
-
Applying a Novel Smart Insole System that Reduces Re-Ulceration Risk among Diabetics with Peripheral Neuropathy: Do Users Adhere and Comply?
Authors: Eyal Ron, Javad Razjouyan, Bijan Najafi and David Armstrong
Background & Aim: People with diabetes carry a 25% lifetime risk of foot ulceration. It is well established that high plantar pressures increase the risk of developing foot ulcers, and that managing peak pressure is an important strategy in reducing that risk. This study tested a novel smart insole system designed to reduce ulceration risk by alerting patients via a smartwatch when their plantar pressure was too high. The device was tested for degree of adherence, compliance, and successful offloading responses among users. Outcomes of users triggering many alerts were compared to those triggering few alerts to see whether alert frequency affected adherence and compliance to a novel mobile health device. Method: Participants with diabetes, peripheral neuropathy, and a history of foot ulcers were instructed to wear a smart insole system. Pressure sensors inside the insole were placed in strategic areas where foot ulceration risk has been shown to be high. The sensors were wirelessly connected to a smartwatch through a transmitter. The smartwatch alerted participants when plantar pressure exceeded 50 mmHg over 95% of a moving 15-minute window. Adherence, defined as the number of hours the device was worn, was determined from sensor data and via questionnaires. A successful response to an alert was recorded when patient-initiated offloading occurred within 20 minutes. The length of time an alert lasted (measured as the median time between alert onset and successful offloading) served as a measure of compliance. Results: Participants who increased adherence over time tended to have more alerts (0.82 ± 0.31 alerts/hr) than those who did not improve (0.36 ± 0.46 alerts/hr, p = 0.09).
Users receiving a high number of alerts (HA) began with levels of successful response similar to those receiving a low number of alerts (LA), but by the last segment of the study the HA group successfully offloaded significantly more often than the LA group (55.0 ± 6.6% vs. 16.6 ± 11.9%, p < 0.01). Median alert durations increased for LA relative to HA (p = 0.10). Participants tended to overestimate their adherence compared to objective sensor measurements (7.60 ± 2.50 hours/day vs. 5.38 ± 3.43 hours/day, p = 0.10). Conclusion: The results of this study suggest that there appears to be a minimum number of alerts that a user must experience (1 alert every 2 hours of wear time) to maintain adherence and successful response to alerts over time. Above this level, median alert durations decrease, user adherence improves, and successful response rates increase. This suggests that, within the range of alerts typically received by someone wearing a smart insole system, relatively more alerts may be preferable, and increasing the number of alerts a user receives by lowering the pressure threshold may be a viable path to maintaining adherence. In addition, self-reported adherence measures may exaggerate usage of novel mobile health devices.
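The alert rule described in the Method (plantar pressure above 50 mmHg over 95% of a moving 15-minute window) can be sketched as follows. This is a minimal illustration that assumes one pressure reading per minute; the device's actual sampling rate and firmware logic are not described in the abstract:

```python
THRESHOLD_MMHG = 50
WINDOW_MIN = 15      # trailing window length, in one-minute samples
FRACTION = 0.95      # fraction of the window that must exceed threshold

def alert_minutes(pressures):
    """Return the minute indices at which an alert would fire:
    >= 95% of readings in the trailing 15-minute window exceed 50 mmHg.
    (With 15 one-minute samples, 95% effectively means all 15.)"""
    alerts = []
    for t in range(WINDOW_MIN - 1, len(pressures)):
        window = pressures[t - WINDOW_MIN + 1 : t + 1]
        over = sum(1 for p in window if p > THRESHOLD_MMHG)
        if over / WINDOW_MIN >= FRACTION:
            alerts.append(t)
    return alerts
```

A successful response would then be scored by checking whether, within 20 minutes of an alert index, the pressure series drops back below the threshold.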
-
-
-
Applying Novel Body-Worn Sensors to Measure Stress: Does Stress Affect Wound Healing Rates in the Diabetic Foot?
Authors: Eyal Ron, Javad Razjouyan, Talal K Talal, David G Armstrong and Bijan Najafi
Background and Aim: In the United States alone, diabetic limb complications and amputations are estimated to cost $17 billion. Significant risk factors that may lead to amputation of the diabetic foot include ineffective wound healing and infection of a wound or ulcer. Previous studies have shown that wound healing is slowed, and patients' susceptibility to infection increased, when a patient is under chronic stress. To date, objective measures of stress have not been used to determine whether stress affects the rate at which wounds heal. Our study used novel real-time monitoring of patients' heart rate variability (HRV) to objectively determine the stress levels of patients visiting a surgery clinic for wound dressing changes. The wound healing rates of patients with high stress levels were compared to those of low-stress individuals to assess the effect of stress on wound healing among diabetics with a history of foot ulceration. Methods: Twenty patients (age: 56.7 ± 12.2 years) with diabetic foot ulcers were equipped with a chest-worn sensor (Bioharness 3, Zephyr Technology Corp., Annapolis, MD) during their 45-minute appointments in which the patients' wounds were re-dressed. The chest sensor contained a uni-channel ECG recorder, and a novel algorithm was developed to determine heart rate variability from the sensor output. Low-frequency (0.04 to 0.15 Hz) HRV signals were isolated from high-frequency signals (0.15 to 0.40 Hz), and the ratio of their amplitudes was used as a measure of stress. Patients were categorized as low-stress if the ratio of the signals was less than 1, and otherwise as high-stress. Regardless of classification, each patient's wound size (length, width, depth) was recorded at baseline and in follow-up visits. High- and low-stress patients were compared to see whether wound sizes decreased more rapidly in either group.
Results: Patients with low levels of stress reduced their wound size by 79% between baseline and the first follow-up appointment (1.36 mm³ to 0.28 mm³). In contrast, patients with high levels of stress had adverse outcomes, with wound sizes increasing nearly five-fold between baseline and follow-up (0.17 mm³ vs 0.84 mm³). Although high-stress individuals initially had smaller wound sizes than low-stress individuals (0.17 mm³ vs. 1.36 mm³, p < 0.05), the wound sizes of high-stress individuals were nearly 3 times larger by the first follow-up (0.84 mm³ vs. 0.28 mm³, p = 0.10). Conclusion: Our research proposes that an individual's stress level can be objectively measured using an algorithm that processes ECG data from a single body-worn sensor that is lightweight and comfortable to wear. The stress levels measured with our algorithm are predictive of clinical outcomes: specifically, individuals with low levels of recorded stress at baseline have faster healing rates and greater reductions in wound size by their second clinical appointment. This indicates that real-time patient stress monitoring using body-worn sensors may help clinicians identify risk factors that prolong wound healing times. In addition, it can be inferred that managing stress in diabetic patients may quicken the pace of wound healing. Surprisingly, however, our results suggest that initial wound sizes are not good indicators of stress levels in patients during initial clinical appointments; in fact, wound sizes of high-stress individuals were significantly lower than those of low-stress individuals at baseline.
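The stress classification in the Methods (the low-frequency to high-frequency HRV ratio with a cutoff of 1) can be sketched as below. The sketch assumes a pre-computed HRV power spectrum as (frequency, power) pairs; the authors' algorithm derives this spectrum from the raw ECG first, and the boundary handling at 0.15 Hz is a choice made here for illustration:

```python
LF_BAND = (0.04, 0.15)  # Hz, low-frequency band
HF_BAND = (0.15, 0.40)  # Hz, high-frequency band

def band_power(freqs, power, band):
    """Sum spectral power over a frequency band [lo, hi)."""
    lo, hi = band
    return sum(p for f, p in zip(freqs, power) if lo <= f < hi)

def stress_category(freqs, power):
    """Classify as low stress if LF/HF ratio < 1, else high stress."""
    lf = band_power(freqs, power, LF_BAND)
    hf = band_power(freqs, power, HF_BAND)
    ratio = lf / hf
    return ("high" if ratio >= 1 else "low"), ratio
```

For a resting subject, a spectrum dominated by the high-frequency (respiratory) band would classify as low stress under this rule.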
-
-
-
Dexamethasone-induced MicroRNA Regulation for Pancreatic Cancer Progression
Pancreatic cancer is one of the leading causes of cancer-related mortality worldwide and is highly therapy-resistant, e.g. toward the standard chemotherapy gemcitabine. Glucocorticoids like dexamethasone (DEX) are often co-medicated to reduce inflammation and the side effects of tumor growth and therapy. Our group showed DEX to be a potent stimulator of epithelial-to-mesenchymal transition (EMT), cancer progression and metastasis, but the underlying mechanisms are poorly understood. MicroRNAs are a group of small non-coding RNAs that post-transcriptionally regulate gene expression. In this study, I evaluated the effect of DEX on the microRNA profile of pancreatic cancer cell lines. By microRNA array I observed deregulation of several miRNAs. The most significantly deregulated miRNA, miR-XYZ, was predicted to target key members of the TGFß pathway. Forced expression of miR-XYZ by liposomal transfection of mimics resulted in significant repression of TGFß-2 mRNA and protein levels. 3′UTR luciferase reporter and site-directed mutagenesis assays confirmed TGFß-2 to be a direct target of miR-XYZ. Functionally, I found that miR-XYZ significantly reduced proliferation, migration and colony formation. My preliminary in vivo data show that miR-XYZ reduces tumor xenograft growth and abolishes the tumorigenic effect of DEX. I conclude that miR-XYZ is a tumor suppressor gene that inhibits EMT by regulating oncogenes and/or genes that control EMT, and that DEX activates EMT by suppressing miR-XYZ.
-
-
-
Gliptins: Does this New Class of Antidiabetic Drugs Possess Endothelial-Vasculoprotective Effects?
Background & objective: The gliptins, or dipeptidyl peptidase-4 (DPP-4) inhibitors, are a relatively new class of antidiabetic drugs that, by inhibiting DPP-4, a cell membrane-associated serine-type protease, promote the effects of endogenous incretins such as glucagon-like peptide 1 (GLP-1) and enhance glucose disposal. The advantage of the gliptins over GLP-1 analogues is that they are orally effective, whereas the clinically used incretin analogues, exenatide and liraglutide, have to be administered via subcutaneous injection. The first gliptin, sitagliptin, was approved by the FDA in 2006, and six other gliptins have subsequently been approved: vildagliptin (2007, Europe); saxagliptin (2009, FDA); linagliptin (2011, FDA); anagliptin (2012, Japan); teneligliptin (2012, Japan); and alogliptin (2013, FDA). There is a close association between diabetes and cardiovascular disease (CVD), and vascular complications associated with diabetes are responsible for 75% of the deaths of diabetics. Therefore, for any new class of antidiabetic drugs introduced into clinical use, it is important to determine not only whether such drugs show therapeutic efficacy as antidiabetics, but also whether they are vasculoprotective and reduce both cardiovascular morbidity and mortality. Endothelial dysfunction can be functionally defined as a reduced vasorelaxation response to an endothelium-dependent vasodilator, such as acetylcholine, or, at the molecular level, as a reduction in the bioavailability of nitric oxide (NO) and/or reduced activity of the enzyme responsible for the generation of NO, namely endothelial nitric oxide synthase (eNOS). Endothelial dysfunction is a very early indicator of the onset of vascular disease, and thus determining whether the gliptins also reduce endothelial dysfunction is very important.
The literature concerning the vasculoprotective effects of the gliptins is contradictory, with some of the clinical data suggesting a negative effect of the gliptins on vascular function. Thus, the objective of this study was to determine whether the gliptins have positive or negative effects on endothelial function. Materials & methods: In the present study we used a cell culture protocol with mouse vascular endothelial cells (MS1-VEGF; CRL-2460, ATCC, USA) of microvascular origin. The endothelial cells (MMECs) were cultured under either normoglycaemic conditions for a mouse (NG, 11 mM) or high glucose (HG, 40 mM), a level that equates to the plasma glucose levels seen in mouse models of type 2 diabetes, such as the db/db leptin receptor mutant model. The gliptin alogliptin was chosen for this study, and the protocols were designed to determine whether this gliptin reduced, or prevented, the high glucose-induced reduction in eNOS phosphorylation at serine 1177 (p-eNOSser1177), as determined by western immunoblot densitometry. A reduction in p-eNOSser1177 results in reduced eNOS activity and hence reduced generation of NO; thus, the quantification of p-eNOSser1177 serves as a measure of endothelial function. The band densities of the western blot images for eNOS and p-eNOSser1177 were quantified using Quantity One software (Bio-Rad, Inc., CA, USA). Statistical analysis was performed using one-way analysis of variance (ANOVA), and post-hoc comparisons between groups were performed with Tukey's multiple comparison test. p values less than 0.05 were considered statistically significant.
Results: Our data indicate that a 24-hour exposure to HG reduced p-eNOSser1177 phosphorylation, but the presence of alogliptin reversed the effects of HG and significantly increased the phosphorylation of eNOS, suggesting that this gliptin does protect the microvascular endothelium against hyperglycaemia-induced endothelial dysfunction. Furthermore, the effects of alogliptin were concentration-dependent: significant with 50 or 100 μM, but not with 10 μM alogliptin. Conclusion: Our findings indicate that, in a concentration-dependent manner, alogliptin protects endothelial cells against the negative effects of hyperglycaemia (high glucose) on endothelial function, as measured by alogliptin-induced changes in the phosphorylation of eNOS at serine 1177. Further studies are underway, using a functional myograph assay, to determine whether alogliptin can also prevent hyperglycaemia-induced endothelial dysfunction in mouse aortic vessels.
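The one-way ANOVA applied to the densitometry data boils down to an F statistic comparing between-group and within-group variance, sketched below. The groups here are invented for illustration (Tukey's post-hoc test additionally needs the studentized range distribution, which the statistics package would provide):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of sample groups
    (e.g. band densities for NG, HG, and HG + alogliptin)."""
    k = len(groups)                      # number of groups
    N = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / N
    # Between-group sum of squares: spread of group means around grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around group means
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, N - k
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F (relative to the F distribution with those degrees of freedom) indicates that at least one group mean differs, after which Tukey's test identifies which pairs.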
Acknowledgement
This work was supported by a Summer Student Research Fellowship (SSRF) from Weill Cornell Medicine-Qatar and an Undergraduate Research Experience Program grant, UREP 18-055-3-012 from the Qatar National Research Fund (QNRF). The statements made herein are solely the responsibility of the authors.
-
-
-
Analysis of Date Pits and Food Product Development from Date Pits
Authors: Eman Faisal Mushaima, Rehab Hussain Taradah and Sara Mohammed Alhajiri
Background: In Qatar and the GCC countries, the palm tree is considered a symbol of culture because of the great number of palm trees. Qatar is classified as the third-largest importer of dates for consumption worldwide, and it produces around 16,500 tons of dates every year, mostly for local consumption. Despite the high production and consumption of dates in GCC countries, there are not enough studies and investigations of date pits, despite the high nutritional value and content that the few existing studies and analyses have demonstrated in some countries. This study aims to produce innovative food products from date pits, as there is currently no investment in date pits and their nutritional value in Qatar. Therefore, proximate and mineral analyses were performed before developing a product containing date pits powder as the main ingredient, in order to have accurate data about the chemical composition of each product. The main purpose of the mineral analysis is to measure the mineral composition in general, and the lead content of date pits in particular, as this relates to the safety of the food product to be produced.
Objective: The main objectives of this research are:
1. To perform proximate and mineral analyses of date pits (Phoenix dactylifera) cultivated in GCC countries (three varieties: Khalas, Khunaizi, Sagei).
2. To develop food products from date pits powder (cookies and muffins).
Method: • Minerals: mineral constituents present in the date pits of the three varieties (Saqei, Khalas and Khunaizi) were analyzed using an inductively coupled plasma spectrometer.
• Proximate analysis:
1. Seed material and sample preparation: Date pits were obtained from Qatar and Bahrain. The pits of the two varieties under investigation (Saqei and Khunaizi; Khalas pits were excluded due to their high lead content) were isolated directly from 60 kg of date fruit collected at the “Tamr” stage, i.e. full ripeness. The date pits of each variety were separated and milled in a heavy-duty grinder to obtain a homogeneous blended mixture. The date pits powder was kept in durable, leak-proof containers.
2. Analytical methods: All analytical determinations were performed in triplicate. Values are expressed as mean ± standard deviation. Chemical analysis of the powdered pits followed AOAC (Association of Official Analytical Chemists) methods.
3. Fat content: The weight of total fat extracted from the date pits was determined using the Soxhlet extraction method. Results are expressed as percentages.
4. Protein content: Total protein was determined by the Kjeldahl method and calculated using the general factor (6.25).
5. Ash content: Ash was determined by removing carbon: 2 g of each variety was incinerated in a muffle furnace for 30 min at 600°C; after breaking up the ash with drops of water, samples were returned to the furnace for 3 hours.
6. Carbohydrate content: Total carbohydrate (with fiber) was calculated by subtracting from 100% the sum of the percentages of moisture, protein, fat and ash.
• Food product development: cookies and muffins were produced with four different percentages of date pits powder (100%, 75%, 50% and 25%).
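The two derived quantities in the analytical methods, protein from Kjeldahl nitrogen via the general factor 6.25 and total carbohydrate by difference, reduce to simple arithmetic, sketched here (the input values are illustrative, not the study's raw data):

```python
def protein_from_nitrogen(nitrogen_percent, factor=6.25):
    """Kjeldahl method: crude protein estimated as nitrogen x general factor."""
    return nitrogen_percent * factor

def carbohydrate_by_difference(moisture, protein, fat, ash):
    """Total carbohydrate (including fiber), % by difference:
    100% minus the sum of the other proximate fractions."""
    return 100.0 - (moisture + protein + fat + ash)
```

The carbohydrate figure absorbs any measurement error in the other four fractions, which is why by-difference values can differ slightly between trials of the same sample.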
Results: Proximate analysis (average of two trials for each variety). Khunaizi: CHO 84.32%, protein 5.89%, fat 3.8%, ash 0.48%, moisture 5.39%. Sagei: CHO 79.81%, protein 5.57%, fat 2.89%, ash 0.72%, moisture 0.86%. Mineral analysis (ppm). Khalas: Mn 22.881, Pb 0.498, Cr 40.827, Zn 29.212, Mg 838.983, Fe 207.078, Cu ND, Cd 0.348, Ca 438.434. Khunaizi: Mn 19.288, Pb ND, Cr 11.193, Zn 15.540, Mg 859.984, Fe 77.003, Cu ND, Cd ND, Ca 328.652. Sagei: Mn 8.202, Pb ND, Cr 0.323, Zn 13.686, Mg 559.365, Fe 309.465, Cu ND, Cd 18.925, Ca ND.
Quality rating for the food products developed using date pits (muffins and cookies), total quality score out of 20 points: muffins and cookies containing 25% date pits powder, 20 points; 50%, 19 points; 75%, 15 points; 100%, 3 points.
Conclusion: Date palm pits could be an excellent source of functional food components, as they are an inexpensive and rich source of carbohydrate (mostly fiber), as shown in the analysis of date pits from two leading varieties in Qatar (Sagei and Khunaizi). The results also show that date pits powder from the Khalas variety is unsafe for food production, as it contains a significant amount of lead.
-
-
-
Pharmacoeconomic Evaluations of Oral Anticancer Agents: A Thematic Systematic Review
Authors: Ahmad Amer Alkadour, Daoud Al-Badriyeh and Wafa Al-Marridi
Background: Around 14.1 million new cancer cases and 8.2 million cancer deaths were reported in 2012, and the number of cases is expected to rise to 22 million within the next two decades. The parenteral (intravenous) route has been the most common administration route for chemotherapeutic agents, and is associated with the need for hospitalization and a range of significant adverse drug reactions. A new generation of orally administered chemotherapies has been introduced into practice as a superior and more efficient therapeutic alternative. Oral anticancer drugs (OACDs) have been shown to eliminate the need for hospitalization, decrease the rate of adverse drug reactions and, ultimately, improve patients' quality of life. Economically, this translates into a reduction in inpatient hospitalization costs, including several associated costs such as the cost of treating side effects. A disadvantage of OACDs, however, is their increased acquisition cost compared to the intravenously administered alternatives. This has resulted in resistance to including OACDs in several international insurance schemes and drug formulary practices, including in Qatar. Objectives: The current project sought to analyze the medical literature on published economic evaluations (pharmacoeconomics) of OACDs, especially in comparison with the parenteral alternatives, in order to identify the decision-analytic modeling conducted and the variety of methods used. Strengths and weaknesses of the study designs, including gaps in knowledge, were to be determined. Methodology: A thematic systematic review was conducted using the search engines PubMed, Medline, EconLit, Embase and the Economic Evaluation Database. The following three categories were considered: (i) therapy (chemotherapy [Mesh]); (ii) dosage form (oral [Mesh]); and (iii) research design (economics [Mesh] OR cost-benefit analysis [Mesh]).
Included studies were full-text, English-language articles incorporating comparative economic evaluations of oral chemotherapies. Excluded studies were non-comparative, based on non-economic models, of secondary indications (not cancer), and/or reviews. This process was followed by two stages of manual exclusion: first based on title/abstract content and then on the full-text article content. A data extraction form was developed and pilot-tested for data collection. Article inclusion and data collection were each conducted twice, by different investigators. Included articles were finally summarized according to methodological themes of interest. Results: A total of 235 records were identified. After screening and removing duplicates, only 18 studies were deemed eligible for inclusion. The pharmacoeconomic evaluations were mostly cost-utility analyses (13 out of 18), measuring cost per quality-adjusted life year (QALY) gained, and from the payer perspective (15 out of 18). Primary sources of clinical and economic data were randomized clinical trials, expert panels and medical charts. Other sources included medicine databases, reimbursement schedules, drug policies and price lists, treatment guidelines, case reports and patient interviews. In 13 out of 18 cases, dominance was reported in favor of OACDs, in relation to cost and/or clinical effect. Decision-analytic modeling was used in the majority of studies, mostly Markov modeling for the simulation of lifelong use of drugs. Sensitivity analyses were conducted in most studies, mostly one-way sensitivity analyses to ensure the robustness of study results. The cancers in which the effect of OACDs was studied were metastatic renal carcinoma, gastrointestinal tumors, colon cancer, chronic myeloid leukemia and non-small cell lung cancer. Most included articles were published during the last seven years.
Most studies were conducted in the UK, US and Europe, while none were conducted in Australia or the Middle East. Conclusion: This is the first systematic review of the economic methods used in the evaluation of OACDs. There seems to be a recent and increasing interest in this type of research, whereby the QALY measurement is a priority for decision making on the comparative value of OACDs in practice. Most importantly, despite their higher acquisition cost, OACDs were demonstrated to be mostly superior to the parenteral alternatives. Furthermore, decision-analytic modeling, mostly Markov modeling, is valued and enables structured decision analyses of therapies. Pharmacoeconomic research is difficult to generalize, as published economic evaluations are locally specific, especially for the purpose of practical interpretation. The current review of the literature proposes valuable methods for local Qatari implementation and for the guidance of decision makers. This is most relevant to the National Center for Cancer Care & Research (NCCCR), the only tertiary provider of cancer therapy in Qatar, where confusion exists about the use of oral chemotherapies, particularly vinorelbine and capecitabine.
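Markov cohort modeling, the dominant technique the review found, can be sketched in miniature. The three health states, transition probabilities, costs and utilities below are entirely hypothetical, and a real model would also discount future costs and QALYs:

```python
def run_markov(trans, costs, utilities, cycles, start=(1.0, 0.0, 0.0)):
    """Track a cohort across 3 health states (e.g. progression-free,
    progressed, dead), accumulating expected cost and QALYs per cycle.
    trans[i][j] is the per-cycle probability of moving from state i to j."""
    dist = list(start)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += sum(d * c for d, c in zip(dist, costs))
        total_qaly += sum(d * u for d, u in zip(dist, utilities))
        # Advance the cohort one cycle through the transition matrix
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)
```

A one-way sensitivity analysis, as most reviewed studies performed, would simply rerun `run_markov` while varying one input (say, a transition probability) across its plausible range and checking whether the ICER conclusion holds.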
-
-
-
Assessment of HER2 Status of Metastatic Breast Carcinoma on Cell Block Preparations of Fine Needle Aspirates is Unreliable
Authors: Vignesh Shanmugam, Syed Hoda, Thomas Dilcher, Adam Pacecca and Rana Hoda
Objectives: HER2 status of breast carcinoma is a powerful prognostic and predictive biomarker, particularly in the metastatic setting. Limited data are available regarding assessment of HER2 on cell block preparations (CBP). The primary objective of this study was to assess the correlation between HER2 results obtained via immunohistochemistry (IHC) and fluorescence in-situ hybridization (FISH) in cases of metastatic breast carcinoma (MBC) on CBP. Secondary objectives included the study of inter-observer variability in the interpretation of HER2 on IHC, and concordance between HER2 results on CBP and formalin-fixed paraffin-embedded material (FFPEM). Materials and Methods: Cases of MBC diagnosed on fine needle aspirates (FNA) with HER2 testing performed via IHC and FISH on CBP over 5 years (2010-2015) were reviewed. CBP material was fixed in an ethanol-based fixative (CytoRich Red Fixative system, BD). HER2 IHC was performed using polyclonal antibodies against c-erbB-2 (Dako 0485). HER2 FISH testing was performed using the LSI HER2/neu/CEP17 probes (Vysis/Abbott Molecular Inc., Des Plaines, IL). Results: Seventeen cases (all female, median age: 59) were analyzed. 41% of CBP were products of bone FNA (7/17). Other sites included lymph node (3), lung (2), pleural fluid (2), liver (1), skeletal muscle (1) and mesentery (1). The median interval between diagnosis of the primary carcinoma and FNA of the metastasis was 5 years (range: 10 months to 32 years). FISH was inconclusive due to suboptimal specimen quality in 2 cases. Correlation between IHC and FISH results was as follows: IHC 0/1+ (0/2; 0% amplification), IHC 2+ (2/12; 16.7% amplification) and IHC 3+ (0/1; 0% amplification). Inter-observer agreement in IHC scoring between 2 pathologists who independently reviewed the IHC slides was fair (66.7% agreement, κ = 0.31).
Comparison of HER2 results on CBP with FFPEM (primary carcinoma or metastasis) showed a high discordance and slight agreement (discordance rate = 37.5%; κ = 0.02). Conclusions: In this study, (a) 16.7% of MBC cases that scored 2+ on IHC showed amplification on FISH, (b) there was poor inter-observer agreement in HER2 scoring of IHC on CBP, and (c) there was high discordance between HER2 results obtained on CBP and FFPEM. Our results indicate that HER2 testing of MBC on CBP may be unreliable.
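The "fair" inter-observer agreement reported above (66.7% raw agreement, κ = 0.31) uses Cohen's kappa, which discounts the agreement that two raters would reach by chance alone. A minimal sketch of the computation, using hypothetical counts rather than the study's actual scoring table:

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table where table[i][j]
    counts cases rater A scored category i and rater B scored j."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed proportion of agreement (diagonal of the table)
    p_obs = sum(table[i][i] for i in range(k)) / n
    # Chance-expected agreement from the marginal totals
    row_totals = [sum(r) for r in table]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_totals[i] * col_totals[i] for i in range(k)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 2-category table with 66.7% raw agreement:
# kappa is only ~0.33, showing how chance agreement deflates the score.
print(cohens_kappa([[10, 5], [5, 10]]))  # ≈ 0.333
```

This is why a seemingly decent raw agreement percentage can still correspond to only "fair" agreement on the kappa scale.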
-
-
-
Decision-Analytic Modeling in the Economic Evaluations of Systemic Antifungals for the Prophylaxis against Invasive Fungal Infections – A Thematic Systematic Review
Background: The interest in the economic evaluations of "prophylactic" systemic antifungals is on the rise, especially with the emergence of newer, expensive agents for prophylaxis of invasive fungal infections (IFI). Decision-analytic modeling is a systematic approach that has become integral to the economic evaluation process for the purpose of simplifying decision making. This systematic review aims to identify the prevalence of decision-analytic modeling in the pharmacoeconomic literature regarding prophylactic therapies for systemic fungal infections, and to identify variations in the model designs used, along with defining specific areas of strength and weakness. Method: A systematic literature search was conducted using the e-databases Pubmed, Medline, Embase, Economic Evaluation, Econlit, and Cochrane to obtain all model-based economic evaluations of antifungal agents. Search terms fell under three categories: (i) therapy (antifungal agent [Mesh] OR Prophylaxis); (ii) disease (mycosis [Mesh] OR fungal disease [Mesh] OR invasive OR systemic); and (iii) research design (economics [Mesh] OR decision analysis [Mesh] OR costs and cost analysis [Mesh]). Publications were included if they were journal articles, full-text publications, human studies, and in the English language. Articles were excluded if they were reviews, studied topical antifungals or non-invasive infections, or used non-economic models. Journal article inclusion and data extraction, via a data collection form, were each conducted twice by different researchers. Results: Out of 841 citations, only 19 articles were eligible for inclusion. Most studies were relatively recent, conducted in 2008–2013. Seventeen of them sourced clinical data from pooled randomized controlled trials. Evaluations were mostly from the USA (7), the remainder from Australia, Canada, Spain, the Netherlands, Korea, Greece, France, Germany, and Switzerland (1–2 articles each).
All articles used the cost-effectiveness method with decision tree models, including 10 that used Markov modeling to simulate future use of medications; where appropriate, this was combined with discounting as the cost adjustment. Drug comparisons in the included studies (27/29) were mostly between older, cheaper antifungals and newer, more expensive ones. The 19 articles comprised 15 studies with a cost per life year gained measure, six with cost per IFI avoided, one with cost per Quality Adjusted Life Year, and four with a cost saving per patient measure. Most importantly, the same clinical measures were defined differently in different studies. Most studies reported a dominance state, the majority in favor of posaconazole (9 out of 12), and five studies required incremental cost-effectiveness ratio analysis. Only direct medical costs were considered, even though six articles adopted a societal perspective instead of the hospital perspective. All articles adjusted costs either for inflation (9/19 articles) or by discounting. Fourteen articles used only one-way sensitivity analysis, while a few combined it with multivariate (2) or scenario (3) analyses. Conclusion: Decision making in relation to prophylactic antifungals is not complex, including its economic considerations; a straightforward therapy dominance status was demonstrated in the majority of studies. Most importantly, the literature evidence on the cost-effectiveness of systemic antifungals is not cumulative in nature, because the same outcomes are defined differently across studies. This also means that the economic models in the literature are incomparable and not generalizable, since different decision makers appear to be interested in different outcomes, including for the same antifungal agent. Studies are limited by not considering the cost of side effects and alternative therapy options.
Further studies are needed to compare among the newer, more expensive agents, where evidence is lacking. Studies should also adhere more closely to guidelines on standardized definitions of health states, enabling cumulative evidence generation and generalizability of findings.
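The dominance and incremental cost-effectiveness ratio (ICER) concepts discussed above compare therapies on both cost and effect: a therapy "dominates" when it is simultaneously cheaper and more effective, and otherwise the ICER gives the extra cost per extra unit of effect. A minimal sketch with hypothetical cost and effectiveness figures (not drawn from any of the reviewed studies):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (e.g. per life year gained). Returns a dominance label instead
    when one option is both cheaper and more effective."""
    d_cost = cost_new - cost_old
    d_effect = effect_new - effect_old
    if d_cost <= 0 and d_effect >= 0:
        return "new therapy dominates"
    if d_cost >= 0 and d_effect <= 0:
        return "new therapy dominated"
    return d_cost / d_effect

# Hypothetical figures: the newer agent costs $2000 more per patient and
# gains 0.05 life years, i.e. $40,000 per life year gained.
print(round(icer(12000, 1.25, 10000, 1.20), 2))  # 40000.0
```

The review's observation that most studies reported dominance means the comparison often ended at the first branch, without needing the ratio at all.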
-
-
-
Criminalizing Domestic Violence in Qatar: A Case Study of Student Activism
Globally, gender-based violence affects one out of every three women. Recently, the alarming rise in reported cases of domestic violence in Qatar has led to a national call to find an effective way to deal with the issue. This paper documents the efforts of a group of Qatar University students to do just that: draft legislation to criminalize domestic violence. The research project involves eight Qatar University male and female undergraduate students from five different countries (Bahrain, Pakistan, Egypt, Nigeria and Qatar), and three faculty members from different countries (Palestine, Egypt and Saudi Arabia).
In order to determine the status of current societal and legal protection provided to victims of domestic violence, interviews were conducted with law enforcement authorities, judges, religious scholars/leaders, medical professionals and victims of domestic violence themselves. After analyzing the interviews, along with the official documentation provided by institutions (such as hospitals, police departments, and shelters), systematic weaknesses and legal loopholes were identified. A benchmarking of legislation in the Arab and Muslim world was then conducted in order to develop a conceptual framework for a comprehensive protection system for female victims of domestic violence in Qatar.
-
-
-
Knowledge of MERS-Corona Virus among Female Students at Qatar University
Authors: Zahra Al-Muhafda, Maryam Qaher, Eman Faisal, Heba Abushahla, Rana Kurdi and Ghadir Al-Jayyousi
Middle East Respiratory Syndrome (MERS) is a severe, acute respiratory illness caused by a coronavirus (CDC, 2015). Globally, 1599 cases of MERS coronavirus infection and at least 574 related deaths have been reported since 2012 (WHO, 2015). In Qatar, a total of 17 cases were reported between November 2013 and May 2015 (WHO, 2015). The routes of transmission of MERS-CoV include direct contact with an infected person, touching contaminated objects or surfaces and then touching the mouth, nose or eyes, and direct contact with infected camels. A study in Qatar detected MERS-CoV in nose swabs from 3 of 14 camels, and showed that the virus fragments were similar to those found in two human cases from the same farm (Haagmans et al., 2014). Another study, conducted at Al-Shahaniya, Qatar, in 2014, confirmed the presence of MERS-CoV in the milk of five camels, indicating that camels are a source of MERS-CoV transmission in the State of Qatar (Reusken et al., 2014). The purpose of this study is to examine knowledge of MERS-CoV transmission, symptoms and prevention techniques among female students at Qatar University, and further to evaluate the effect of an awareness event organized by the Public Health Program. Participants (N = 33) were female students at Qatar University aged 18–26 years. Public health students designed a survey to test knowledge of MERS-CoV transmission, symptoms and prevention techniques among female students at Qatar University. A pre-test survey was distributed at an awareness event on MERS-CoV held on the 19th of November 2015 at the College of Arts and Sciences. The survey included questions about demographics such as age, college, and nationality.
It also included five questions to examine the level of knowledge about the transmission routes of the virus, the symptoms associated with the infection, the prevention techniques, and students' preferred strategy for being educated about the disease. Later, participants attended activities organized by public health students to educate them about MERS-CoV. They were exposed to epidemiological facts through distributed flyers and screen slides. The transmission routes were explained to the students using a creative and meaningful poster. Moreover, students were informed about the symptoms by another poster and a demonstration of a MERS-CoV model. The prevention techniques regarding MERS-CoV were also explained through a poster and attractive, colorful brochures. The same recipients then answered the same questions as a post-test to measure changes in knowledge about MERS-CoV. For the analysis, SPSS software was used to analyze the pre- and post-test data, and McNemar's test was used to compare the pre- and post-test results; a p-value less than 0.05 was considered significant. The results showed that the percentage of recipients aged 18-20 years and that aged 21-23 years were the same (45.5% for both age groups). The majority of the recipients were from the College of Science (57.6%); none were from the College of Medicine or the College of Law. Moreover, owing to the high diversity at Qatar University, students from different nationalities participated in the survey, including Qatari, Gulf countries, Egyptian, Palestinian, Iranian, Jordanian, Sudanese, Pakistani and others. Most of the students were Qatari (21.2%), whereas Iranian and Pakistani students were the fewest (3.0% each) (see Table 1). The results showed that prior to the educational event, the majority of the recipients thought that they did not have enough knowledge about MERS coronavirus (54.5%).
However, after the event, the majority agreed that they had enough knowledge about MERS-CoV (McNemar's test, P < 0.001). In addition, the findings regarding transmission routes showed that before the event the largest group of recipients did not know any of the transmission routes (33.3%), whereas after the event 78% of the recipients were aware of all the transmission routes (McNemar's test, P < 0.001). Next, most recipients knew about the symptoms associated with MERS coronavirus in the pre-test (51.5%), and in the post-test the majority again reported knowing these symptoms, with knowledge improved compared to the pre-test (90.9%) (McNemar's test, P = 0.001). Regarding prevention, most recipients chose hand washing as a preventive method in the pre-test (33.3%). After the event, recipients were aware of all of the preventive methods, but without statistical significance (McNemar's test, P = 0.424). Finally, in the pre-test regarding the best educational methods, most recipients indicated that all the strategic methods were effective for education about MERS coronavirus (75.8%). This percentage increased in the post-test (84.4%), but without statistical significance (McNemar's test, P = 0.375) (see Table 2). Future research should focus on the comprehensive educational interventions needed to facilitate adoption of precautions against MERS-CoV, and on follow-up studies to see whether such educational interventions promote changes in students' knowledge and behavior.
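The paired pre/post comparisons above rely on McNemar's test, which considers only the discordant pairs, i.e. students whose answer changed between pre- and post-test. A minimal sketch with hypothetical discordant counts (SPSS typically applies the exact binomial version for small samples; this shows the continuity-corrected chi-square approximation):

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar chi-square for discordant pair counts:
    b = changed no -> yes, c = changed yes -> no.
    Returns (statistic, two-sided p-value from the chi-square(1) tail)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # For chi-square with 1 df, P(X > stat) = erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical: 15 students gained knowledge after the event, 1 lost it
stat, p = mcnemar(15, 1)
print(round(stat, 2), p < 0.05)  # 10.56 True
```

Note that the students who answered the same way before and after contribute nothing to the statistic, which is why the test suits this repeated-measures design.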
References
Centers for Disease Control and Prevention (2015). Middle East Respiratory Syndrome (MERS). Retrieved from: http://www.cdc.gov/coronavirus/mers/.
Haagmans, B. L., Al Dhahiry, S. H., Reusken, C. B., Raj, V. S., Galiano, M., Myers, R.,… & Koopmans, M. P. (2014). Middle East respiratory syndrome coronavirus in dromedary camels: an outbreak investigation. The Lancet Infectious Diseases, 14(2), 140–145.
Reusken, C. B., Farag, E. A., Jonges, M., Godeke, G. J., El-Sayed, A. M., Pas, S. D., & Koopmans, M. P. (2014). Middle East respiratory syndrome coronavirus (MERS-CoV) RNA and neutralising antibodies in milk collected according to local customs from dromedary camels, Qatar, April 2014. Euro Surveill, 19(23), pii20829.
World Health Organization (2015). Middle East respiratory syndrome coronavirus (MERS-CoV) – Qatar. Retrieved from: http://www.who.int/csr/don/11-february-2015-mers-qatar/en/. On: November 12, 2015.
World Health Organization (2015). Middle East respiratory syndrome coronavirus (MERS-CoV) – Republic of Korea. Retrieved from: http://www.who.int/csr/don/25-october-2015-mers-korea/en/. On: November 12, 2015.
-
-
-
Picture Archive Communication (PAC) System with extended Image Analysis and 3D Visualization for Cardiac Abnormality
The PAC system is increasingly extending from its now well-established radiology applications into hospital-wide PACS, and new challenges accompany its spread into other clinical fields. With growing awareness of the importance of PAC systems among medical experts, the system described here has been enhanced throughout the PAC pipeline, with simplified image display for analysis via user interaction. In general, a PAC system consists of medical image and patient data acquisition, storage, and display subsystems integrated by digital networks and application software, and it facilitates the systematic utilization of medical imaging for patient care. However, most available PAC systems do not provide the image analysis required by clinical experts; where such a component exists, it requires interaction or intervention from the clinical expert as user, and PAC system storage is mostly unstructured, with no analysis or reporting modules. Unfortunately, in most web-based PAC systems, retrieval and visualization of required images from outside the hospital is delayed. Moreover, most PACS with 3D display functions neither communicate information clearly and efficiently to users (clinical experts) nor visualize information as accurately as clinical experts require. These constraints limit the clinical expert's perspective when making decisions.
From market validation observations we concluded that most PAC systems available on the market do not have medical image processing functions for decision analysis with minimal user interaction. Research addressing this limitation has been conducted in accordance with the needs of clinical experts. Among these studies are: (i) angiography image processing for stenosis position detection and measurement of its dimensions, (ii) echocardiography image processing for detection of ventricular cardiac abnormalities (walls and volume), and (iii) reconstruction of 2D angiography images into 3D images for display purposes and to identify the location of the arterial tree.
Based on the results of these studies, the PAC system will be integrated with extra modules, which are: i) 3D reconstruction from a single-image angiogram with identification of the stenosis location, ii) identification of heart wall chamber abnormalities, iii) 3D reconstruction of the left and right ventricles from echocardiography, and iv) 3D fusion of CTA, angiography and MRI.
As stated above, the common limitation of web-based PAC systems is the delay in retrieving and visualizing required images from outside the hospital or clinic. To overcome this limitation, we proposed a technique, and integrated a related function, for faster transmission of the processed image without sacrificing any important information. To complete the PAC system so that it can compete with PAC systems currently on the market, we link the PACS with our Patients Clinical Record Database, with report modules as required by the medical expert.
The outputs of all these studies will be integrated with the PAC system; each research output has been tested and validated by a number of cardiac experts, the patients' clinical record database has been tested in UKMMC, and the PAC system is currently being beta tested in a private clinic in Kelang, Selangor, Malaysia. Eventually the PAC system with the Patients Clinical Record Database will be integrated with these image analysis and 3D image visualization modules, and it is planned to be tested in the Veterinary Hospital of Universiti Putra Malaysia.
To move this project toward commercialization, we have distributed questionnaires to clinics in the Bangi (Selangor, Malaysia) and Nilai (Negeri Sembilan, Malaysia) areas. There are 58 medical, veterinary and dental clinics that received the questionnaire (we are currently expanding the distribution to Serdang and Kajang, Selangor, Malaysia). Out of the 58 clinics: a) 16% are interested in collaborating and look forward to seeing the PAC system, b) 8% are interested in the PAC system but not willing to attend a demo, c) 44% are not interested but open to a demo of the system, d) 28% are not interested and not willing to attend a demo, and e) 4% returned the form without answering that particular question.
To secure and protect ownership, each research output has been submitted for patent filing in Malaysia and in three chosen countries; two of the filed patents have already been granted in Malaysia. We have also copyrighted each module. This project has been selected by Universiti Putra Malaysia to be commercialized by a startup company seeded by UPM (CASD Medical Private Limited) under the INNOHUB program.
We realize that to implement the complete PAC system in a hospital, there are 10 main problems we need to overcome or whose consequences we must try to minimize. These problems are: i) Integration with the Hospital Information System. Although a lack of inter-vendor device and IT integration can often make the problem worse, the market is improving as providers and meaningful-use requirements demand greater integration. Unfortunately, many radiologists and PACS administrators still prefer to make full use of hospital IT to configure their own systems and achieve a bit more autonomy. ii) Every system has downtime, both scheduled and unscheduled, for which alternate workflows need to be established; downtime need not be serious if its effect on patient care is minimized. iii) Non-standardized hanging-protocol display is a common and pesky challenge for PACS users. Images from different modalities are not organized by default, even though each is generally transmitted through a DICOM gateway, so each study takes a little longer to read. As the number of scanners increases and the range of vendors expands, the problem grows worse. iv) Integration problems also concern hardware, from digitizing pre-DICOM modalities to integrating systems for advanced image reconstruction. Add-ons like a DICOM converter can help squeeze additional value from older CT, angiography and fluoroscopy systems. v) As with downtime, failures are unavoidable, so there is a need for strong support activities. vi) Effective training can be a cost-effective way to demonstrate to administrators and physicians many of PACS' underused and undervalued features.
Training will help expose staff to what the system can do to make their jobs easier and more efficient. vii) The migration of data to the new PACS is often the most challenging part of the process, both in negotiating the release of data from the current PACS and in sorting out all the data entry errors that have accumulated over the lifespan of the system. viii) As other specialties realize the value of PACS, the system is slowly being taken out of radiologists' hands. PACS has become a mission-critical, enterprise-wide tool used by nearly all specialties. With this change, decision-making for PACS-related purchases, upgrades and configurations has, in some cases, shifted from radiologists to a more central process. ix) Hiring a certified professional ergonomist to evaluate the department's workstations can ease radiologists' repetitive stress symptoms and contribute substantially to productivity. Despite accelerating advances in technology, many interface tools have changed little since the introduction of PACS. Finally, x) like business continuity, disaster recovery can prevent a painful experience from becoming fatal. Many hospitals opt for redundant servers, cloud storage or both. At the very least, preparation for downtime can spare physicians and patients from significant losses.
To minimize the consequences of these 10 main problems, this project (in its early stage) targets potential customers among the owners of small private clinics, where patient numbers are lower and administrative bureaucracy is limited.
-
-
-
Public-Key Cryptosystem Based on Invariants of Groups
Authors: Frantisek Marko, Martin Juras and Alexandr N. Zubkov
The presented work falls within one of Qatar's Research Grand Challenges, namely the area of Cyber Security. We have designed a new public-key cryptosystem that can improve the security of communication networks. This new cryptosystem combines two important ideas. The first idea is the difficulty of finding an invariant of an infinite diagonalizable group. More specifically, for the coding purposes, we build an infinite diagonalizable group that has a given polynomial invariant of high degree. One possible attack on this cryptosystem is to find an invariant of this group. However, during the design of the system we guarantee that the minimal degree of invariants of this group is very high, which makes a direct attempt to find any invariant using linear algebra techniques computationally expensive. The second idea is based on our discovery that, when working over the ring Z of integers, another attack to break the cryptosystem is possible. This attack is based on replacing the prime factorization of integers by a factorization into "atoms" and can be implemented using the Euclidean algorithm. A similar algorithm works for rings that are unique factorization domains. To prevent this type of attack, we work over number fields that are not unique factorization domains (there are many suitable number fields of small degree). By doing so we invoke the well-known problem of prime factorization (used in commercial cryptosystems like RSA), which becomes substantially more complicated over number fields that are not unique factorization domains. We have also shown that similar systems based on finite diagonalizable groups are not secure, because such a system can be broken in polynomial time using an algorithm that finds a root of every polynomial p(x) with complex coefficients all of whose roots are roots of unity.
All invariants considered for diagonalizable groups are linear combinations of monomial invariants. The situation is more complicated for diagonalizable supergroups. Since it could improve the safety of the cryptosystem, we also investigated the case of supergroups and derived theoretical results about the minimal degrees of invariants. Since diagonalizable supergroups enjoy more complicated structural properties, a cryptosystem based on them would be even more secure. In order to move to supergroups, a better understanding of general linear supergroups was needed. We have established theoretical results describing and proving the linkage principle for these supergroups, and we gained an understanding of how the composition factors of highest weight modules are related. For future work, we plan to implement the algorithm for the public-key cryptosystem that we have designed, test the speed of coding, and test the security of the system by determining the time and space complexities of known attacks on the system.
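To illustrate the role of monomial invariants on a toy scale: for a one-dimensional diagonalizable (torus) group acting by t·x_i = t^{w_i}·x_i with integer weights w_i, a monomial x_1^{a_1}···x_n^{a_n} is invariant exactly when Σ a_i·w_i = 0, and choosing weights with large coprime entries forces the minimal invariant degree up. A brute-force sketch with toy parameters (the actual cryptosystem works over number fields and much larger groups, neither of which is shown here):

```python
from itertools import product

def min_invariant_degree(weights, max_deg=30):
    """Smallest total degree of a nonconstant monomial invariant under a
    1-dimensional torus acting with the given integer weights, i.e. the
    smallest d with exponents a_i >= 0, sum(a_i) = d, sum(a_i*w_i) = 0."""
    n = len(weights)
    for d in range(1, max_deg + 1):
        for exps in product(range(d + 1), repeat=n):
            if sum(exps) == d and sum(a * w for a, w in zip(exps, weights)) == 0:
                return d, exps
    return None

# Weights (3, -5): the smallest invariant is x1^5 * x2^3, of degree 8.
print(min_invariant_degree([3, -5]))  # (8, (5, 3))
```

In two variables the minimal degree grows like the sum of the (coprime) weight magnitudes, which is the toy analogue of "guaranteeing that the minimal degree of invariants is very high" so that linear-algebra searches over low-degree monomials become expensive.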
-
-
-
Fast Prototyping of KNN Based Gas Discrimination System on the Zynq SoC
Electronic noses (EN), or machine olfaction systems, are used for the detection and identification of odorous compounds and gas mixtures. The accuracy of such systems is as important as the processing time; therefore, the choice of the algorithm and the implementation platform are both crucial. In this abstract, a design and implementation of a gas identification system on the Zynq platform, which shows promising results, is presented. Zynq-7000 based platforms are increasingly being used in different applications, including image and signal processing. The Zynq system on chip (SoC) architecture combines a processing system based on a dual-core ARM Cortex processor with programmable logic (PL) based on Xilinx 7-series field programmable gate arrays (FPGAs). Using the Zynq platform, real-time hardware acceleration of classification algorithms can be performed on the PL and controlled by software running on the ARM-based processing system (PS). The gas identification system is based on a 16-array SnO2 in-house fabricated gas sensor and k-Nearest Neighbors (KNN) for classification. The KNN algorithm is executed on the PL for hardware acceleration. The implementation takes the form of an IP developed in C and synthesized using Vivado High Level Synthesis (HLS); the synthesis includes the conversion from C to register transfer level (RTL). The implementation requires the creation of a hardware design for the entire system that allows the execution of the IP on the PL and the remaining parts of the identification system on the PS. The hardware design is developed in Vivado using IP Integrator. The communication between the PS and PL is performed using the advanced extensible interface (AXI) protocol. A software application is written and executed on the ARM processor to control the hardware acceleration of the previously designed IP core on the PL, and the board is programmed using the Software Development Kit (SDK).
An overview of the system architecture can be seen in Figure 1. The system is designed to discriminate five types of gases, namely C6H6, CH2O, CO, NO2 and SO2, at various concentrations: from 0.25 to 5 parts per million (ppm) for C6H6 and CH2O, from 5 to 200 ppm for CO, from 1 to 10 ppm for NO2 and from 1 to 25 ppm for SO2. The experimental setup used in the laboratory to collect the data is shown in Figure 2. It consists of a gas chamber where the sensor array is placed. The gas chamber has two orifices, one serving as an input for the in-flow of gases and the other as an exhaust to evacuate the gases. The gases are stored in various cylinders and connected to the gas chamber individually through several Mass Flow Controllers (MFCs). A control unit is connected to the MFCs to control the in-flow of gases, and to the sensor array via a Data Acquisition (DAQ) system to collect and sample the response of the sensor array. In total, 192 samples were collected; 50% were used for training and the other 50% for testing. Simulations were performed in the MATLAB environment prior to the implementation on hardware, where different k values were tried. The Euclidean distance was used as the metric for computing distances between points. The best results were obtained for k = 1 and k = 2, with classification accuracies of 97.91% and 98.95%, respectively. The system implemented on hardware is based on k = 1, since the accuracies are very similar while the hardware resources required for k = 2 are much higher than for k = 1. This can be explained by the fact that for k = 2 we need to sort the vector of distances to find the nearest two neighbours, while for k = 1 we only need to find the smallest distance. The target hardware implementation platform of the proposed KNN is the heterogeneous Zynq SoC. The implementation is based on the use of Vivado HLS. A summary of the design flow is presented in Figure 3.
The starting point is Vivado HLS, where the KNN block is converted from a C/C++ implementation to an RTL-based IP core. This allows a considerable gain in development time without sacrificing high parallelism, because Vivado HLS provides a large number of powerful optimization directives. The generated IP core is then exported and stored in the Xilinx IP Catalog before being used in Vivado IP Integrator to create the hardware block design with all needed components and interconnections. The next step is to export the generated hardware along with IP drivers to the SDK tool. The SDK tool is used to program the Xilinx ZC702 prototyping board via the joint test action group (JTAG) interface, and the terminal in SDK is used to communicate with the board via the universal asynchronous receiver/transmitter (UART) interface. The KNN IP is implemented on the PL of the Zynq SoC and communicates with the PS part via the Xilinx AXI-Interconnect IP. Software written in C/C++ and executed on the PS manages the IP in the PL: it sends the input data, waits for the interrupt and then reads the output data. The block design and the resulting chip layout are shown in Figure 4. It is worth mentioning that the running frequency of the ARM processor is set to its maximum of 667 MHz, while the PL frequency is set to 100 MHz, which is the maximum for the KNN IP generated in HLS. The real execution of KNN on the PL side of the ZC702 board shows that one sample can be processed for gas identification in 0.0078 ms, while the same sample requires 0.9228 ms if executed on the PS side in the ARM processor in a pure software manner. This means that a speed-up of 118 times has been achieved. The main directive in Vivado HLS that helped reach this performance is "loop pipelining", which allows the operations in a loop to be implemented concurrently.
The hardware resource usage can be seen in Figure 5: 24% of lookup tables (LUT), 12% of flip-flops (FF), 6% of BRAM and 58% of DSP slices have been used. As shown in Figure 6, the total power consumption is 1.895 W, of which 1.565 W is consumed by the PS and the remaining 0.33 W by the PL.
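The classification core of the system, for k = 1 as implemented on the PL, reduces to finding the training sample at minimum Euclidean distance and returning its label. A minimal software sketch of the KNN vote (the feature vectors below are illustrative, not the actual 16-element sensor responses):

```python
def knn_classify(train_X, train_y, sample, k=1):
    """k-Nearest Neighbors with Euclidean distance: majority vote among
    the k training points closest to the sample. For k = 1 this is just
    the label of the minimum-distance neighbor."""
    # Squared distances suffice for ranking; no sqrt needed
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, sample)), y)
        for x, y in zip(train_X, train_y)
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Toy 2-feature responses for two gas classes
train_X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
train_y = ["CO", "CO", "NO2", "NO2"]
print(knn_classify(train_X, train_y, (0.15, 0.15)))        # CO
print(knn_classify(train_X, train_y, (0.85, 0.85), k=2))   # NO2
```

For k = 1 the sort can be replaced by a single minimum scan, which mirrors why the hardware version needs far fewer resources for k = 1 than for k = 2, where the distance vector must be partially sorted.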
-
-
-
Robust Controller and Fault Diagnoser Design for Linear Systems with Event-based Communication
Authors: Nader Meskin, Mohammadreza Davoodi and Kash Khorasani
In order to improve the effectiveness and safety of control systems, the problem of integrated fault diagnosis and control (IFDC) design has attracted significant attention in recent years, both in the research and in the application domains. The integrated design unifies the control and diagnosis units into a single unit, which leads to less complexity as compared to the case of separate designs. Nowadays, IFDC modules are implemented on digital platforms. However, in almost all of these implementations, the IFDC task is executed periodically with a constant sampling period, which is called "time-triggered" sampling. This scheme produces many useless messages when the current sampled signal has not significantly changed from the previous one, leading to wasteful usage of the communication bandwidth. This is especially disadvantageous in applications where the measured outputs and/or the actuator signals have to be transmitted over a shared (and possibly wireless) communication network, where the bandwidth of the network (and the power consumption of the wireless radios) is constrained. To mitigate the unnecessary waste of computation and communication resources in conventional time-triggered IFDC design, the problem of event-triggered integrated fault diagnosis and control (E-IFDC) for discrete-time linear systems is considered in this paper. A single E-IFDC module based on a dynamic filter is proposed which produces two signals, namely the residual and the control signals.
The parameters of the E-IFDC module should be designed such that the effects of disturbances on the residual signals are minimized (for accomplishing the fault detection objective), subject to the constraint that the mapping matrix function from the faults to the residuals is equal to a pre-assigned diagonal mapping matrix (for accomplishing the fault isolation objective), while the effects of disturbances and faults on the specified control output are minimized (for accomplishing the fault-tolerant control objective). Two event-triggered conditions are proposed and designed to reduce the transmissions from the sensor to the E-IFDC module and from the E-IFDC module to the actuator. These event-triggered conditions determine whether the newly measured data or control output, respectively, should be transmitted or not. Indeed, the sensor measurement (controller output) is sent to the E-IFDC module (actuator) only when the difference between the latest transmitted sensor (controller) value and the current sensor measurement (controller output) is sufficiently large as compared to the current sensor (controller) value. This property reduces the burden on network communication and saves communication bandwidth in the network. Consequently, it is possible to significantly reduce the usage of communication resources for diagnosis and control tasks as compared to a conventional time-triggered IFDC approach. A multi-objective formulation of the problem is presented based on the H∞ and H− performance indices. Sufficient conditions for solvability of the problem are obtained in terms of linear matrix inequality (LMI) feasibility conditions. Indeed, the filter parameters and the event-triggered conditions are simultaneously obtained using strict LMI conditions. The main advantage of the proposed LMI formulation is that it is convex, and it is therefore solved effectively using interior-point methods.
Application of our methodology to a linearized model of the Subzero III ROV is presented to illustrate its effectiveness and capabilities. Remotely operated vehicles (ROVs) are underwater robotic platforms that have become increasingly important tools in a wide range of applications, including offshore oil operations, fisheries research, dam inspection, salvage operations, and military applications. Since transmission resources are limited underwater, using an event-triggered scheme for communication is more efficient. Therefore, the results of this paper are applied to designing an event-triggered IFDC module for the Subzero III ROV.
Annotation Guidelines and Framework for Arabic Machine Translation Post-Edited Corpus
Authors: Wajdi Zaghouani, Nizar Habash, Ossama Obeid, Behrang Mohit, Houda Bouamor and Kemal Oflazer

1. Introduction
Machine translation (MT) has become widely used by translation companies to reduce their costs and improve their speed; therefore, the demand for quick and accurate machine translations is growing. However, MT systems often produce incorrect output with many grammatical and lexical choice errors. Correcting machine-produced translation errors, or MT post-editing (PE), can be done automatically or manually.
The availability of annotated resources is required for such approaches. When it comes to the Arabic language, to the best of our knowledge, there are no manually post-edited MT corpora available to build such systems. Therefore, there is a clear need to build such valuable resources for the Arabic language. In this abstract, we present our guidelines and annotation procedure to create a human-corrected MT corpus for Modern Standard Arabic (MSA). The creation of any manually annotated corpus usually presents many challenges. In order to address these challenges, we created comprehensive yet simple annotation guidelines, which were used by a team of five annotators and one lead annotator. In order to ensure high agreement between the annotators, multiple training sessions were held and regular inter-annotator agreement (IAA) measures were performed to check the annotation quality.
2. Corpus
We collected a corpus of 100K words of English news articles taken from the collaborative journalism website Wikinews. Afterwards, the collected corpus was automatically translated from English to Arabic using the paid Google Translate API service.
3. Guidelines
In order to annotate the MT corpus, we use the general annotation correction guidelines we designed previously for L1 texts, described in Zaghouani et al. (2014), and we add specific MT post-editing correction rules. In the general correction guidelines, we place the errors to be corrected into seven categories: spelling, word choice, morphology, syntax, proper names, dialectal usage and punctuation. We refer to Zaghouani et al. (2014) for more details about these errors. In the MT post-editing guidelines, we provide the annotators with a detailed annotation procedure and explain how to deal with borderline cases. We include many annotated examples to illustrate specific cases of machine translation correction rules. Since there are often several equally accurate ways to edit the machine translation output, all considered correct but some using fewer edits than others, the guidelines state that the machine-translated texts should be corrected with the minimum number of edits necessary to achieve an acceptable translation quality. However, correcting accuracy errors and producing a semantically coherent text is more important than minimizing the number of edits; therefore, the annotators were asked to pay attention to three aspects: accuracy, fluency and style.
4. Annotation Pipeline
The annotation team consisted of a lead annotator and six annotators. The lead annotator is also the annotation workflow manager of this project. He frequently evaluates the quality of the annotation and monitors and reports on the annotation progress. A clearly defined protocol is set, including a routine for post-editing annotation job assignment and inter-annotator agreement evaluation. The lead annotator is also responsible for the corpus selection and normalization process, besides annotating the gold-standard portion of the corpus used to compute the inter-annotator agreement (IAA).
The annotation itself is done using an in-house web annotation framework originally built for the manual correction of errors in L1 and L2 texts (Obeid et al., 2013). This framework includes two major components: 1. the annotation management interface, which assists the lead annotator in the general workflow process, allowing the user to upload, assign, monitor, evaluate and export annotation tasks; and 2. the MT post-editing annotation interface, the actual annotation tool, which allows the annotators to manually correct the Arabic MT output.
5. Evaluation
The low average WER of 4.92 shows a high agreement between the three annotators on the post-editing done in the first round. The results obtained on the MT corpus are comparable to those obtained on the L2 corpus; this can be explained by the difficult nature of both corpora and the multiple acceptable corrections in each.
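For reference, a word-level WER such as the figure above is conventionally computed as the word-level Levenshtein distance normalized by the reference length. The following is a minimal textbook sketch, not the project's actual evaluation code:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance divided by
    the number of reference words, as a percentage."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[len(r)][len(h)] / len(r)
```

For example, one substituted word in a four-word reference yields a WER of 25.0.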
6. Related Work
Large-scale manually corrected MT corpora are not yet widely available due to the high cost of building such resources. For the Arabic language, we cite the effort of Bouamor et al. (2014), who created a medium-scale human judgment corpus of Arabic machine translation using the output of six MT systems, with a total of 1892 sentences and 22k rankings. Our corpus is part of the Qatar Arabic Language Bank (QALB) project, a large-scale manual annotation project (Zaghouani et al., 2014; Zaghouani et al., 2015). The project goal was to create an error-corrected 2M-word corpus covering online user comments on news websites, native speaker essays, non-native speaker essays and machine translation output.
7. Conclusion
We have presented in detail the methodology used to create a 100K-word English-to-Arabic manually post-edited MT corpus, including the development of the guidelines as well as the annotation procedure and the quality control procedure using frequent inter-annotator measures. The guidelines will be made publicly available, and we look forward to distributing the post-edited corpus in a planned shared task on automatic error correction and to getting feedback from the community on its usefulness, as was the case for the previous shared tasks we organized for the L1 and L2 corpora (Mohit et al., 2014; Rozovskaya et al., 2015). We believe that this corpus will be valuable for advancing research efforts in machine translation, since manually annotated data is often needed by the MT community. We believe that our methodology for guideline development and annotation consistency checking can be applied in other projects and other languages as well.
8. Acknowledgement
This project is supported by the National Priority Research Program (NPRP grant 4-1058-1-168) of the Qatar National Research Fund (a member of the Qatar Foundation). The statements made herein are solely the responsibility of the authors.
9. References
Obeid, O., Zaghouani, W., Mohit, B., Habash, N., Oflazer, K., and Tomeh, N. (2013). A Web-based Annotation Framework for Large-Scale Text Correction. In The Companion Volume of the Proceedings of IJCNLP 2013: System Demonstrations, Nagoya, Japan, October.
Mohit, B., Rozovskaya, A., Habash, N., Zaghouani, W., and Obeid, O. (2014). The First QALB Shared Task on Automatic Text Correction for Arabic. In ANLP 2014, page 39.
Rozovskaya, A., Bouamor, H., Habash, N., Zaghouani, W., Obeid, O., and Mohit, B. (2015). The Second QALB Shared Task on Automatic Text Correction for Arabic. In Proceedings of the ACL 2015 Workshop on Arabic Natural Language Processing (ANLP), Beijing, China, July.
Zaghouani, W., Mohit, B., Habash, N., Obeid, O., Tomeh, N., Rozovskaya, A., Farra, N., Alkuhlani, S., and Oflazer, K. (2014). Large-scale Arabic Error Annotation: Guidelines and Framework. In International Conference on Language Resources and Evaluation (LREC 2014).
Zaghouani, W., Habash, N., Bouamor, H., Rozovskaya, A., Mohit, B., Heider, A., and Oflazer, K. (2015). Correction Annotation for Non-native Arabic Texts: Guidelines and Corpus. In Proceedings of the 9th Linguistic Annotation Workshop, pages 129-139.
Crowd Inventing: An Innovation about Innovation
This paper presents the blueprint for the design of a practical system that would promote sharing of ideas among researchers, to allow them to identify optimal partners, to protect their intellectual capital, to ensure attribution of their ideas and to create equitable sharing in the ownership and revenue of any eventual commercializable invention. The title, Crowd Inventing, reflects the fact that the signaling among researchers in search of best partners – the “crowd” – is itself an inventive element in their larger enterprise. The combination of legal and technological components that the Crowd Inventing system offers allows it to reduce the transaction costs of this search. It is an invention about the inventive process that promises theoretical and practical advantages that will hopefully attract research sponsors or private enterprise to invest in promising projects and to thereby better promote and reward innovation. Qatar's rich research environment and entrepreneurial aspirations make it an ideal forum to implement the Crowd Inventing system as a platform that will allow it to capitalize on its investments through commercialization of products and processes. Crowd Inventing is designed to help researchers and innovators:
• Find valuable complementary ideas and research collaborators that are missing from their research team and whose absence threatens to impair or, worse still, cripple their project. The drive to commercialize research, as well as the basic goal of ensuring attribution of one's research, confronts huge challenges when all the inventive elements are not part of one large enterprise that has designed its own proprietary signaling and invention protection system. The open community of researchers who publish in conferences and journals confronts these challenges, with the result that researchers either engage in greater secrecy, which limits communication and impairs collaboration, or, if they do publish, risk losing all attribution and commercial value to the inventor who builds a successful product on their ideas.
• Address the inability of intellectual property to protect against a downstream user's failure to attribute or share revenue. Intellectual property imposes little or no legal obligation on such users to attribute the source of the ideas they employ. It also focuses all reward on the last person to combine the inventive ideas into a final, commercializable product; there is no legal requirement to share revenues with contributors who fall outside its corporate or contractual network. Neither copyright nor trademark protects the functional ideas that a researcher discloses in his/her publications. These ideas are deemed to pass into the public domain and become free for competitors to use. Nor do they provide the researcher adequate protection against false attribution. As a result, a downstream user can pluck the idea from the public domain, use it without attributing its source, and even claim full attribution itself.
• Reach their patent goal. Patent law provides researchers with protection for their novel functional ideas, but its reduction-to-practice requirement means that the proprietary reward it offers is very distant from many research projects. In many areas of innovation the authors lack the collaboration and capital to put together all of the pieces, with the result that the entity adding the last needed element can capture the full commercial benefit of the invention. In a world where increasingly complex projects result in cost-prohibitive and prolonged research and a demanding search for collaborators to stay the course to eventual invention, the patent hurdle produces huge unintended consequences. Increased secrecy is one consequence, and this has strong negative effects on publication and on the signaling required to find the missing elements for successful invention. Large enterprises whose rich financial and human capital is congregated in a single corporate silo often emerge as the winners in this environment, and when they do achieve a patent they have the financial resources to protect and defend their resulting property rights. However, academic or dispersed research communities do not typically have these resources or systems in place to compete. Even though their decentralized structure and flexibility mean they are increasingly the sites of path-breaking discovery, they are unable to successfully achieve patenting, and their ideas pass to others to commercialize or fade into obscurity. The proposed Crowd Inventing system crafts a legal and technological solution that offers the following practical solutions to research and innovation problems:
• The signaling mechanism needed for successful collaboration; more effective sharing of ideas; full attribution to all contributors; equitable sharing in the rewards of patenting; and the enhanced ability to attract financial investment to projects. Researchers that subscribe to Crowd Inventing would contractually enter the system through a master agreement that defines and protects members' attribution and revenue-sharing rights. Underlying this is a technological information-sharing platform whereby data is shared in standardized form and access is monitored and controlled using a system of digital management that would help secure and control information flows. The researcher submitting the information (the source) could track access to its information and ensure attribution, while the user could more easily verify the lineage of ideas and link it more closely to the reputation of its source. Thereby both sides could gain. The Crowd Inventing master agreement would also supply contractual templates to provide standardized resolution of the revenue-sharing aspects of any resulting joint ventures, and its trained intermediaries could facilitate agreements, either of which could be less expensive than employing the lawyers and other professionals that typically exact heavy taxes on every venture or technology transfer transaction.
• The means to find out the conditions under which the research data was generated and to identify and approach the source through the clearing function of the information-sharing system. Once the parties had identified themselves they could begin to work together to learn more about one another's work and address the terms of any relationship between them based on this information. The Crowd Inventing system anticipates the contractual needs of their relationship, facilitating an ensuing master agreement that avoids the potentially long delays of initial contracting. (If necessary, the identity of the source could be held back until a formal approach is made by the interested party to the source.)
• The apparatus to more successfully monitor and publish the reputations of the members of its user community. Crowd Inventing would facilitate a community where not only the reputation of products (academic or applied) could be charted by registering the number of transactions, but also the reputations of companies and academic labs as joint venture partners (their good faith and candor) could be logged thereby creating a system of accountability and verifiable reputation.
• The means to create a “market” based on failed experiments conducted by other researchers in related areas. While the Crowd Inventing system is designed principally to promote maximum innovation and successful invention, it could also be adapted to create a “market” based on failed or stalled experiments conducted by other researchers in related areas. Currently there is a dearth of communication within the scientific community concerning unsuccessful experiments and failed hypotheses. Only successful experiments are published; scientists do not expose their failures, perhaps out of fear that doing so will lower their prestige and because of the lack of an appropriate, widely disseminated forum for this purpose. The Crowd Inventing system could be used to address sharing of information about failed or shelved experiments as readily as about successful experiments, and in the process it could create market value for such information.
• A useful complement to the internal governance structure of large corporations. While it is contemplated that the Crowd Inventing system would be ideally suited to communities of scattered researchers or small independent companies, it could also be employed as a useful complement to the internal governance structure of large corporations. Such enterprises confront, at the level of their inter-departmental relationships, problems of how to share information between departments, how to attribute employee contributions, and what budgets and compensation to set for each department, all of which the Crowd Inventing system could address. In summary, Crowd Inventing aspires to offer the research community a solution to the impediments to collaborative communication and to the inequities of an intellectual property system that rewards the last contributor and fails to protect attribution of prior inputs. In the process it promises to help researchers more easily find the collaborators, commercial funding and other resources they need to reach invention. It is itself an academic-conceived invention that, with adequate community or commercial funding, could become a reality that makes Qatar a leader in facilitating innovation. This proposal is led by Professor Clinton Francis, the Founding Dean of the Hamad bin Khalifa University, who is joined by HBKU Juris Doctorate students who will assist in conducting the research for, and design of, Crowd Inventing: an innovation about innovation.
Analysis of In-band Full-Duplex OFDM Signals Affected by Phase Noise and I/Q Imbalance
Authors: Lutfi Samara, Ozgur Ozdemir, Mohamed Mokhtar, Ridha Hamila and Tamer Khattab

The idea of the simultaneous transmission and reception of data using the same frequency is a potential candidate for deployment in the next-generation wireless communications standard, 5G. The In-Band Full-Duplex (IBFD) concept theoretically doubles the spectral efficiency and reduces medium access control (MAC) signaling, which will improve the overall throughput of a wireless network. However, IBFD radios suffer from loopback self-interference (LSI), a major drawback that hinders the full exploitation of the potential benefits this system is capable of offering. Recently, there has been an increased interest in modeling and analyzing the effect of LSI on the performance of an IBFD communication system, as well as in developing novel LSI mitigation techniques at the radio-frequency (RF) front-end and/or at the baseband stage.
LSI mitigation is approached in three different ways. The first approach is a propagation-domain approach, where the transmitter and receiver antennas of the full-duplex node are designed in a manner that minimizes the interference between them. Although this method seems promising, the risk of nulling the received signal of interest is always present. Motivated by this risk, researchers have resorted to the use of analog circuitry to regenerate the LSI effect by adjusting the gain, phase and delay of the known transmitted data to mimic the effect of the LSI channel on the transmitted data, and finally subtracting the estimated signal from the received signal. However, this turns out to be a formidable task, since the surrounding environment of the full-duplex node is always varying and the LSI channel variations are difficult to track using analog circuit components. Both of these approaches are classified as passive LSI mitigation approaches, given that they lack the ability to adapt to the constantly varying LSI channel. To overcome this drawback, a third technique of LSI cancellation is implemented, where the complex implementation of an adaptive LSI mitigation technique is moved to the digital domain: the receiver actively updates its estimate of the LSI channel depending on the performance of the communication system, combines it with the known transmit data, and subtracts the result from the received signal. Given that the LSI mitigation process can be easily performed in the digital domain using digital signal processing (DSP) algorithms, one might ask: why isn't the whole LSI mitigation process performed in the digital domain? The answer is that the signal entering the analog-to-digital converter (ADC) is limited by the ADC's dynamic range.
Consequently, a combination of the three aforementioned LSI mitigation techniques must be deployed towards the implementation of an efficient and reliable IBFD communication node.
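The digital-domain step described above can be sketched with a least-mean-squares (LMS) adaptive filter that tracks the loopback channel from the known transmit samples and subtracts its estimated contribution from the received signal. This is a generic real-valued toy illustration, not the authors' algorithm: the filter length, step size and channel taps below are illustrative assumptions.

```python
import random

def lms_cancel(tx, rx, taps=4, mu=0.05):
    """Adaptive digital LSI cancellation (LMS sketch): estimate the
    loopback channel from the known transmit samples and subtract
    its output from the received signal."""
    w = [0.0] * taps          # current loopback-channel estimate
    residual = []
    for n in range(len(rx)):
        # regressor: the last `taps` known transmit samples
        x = [tx[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        e = rx[n] - y_hat     # residual after cancellation
        residual.append(e)
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # LMS update
    return residual, w

# Known transmit samples pass through an unknown 2-tap loopback channel.
random.seed(0)
tx = [random.uniform(-1, 1) for _ in range(2000)]
h = [0.8, -0.3]               # illustrative "true" LSI channel
rx = [h[0] * tx[n] + (h[1] * tx[n - 1] if n > 0 else 0.0)
      for n in range(len(tx))]
residual, w = lms_cancel(tx, rx)
```

After convergence the leading filter taps approach the true channel and the residual LSI shrinks toward zero; in practice the residual floor is set by impairments such as the PHN and IQI analyzed in this work.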
Orthogonal frequency division multiplexing (OFDM) is the preferred modulation scheme adopted by many wireless communication standards. Its implementation using a direct-conversion receiver architecture, which is favored over its super-heterodyne counterpart, suffers from inter-carrier interference (ICI) introduced by RF impairments such as oscillator phase noise (PHN) and in-phase/quadrature-phase imbalance (IQI). PHN's effect is manifested in the spread of the energy of an OFDM subcarrier over its neighboring subcarriers, while IQI introduces ICI between image subcarriers. In this work, we analyze the joint effect of PHN and IQI on the process of LSI mitigation in an IBFD communication scenario. The analysis is performed to yield the average per-subcarrier residual LSI signal power after the final stage of digital LSI cancellation. The analysis shows that, even with perfect knowledge of the LSI channel-state information, the residual LSI power is still considerably high, and more sophisticated LSI mitigation algorithms must be designed to achieve a better-performing IBFD communication scheme.

Acknowledgement: This work was made possible by GSRA grant # GSRA2-1-0601-14011 from the Qatar National Research Fund (a member of Qatar Foundation). The findings achieved herein are solely the responsibility of the authors.
A System for Big Data Analytics over Diverse Data Processing Platforms
Authors: Jorge Quiane, Divy Agrawal, Sanjay Chawla, Ahmed Elmagarmid, Zoi Kaoudi, Mourad Ouzzani, Paolo Papotti, Nan Tang and Mohammed Zaki

Data analytics is at the core of any organization that wants to obtain measurable value from its growing data assets. Data analytic tasks may range from simple to extremely complex pipelines, such as data extraction, transformation and loading, online analytical processing, graph processing, and machine learning (ML). Following the dictum “one size does not fit all”, academia and industry have embarked on a race to develop data processing platforms supporting all of these different tasks, e.g., DBMSs and MapReduce-like systems. Semantic completeness, high performance and scalability are key objectives of such platforms. While there have been major achievements toward these objectives, users are still faced with many roadblocks.
MOTIVATING EXAMPLE
The first roadblock is that applications are tied to a single processing platform, making the migration of an application to new and more efficient platforms a difficult and costly task. As a result, the common practice is to re-implement an application on top of a new processing platform; e.g., Spark SQL and MLlib are the Spark counterparts of Hive and Mahout. The second roadblock is that complex analytic tasks usually require the combined use of different processing platforms where users will have to manually combine the results to draw a conclusion.
Consider, for example, the Oil & Gas industry and the need to produce reports by using SQL or some statistical method to analyze the data. A single oil company can produce more than 1.5TB of diverse data per day. Such data may be structured or unstructured and come from heterogeneous sources, such as sensors, GPS devices, and other measuring instruments. For instance, during the exploration phase, data has to be acquired, integrated, and analyzed in order to predict whether a reservoir would be profitable. Tens of thousands of downhole sensors in exploratory wells produce real-time seismic structured data for monitoring resources and environmental conditions. Users integrate these data with the physical properties of the rocks to visualize volume and surface renderings. From these visualizations, geologists and geophysicists formulate hypotheses and verify them with ML models, such as regression and classification. Training of the models is performed with historical drilling and production data, but oftentimes users also have to go over unstructured data, such as notes exchanged by email or text from drilling reports filed in a cabinet. Therefore, an application supporting such a complex analytic pipeline should access several sources of historical data (relational, but also text and semi-structured), remove the noise from the streaming data coming from the sensors, and run both traditional analytics (such as SQL) and statistical analytics (such as ML algorithms).
RESEARCH CHALLENGES
Similar examples can be drawn from other domains such as healthcare: e.g., IBM reported that North York hospital needs to process 50 diverse datasets, which reside on a dozen different internal systems. These applications show the need for complex analytics coupled with a diversity of processing platforms, which raises several challenges. These challenges relate to the choices users face about where to process their data, with possibly orders-of-magnitude differences in performance between choices. For example, one may aggregate large datasets with traditional queries on top of a relational database such as PostgreSQL, but the subsequent analytic tasks might run much faster on Spark. However, users have to be intimately familiar with the intricacies of the processing platform to achieve high efficiency and scalability. Moreover, once a decision is taken, users may still end up tied to a particular platform. As a result, migrating the data analytics stack to a different, more efficient processing platform often becomes a nightmare. In the above example, one has to re-implement the myriad of PostgreSQL-based applications on top of Spark.
RHEEM VISION
To tackle these challenges, we are building RHEEM, a system that provides both platform independence and interoperability across multiple platforms. RHEEM acts as a proxy between user applications and existing data processing platforms. It is fully based on user-defined functions (UDFs) to provide adaptability as well as extensibility. The major advantages of RHEEM are its ability to free applications and users from being tied to a single data processing platform (platform independence) and to provide interoperability across multiple platforms (multi-platform execution).
RHEEM exposes a three-layer data processing abstraction that sits between user applications and data processing platforms (e.g., Hadoop or Spark). The application layer models all application-specific logic; the core layer provides the intermediate representation between applications and processing platforms; and the platform layer embraces all processing platforms. In contrast to DBMSs, RHEEM decouples the physical and execution levels. This separation allows applications to express physical plans in terms of algorithmic needs only, without being tied to a particular processing platform. The communication among these levels is enabled by operators defined as UDFs. Providing platform independence is the first step towards realizing multi-platform task execution. RHEEM can receive a complex analytic task, seamlessly divide it into subtasks, and choose the best platform on which each subtask should be executed.
RHEEM ARCHITECTURE
This three-layer separation allows applications to express a physical plan in terms of algorithmic needs only, without being tied to a particular processing platform. We detail these layers below.
Application Layer. A logical operator is an abstract UDF that acts as an application-specific unit of data processing. In other words, one can see a logical operator as a template whereby users provide the logic of their analytic tasks. Such abstraction enables both (i) ease-of-use by hiding all the implementation details from users, and (ii) high performance by allowing several optimizations, e.g., seamless distributed execution. A logical operator works on data quanta, which are the smallest units of data elements from the input datasets. For instance, a data quantum represents a tuple in the input dataset or a row in a matrix. This fine-grained data model allows RHEEM to apply a logical operator in a highly parallel fashion and thus achieve better scalability and performance.
Example 1: Consider a developer who wants to offer end users logical operators to implement various machine learning algorithms. The developer defines five such operators: (i) Transform, for normalizing input datasets, (ii) Stage, for initializing algorithm-specific parameters, e.g., initial cluster centroids, (iii) Compute, for computations required by the ML algorithm, e.g., finding the nearest centroid of a point, (iv) Update, for setting global values of an algorithm, e.g., centroids, for the next iteration, and (v) Loop, for specifying the stopping condition. Users implement algorithms such as SVM, K-means, and linear/logistic regression, using these operators.
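Example 1 can be made concrete with a one-dimensional K-means expressed through the five logical operators. The functions below are hypothetical stand-ins for the operator templates, not RHEEM's actual API, and the data and initial centroids are illustrative:

```python
# Hypothetical sketch of the five logical operators applied to K-means.

def transform(point):            # Transform: normalize a data quantum
    return point                 # (identity normalization for this sketch)

def stage():                     # Stage: algorithm-specific initialization
    return [0.0, 10.0]           # initial cluster centroids

def compute(point, centroids):   # Compute: nearest-centroid assignment
    return min(range(len(centroids)), key=lambda i: abs(point - centroids[i]))

def update(points, labels, k):   # Update: recompute each centroid
    return [sum(p for p, l in zip(points, labels) if l == i) /
            max(1, sum(1 for l in labels if l == i)) for i in range(k)]

def loop(old, new, eps=1e-6):    # Loop: stopping condition (keep iterating?)
    return max(abs(a - b) for a, b in zip(old, new)) > eps

points = [transform(p) for p in [1.0, 1.5, 2.0, 9.0, 9.5, 10.0]]
centroids = stage()
while True:
    labels = [compute(p, centroids) for p in points]
    new_centroids = update(points, labels, len(centroids))
    if not loop(centroids, new_centroids):
        break
    centroids = new_centroids
```

Because each operator works on individual data quanta (here, single points), the Transform and Compute steps can be applied in parallel across the input, which is the scalability property the abstraction is designed for.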
The application optimizer translates logical operators into physical operators that will form the physical plan at the core layer.
Core Layer. This layer exposes a pool of physical operators, each representing an algorithmic decision for executing an analytic task. A physical operator is a platform-independent implementation of a logical operator. These operators are available to the developer to deploy a new application on top of RHEEM. Developers can still define new operators as needed.
Example 2: In the above ML example, the application optimizer maps Transform to a Map physical operator and Compute to a GroupBy physical operator. RHEEM provides two different implementations for GroupBy: the SortGroupBy (sort-based) and HashGroupBy (hash-based) operators from which the optimizer of the core level will have to choose.
Once an application has produced a physical plan for a given task, RHEEM divides the physical plan into task atoms, i.e., sub-tasks, which are the units of execution. A task atom (a part of the execution plan) is a sub-task to be executed on a single data processing platform. RHEEM then translates the task atoms into an execution plan by optimizing each task atom for a target platform. Finally, it schedules each task atom to be executed on its corresponding processing platform. Therefore, in contrast to DBMSs, RHEEM produces execution plans that run on multiple data processing platforms.
Platform Layer. At this layer, execution operators define how a task is executed on the underlying processing platform. An execution operator is the platform-dependent implementation of a physical operator. RHEEM relies on existing data processing platforms to run input tasks. In contrast to a logical operator, an execution operator works on multiple data quanta rather than a single one. This enables the processing of multiple data quanta with a single function call, hence reducing overhead.
Example 3: Again in the above ML example, the MapPartitions and ReduceByKey execution operators for Spark are one way to perform Transform and Compute.
Defining mappings between execution and physical operators is the developers' responsibility whenever a new platform is plugged into the core. In the current prototype of RHEEM, the mappings are hard-coded. Our goal is to rely on a mapping structure that models the correspondences between operators together with context information. Such context is needed for the effective and efficient execution of each operator. For instance, the Compute logical operator maps to two different physical operators (SortGroupBy and HashGroupBy); in this case, a developer could use the context to give the optimizer hints for choosing the right physical operator at runtime. Developers will provide only a declarative specification of such mappings; the system will use them to translate physical operators into execution operators. A simple and extensible operator mapping is crucial, as it enables developers to easily provide extensions and optimizations via new operators.
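A declarative mapping of this kind could be sketched as follows; the table format, hint keys, and selection logic are illustrative assumptions, not RHEEM's actual interface:

```python
# Hypothetical sketch of a declarative operator-mapping table with context
# hints. The names mirror the operators discussed in the text.

MAPPINGS = {
    # logical operator -> candidate physical operators with context hints
    "Transform": [("Map", {})],
    "Compute":   [("SortGroupBy", {"prefer_when": "sorted_input"}),
                  ("HashGroupBy", {"prefer_when": "unsorted_input"})],
}

def choose_physical(logical_op, context):
    """Pick a physical operator for a logical one using context hints."""
    candidates = MAPPINGS[logical_op]
    for phys, hints in candidates:
        want = hints.get("prefer_when")
        if want is not None and want == context.get("input_property"):
            return phys
    return candidates[0][0]  # default: first registered implementation
```

For example, a task whose input is known to be unsorted would be steered to the hash-based implementation, while the first registered candidate serves as the default when no hint applies.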
PRELIMINARY RESULTS
We have implemented two applications on top of RHEEM: one for data cleaning, BigDansing [1], and one for machine learning. The performance of both applications is encouraging and already demonstrates the advantages of our vision.
Our results show that, in both cases, RHEEM enables orders-of-magnitude better performance than the baseline systems. These improvements come from a series of optimizations performed at the application layer as well as at the core layer. As an example of an optimization at the core layer, we extended the set of physical operators with a new physical operator for joins, called IEJoin [2]. This new physical operator provides a fast algorithm for joins containing only inequality conditions.
REFERENCES:
[1] Z. Khayyat, I. F. Ilyas, A. Jindal, S. Madden, M. Ouzzani, P. Papotti, J.-A. Quiané-Ruiz, N. Tang, and S. Yin. BigDansing: A System for Big Data Cleansing. In ACM SIGMOD, pages 1215-1230, 2015.
[2] Z. Khayyat, W. Lucia, M. Singh, M. Ouzzani, P. Papotti, J.-A. Quiané-Ruiz, N. Tang, and P. Kalnis. Lightning Fast and Space Efficient Inequality Joins. PVLDB, 8(13):2074-2085, 2015.
Secure Communications Using Directional Modulation
Authors: Mohammed Hafez, Tamer Khattab, Tarek El-Fouly and Hüseyin Arslan
Limitations on the wireless communication resources (i.e., time and frequency) introduce the need for another domain, such as the spatial domain offered by multiple antennas, that can help communication systems match the increasing demand for high data transfer rates and quality of service (QoS). Besides, the widespread use of wireless technology and its ease of access make the privacy of the information transferred over the wireless network questionable. Given the drawbacks of traditional ciphering algorithms, physical layer security arises as a solution to overcome this problem.
Multiple-antenna systems offer more resources (i.e., degrees of freedom) which can be used to achieve secure communication. One recently developed technique that makes use of directive antenna arrays to provide secrecy is Directional Modulation (DM).
In DM, the antenna pattern is treated as a spatial complex constellation, but it is not used as a source of information. The complex value of the antenna pattern at a certain desired direction is set to the complex value of the symbol to be transmitted. This scheme also randomizes the signal in the undesired directions, thus providing a source of directional security. Contrary to regular beamforming, which provides directional power scaling, the DM technique is applied at the transmitter by projecting digitally encoded information signals into a pre-specified spatial direction while simultaneously distorting the constellation formats of the same signals in all other directions.
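The core DM idea, fixing the array factor to the transmitted symbol in the desired direction while randomizing it elsewhere, can be sketched numerically. This is a generic textbook-style construction, not the authors' MDDM scheme; the half-wavelength uniform linear array model and the weight decomposition below are assumptions:

```python
import cmath
import math
import random

def steering(theta, n_elem):
    # Half-wavelength ULA steering vector at angle theta (radians)
    return [cmath.exp(1j * math.pi * n * math.sin(theta)) for n in range(n_elem)]

def dm_weights(symbol, theta0, n_elem, rng):
    """Antenna weights whose array factor equals `symbol` at theta0 and is
    randomized in other directions (a generic DM construction)."""
    a = steering(theta0, n_elem)
    # Deterministic part: carries the symbol toward the desired direction
    w = [symbol * ai / n_elem for ai in a]
    # Random part orthogonal to a(theta0): distorts only other directions
    r = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n_elem)]
    proj = sum(ai.conjugate() * ri for ai, ri in zip(a, r)) / n_elem
    return [wi + ri - proj * ai for wi, ri, ai in zip(w, r, a)]

def array_factor(w, theta, n_elem):
    # Received constellation point in direction theta: a(theta)^H w
    a = steering(theta, n_elem)
    return sum(ai.conjugate() * wi for ai, wi in zip(a, w))
```

Because the random component is orthogonal to the desired steering vector, a receiver in the intended direction sees the clean symbol, whereas any other direction observes a scrambled constellation.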
In our previous work, we introduced the Multi-Directional DM transmission scheme (MDDM). Using MDDM, we were able to provide multiple secure communication links in different directions. We showed that the scheme increases the transmission capacity of the system up to the number of antenna elements, and that the secrecy capacity increases with the number of transmitted streams. Moreover, MDDM has a low-complexity structure compared to other DM implementations and does not necessitate special receiver algorithms.
Until now, DM has only been discussed from the algorithm-construction perspective and, to the best of the authors' knowledge, there has been no study of the employment of DM algorithms at the system level. Hereby, we introduce a multi-user access system-level design that uses MDDM as a transmission technique. The new design utilizes the dispersive nature of the channel to provide a location-based secure communication link to each of the legitimate users. The scheme shows the ability to highly degrade the eavesdropper's channel, even in the worst-case scenarios, and the amount of degradation increases with the number of users in the system. We also derive the achievable secrecy rate and secrecy outage probability of the scheme; the secrecy analysis shows that the proposed system is always able to achieve a positive secrecy rate with high probability. Finally, we compare the performance of this scheme with that of Artificial Noise (AN) precoding, as the two share the same assumption about channel knowledge. The results show that the DM scheme outperforms the ordinary AN scheme, while having a simpler hardware and processing structure.
Building a Global Network of Web Observatories to Study the Web: A Case Study in Integrated Health Management
Authors: Wendy Hall, Thanassis Tiropanis, Ramine Tinati and Xin Wang
The Web is barely 25 years old, but in that time it has changed every aspect of our lives. Because of its sociotechnical rather than purely engineered nature, not only is the Web changing society, but we also shape the way the technology evolves. The whole process is inherently co-constituted, and as such its evolution is unlike that of any other system. In order to understand how the Web might evolve in the future - for good or bad - we need to study how it has evolved since its inception and the associated emergent behaviours. We call this new research discipline Web Science [1,2], and it is important for all our futures that we urgently address its major research challenges.
We are fast becoming part of a world of digital inter-connectivity, where devices such as smartphones, watches, fitness trackers, and household goods are part of a growing network capable of sharing data and information. Increasingly, the Web has become the ubiquitous interface for accessing this network of devices. From sensors to mobile applications to fitness devices, these devices are transmitting their data to - often - centralised pools of data, which then become available via Web services. The sheer scale of these data leads to a rich set of high-volume, real-time streams of human activity, which are often made publicly consumable (potentially at a cost) via some API. For academia, the combination of these sources is providing social scientists and digital ethnographers with a far richer understanding of society and of how we as individuals operate.
These streams represent a global network of human and machine communication, interaction, and transaction, and with the right analytical methods, may contain valuable research and commercial insights. In domains such as health and fitness, for example, the aggregation of data from mobile devices is supporting the transition towards the quantified self, and offers rich insight into the health and well-being of individuals, with the potential to help diagnose disease or reduce its incidence.
Why the need for Web Observatories?
Studying the Web provides us with critical insights about how we as individuals and as a society operate in the digital world. The actions, communications, interactions, and transactions produced by humans and machines have the potential to offer rich insight into our world, allowing us to better understand how we operate at both the micro and macro scale. However, there are a number of barriers that prevent researchers from making the most of those data resources.
Herein lies a challenge, and a great opportunity. We are now in a position where the technologies used within the big data processing pipeline are maturing, as are the methods we use to analyse data to provide valuable insights. Yet, overshadowing these benefits are issues of data access, control, and ownership. Whilst the data being produced continue to grow, their limited availability beyond the walled gardens of the data holders - whether commercial or institutional - reduces the full potential of analysis envisaged in the big data era.
These barriers include the following: (a) datasets are distributed across different domains; (b) metadata about those datasets are not available or are in different vocabularies/formats; (c) searching for or inside datasets is not possible; (d) applying analytics to one or more datasets requires copying them into a central location; (e) datasets are often provided in the context of specific disciplines, lacking the metadata and enrichment that could make them usable in other disciplines; and (f) the nature of some datasets often requires access control in the interest of privacy. Consequently, (g) there is a need for engines that lower the barrier to engagement with analytics for individuals, organisations and interdisciplinary research communities by supporting the easy application of analytics across datasets without requiring them to be copied into a central location; and (h) there is a need for services enabling the publication and sharing of analytical tools within and across interdisciplinary communities.
Addressing the challenges described above, we have introduced the Web Observatory [3], a globally distributed infrastructure that enables users to share data with each other whilst retaining control over who can view, access, query, and download their data. At its core, a Web Observatory comprises a set of architectural principles that describe a scalable solution for controlled access to heterogeneous forms of historical and real-time data, visualisations, and analytics. In order to handle these new forms of big, and small, data, significant effort has gone into developing technologies capable of storing, querying, and analysing high-volume datasets - or streams - in a timely fashion, returning useful insights into social activity and behaviour.
A Global Network of Web Observatories
The Web Observatory (WO) project, developed under the auspices of the Web Science Trust, aims to develop the standards and services that will interlink a number of existing or emergent Web Observatories to enable the sharing, discoverability and use of public or private datasets and analytics across Web Observatories, on a large, distributed scale (http://online.liebertpub.com/doi/abs/10.1089/big.2014.0035). It involves the publication or sharing of both datasets and analytic or visualisation tools (http://webscience.org/web-observatory/). At the same time, it involves the development of appropriate standards to enable the discovery, use, combination and persistence of those resources; effort in the direction of standards is already underway in the W3C Web Observatory community group (http://www.w3.org/community/webobservatory/).
International research collaboration is one of the primary goals of creating a network of Web Observatories, and there has already been significant effort in creating a number of Web Observatory nodes globally [4,5].
In this paper we describe an instance of the Web Observatory, the Southampton Web Observatory (SUWO), and how it is being applied both at Southampton and at other institutions in areas such as integrated health management, in particular in support of an ageing population.
We believe that the true potential of the Web Observatory vision will be realised when the different observatories become part of a global network - a World Wide Web of observatories - allowing cross-observatory querying and analysis. By working through a set of initial application areas, we will show the immediate value that the Web Observatory platform provides, from the sharing of datasets and resources to improved international collaboration and research opportunities as a result of the raised awareness of institutional resources.
References
[1] Berners-Lee, T., Hall, W., Hendler, J., Shadbolt, N. and Weitzner, D. Creating a Science of the Web. Science, 313(5788), 769-771, 2006.
[2] Hendler, J., Shadbolt, N., Hall, W., Berners-Lee, T. and Weitzner, D. Web Science: An Interdisciplinary Approach to Understanding the Web. Communications of the ACM, 51(7), 60-69, 2008.
[3] Tiropanis, T., Hall, W., Shadbolt, N., De Roure, D., Contractor, N. and Hendler, J. The Web Science Observatory. IEEE Intelligent Systems, 28(2), 100-104, 2013.
[4] Tinati, R., Wang, X., Tiropanis, T. and Hall, W. Building a Real-Time Web Observatory. IEEE Internet Computing (in press).
[5] Wang, X., Tinati, R., Mayer, W., Rowland-Campbell, A., Tiropanis, T., Brown, I., Hall, W. and O'Hara, K. Building a Web Observatory for South Australian Government: Supporting an Age Friendly Population. In 3rd International Workshop on Building Web Observatories (BWOW), 10pp.
Internet of Things Security: We're Walking on Eggshells!
By Aref Meddeb
Since the Internet of Things (IoT) will be entwined with everything we use in our daily life, the consequences of security flaws escalate. Smart objects will govern most home appliances and car engines, yielding potential disaster scenarios: successful attacks could lead to chaos (www.darkreading.com). Unprotected personal information may expose sensitive and embarrassing data to the public, and attacks may threaten not only our computers and smart devices, but also our intimacy and perhaps our lives too.
Because persons and objects will be bonded with each other, user consent becomes critical. Therefore, thing, object, and user identity will be the focus of future IoT security solutions, yielding a Trust, Security, and Privacy (TSP) paradigm, which may constitute the Achilles' heel of IoT.
While security issues are quite straightforward, building mostly on established background knowledge, privacy issues are far more complex. Ensuring privacy is a challenging task, even for the most skilled developer, and may impede large-scale deployment of IoT. Vinton Cerf stated that “Privacy may actually be an anomaly”, generating a whole lot of discussion among Internet users. And as Scott McNealy pointed out nearly a decade ago: “You have zero privacy anyway. Get over it!”
From an industry and developer perspective, privacy is a matter of user conduct and responsibility. Consumers need to be trained to understand that by saving their personal data on various devices, they expose themselves to various types of attacks. Often, there is no means for users to know whether their personal data are being tracked or “stolen” by third parties.
In fact, technology seems to have evolved far beyond expectations, and we seem unprepared to deal with it. As Vinton Cerf also pointed out, “figuring out how to make a security system work well that doesn't require the consumer to be an expert is a pretty big challenge.” For instance, consumers often reuse easy-to-remember passwords and plug the same USB flash drives into various systems, rendering the development of secure solutions akin to digging holes in water.
With the advent of IoT, manufacturers of “traditional” home appliances, construction equipment, and industrial engines will be required to include communication components in their products. As these components will be subject to the same cyber threats as computers and smart phones, manufacturers will also need to integrate security into their manufacturing processes, from the design phase to packaging.
There are quite a number of IoT architectures emanating from mainstream standards bodies. In what follows we describe and discuss some of the most promising architectures namely from IETF, ITU-T, ISO/IEC, IEEE, ETSI, and oneM2M.
IETF's Security Architectures
As for the common Internet, the IETF is playing a lead role in IoT standardization efforts. A variety of proposals are being made, ranging from application layer to network layer protocols; and from sensor networks to RFID communications.
IETF Core Architecture
According to the IETF, “a security architecture involves, beyond the basic protocols, many different aspects such as key management and the management of evolving security responsibilities of entities during the lifecycle of a thing.” The proposed IoT security architecture aims to be flexible by incorporating the properties of a centralized architecture whilst at the same time allowing devices to be paired together initially, without the need for a trusted third party.
Some key new security features that go beyond current IT paradigms take into account the lifecycle of a thing. In this regard, a thing may need to go through various stages during its lifecycle (manufactured, installed, commissioned, running, updated, reconfigured, etc.). In the manufacturing and installation phases, the thing is bootstrapped, while during the commissioning and running phases, the thing is operational. In each stage, security credentials and ownership information may need to be updated.
The architecture also takes into account the specific features of IoT devices, namely low processing power, low energy resources, and potential inaccessibility. Further, things may need to be protected for decades and may need to be reset to rebuild their security capabilities over time.
Further, the IETF proposes an architecture that describes implementation and operational challenges associated with securing the streamlined Constrained Application Protocol (CoAP, RFC 7252). The draft also proposes a security model for Machine to Machine (M2M) environments that requires minimal configuration. The architecture relies on self-generated secure identities, similar to Cryptographically Generated Addresses (CGAs) (RFC3972) or Host Identity Tags (HITs) (RFC5201).
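A minimal sketch of the self-generated-identity idea, hashing one's own public key so that no trusted third party is needed to bind key and identity, might look as follows. This is a deliberate simplification: real CGAs (RFC 3972) add a security parameter, a modifier procedure, and duplicate-address detection.

```python
import hashlib

def self_generated_id(public_key: bytes, modifier: bytes = b"") -> str:
    """Sketch of a CGA/HIT-style self-certifying identifier.

    The identity is derived by hashing the node's own public key, so any
    peer can verify the key/identity binding without a trusted third
    party. Truncated to 64 bits here purely for readability.
    """
    return hashlib.sha256(modifier + public_key).hexdigest()[:16]
```

The binding is self-certifying: a node proves ownership of its identifier by signing with the private key whose public half hashes to that identifier.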
DTLS-based Security Architecture
Datagram Transport Layer Security (DTLS, RFC 6347) is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security features, but uses datagram semantics. The IETF introduces a full two-way authentication security scheme for IoT based on DTLS, which is designed to work over standard protocol stacks namely UDP/IPv6 over Low power Wireless Personal Area Networks (6LoWPANs, RFC 4944).
HIP support for RFIDs
In order to enforce privacy, an architecture based on the Host Identity Protocol (HIP, RFC 5201) has been proposed for active RFID systems that support tamper-resistant computing resources.
The HIP-RFID architecture includes three functional entities: HIP RFIDs, RFID readers, and portals. The architecture defines a new HIP Encapsulation Protocol (HEP). The architecture also defines an identity layer for RFID systems that is logically independent from the transport facilities. HIP-RFID devices hide the identity (typically an EPC-Code) by a particular equation that can be solved only by the portal. Messages exchanged between HIP-RFIDs and portals are transported by IP packets.
ETSI M2M Architecture
The ETSI M2M architecture describes a range of variants that depend on the security characteristics of the underlying networks and on the relationships between the M2M service provider and the network operator.
The ETSI TS 102 690 Technical Specification (TS) describes a functional architecture, including the related reference points and the service capabilities, identifiers, information model, procedures for bootstrapping, security, management, charging, and M2M communications implementation guidance. The M2M functional architecture is designed to make use of IP based networks, typically provided by 3GPP as well as the Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN) environment.
Among other things, the ETSI TS 102 690 TS introduces an M2M security framework for underlying functions and related key hierarchy. It is worth noting that the ETSI 103 104 specification describes an “Interoperability Test Specification for CoAP Binding of ETSI M2M Primitives”, which is of a particular importance in terms of interoperability with IETF standards.
OneM2M Security Architecture
The oneM2M standardization body emerged as a unified effort of standards organizations namely, ETSI, ATIS (Alliance for Telecommunications Industry Solutions), TIA (Telecommunications Industry Association), CCSA (China Communications Standards Association), TTA (Telecommunications Technology Association of Korea), ARIB (Association of Radio Industries and Businesses) and TTC (Telecommunication Technology Committee) from Japan. oneM2M aims to “unify the Global M2M Community, by enabling the federation and interoperability of M2M systems, across multiple networks and topologies”.
ITU-T Architectural Framework
The ITU-T is actively working on standardizing IoT security. For this purpose, a large number of recommendations have been published or are under consideration. In particular, recommendation ITU-T Y.2060 provides an overview of IoT and clarifies the concept and scope of IoT. It further identifies fundamental characteristics and high-level requirements of IoT. What is important to note is that security and privacy are assumed to be de facto features of IoT within ITU-T standards.
ITU-T SG17 is currently working on cybersecurity; security management, architectures, and frameworks; identity management; and protection of personal information. Further, security of applications and services for IoT, smart grid, Smartphones, web services, social networks, cloud computing, mobile financial systems, IPTV, and telebiometrics are also being studied.
In particular, ITU-T rec. X.1311 provides a security model for Ubiquitous Sensor Networks (USN). Note that this model is common with ISO/IEC 29180 and based on ISO/IEC 15408-1 (see below). In the presence of a threat, appropriate security policies will be selected and security techniques will be applied to achieve the security objective.
Further, rec. X.1312 provides USN middleware security guidelines, while security requirements for routing and ubiquitous networking are provided in Rec. X.1313 and X.1314, respectively.
In addition, ITU-T is actively working on tag based identification through a series of recommendations such as ITU-T rec. F.771, rec. X.672, and rec. X.660. In particular, rec. X.1171 deals with Threats and Requirements for Protection of Personally Identifiable Information in Applications using Tag-based Identification.
ISO/IEC Reference Architecture
The ISO/IEC NP 19654 IoT draft std. introduces a Reference Architecture (RA) as a “generalized system-level architecture of IoT Systems that share common domains”. Developers may use some or all of these domains and entities. The IoT RA also aims to provide rules, guidance, and policies for building a specific IoT system's architecture. The IoT RA includes three key enabling technology areas:
1. IoT system of interest;
2. communications technology; and
3. information technology.
The IoT RA standard describes a conceptual model where seven IoT System Domains are defined: 1) IoT System, 2) Sensing Device, 3) Things/Objects, 4) Control/Operation, 5) Service Provider, 6) Customers, and 7) Markets.
The ISO/IEC 29180 std. (which is common with ITU-T rec. X.1311 described above) describes security threats to and security requirements of USN. The std. also categorizes security technologies according to the security functions.
On the other hand, ISO/IEC 29167-1 deals with security services for RFID air interfaces. This standard defines a security service architecture for the ISO/IEC 18000 RFID standards. It provides a common technical specification of security services for RFID devices that may be used to develop secure RFID applications. In particular, the std. specifies an architecture for untraceability, security services, and file management.
Moreover, the ISO/IEC 24767 standard specifies a home network security protocol for equipment that cannot support standard Internet security protocols such as IPSec or SSL/TLS. This protocol is referred to as Secure Communication Protocol for Middleware (SCPM).
European Internet of Things Architecture (IoT-A)
A Reference Model and a Reference Architecture are introduced by IoT-A, both providing a description of greater abstraction than what is inherent to actual systems and applications. The RM is composed of several sub-models. The primary model is the IoT Domain Model, which describes all the concepts that are relevant to IoT. Other models include the IoT Communication model and the Trust, Security, and Privacy (TSP) Model.
The TSP model introduces interaction and interdependencies between these three components. The IoT-A model focuses on Trust at the application level providing data integrity, confidentiality, authentication, and non-repudiation.
In turn, the security RM proposed by IoT-A is composed of three layers: the Service Security layer, the Communication Security layer, and the Application Security layer. One of the key aspects is that while taking into account heterogeneity, tradeoffs between security features, bandwidth, power consumption, and processing are of a major concern in IoT-A.
Conclusion
The need for a Reference Model and a Reference Architecture seems to have reached a global consensus. However, because this concept is quite abstract, we need more pragmatic definitions that give developers straightforward guidelines in their endeavour to develop secure IoT services and applications.
Technologies like Zigbee, KNX, Z-Wave, BACNet are quite mature and much more secure than 6LoWPAN. Users who invested in those mature technologies may not be willing to switch to another technology any time soon. This was also the case for other networking areas where IP may be used for entertainment, education, and research but cannot be trusted for transactional applications, business critical applications, and sensitive applications requiring high reliability and security.
Further, the advantages brought by 6LoWPAN over Zigbee are not significant: they both use the 802.15.4 PHY and MAC layers. Zigbee uses its own higher-layer stack, while 6LoWPAN is based on compressed IPv6 headers. Further, 6LoWPAN requires an adaptation layer and supports fragmentation, a feature that may be cumbersome given the required simplicity of constrained, low-resource environments.
ITU-T, IEEE, IETF, ETSI, and ISO/IEC seem to be heading towards a common security architecture although the picture is not clear yet. The oneM2M initiative is one step towards this goal. Other initiatives are needed where pragmatic definitions will be of a much greater help for developers.
Lattices are Good for Communication, Security, and Almost Everything
Authors: Joseph Jean Boutros, Nicola di Pietro, Costas N. Georghiades and Kuma P.R. Kumar
Mathematicians considered lattices as early as the first half of the nineteenth century; for example, Johann Carl Friedrich Gauss and Joseph-Louis Lagrange explored point lattices in small dimensions. After the pioneering work of Hermann Minkowski around 1900, lattices were studied extensively throughout the twentieth century, and engineering applications in the areas of digital communications, data compression, and cryptography were more recently discovered. Nowadays it is accepted that lattices are good for almost everything, including many new fields such as physical-layer security, post-quantum cryptographic primitives, and coding for wireless mobile channels.
In this talk, after introducing the mathematical background for point lattices in multi-dimensional Euclidean spaces, we shall describe how lattices are used to guarantee reliable and secure communications. The talk will include strong technical material, but it is intended for a broad audience including engineers and scientists from all areas. The talk will also present new results on lattices obtained in Qatar under projects NPRP 6-784-2-329 and NPRP 5-597-2-241, funded by QNRF.
Lattices are mathematical structures with specific algebraic and topological properties. We consider the simplest form of lattices, i.e. lattices in real Euclidean spaces equipped with the standard scalar product. In communication theory, lattices can play different roles in the processing and the transmission of information. They are suitable for vector quantization of analog sources, for channel coding as coded modulations, and also for joint source-channel coding. In the recent literature, lattices are found to be good tools in network coding and secure coding at the physical layer. More information on lattices for communications can be found in [1] and the references therein. A lattice is a Z-module of the Euclidean vector space R^N or, equivalently, a discrete additive subgroup of R^N. A new family of lattices, referred to as Generalized Low-Density (GLD) lattices, was built in Qatar. GLD lattices are obtained by Construction A from non-binary GLD codes. Their impressive performance and analysis can be found in [2].
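Construction A itself is simple to illustrate: a linear code C over Z_q is lifted to the lattice Λ = {x ∈ Z^N : x mod q ∈ C}, so Λ contains qZ^N and is closed under addition. The toy code below is an assumption for illustration, not one of the non-binary GLD codes of [2]:

```python
# Minimal sketch of Construction A: lift a linear code C over Z_q to the
# integer lattice  Λ = { x in Z^n : x mod q in C }.

Q = 5
# Toy length-3 linear code over Z_5: all scalar multiples of the
# generator (1, 2, 3)
CODE = {tuple((k * g) % Q for g in (1, 2, 3)) for k in range(Q)}

def in_lattice(x):
    """Membership test for the Construction-A lattice of CODE."""
    return tuple(xi % Q for xi in x) in CODE

def add(x, y):
    # Lattice points form an additive group: sums stay in the lattice
    return tuple(a + b for a, b in zip(x, y))
```

Because CODE is linear, the sum of any two lattice points reduces mod q to a sum of codewords, which is again a codeword, so closure under addition holds by construction.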
In his seminal paper [3], Miklos Ajtai showed how lattices could be used as cryptographic primitives. Since Ajtai's result, lattice-based cryptography has become very popular in the cryptography community. One important area is Learning With Errors (LWE); see [4]. LWE is a clear example of an elementary mathematical problem with an algorithmically complex solution. Thanks to its intrinsic connection with lattice problems that are known to be hard even for quantum algorithms, LWE has raised much interest in the last decade; it is part of so-called post-quantum cryptography. Another area in lattice-based cryptography is the GGH cryptosystem, the first McEliece-like scheme for lattices, proposed by Goldreich, Goldwasser, and Halevi in 1997. GGH is a lattice equivalent of the McEliece cryptosystem based on error-correcting codes. Many other cryptosystems based on lattices have been investigated and proposed by several researchers. In this talk, after presenting how lattices are used for reliable communications, we also present how they are used to secure data communications. The attached slides constitute a draft focusing mainly on the communication aspects of lattices; they will be completed by a second part on lattice-based security.
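The LWE problem mentioned above is easy to state concretely: given many pairs (a, b = ⟨a, s⟩ + e mod q) with small random errors e, recover the secret vector s. A toy sampler, with deliberately tiny parameters far below any secure size, might look like this:

```python
import random

def lwe_sample(s, q, n, rng, noise=(-1, 0, 1)):
    """One LWE sample (a, b = <a, s> + e mod q).

    Toy parameters only: real schemes use large n and q and draw the
    error e from a discrete Gaussian rather than a tiny uniform set.
    """
    a = [rng.randrange(q) for _ in range(n)]
    e = rng.choice(noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b
```

Without the error e, s could be recovered from n samples by Gaussian elimination; the small noise is exactly what makes recovering s computationally hard, and the hardness reduces to worst-case lattice problems.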
[1] R. Zamir, Lattice Coding for Signals and Networks, Cambridge, 2014.
[2] J.J. Boutros, N. di Pietro, and Y.C. Huang, “Spectral thinning in GLD lattices”, Information Theory and Applications Workshop (ITA), La Jolla, pp. 1-9, Feb. 2015. Visit http://www.ita.ucsd.edu/workshop/15/files/paper/paper_31.pdf
[3] M. Ajtai, “Generating Hard Instances of Lattice Problems,” Proc. of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pp. 99-108. doi:10.1145/237814.237838, 1996.
[4] V. Lyubashevsky, C. Peikert, and O. Regev, “A Toolkit for Ring-LWE Cryptography,” in EUROCRYPT, pp. 35-54, 2013.
South Asia's Cyber Insecurity: A Tale of Impending Doom
In the digital era, India's national security has become inextricably linked with its cyber security. However, although India has digitalized its governance, economy and daily life on an industrial scale, it has never paid adequate attention to a parallel programme to secure that digitalization. As a result, not only India's cyber space but also its physical sphere has been exposed to, and faces constant attacks from, its rivals and enemies. India is the single biggest supplier of cyber professionals around the world and is successfully leading cyber space across the globe, yet its army of cyber professionals falters when it comes to detecting the simplest of cyber crimes, which often leads to devastating consequences. Cyber security means ensuring the secure use of computers and smart phones, as well as computer networks including the internet, against security threats - physical or virtual (emphasis mine). There are two types of threat attached to cyber security. The first is the threat to digital equipment from unauthorized access with the intention to change, destroy or misuse the information available on that system, which would wreak havoc on the services linked with that system. The second threat, which is still to be analysed by digital practitioners, analysts, security agencies and academics, is ‘the authorized use of cyber tools to aid, organize and orchestrate terror attacks and conduct or facilitate devastating physical damage to life, property and national assets (interpretation mine)’. In India, nearly all efforts, public and private, to prevent cyber threats fall within the description of the first type of threat; little or no endeavour is devoted to monitoring or preventing the second type.
All cyber security related debates, government-commissioned reports, private initiatives and public discourses are confined to how to secure the cyber information, data and secrets stored in computer networks and the seamless functioning of software-enabled services. In stark contrast, most of the damage suffered by India during the past decade stems from the second type of security threat, where enterprising terrorists and criminals have been exploiting the cyber world to inflict severe damage on personal as well as national security. Terrorists and criminals have been using telephone, email, internet, instant messaging, VoIP and other methods of communication to execute terror plots and crimes. Therefore, it is essential for the security agencies to keep pace with the plotters. Western intelligence agencies like the British MI6 and the American Central Intelligence Agency (CIA) have been using eavesdropping technologies to chase, arrest and pre-empt ominous attack plots as well as imminent crimes. India's Military Intelligence uses eavesdropping to intercept the instructions of rival armies to their corps commanders and cadres, while its Intelligence Bureau employs the method on a limited scale to unravel domestic disturbances and violence. Eavesdropping is the unauthorized real-time interception of a private communication, such as a phone call, email, instant message, internet search, videoconference or fax transmission. Owing to its robust cyber security programme and a pro-active interception policy, the United States has successfully prevented 25 terrorist attacks since 11 September 2001. In the contemporary era, potential recruits and cadres for various terror organizations and crime syndicates are found on social media.
Terrorist organizations are therefore no longer looking for recruits on the campuses of orthodox madrassas or in poverty-stricken ghettos but on social sites, to enlist highly educated radicals with the ability to crack government security. Leading terrorist organizations and various terrorist leaders are openly visible in cyber space, flaunting their idiosyncratic agendas to lure potential foot soldiers. For example, three Muslim youths from an upmarket Mumbai suburb recently not only joined the Islamic State of Iraq and Syria (ISIS) through social media but also travelled to Mosul in Iraq and received training to become suicide bombers. In India, the ISIS has not been soliciting cadres through mosque-madrassa sermonizing but through a self-motivated, cyber-savvy information technology professional from Bengaluru. The cyber world has provided an extensive and expansive platform for terror recruiters and recruits to meet, give and receive indoctrination, and orchestrate high-volume terror attacks. All nineteen 9/11 attackers, all four 7/7 London suicide bombers, and even the kingpin of the 26/11 Mumbai attacks, David Headley, were recruited by their respective terrorist organizations from the cyber world. Therefore, it is essential to install a proper mechanism to monitor users and restrain them from falling into the trap of terror organizations. Indian security agencies have been functioning in a reactionary fashion where prevention receives the least priority. The country's British-era security system is so archaic that the police officers on street duty, who form the first line of citizens' defence, do not understand what cyber crime is. Security is a state subject and, due to a lack of evolution, provincial police departments have been using obsolete methods to deal with modern-day crime. Because of the inertia of the security agencies, citizens do not trust the state police.
Added to the malaise is the fact that state police are neither capable nor trained to deal with cyber-related crimes. At the federal level, India has yet to develop a database of criminals, home-grown militants and international terrorists with identifying information such as facial images, fingerprints, voice samples and biographical descriptions. In the absence of such a data bank, the system installed at India's entry-exit ports to screen individuals entering or departing the country is effectively worthless. It is time to correct these anomalies and the absence of a robust cyber security system in India. While the Modi government is spreading the digital web throughout the country as part of its ‘Digital India’ campaign, what India lacks is a definite monitoring mechanism. Misguided exuberance on the part of the government to digitalize India will prove counterproductive sooner rather than later. Ibn Khaldun, the great Arab historian, explained in his seminal ‘Muqaddimah’ how simple court intrigues devastated and defeated mighty emperors who were otherwise invincible and matchless in open battle. On cyber security issues India is following Ibn Khaldun's maxim. As per an estimate of the National Security Council, China, with its 125,000 (1.25 lakh) cyber security experts, is a potential challenge to India's cyber security. In humiliating contrast, India has a mere 556 cyber security experts. At stake are India's US$ 2.1 trillion GDP, power grids, telecommunication lines, air traffic control, banking system and all computer-dependent enterprises. India's and China's cyber security preparedness is a striking study in contrast. India is a reputed information-technology-enabled nation while China struggles with its language handicap. India, with a massive 243 million internet users, has digitized its governance, economy and daily life on an industrial scale without paying adequate attention to securing the digitization plan.
In the digital era, national security is inextricably linked with cyber security, but despite being the single biggest supplier of cyber workforce in the world, India has failed to secure its bandwidth and falters in detecting the simplest of cyber crimes, which often leads to devastating consequences. India's Cyber Naiveté: India's inertia in inducting cyber security as an essential element of national security and growth is palpable. Cyber security is little debated, sporadically written about, and rumoured about at best in India. Because of this apathy, and despite India's grand stature in the cyber world, India is vulnerable to the cyber snares of China and other countries. With its archaic governmental architecture, India is still in expansion mode, with little time spared on digital security. One significant reason for India's inertia is its lack of understanding and appreciation of the gravity of cyber security. Added to that, despite being a proclaimed land of young people, India's age-old lamentation over its youth is one of the vital stumbling blocks to adopting a strong cyber security policy. For example, the expert group appointed by the Narendra Modi government ‘to prepare a roadmap on cyber security’ is comprised of aged professors and busy bureaucrats who cannot keep pace with the speed, agility and thinking of modern-day hackers. The cyber security of China and other countries, on the other hand, rests in the hands of their young cyber experts. Prime Minister Modi might be a cyber wizard, but the country's political apathy towards cyber security is blatant. While the Chinese President and Prime Minister have involved themselves directly in the cyber security initiative, no political figure in India has ever shown the slightest interest in securing India's cyberspace.
The Ground Zero Summit, considered the Mecca of India's cyber security debate and an earnest endeavour of cyber security professionals, failed to get a single political figure to deliberate on the issue. The lone, reluctant political participant, former army-general-turned-politician Gen. V.K. Singh, addressed the gathering through video conferencing. Prime Minister Modi talks about Digital India, and the next wave of internet growth will have to come from vernacular users, who will be far more vulnerable to cyber-related deception than their city-based English-speaking counterparts. The apathy of aging politicians and bureaucrats stems from the fact that this new field is dominated by twenty-somethings with cans of Diet Coke and a constant chat history with their girlfriends. India is delaying the rightful prestige of its young cyber security professionals at its own peril. China, the US, Israel and even war-torn Syria have long cherished the ability of their young cyber professionals. India's vulnerability to Chinese cyber attacks can be judged from the fact that a colonel-rank officer of the People's Liberation Army informed Swarajya contributing editor Ramanand Sengupta that India's cyber infrastructure protecting its stock markets, power supply, communications, traffic lights, and train and airport communications is so ‘primitive’ that it could be overwhelmed by the Chinese in less than six hours. So if there is a second India-China war, India's adversary will not need to send troops to the trenches of the Himalayas; it can simply ask its cyber warriors to cripple India's security infrastructure from their cool, air-conditioned computer rooms. India is nowhere in the cyber war that has engulfed the globe. India's response to such a critical situation is a timid National Cyber Security Policy that the government circulated in 2013.
There has been no national overhaul of cyber security, and the Indian Computer Emergency Response Team, the statutory body charged with responding to cyber attacks, has little critical strength or capability. Its endeavour to recruit young talent and engage them meaningfully has yet to take off. After the 2013 National Security Council note that exposed India's cyber security unpreparedness, the government decided to augment infrastructure and hire more professionals. However, what is required is a strategic vision to ensure stealth in India's cyber security and the political conviction to plug strategic vulnerabilities. The National Technical Research Organization has regularly alerted successive governments to the danger of Chinese cyber attacks. India cannot afford to be passive and unresponsive, because if it does not act now, by the time a sophisticated cyber attack happens it will probably be too late to defend against it effectively. India's immediate requirement is to understand the impending cyber security threat from China, build better network filters and early-warning devices, and add new firewalls around the computers that run the Indian economy and regulate vital civil and military installations. But in any battle the attackers hold every advantage, from choosing the battlefield to deciding the time of war to the choice of instrumentalities. Poor defenders end up defending against an attack they cannot even imagine.
-
-
-
Role of Training and Supporting Activities in Moderating the Relationship Between R & D and Introducing KM Systems Within SMEs
This paper presents an abstract of the final phase of an on-going research project investigating the antecedents and consequences of research and innovation within Lebanese small and medium-sized enterprises (SMEs). It examines the role of training personnel and introducing supporting activities in moderating the relationship between R&D and the introduction of knowledge management systems within Lebanese SMEs. Innovation in Lebanon still suffers from funding shortages, a shortage of IT personnel training and a lack of ability to adequately use existing knowledge. Ashrafi and Murtaza suggest that “Large organizations have enough resources to adopt ICT while on the other hand SMEs have limited financial and human resources to adopt ICT” (Ashrafi and Murtaza, 2008, p. 126). Even though the Lebanese government is trying to create a digital economy, Lebanon ranked 94th out of 144 countries on the Network Readiness Index in 2012 and “In the Arab world, Lebanon ranked in 10th position, right behind Morocco (89th worldwide), but right ahead of Algeria (131st Worldwide)” (BankMed, 2014, p. 19). It is imperative to note here that “SMEs have been recognized as an important source of innovative product and process” (HanGyeol et al., 2015, p. 319). It is widely believed that “Research and development (R&D) intensity is crucial for increasing the innovative capacity of small to medium-sized enterprises (SMEs)” (Nunes et al., 2010, p. 292). Most interestingly, “the Labour Market Survey (2001) showed a clear relationship between business failure and a lack of planning or training by SMEs” (Javawarna et al., 2007, p. 321). Based on the existing literature, it is found that business expansion obliges SMEs to adopt new and original information technology solutions.
At the same time, it is suggested that “Lack of training and skills of IT in organizations will result in a limited use of IT and lack of success in reaping benefits from computer hardware and software in organizations” (Ghobakhloo et al., 2012, p. 44). Indeed, “Information technologies (IT) have become one of the most important infrastructural elements for SMEs” (Uwizeyemungu and Raymond, 2011, p. 141). As a result, it is generally believed that information technology has an imperative role to play in achieving innovation and competitiveness for SMEs. What is more, it has always been recognized that investing in technology is necessary but insufficient by itself. An imperative need exists for businesses of all sizes to protect their customers by protecting themselves from cyber attack. This is to be accomplished by changing attitudes towards cyber security and developing a cyber culture. Valli and his associates note that “There is little literature available about the ability of SMEs to deploy, use and monitor cyber security countermeasures” (Valli et al., 2014, p. 71). Borum and his colleagues believe that “Industries and commercial sectors must collaborate with the government to share and disseminate information, strengthen cyber intelligence capabilities and prevent future cyber incidents” (Borum et al., 2015, p. 329). Uwizeyemungu and Raymond suggest that “IT adoption and assimilation in these firms should be the product of an alignment between the strategic orientation and competencies that characterize their organizational context on one hand, and specific elements in their technological and environmental contexts on the other hand” (Uwizeyemungu and Raymond, 2011, p. 153).
A study by Ghobakhloo and his colleagues “suggested that through the passing of cyber laws by governments to regulate and secure online transaction activities, and also by providing appropriate anti-virus and/or firewall/security protocols for SMEs by vendors and service providers to reduce or prevent the attacks of hackers, viruses and spyware, the perceived risk of IT adoption by these businesses, should be alleviated” (Ghobakhloo et al., 2012, p. 57). To lead the way to successful innovation within SMEs, this study makes a significant effort to identify the key sustainability issues affecting innovation within SMEs. The best way to start is by understanding innovation within Lebanese SMEs. A study such as the one conducted here is recommended by experts in this area. Armbruster and his associates noted that “There is still plenty of research to do before organizational innovation surveys achieve the degree of homogeneity and standardization that advanced R&D and technical innovation surveys possess” (Armbruster et al., 2008, p. 656). The purpose of this investigation is to determine the relative importance of introducing supporting activities, training personnel and R&D in the variation in introducing knowledge management systems within Lebanese SMEs. To this end, the project investigates the adoption of existing technologies for new applications in a concrete SME business case, in addition to what motivates innovation within Lebanese SMEs and the challenges and barriers facing SMEs in adopting innovation. The population of the study consists of all SMEs in Lebanon. Most SMEs are family businesses and, as is to be expected, “Family involvement in a firm has an impact on many aspects of organizational behaviour” (Cromie and O'Sullivan, 1999, p. 77). Morris and his colleagues argue that “family firms violate a tenet of contemporary models of organizations, namely, the separation of ownership from management” (Morris et al., 1996, p.
68). This leads to many complications, including succession problems, role conflict and role ambiguity, which may represent major barriers to adopting IT and innovation within SMEs. The sample for this study is relatively large, and the instrument for collecting the primary data was a well-constructed questionnaire. Cronbach's alpha and factor analysis were used to establish the reliability and construct validity of the instrument. Findings of the study show that introducing new or significantly improved supporting activities, training personnel, having an employee fully in charge of the website and having an R&D department are the significant factors affecting the introduction of new or significantly improved knowledge management systems to better use or exchange information, knowledge and skills within Lebanese SMEs. These findings are in line with previous work. Schienstock and associates believe that firms have to develop their competence to learn and innovate by introducing new knowledge management practices and organizational restructuring. In fact, they criticize the traditional approach of the classical studies: in “the so-called linear model, traditional innovation policy focuses primarily on the creation of new scientific and technical knowledge, supposing some kind of automatic transformation of this new knowledge into new products” (Schienstock et al., 2009, pp. 49–50). What is more, Molero and García believe that “the theory about factors affecting firms' innovation has still a long way to go because the analytical object is complex and difficult to set limits for” (Molero and García, 2008, p. 20). This project has implications for policy making and decision making, and offers recommendations for further research.
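As a minimal illustration of the reliability step mentioned above, Cronbach's alpha can be computed directly from an item-score matrix. The data in the example are invented for illustration and are not taken from the survey:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Perfectly correlated items give alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Values above roughly 0.7 are conventionally taken as acceptable internal-consistency reliability for a questionnaire scale.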
References:
Armbruster, H. Bikfalvi, A. Kinkel, S. and Lay, G (2008) “Organizational innovation: The challenge of measuring non-technical innovation in large-scale surveys”, Technovation 28, 644–657. The full text is available at: www.sciencedirect.com
Ashrafi, R. and Murtaza, M. (2008), “Use and Impact of ICT on SMEs in Oman.” The Electronic Journal Information Systems Evaluation Volume 11 Issue 3, pp. 125–138. The full text is available at: www.ejise.com
BankMed (April 2014), “Analysis of Lebanon's ICT Sector”, p. 19. The full text is available at: http://www.bankmed.com.lb/LinkClick.aspx?fileticket=xGkIHVHVrM4%3D&portalid=0
Borum, R.; Felker, J.; Kern, S.; Dennesen, K; Feyes, T. (2015), “Strategic cyber intelligence”, Information & Computer Security, Vol. 23 Iss: 3, pp.317–332. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/10.1108/ICS-09-2014–0064
Cromie, S. and O'Sullivan, S. (1999), “Women as managers in family firms”, Women in Management Review, Vol. 14 No. 3, pp. 76–88. The full text is available at: http://www.emeraldinsight.com/doi/abs/10.1108/09649429910269884
Ghobakhloo, M.; Hong, T.S.; Sabouri, M.S.; Zulkifli, N. (2012), “Strategies for successful information technology adoption in small and medium-sized enterprises”, Information, Vol. 3, pp. 36–67.
Hangyol, S. Yanghon, C. Dongphil, C and Chungwon, W. (2015), “Value capture mechanism: R&D productivity comparison of SMEs”, Management Decision, Vol. 53 Iss: 2, pp.318–337. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/abs/10.1108/MD-02-2014–0089
Javawarna, D., Macpherson, A. and Wilson, A. (2007), “Training commitment and performance in manufacturing SMEs: Incidence, intensity and approaches”, Journal of Small Business and Enterprise Development, Vol. 14 Iss: 2, pp.321–338. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/abs/10.1108/14626000710746736
Molero, J. and García, A. (2008), “Factors affecting innovation revisited”, WP05/08, PP:1–30. The full text is available at: https://www.ucm.es/data/cont/docs/430-2013-10-27-2008%20WP05-08.pdf
Morris, M. H. Williams, R. W. Nel, D. (1996), “Factors influencing family business succession”, International Entrepreneurial Behaviour & Research. Vol. 2 No. 3, pp.60–81. The full text is available at: http://www.emeraldinsight.com/doi/abs/10.1108/13552559610153261
Nunes, P.M. Serrasqueiro, Z. Mendes, L. Sequeira, T.N (2010), “Relationship between growth and R&D intensity in low-tech and high-tech Portuguese service SMEs”, Journal of Service Management, Vol. 21 Iss: 3, pp.291–320. The full text is available at: http://www.emeraldinsight.com.ezproxy.aub.edu.lb/doi/pdfplus/10.1108/09564231011050779
Schienstock, G. Rantanen, E. and TyniIAREG, P. (April 2009) “Organizational innovations and new management practices: Their diffusion and influence on firms' performance. Results from a Finnish firm survey”. IAREG Working Paper 1.2.d. PP. 1–64. The full text is available at: http://www.iareg.org/fileadmin/iareg/media/papers/WP_IAREG_1.2d.pdf
Uwizeyemungu, S. and Raymond, L. (2011), “Information Technology Adoption and Assimilation: Towards a Research Framework for Service Sector SMEs”, Journal of Service Science and Management, Vol. 4 No. 2, pp. 141–157, doi: 10.4236/jssm.2011.42018. The full text is available at: http://www.scirp.org/Journal/PaperInformation.aspx?PaperID5229
World Economic Forum (2015), Global Information Technology Report 2015. The full text is available at: http://www3.weforum.org/docs/WEF_Global_IT_Report_2015.pdf
Valli, C. Martinus, I. and Johnstone, M. (Aug 2, 2014), “Small to Medium Enterprise Cyber Security Awareness: an initial survey of Western Australian Business”, The 2014 International Conference on Security and Management, At Las Vegas, Nevada, PP:71-75. The full text is available at: https://www.researchgate.net/publication/264417744_Small_to_Medium_Enterprise_Cyber_Security_Awareness_an_initial_survey_of_Western_Australian_Business
-
-
-
A Survey on Sentiment Analysis and Visualization
Online Social Networks have become the medium for a plethora of applications, such as targeted advertising and recommendation services, collaborative filtering, behavior modeling and prediction, analysis and identification of aggressive behavior, bullying and stalking, cultural trend monitoring, epidemic studies, crowd mood reading and tracking, revelation of terrorist networks, and even political deliberation. They mainly aim to promote human interaction on the Web, assist community creation, and facilitate the sharing of ideas, opinions and content. Social network analysis research has lately focused on major Online Social Networks like Facebook, Twitter and Digg [Chelmis and Prasanna, 2011]. However, research in Social Networks [Erétéo et al., 2008] has extracted underlying and often hidden social structures [Newman, 2010] from email communications [Tyler et al., 2003], structural link analysis of web blogs and personal home pages [Adamic and Adar, 2003], explicit FOAF networks [Ding et al., 2005], structural link analysis of bookmarks, tags or resources in general [Mika, 2007], co-occurrence of names [Kautz et al., 1997; Mika, 2007], co-authorship in scientific publication references [Wasserman and Faust, 1994], and co-appearance in movies or music productions [Yin et al., 2010]. Interactive visualizations are employed by visual analytics in order to integrate users' knowledge and inference capability into numerical/algorithmic data analysis processes. Visual analytics is an active research field with applications in many sectors, such as security, finance, and business. The growing popularity of visual analytics in recent years creates the need for a broad survey that reviews and assesses the recent developments in the field. This paper reviews the state of the art in sentiment visualization, an increasingly popular and active research area.
In this paper, we present a survey that reviews and assesses recent visualization techniques and systems in the field, classifying the recent approaches to visual analysis. The motivations for conducting this survey are twofold. First, we aim to review the most recent research trends and developments in sentiment visualization techniques and systems and provide a precise review of the field. Second, the survey aims to provide a critical assessment of the research, which can help enhance understanding of the field.
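To make concrete what the surveyed systems visualize, here is a minimal lexicon-based sentiment scorer. The tiny lexicon, negation list and scores are toy assumptions for illustration only; real systems use resources such as SentiWordNet or trained classifiers:

```python
import re

# Toy lexicon (hypothetical scores); not a real sentiment resource.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text):
    """Sum lexicon scores over tokens, flipping the sign after a negator."""
    tokens = re.findall(r"[a-z']+", text.lower())
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True
            continue
        value = LEXICON.get(tok, 0.0)
        if negate:
            value, negate = -value, False
        score += value
    return score

print(sentiment_score("a great phone"))  # positive
print(sentiment_score("not good"))       # negation flips the polarity
```

A sentiment visualization system would aggregate such per-document scores over time, topics or user communities before rendering them.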
-
-
-
Full-View Coverage Camera Sensor Network (CSN) for Offshore Oil Fields Surveillance
The United Arab Emirates (UAE) is the eighth largest oil-producing country in the world. It has about 800 km of coastline. Beyond the coastline, its territorial waters and exclusive economic zone hold very rich and extensive marine life and natural resources. Most of the oil and natural gas in the UAE is produced from offshore oil fields. Maritime oil exploration and transportation have increased steeply due to the expansion of world crude oil and natural gas production and the trend towards larger and faster container vessels. The probability of oil rig pollution, fires and explosions continues to rise. All these factors pose a greater danger to vessels, oil operation safety and the maritime environment. Therefore, maritime security and environmental protection are of great interest to both academia and the petroleum industry. Continuous surveillance of offshore oil fields is essential to secure the production flow, avoid trespassing and prevent vandalism by intruders and pirates. With the emergence of new technologies such as maritime wireless mesh networks (MWMN) and camera sensor networks (CSN), maritime surveillance systems have gradually improved in the accuracy, reliability and efficiency of maritime data acquisition. However, in order to realize oil operation security, it is necessary to implement a dynamic system to monitor the maritime environment. The monitoring objects include vessels, fishing boats, pollution, and navigational and sailing conditions. By the same token, legacy monitoring systems such as very-high-frequency (VHF) communication, marine navigational radar, vessel traffic services (VTS) and the automatic identification system (AIS) are still insufficient to satisfy the increasing demands of maritime surveillance.
The objective of the paper is to provide full-view coverage CSN for a triangular grid based deployment and to reduce the total transmission power of the image to cope with the limited available power in the CSN. The rough and random movements of the sea surface can lead to a time-varying uncovered area by displacing the CSN from its initial location. Thus, it is important to investigate and analyze the effects of the sea waves on the CSN to provide full-view coverage in such complex environments. The main challenges in the deployment of the CSN are the dynamic maritime environment and the time-varying full-view coverage provided by the sea waves. Therefore, quasi-mobile platforms such as buoys are envisaged to hold the camera sensor nodes. The buoys will be nailed at the sea floor to limit the movement of the buoys due to the sea waves. In addition, cooperative transmission method has been proposed to reduce the total transmission power of the image in the CSN. A CSN is formed by autonomous, self-organized ad-hoc camera sensor nodes that are equipped with wireless communication devices, processing unit and power supply. The design, implementation and deployment of a CSN for maritime surveillance stimulate new challenges different to that which exist on the land, as the maritime environment hinders the development of such a network.
The main differences are summarized as follows:
• Dynamic aspect of the maritime environment which requires rigorous levels of device protection.
• Deployment characteristic of a CSN in maritime environment which is highly affected by wind direction and speed, sea wave, and tide.
• Requirement of flotation and anchoring platforms for a CSN and the possible vandalism from intruders and pirates.
• Coverage problem of a CSN due to the random and rough sea movement.
• High energy consumption if battery-power based cameras are used continuously.
• Communication signals are highly attenuated by the constant sea movement.
In this context, CSNs with ubiquitous camera sensor nodes can be utilized to monitor offshore oil fields to secure the production flow and avoid trespassing and vandalism by intruders and pirates. However, camera sensor nodes can generate various views of the same target when images are captured from different viewpoints; if the image is taken near or at a frontal viewpoint, the target is more likely to be recognized by the recognition system. It is fundamental to understand how the coverage of a given camera depends on different network parameters in order to better design numerous application scenarios. The coverage of a particular CSN represents the quality of surveillance provided by the CSN. As the angle between the target's facing direction and the camera's viewing direction increases, the detection rate drops severely. Consequently, the camera's viewing direction has a considerable effect on the quality of surveillance in a CSN. Recently, a novel concept called full-view coverage has been introduced to characterize the intrinsic properties of camera sensor nodes and assess the coverage in CSNs. A target is full-view covered if, regardless of the target's actual facing direction, its facing direction is always within the scope of some camera. Simply put, full-view coverage guarantees capturing the target's face image. Consequently, designing a CSN with full-view coverage is of major importance, as the network provides not only the detection of a target but also its recognition. In many network configurations, camera sensor nodes are not mobile and remain stationary after the initial deployment. In a stationary CSN, once the deployment characteristics and sensing model are defined, the coverage can be deduced and remains unchanged over time.
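The full-view condition described above can be sketched numerically. Assuming omnidirectional sensing within radius r and an effective angle theta (simplifications; the paper's cameras have finite fields of view), a point target is full-view covered exactly when the angular gaps between in-range cameras, sorted by bearing from the target, never exceed 2*theta:

```python
import math

def full_view_covered(target, cameras, r, theta):
    """Sketch of the full-view coverage test for one point target.

    cameras: list of (x, y) positions; sensing is assumed omnidirectional
    within radius r (a simplifying assumption). The target is full-view
    covered iff, for every facing direction, some in-range camera lies
    within angle theta of it -- equivalently, every angular gap between
    consecutive in-range cameras (sorted by bearing) is <= 2 * theta.
    """
    tx, ty = target
    bearings = sorted(
        math.atan2(cy - ty, cx - tx)
        for cx, cy in cameras
        if math.hypot(cx - tx, cy - ty) <= r
    )
    if not bearings:
        return False
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2 * math.pi - (bearings[-1] - bearings[0]))  # wrap-around gap
    return max(gaps) <= 2 * theta

# Four cameras at 90-degree spacing, theta = 60 degrees: covered.
print(full_view_covered((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)], 2.0, math.pi / 3))
```

Dropping two of the four cameras leaves a 270-degree gap, so the same target is no longer full-view covered.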
In order to address the hostile maritime environment, there has been a strong desire to deploy sensors mounted on quasi-mobile platforms such as buoys. Such quasi-mobile CSNs are extremely beneficial for offshore oil field surveillance, where buoys move with the sea waves. Hence, the coverage of a quasi-mobile CSN depends not only on the initial network deployment but also on the mobility pattern of the CSN. Nevertheless, full-view coverage under a quasi-mobile CSN in a maritime network has not been investigated. This problem is pivotal for the network design parameters and application scenarios of CSNs, where conventional deployment methods such as air-drop fail or are not appropriate in a maritime environment. Since a priori knowledge of the terrain is available, a grid-based deployment can be utilized for the given terrain. The endeavour to design a practical mobility pattern for CSNs motivates modelling the cable attached to the buoy as a spring. In this mobility pattern, buoys start from an initial coordinate assignment, then oscillate under spring force, sea waves, and wind direction and speed, and ultimately converge to a consistent solution. Specifically, this mobility pattern is based on two stages. The first stage comprises the effects of sea waves and of wind direction and speed that move a buoy. The second stage is the spring reaction to those effects. This design concept is then followed and extended to develop a mobility pattern for CSNs in the maritime environment. A CSN is considered that consists of a small number of buoys whose locations are initially known and whose subsequent locations are derived based on a spring relaxation technique. With this technique, we study the coverage issues that arise in a CSN and design a cooperative transmission method to reduce the total transmission power in the CSN. One primary problem is how to design a realistic sea wave model for a given deployed CSN to achieve full-view coverage.
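The two-stage spring-relaxation idea can be sketched in one dimension: a sinusoidal wave/wind force displaces the buoy, and the mooring cable, modelled as a damped spring, pulls it back towards its anchor point. All constants here are illustrative placeholders, not parameters from the paper:

```python
import math

def simulate_buoy(k=1.5, damping=0.4, wave_amp=0.6, wave_freq=0.8,
                  dt=0.01, steps=5000):
    """1-D sketch of the spring-relaxation mobility pattern.

    Stage 1: a sinusoidal wave/wind force drives the buoy.
    Stage 2: the cable, modelled as a damped spring (stiffness k),
    reacts and pulls the buoy back towards its anchor at x = 0.
    Returns the horizontal displacement after `steps` Euler steps.
    """
    x, v = 0.0, 0.0  # displacement and velocity (unit mass assumed)
    for n in range(steps):
        t = n * dt
        force = -k * x - damping * v + wave_amp * math.sin(wave_freq * t)
        v += force * dt  # semi-implicit Euler: update velocity first
        x += v * dt
    return x
```

Because the spring force dominates, the buoy's excursion stays bounded around the anchor; in a full model the residual displacement feeds back into the coverage computation as a time-varying camera position.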
Compared with the traditional sea wave model, which assumes a sine wave for simplicity of analysis, two elements increase the complexity of the problem in a realistic sea wave model. First, the force that acts on the surface of the sea, which is the main driving force for the creation of waves in deep water. Second, the force from the interaction between the sea surface and the sea floor, which is the main contributing force near the shoreline; this type of force can, however, become dominant in deep water when seaquakes occur. A realistic sea wave model should extend the two-dimensional sine wave model into a three-dimensional sea surface, and there should be some variation along the wave propagation direction for waves of finite width. In conventional wireless sensor networks (WSNs), scalar phenomena can be traced using thermal or acoustic sensor nodes. In camera sensor networks (CSNs), images and videos can significantly enrich the information retrieved from the monitored environment, and hence add practicality and efficiency to WSNs. Recently, the enormous growth of CSN applications in surveillance, environment monitoring and biomedicine has brought a new dimension to the coverage problem. It is indispensable to understand how the coverage of a camera depends on various network parameters to better design the numerous application scenarios. In many network configurations, cameras are not mobile and remain stationary after the initial deployment. The maritime environment, however, poses challenges for the deployment characteristics and mobility pattern of CSNs. In stationary CSNs, once the deployment characteristics and sensing model are defined, the coverage can be deduced and remains unchanged over time. In the maritime environment, camera sensors are mounted on quasi-mobile platforms such as buoys.
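One simple way to realise such a surface (our illustration, not the paper's model; the Gaussian crest envelope and all parameter values are assumptions) is to modulate a 2-D sine wave with an amplitude envelope along the crest direction:

```python
import math

def wave_height(x, y, t, amplitude=1.0, wavelength=10.0,
                period=5.0, crest_width=30.0):
    """3-D sea surface sketch: a sine wave propagating along x whose
    amplitude decays along the crest direction y, modelling finite-width
    waves instead of an infinitely wide 2-D sine."""
    k = 2 * math.pi / wavelength        # wavenumber along propagation
    omega = 2 * math.pi / period        # angular frequency
    envelope = math.exp(-(y / crest_width) ** 2)
    return amplitude * envelope * math.sin(k * x - omega * t)
```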
This paper aims to provide a full-view coverage CSN for maritime surveillance using cameras mounted on buoys. Full-view coverage is important because it takes the target's facing direction into account when judging whether a target is guaranteed to be captured. An image shot at the frontal viewpoint of a given target considerably increases the chance of detecting and recognizing the target. Full-view coverage is achieved using an equilateral triangle grid-based deployment for the CSN. To accurately emulate the maritime environment, a mobility pattern has been developed for a buoy attached to a cable anchored at the sea floor. The buoy's movement follows the sea wave created by the wind and is constrained by the cable. The average percentage of full-view coverage has been evaluated against different parameters such as the equilateral triangle grid length, the sensing radius of the camera, the wind speed and the wave height. Furthermore, a cooperative transmission method with low power consumption has been proposed to improve target detection and recognition in the presence of poor link quality. In some parameter scenarios, the cooperative transmission method achieves around 70% improvement in the average percentage of full-view coverage of a given target and a total reduction of around 13% in the total transmission power PTotal(Q).
-
-
-
Classification of Bisyllabic Lexical Stress Patterns Using Deep Neural Networks
Authors: Mostafa Shahin and Beena Ahmed
Background and Objectives: As English is a stress-timed language, lexical stress plays an important role in the perception and processing of speech by native speakers. Incorrect stress placement can reduce the intelligibility of speakers and their ability to communicate effectively. The accurate identification of lexical stress patterns is thus a key assessment tool for a speaker's pronunciation in applications such as second language (L2) learning, language proficiency testing and speech therapy. With the increasing use of Computer-Aided Language Learning (CALL) and Computer-Aided Speech and Language Therapy (CASLT) tools, the automatic assessment of lexical stress has become an important component of measuring the quality of a speaker's pronunciation. In this work we propose a Deep Neural Network (DNN) classifier to discriminate between the unequal lexical stress patterns in English words, strong-weak (SW) and weak-strong (WS). The features used in training the deep neural network are derived from the duration, pitch and intensity of each of two consecutive syllables, along with a set of energies from different frequency bands. The robustness of our proposed lexical stress detector has been validated by testing it on the standard TIMIT dataset, collected from adult male and female speakers distributed over 8 different dialect regions. Method: Our lexical stress classifier is applied to the speech signal along with the prompted word. Figure 1 shows a block diagram of the overall system. The speech signal is first force-aligned with the predetermined phoneme sequence of the word to obtain the time boundaries of each phoneme. The alignment is performed using a Hidden Markov Model (HMM) Viterbi decoder along with a set of HMM acoustic models trained on the same corpus to reduce the error caused by inaccurate phone-level segmentation.
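As an illustration of the alignment step, here is a toy monotonic Viterbi pass (our sketch, not the HMM decoder used in the paper; `frame_scores` is a hypothetical matrix of per-frame log-likelihoods for each phoneme in the prompted sequence):

```python
def force_align(frame_scores):
    """Toy monotonic Viterbi alignment: frame_scores[t][s] is the
    log-likelihood of frame t under phoneme state s.  Each frame is assigned
    to one state, states are visited strictly in order, the path starts in
    the first state and ends in the last; returns one state index per frame,
    from which phoneme time boundaries can be read off."""
    T, S = len(frame_scores), len(frame_scores[0])
    NEG = float('-inf')
    dp = [[NEG] * S for _ in range(T)]
    back = [[0] * S for _ in range(T)]
    dp[0][0] = frame_scores[0][0]            # must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1][s]              # remain in the same phoneme
            move = dp[t - 1][s - 1] if s > 0 else NEG   # advance to next
            if stay >= move:
                dp[t][s], back[t][s] = stay + frame_scores[t][s], s
            else:
                dp[t][s], back[t][s] = move + frame_scores[t][s], s - 1
    path = [S - 1]                           # must end in the last state
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```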
A set of features is then extracted from each syllable, and the features of each pair of consecutive syllables are combined by concatenating the raw features into one wide feature vector.
Lexical stress manifests as variation in the pitch, energy and duration produced across the syllables of a multi-syllabic word. The stressed syllable is characterized by increased energy and pitch as well as a longer duration compared to the other syllables within the same word. Therefore, we extracted seven features f1–f7 related to these characteristics, as listed in Table 1. The energy-based features (f1, f2, f3) were extracted after applying the non-linear Teager energy operator (TEO) to the speech signal to obtain a better estimate of the signal energy and reduce the effect of noise. These seven features are commonly used in the detection of the stressed syllable in a word. As the speech signal energy is distributed over different frequency bands, we also computed the energy in the Mel-scale frequency bands in each frame of the syllable nucleus. The speech signal was divided into 10 msec non-overlapping frames, and the energy, pitch and frequency-band energies were calculated for each frame.
As seen in Figure 1, to input the raw extracted features directly to the DNN, we concatenate them into one wide feature vector. Each syllable contributes 7 scalar values f1–f7 and 27*n Mel-coefficients, where n is the number of frames in the syllable's vowel.
To handle variable vowel lengths, we limit the number of input frames provided to the DNN to a maximum of N frames per syllable. This provides the DNN with a fixed-length Mel-energy input vector and allows the DNN to use information about the distribution of the Mel-energy bands over the vowel. If the vowel length (n) is greater than N frames, only the middle N frames are used. If the vowel length (n) is smaller than N frames, the input frames are padded to N frames. The final size of the input vector to the DNN is 2*(7+27*N) for a pair of consecutive syllables, with N tuned empirically.
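The crop-or-pad step can be sketched as follows (the symmetric zero-padding is our assumption; the paper only states that short vowels are padded to N frames):

```python
def fixed_length_frames(frames, N):
    """Return exactly N frames per vowel: keep the middle N frames when the
    vowel is longer than N, otherwise zero-pad symmetrically to N frames."""
    n = len(frames)
    if n >= N:
        start = (n - N) // 2            # crop to the middle N frames
        return frames[start:start + N]
    dim = len(frames[0]) if frames else 1
    zero = [0.0] * dim
    pad_left = (N - n) // 2             # assumed symmetric zero-padding
    return [zero] * pad_left + frames + [zero] * (N - n - pad_left)
```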
The DNN is trained using mini-batch stochastic gradient descent (MSGD) with an adaptive learning rate. The learning rate starts at an initial value (typically 0.1), and after each epoch the change in the error on the validation data set is computed. If the error increases, training continues with the same learning rate.
If the error continues increasing for 10 consecutive epochs, the learning rate is halved and the parameters of the classifier are restored to those that achieved the minimum error. Training is terminated when the learning rate reaches its minimum value (typically 0.0001) or after 200 epochs, whichever comes first. The performance of the DNN is then computed using a separate testing set. Experiments and Results: We extracted raw features from consecutive syllables belonging to the same word in the TIMIT speech corpus. With the TIMIT corpus, we achieved a minimum error rate of 12.6% using a DNN classifier with 6 hidden layers and 100 hidden units per layer. Due to the unavailability of sufficient male and female data, we were unable to build a separate model for each gender. In Fig. 2, we present the error rate for each gender using a model trained on both male and female data. The results show that classification of the SW pattern is better for male speakers than for female speakers, while the WS error rate is lower for female speakers. However, the overall misclassification rate for male and female speakers is almost the same.
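The adaptive learning-rate schedule described in the Method can be sketched as follows (the `step`/`validate` callbacks and the rollback mechanics are our assumptions about how such a training loop is typically wired up):

```python
def train_adaptive_lr(step, validate, params, lr=0.1, min_lr=1e-4,
                      max_epochs=200, patience=10):
    """step(params, lr) runs one training epoch and returns updated
    parameters; validate(params) returns the validation error.  After
    `patience` consecutive epochs without improvement the learning rate is
    halved and the best parameters so far are restored; training stops when
    the rate falls below min_lr or after max_epochs, whichever comes first."""
    best_params, best_err, bad = params, validate(params), 0
    for _ in range(max_epochs):
        params = step(params, lr)
        err = validate(params)
        if err < best_err:
            best_params, best_err, bad = params, err, 0
        else:
            bad += 1
            if bad >= patience:
                lr /= 2.0
                params, bad = best_params, 0      # roll back to the best model
                if lr < min_lr:
                    break
    return best_params, best_err
```

A toy quadratic objective shows the loop converging to the minimiser while the schedule anneals the rate.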
To study the influence of dialect on the algorithm, we compared the error rate when each dialect was tested using a model trained on the training data of all dialects, and when the model was trained on data from all dialects except the one being tested, as shown in Fig. 3. The error rate of most dialects remains unchanged, except for DR1, where the error rate increased significantly from 4.8% to 8%. This can be explained by the small number of test samples for this dialect (only 5% of the test samples). DR4 also shows a considerable increase in the error rate. Although the smallest amount of training samples came from the DR1 (New England) dialect, it produced the lowest error rate among the dialects. Further work is needed to explain this behavior. Conclusion: In this work we present a DNN classifier to detect bisyllabic lexical stress patterns in multi-syllabic English words. The DNN classifier is trained using a set of features extracted from pairs of consecutive syllables related to pitch, intensity and duration, along with energies in different frequency bands. The feature set of each pair of consecutive syllables is combined by concatenating the raw features into one wide vector. When applied to the standard TIMIT adult speech corpus, the algorithm achieved a classification accuracy of 87.4%. The system performance shows high stability across different dialects and genders.
-
-
-
Mobile Sensing of Human Sunlight Exposure in Cities
Authors: Ahmad Al Shami, Weisi Guo and Yimin Wang
I. Abstract: Despite recent advances in sensor and mobile technology, there is still no accurate, scalable, and non-intrusive way of knowing how much sunlight we are exposed to. For the first time, we devise a mobile phone software application (SUN BATH) that utilizes a variety of on-board sensors and data sets to accurately predict each person's sunlight exposure. The algorithm accounts for the person's location, the local weather, the sun's position, and shadowing from buildings. It achieves this by using the mobile user's location and other sensors to determine whether the user is indoors or outdoors, building data to calculate shadow effects, and weather data to calculate diffuse light contributions. This will ultimately allow users to be better informed about their sunlight exposure and to compare it with daily recommended levels, encouraging positive behaviour change. To show the value added by the application, SUN BATH was distributed to a sample student population for benchmarking and user experience trials. The latest stable version of the application offers a scalable and affordable alternative to survey-based or physical sensing methods. II. Introduction: In this proposal, we examine how to live healthily in cities using a data-driven mobile-sensing approach. Cities are partly defined by a high building concentration and a human lifestyle that is predominantly spent indoors or in the shadow of buildings. Some cities also suffer from heavy pollution effects that significantly reduce the level of direct solar radiation. As a result, one area of concern is urban dwellers' lack of exposure to the ultra-violet (UV) band of sunlight and the wide range of associated health problems.
The large scale and chronic nature of these health problems can create a time bomb for the National Health Service and cause irreversible future damage to the economy. This article proposes using the ray-tracing SORAM model by Erdelyi et al. as an innovative and flexible technique for modelling and estimating the amount of solar irradiation that can be collected at a given time and location. The SORAM model has already been benchmarked against real measurement data; our work benefits from this by taking the calculated ray-tracing information as a primary filter. The aim is to devise an affordable and accurate way of continuously estimating each person's UV exposure. Primarily, this is achieved by developing an Android smartphone application that uses SORAM's advanced modelling techniques to estimate the level of UV exposure each person is subjected to at any given time and location. The research novelty is that the proposed solution does not require additional purpose-built hardware such as a photovoltaic sensor, but instead utilizes a combination of accurate wireless localization and weather- and terrain-informed sunlight propagation mapping. The challenges addressed include how to accurately locate a person and how to model the propagation of sunlight in complex urban environments. The latest stable version of the application offers a novel and affordable alternative to traditional or physical methods of calculating the amount of sunshine we are exposed to. III. System Overview: We implemented and evaluated the SUN BATH application on the Android platform using different mobile phone models such as the Samsung S5, Asus Zen5 and an Archos tablet. The application was developed using Android Studio as the IDE. The application allows the user to create a profile with a user name and information such as date of birth, height, weight, skin colour, country of origin, and level of income.
This information is retained for later detailed reporting on the amount of sun exposure for different groups and ethnicities. SUN BATH relies only on lightweight “sensors to server” modelling, which allows continuous low-energy and low-cost tracking of the user's location and state transitions. In particular, we present the process by which we were able to use SORAM within the smartphone environment to accurately infer the amount of sunshine a user is exposed to, based on the accuracy of the GPS and other location modules of Android mobile phones. To meet stringent design requirements, SUN BATH utilizes a series of lightweight ‘sensors – server’ components for fault-tolerant location detection. SUN BATH primarily makes use of three types of location-aware detectors: WiFi, cellular network, and GPS. These three wireless location detectors are used in conjunction to improve resolution and resilience. WiFi hub SSID identifiers are used to locate the hub in known open and commercial databases with an accuracy of a few metres. In the absence of WiFi, a combination of cell tower location area and assisted GPS is used to obtain an accuracy of 10–15 m in urban areas with shadow effects. The WiFi detector uses the distributed IP address to capture the source location and determine the region the user is in. The cellular network detector senses the source and attenuation of signals caused by objects in their path (e.g., trees, buildings); it normally helps to indicate the movement of the user as the mobile signal is handed over from one cell to another. The application uses the GPS sensor to pinpoint the coordinates of the user's location, i.e. latitude and longitude. The system clock is used to determine the local time. The app caches these parameters and sends them to a remote server whenever there is an Internet connection.
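The fault-tolerant fusion of the three detectors can be sketched as a simple accuracy-based selection (our assumption of how the fixes are combined; each fix is `None` or a hypothetical `(lat, lon, accuracy_m)` tuple):

```python
def best_fix(wifi, cellular, gps):
    """Fuse the three location detectors by choosing the available fix with
    the smallest reported accuracy radius (WiFi is typically a few metres,
    cellular/assisted GPS around 10-15 m in urban areas); returns None when
    no detector currently has a fix."""
    available = [f for f in (wifi, cellular, gps) if f is not None]
    if not available:
        return None
    return min(available, key=lambda fix: fix[2])
```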
The server hosts the SORAM calculation algorithm, which generates a live estimate of the amount of sun exposure the user is experiencing. The results are then passed back to the application through the Open Database Connectivity (ODBC) middleware service and permanently stored in a secure database management system (DBMS). IV. SORAM Ray-tracing Methodology: A person positioned in an outdoor environment is surrounded by solar radiation, which consists of direct and diffuse rays. Direct and diffuse radiation data on a horizontal surface are usually collected at various locations and weather stations across the world. The collected raw datasets can be used to estimate the amount of global radiation at any point on earth for a given slope and azimuth. Due to the cost and scarcity of live data, the SORAM algorithm embeds the Reindl model to estimate the direct and diffuse irradiation from hourly horizontal global radiation data. In addition, to lighten the computation and avoid calculations for the nighttime hours, SORAM determines the sunrise time for each day of the year and calculates solar radiation from that point onwards until sunset. The algorithm also estimates, with high accuracy, the direct and diffuse radiation on a surface of given slope and azimuth from their counterparts on a horizontal surface, taking the surrounding shading conditions into account. We tested SUN BATH in simulated and real locations for five continuous days from sunrise to sunset around the School of Engineering building complex at the University of Warwick campus. Simulated tests were carried out manually, using two fixed location parameters, i.e. longitude and latitude. A quick memory and CPU monitor view revealed that SUN BATH's energy consumption and resource footprint on the smartphone devices used were moderate. A full memory and CPU monitor view can easily be produced, but is beyond the scope of this article. V.
Conclusions and Future Work: This research presented the architecture of SUN BATH, a mobile sensing application that gathers information from a variety of lightweight sensors and utilises a ray-tracing algorithm to derive the level of human sun exposure in urban areas. The application has demonstrated that it can be an affordable and pervasive way of accurately measuring each person's level of sunlight exposure. Further work is required to scale the project to the global level, which requires big data sets of urban building maps and meteorological data from all cities.
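SORAM's nighttime shortcut from the methodology section can be illustrated as follows (a sketch under our assumptions; `hourly_model` stands in for the expensive ray-tracing evaluation):

```python
def daily_radiation(hourly_model, sunrise_h, sunset_h):
    """Avoid evaluating the radiation model at night: SORAM determines the
    sunrise time per day and only evaluates irradiation from sunrise until
    sunset; the remaining hours contribute zero at no computational cost."""
    return sum(hourly_model(h) for h in range(24) if sunrise_h <= h < sunset_h)
```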
-
-
-
Information Security in Artificial Intelligence: A Study of the possible intersection
Authors: Tagwa Warrag and Khawla Abd Elmajed
1. Introduction
Artificial Intelligence (A.I.) attempts to understand intelligent entities and strives to build them. It is obvious that computers with human-level intelligence (or more) would have a huge impact on our everyday lives and on the future. A student in physics might reasonably feel that all the good ideas have already been taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study before one can contribute new ideas. Artificial Intelligence, on the other hand, still has openings for a full-time Einstein. [1] With the ever-increasing amounts of generated information-security data, smart data analysis will become even more pervasive as a necessary ingredient for more effective and efficient detection and categorization, providing intelligent solutions to security problems that perform beyond the typical automatic approaches. Moreover, the combination of Artificial Intelligence and Information Security focuses on the analytics of security data rather than simple statistical reporting. In our research, we are conducting a survey of the different A.I. methodologies being used by researchers and industry professionals to tackle security problems. We add our own analysis and observations on the findings, and compare the different methods. We are working on providing more detail about which approaches suit certain problems, and why some A.I. methodologies are not always a good choice for specific information security problems. Through this work, we aim to introduce the intersection of the two fields of Information Security and Artificial Intelligence, and hopefully to promote greater use of intelligent methods in solving cyber security issues in the Middle East. The background is divided into two parts: the first part covers the different forms of information security data sets, and the second briefly gives examples of major corporations that use A.I. to address security issues.
The background is followed by the results and discussion, in which we express our own opinions, observations and analysis. Our work is still in progress, so we conclude the paper by stating the future directions of this research.
2. Background
Artificial Intelligence has repeatedly proven itself through successful application to various industrial problems related to medicine, finance, marketing, law and technology. We are living in the cognitive era, and according to IBM, augmented-intelligence systems like IBM Watson process information themselves and can also teach, which will lead to more cognitive learning platforms that eradicate the need for manual work on industrial problems. [2] On the other side, the overwhelmingly large volumes of data generated by networking entities and information security elements are a rich and valuable resource for more promising security insights.
2.1 Information Security Data Sets
Data can take different forms when it comes to information security, starting with logs, such as Windows security logs, server logs, and the outputs generated by networking tools such as Snort, TCPDump and NMAP [3]. In addition, sandbox output produced when malware is executed [4], sniffed network traffic in .pcap (Packet CAPture) files, and the features of installed Android mobile applications [5] are all examples of information security data that can be treated as input to Artificial Intelligence techniques such as Machine Learning, Data Mining, Artificial Neural Networks, fuzzy logic systems and Artificial Immune Systems. For experimental and academic purposes, various online repositories provide information security data sets, such as the DARPA data sets [6].
2.2 Examples from the major corporations
2.2.1 Kaspersky Cyber Helper
Cyber Helper is a successful attempt at getting closer to employing truly autonomous Artificial Intelligence in the battle against malware. The majority of Cyber Helper's autonomous sub-systems synchronize, exchange data and work together as if they were a single unit. For the most part they operate using fuzzy logic and independently define their own behavior as they go about solving different security tasks. [7]
2.2.2 IBM – IBM Watson
IBM Watson is a technology platform that uses Natural Language Processing (NLP) and Machine Learning to reveal insights from large amounts of unstructured data. [8] IBM Watson can be trained on massive amounts of security data, from the Common Vulnerabilities and Exposures (CVE) threat database to articles on network security, plus deployment guides, how-to manuals and all sorts of content that make IBM Watson very knowledgeable about security. Because IBM Watson uses NLP technologies, users can pose security questions to Watson, and Watson will respond with all the pertinent information. [9]
3. Results & Discussion
Through the survey we conducted of a number of different research efforts and projects working at the intersection of A.I. and Information Security, we found the following:
1- We noticed that the most commonly followed approach is Machine Learning, but not all Machine Learning algorithms are right for every information security problem. Some algorithms result in high rates of false alarms, false negatives and false positives for certain kinds of information security issues. It turns out that deciding on the most suitable A.I. approach depends on the nature of the information security problem we are trying to solve, the kind of data we have at hand, whether we have classes or labels for that data, and many other factors.
2- There is a preprocessing stage before the unstructured data is ready to be fed into the selected Artificial Intelligence model or method. When the information security data was Android malware, it had to be executed inside a sandbox first, and a report was then generated about the execution of the malware. Each type of sandbox usually generates a different report format; the common formats are text, XML and MIST (a sequence-of-instructions format). Text is more convenient for humans, but XML and MIST are more suitable for machines. [3] Another example is when the features of installed Android mobile applications were the input to the artificial intelligence process: those features had to be extracted from the dex code (Android programs are compiled into .dex Dalvik Executable files, which are in turn zipped into a single .apk file) [5].
3- When Machine Learning is the selected approach for an information security problem, the dataset defines what action needs to be taken: clustering, classification, feature selection, or a combination of processes. When the security data set has labels (supervised learning), classification algorithms are applied. When there are no labels or classes on the rows of the security data set (unsupervised learning), we need to group similar entities together, and thus we use clustering algorithms. Some researchers used both clustering and classification techniques for security applications that detect malicious behavior not previously assigned to a known malware class, and they see clustering and classification as two techniques that complement each other. [4] Feature selection was also conducted on security datasets to identify the features that help most in the prediction process and in building predictive models.
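The labelled-versus-unlabelled decision can be illustrated with a self-contained sketch (nearest-centroid classification and a tiny k-means are our choices for illustration, not the specific algorithms used in the surveyed papers):

```python
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def analyse(rows, labels=None, k=2, iters=10):
    """Labelled security data -> supervised nearest-centroid classifier;
    unlabelled data -> unsupervised k-means clustering.  Returns a function
    mapping a feature vector to a label (supervised) or a cluster id."""
    if labels is not None:
        groups = {}
        for row, lab in zip(rows, labels):
            groups.setdefault(lab, []).append(row)
        cents = {lab: centroid(rs) for lab, rs in groups.items()}
        return lambda x: min(cents, key=lambda lab: dist(x, cents[lab]))
    # unsupervised fallback: plain k-means seeded with the first k rows
    cents = [list(r) for r in rows[:k]]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for r in rows:
            buckets[min(range(k), key=lambda j: dist(r, cents[j]))].append(r)
        cents = [centroid(b) if b else cents[j] for j, b in enumerate(buckets)]
    return lambda x: min(range(k), key=lambda j: dist(x, cents[j]))
```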
4- The use of Linear Algebra techniques such as vector spaces, combined with static analysis, was suggested by one group of researchers to obtain a better representation of the selected features of installed Android mobile applications suspected of being malicious.
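For instance, a DREBIN-style embedding maps each application onto a binary vector over a joint feature vocabulary (a minimal sketch; the vocabulary entries below are illustrative):

```python
def embed(app_features, vocabulary):
    """Vector-space representation: each extracted static feature (permission,
    API call, ...) becomes one dimension; an app is the binary vector marking
    which vocabulary features it exhibits."""
    return [1 if feature in app_features else 0 for feature in vocabulary]
```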
4. Conclusion
The application of Artificial Intelligence methodologies to information security problems will play a major role in bringing brighter insights that move security forward, by formulating more successful approaches leading to security that “thinks”. The huge, overwhelming amount of data generated by networking devices and security appliances will be of great use when combined with the intelligence of machine learning, going beyond the traditional and limited automatic techniques of information security. Our research on the intersection of Artificial Intelligence and Information Security is still ongoing. By the end of this work, we hope to help design a matrix with more accurate criteria that will help information security practitioners decide which A.I. approach to follow. Security professionals will then no longer need to delve into the deep mathematical formulas of A.I. and machine learning when they start considering more intelligent alternative solutions.
5. References
[1] Russell, Stuart J. and Norvig, Peter. Artificial Intelligence: A Modern Approach. A Simon & Schuster Company 1995.
[2] Powered by IBM Watson: Rethink! Publication - Our future in augmented intelligence. August 2015.
[3] Buczak, Anna L. and Guven, Erhan. A survey of data mining and machine learning methods for cyber security intrusion detection. 2015.
[4] Rieck, Konrad. Trinius, Philipp. Willems, Carsten, and Holz, Thorsten. Automatic Analysis of Malware Behavior using Machine Learning. 2011.
[5] Arp, Daniel. Spreitzenbarth, Michael. Hübner, Malte. Gascon, Hugo. Rieck, Konrad. DREBIN: Effective and Explainable Detection of Android Malware in Your Pocket. 2014.
[6] DARPA Intrusion Detection Data Sets – MIT Lincoln Laboratory. Link: http://ll.mit.edu/ideval/data.
[7] Zaitsev, Oleg. Cyber Expert: Artificial Intelligence in the realms of IT security. October 25, 2010. Link: http://securelist.com/analysis/publications/36325/cyber-expert-artificial-intelligence-in-the-realms-of-it-security
[8] IBM Watson official website, link: http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html.
[9] An interview with Amir Husain on how IBM Watson is helping to fight cyber-crime, link: http://www.forbes.com/sites/ibm/2015/05/29/how-ibm-watson-is-helping-to-fight-cyber-crime/ MAY 29, 2015.
-
-
-
Patient Center – A Mobile Based Patient Engagement Solution
Patient engagement research has resulted in a mobile application, Patient Center, to manage appointments and personal health records and to access emergency online healthcare across all hospitals in the Patient Center network. It is a single mobile app to manage and actively engage patients in healthcare across Qatar.
Problem Statement: Healthcare trends are moving from a hospital-centric model towards a patient-centric model. Qatar has more than 350 health facilities, most of which are not connected to each other. The global healthcare trend is to implement interoperable systems in hospitals (an integrated health network), so that facilities can exchange health data, the patient's care can be continued, and data can be shared with partner networks as well as government entities. The biggest gap in the healthcare process is patient engagement: providers and patients working together to improve health. A patient's greater engagement in healthcare contributes to improved health outcomes, and information technologies can support engagement. Patients want to be engaged in their healthcare decision-making process, and those who are engaged as decision-makers in their care tend to be healthier and have better outcomes. For outpatients, the workflow is well defined in the healthcare process; with current technology, we need to reduce outpatients' visits to hospital through remote healthcare facilities, so that providers can deliver better health outcomes for inpatients. Study: Book: Engage! Transforming Healthcare Through Digital Patient Engagement, edited by Jan Oldenburg, Dave Chase, Kate T. Christensen, MD, and Brad Tritle, CIPP. This book explores the benefits of digital patient engagement, from the perspectives of physicians, providers, and others in the healthcare system, and discusses what is working well in this new, digitally-empowered collaborative environment. Chapters present the changing landscape of patient engagement, starting with the impact of new payment models and Meaningful Use requirements, and the effects of patient engagement on patient safety, quality and outcomes, effective communications, and self-service transactions.
The book explores social media and mobile as tools, presents guidance on privacy and security challenges, and provides helpful advice on how providers can get started. Vignettes and 23 case studies showcase the impact of patient engagement in a wide variety of settings, from large providers to small practices, and from traditional medical clinics to eTherapy practices. Book: Applying Social Media Technologies in Healthcare Environments, edited by Jan Oldenburg, Dave Chase, Kate T. Christensen, MD, and Brad Tritle, CIPP. This book provides an indispensable overview of successful use of the latest innovations in the healthcare provider-patient relationship. As professionals realize that success in the business of healthcare requires incorporating the tools of social media into all aspects of their work, and recognize the value offered by the numerous media channels, this compendium of case studies from various voices in the field (caregivers, administrators, marketers, patients, lawyers, clinicians, and healthcare information specialists) will serve as a valuable resource for novices as well as experienced communicators. Written by experienced players in the healthcare social media space and edited with the eye of an administrator, chapters provide insight into the motivation, planning, execution, and evaluation of a range of innovative social media activities, complete with checklists, tips, and screenshots that demonstrate proven application of various social channels in a range of settings. Based on this research on patient engagement, I have designed a mobile-based patient engagement solution called Patient Center. Using the Patient Center mobile app, users can search for the nearest specialties and doctors with review ratings, and can book appointments at health facilities which are part of the Patient Center network.
Patients can securely access their health records on mobile and can share those records with any provider for better treatment. The Patient Center app also works as a personal health advisor, reminding patients of medications, exercises, discharge notes, and so on. Patient Center addresses problems in the following areas: reducing phone calls for booking appointments, and the time spent filling in forms prior to an encounter; the Patient Center Network contains several providers, payers, pharmacies and laboratories, all fully interoperable and exchanging health data in compliance with HIPAA; patients no longer need to carry bundles of paper health records when they meet a doctor, since personal health records from all hospitals can be downloaded to the patient's mobile and securely shared with any provider; a single mobile app manages and accesses appointments, lab orders, medications, diagnoses, care plans, immunization history, and more; in emergencies, the patient can tap the Emergency Service menu in the app and the nearest ambulance service is automatically alerted with the patient's GPS location; complex discharge notes are illustrated in the app with images; medication reminders follow the doctor's advice; and patients can communicate with their doctor from home using Patient Center together with healthcare wearables that stream heart rate, blood pressure, ECG and diabetes readings to the doctor in real time, so that the doctor can diagnose the patient online.
Patient Center aims to reduce outpatient visits by 60% and to deliver quality healthcare right from home. In addition to the areas above, Patient Center will focus on: social and behavioral, covering social media, texting and gaming, wearables and mobile, and the social determinants of health; home health, covering remote monitoring and telehealth, patient education, and smart homes; and financial health, including managing health insurance and expenses, transparency and consumerism, patient onboarding and financial options. Technology: We will build the entire application using IHE profiles to achieve interoperability across our networked hospitals (integrated health management). Integrating the Healthcare Enterprise (IHE) is an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information. IHE promotes the coordinated use of established standards such as DICOM and HL7 to address specific clinical needs in support of optimal patient care. Systems developed in accordance with IHE communicate with one another better, are easier to implement, and enable care providers to use information more effectively. The Exchange of Personal Health Record Content (XPHR) profile provides a standards-based specification for managing the interchange of documents between a Personal Health Record (PHR) used by a patient and the systems used by healthcare providers. The XPHR integration profile describes the content and format of summary information extracted from a patient's PHR system for import into healthcare provider information systems, and vice versa; its purpose is to support interoperability between PHR systems used by patients and the information systems used by healthcare providers.
Patient Center leverages other IHE integration and content profiles in addition to the XPHR content profile. For example, a PHR system may implement XDS-MS to import medical summaries produced by EHR systems, XDS-I to import imaging information, XDS-Lab to import laboratory reports, etc. The Patient Center mobile application connects to the InterSystems HealthShare platform, a health informatics platform certified by IHE. The app will be developed natively for iOS and Android and connected to the HealthShare server using web services and REST. HealthShare connects to the network hospitals using IHE transactions, and we use IHE XPHR for personal health record exchange.
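As a concrete illustration of the mobile-to-HealthShare REST link described above, the sketch below builds a JSON request body asking a gateway to export a patient's XPHR summary. The field names and values here are illustrative assumptions of ours, not the actual HealthShare or IHE API:

```python
import json

def phr_export_request(patient_id, doc_types):
    """Build the JSON body for a hypothetical REST call that asks the
    gateway to export a patient's PHR summary as XPHR content.
    All field names below are illustrative, not a real HealthShare schema."""
    return json.dumps({
        "resource": "PHR-export",          # hypothetical resource name
        "patientId": patient_id,
        "contentProfile": "XPHR",          # IHE content profile requested
        "documentTypes": doc_types,        # e.g. medications, lab results
    })
```

A client would POST this body to the gateway and receive the exported summary document in response; the exact endpoint and response format would follow the deployed HealthShare configuration.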
Conclusion
These scenarios are not theoretical; early-adopting consumers across these various patient personae are already using patient engagement solutions. Consumers appreciate the idea of remote health monitoring, and 89% of physicians would prescribe a mobile health app to patients. Patients continue to trust their personal physicians above most other professionals, after nurses and pharmacists. Patient Center is designed for better health outcomes, clinical decision support, engaging patients actively in their healthcare, and reducing outpatient visits and repetitive encounters.
Partition Clustering Techniques for Big LIDAR Dataset
I. Abstract: Smart cities are collecting and producing massive amounts of data from various sources such as local weather stations, LIDAR data, mobile phone sensors, the Internet of Things (IoT), etc. To use such large volumes of data for potential daily computing benefits, it is important to store and analyse this urban data using handy computing resources and algorithms. However, this can be problematic due to many challenges. This article explores some of these challenges and tests the performance of two partitional algorithms for clustering such Big LIDAR datasets. Two handy clustering algorithms, K-Means and Fuzzy c-Means (FCM), were put to the test to assess their suitability for clustering such a large dataset. The purpose of clustering urban data is to categorize it into homogeneous groups according to specific attributes. Clustering Big LIDAR data into a compact format that represents the information of the whole dataset can help researchers deal with the reorganised data much more efficiently. To this end, the two techniques were run against a large set of LIDAR data to show how they perform on the same hardware set-up. Our experiments conclude that FCM outperformed K-Means when presented with this type of dataset; however, the latter is lighter on hardware utilisation. II. Introduction: Much ongoing and recent research and development in computation and data storage technologies has contributed to the Big Data phenomenon. The challenges of Big Data stem from the 5 V's: Volume, Velocity, Variety, Veracity and the Value to be gained from the analysis of Big Data [1]. From a survey of the literature, there is agreement among data scientists about the general attributes that characterise the Big Data 5 V's, which can be summed up as follows: very large data, mainly in terabytes/petabytes/exabytes (Volume).
Data can be found in structured, unstructured and semi-structured forms (Variety). Data are often incomplete and inaccessible, so datasets should be extracted from reliable and verified sources (Veracity). Data can be streaming at very high speed (Velocity). Data can be very complex, with high dimensionality and interrelationships between different elements. The challenges of Big Data are ongoing, and the problem grows every year. A report by Cisco estimated that by the end of 2017 annual global data traffic would reach 7.7 zettabytes: global internet traffic would triple over five years, growing at a compound annual growth rate (CAGR) of 25% by 2017. It is essential to take steps toward tackling these challenges, because a day may come when today's Big Data tools become obsolete in the face of such an enormous data flow. III. Clustering Methods: Researchers deal with many types of large datasets; the concern here is whether to introduce new algorithms or to adapt the data so that existing algorithms can handle it. Currently, two approaches are predominant. The first, known as "scaling up", focuses effort on enhancing the available algorithms; this approach risks them becoming useless tomorrow as data continue to grow, so the algorithms would need to be scaled up again and again as time moves on. The second approach is to "scale down", or skim, the data itself, and to run existing algorithms on the reduced version of the data. This article focuses on the scale-down of datasets by comparing clustering techniques.
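The "scale-down" approach can be as simple as uniform random subsampling of the point cloud before clustering. The sketch below is an illustration only; the sampling fraction is an assumed parameter, not a value used in this article:

```python
import numpy as np

def scale_down(points, fraction=0.01, seed=0):
    """'Scale down' a large point cloud by uniform random subsampling so
    that an unmodified clustering algorithm can cope with it."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * fraction))
    keep = rng.choice(len(points), size=n_keep, replace=False)
    return points[keep]
```

A 1% sample of a million-point LIDAR tile, for example, leaves 10,000 points, which commodity hardware can cluster comfortably while preserving the overall spatial distribution.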
Clustering is defined as the process of grouping a set of items or objects that share attributes or characteristics into the same group, called a cluster, which may differ from other groups. Clustering is useful for between-cluster separation, within-cluster homogeneity, and good representation of the data by its centroids. It is applied in fields such as biology, to find groups of genes with similar functions; medicine, to find patterns in disease symptoms; and business, to find and target potential customers. IV. Compared Techniques, K-Means vs. Fuzzy c-Means: To highlight the advantages for everyday Big Data computing, this article compares two popular and computationally attractive partitional techniques. 1) K-Means clustering: a widely used algorithm that partitions a dataset into K clusters (C1, C2, ..., CK), each represented by its arithmetic mean, the "centroid", calculated as the mean of all data points (records) belonging to that cluster. 2) Fuzzy c-Means clustering: FCM was introduced by Bezdek et al. and is derived from the K-Means concept, but differs in that an object may belong to more than one cluster, with degrees of membership calculated on the basis of distances (usually Euclidean) between the data points and the cluster centroids. V. Experiments Set-up: The experiments compare how the candidate K-Means and FCM clustering techniques cope with clustering a Big LIDAR dataset on handy computer hardware. They were performed on an AMD 8320 4.1 GHz 8-core processor with 8 GB of RAM, running 64-bit Windows 8.1.
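The two compared techniques can be sketched in a few lines of NumPy. This is a minimal illustration of the centroid and membership updates described above, not the implementation used in the experiments; the deterministic farthest-point seeding is an assumed initialization choice:

```python
import numpy as np

def _farthest_init(X, k):
    """Deterministic farthest-point seeding for the initial centroids."""
    idx = [0]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2), axis=1)
        idx.append(int(d.argmax()))
    return X[idx].astype(float)

def kmeans(X, k, iters=50):
    """Plain K-Means: alternate hard assignment and centroid recomputation."""
    centroids = _farthest_init(X, k)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # hard assignment
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

def fcm(X, c, m=2.0, iters=50):
    """Fuzzy c-Means: every point holds a degree of membership in every
    cluster; centroids are membership-weighted means."""
    centroids = _farthest_init(X, c)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        # standard FCM update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centroids
```

The fuzzifier m controls how soft the partition is: as m approaches 1, FCM memberships become nearly crisp and the method approaches K-Means.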
The algorithms were run against LIDAR data points taken for our campus location at latitude 52.23–52.22 and longitude 1.335–1.324. This location represents the International University of Sarajevo main campus, with an initialization of 1,000,000 × 1,000 digital surface data points. Both clustering techniques were applied to the dataset starting with a small cluster number, K = 5, gradually increased to K = 25. VI. Conclusions: The lowest time measured for FCM to group the data into 5 clusters was 42.18 seconds, while K-Means took 161.17 seconds to form the same number of clusters. The highest time recorded for K-Means to converge was 484.01 seconds, while FCM took 214.15 seconds to cluster the same dataset. There is a strong positive correlation between time and the number of clusters assigned: as the cluster count increases, so does the running time of both algorithms. On average, FCM used 5–7 of the eight available cores, with 63.2 percent of the CPU processing power and 77 percent of the RAM; K-Means, on the other hand, utilised 4–6 cores, with the rest remaining idle, averaging 37.4 percent of the CPU processing power and 47.2 percent of the RAM. Overall, both algorithms scale to Big Data, but FCM is fast and would make an excellent clustering algorithm for everyday computing. It also offers added advantages such as its ability to handle different data types; moreover, owing to its fuzzy partitioning capability, FCM can produce better-quality clustering output, which could benefit many data analysts.
Understanding Cross-modal Collaborative Information Seeking
Authors: Dena Ahmed Al-Thani and Tony Stockman
I. Introduction
Studies reveal that group members often collaborate when searching for information even when not explicitly asked to do so [1]. The activity in which a group of people engages in a common information seeking task is called Collaborative Information Seeking (CIS). Over the past few years, CIS research has focused on providing solutions and frameworks to support the process [2]. However, work in this field to date has assumed that information seekers engaged in CIS activity all use the same access modality: the visual modality. This focus has failed to address the needs of users who employ different access modalities, such as haptic and/or audio. Visually Impaired (VI) employees in a workplace may often have to collaborate with their sighted team members when searching the web. Given that a VI individual's search behaviour is already challenged by poor web design and the shortcomings of current assistive technology [3][4], the difficulty of collaboratively engaging in web search activity with peers can be a major barrier to workplace inclusion. This study explores the under-investigated area of cross-modal collaborative information seeking (CCIS): the challenges and opportunities in supporting VI users to take an effective part in collaborative web search tasks with sighted peers in the workplace.
II. Study Design
To develop our understanding of the issues, we investigated the CCIS activity of pairs of VI and sighted participants. The study consisted of an observational experiment in which 14 pairs of VI and sighted users completed two web search tasks, followed by scenario-based interviews conducted with seven of the 14 pairs. The experiment examined the patterns of CCIS behaviour and the challenges that occur; the scenario-based interviews examined the techniques used, the tools employed, and the ways information is organized for both individual and collaborative use. In the observational study, all VI participants used a speech-based screen reader. Each pair was given two search tasks: one performed in a co-located setting and the other in a distributed setting. For the co-located task, the participants were asked to work collaboratively to organize a trip to the United States, while for the distributed task they were asked to plan a trip in Australia. In the distributed condition, participants were seated in different locations and told they were free to use any method of communication they preferred; 5 pairs used email and 9 pairs used Skype. Seven pairs from the observational study took part in the scenario-based interviews, in which the interviewer described a CIS activity followed by four scenarios containing questions about the management of the retrieved information.
III. Observational Study Findings
A. Division of Labour
In the co-located condition, discussion about the division of labour occurred at two levels: first in the initial discussion, and second when one participant interrupted his/her partner in order to complete a certain action. Three reasons were identified for these interruptions: (1) VI users sought assistance from their sighted partners when viewing large amounts of information; (2) when browsing websites with inaccessible components, VI users asked sighted participants to perform the task or to assist them in performing it; (3) the third reason related to the context of the task. In contrast, in the distributed condition, discussion about the division of labour occurred only at the beginning of the task: the pair divided the work and then worked independently, updating each other about their progress only through the communication tool. Unlike the co-located sessions, collaboration in the later stages was not observed. Additionally, VI participants made fewer requests for assistance in this condition, seeming more reluctant to ask for support when distributed; when a VI participant encountered an accessibility issue, they would try on average three websites before asking their sighted partner for help. The majority (13 pairs in the co-located and 12 pairs in the distributed setting) divided the labour so that sighted participants performed booking-related activities and VI participants performed event-organization activities. VI participants emphasised that they chose this approach to avoid any issues related to accessibility. Vigo and Harper [5] categorized this type of behaviour as "emotional coping", in which users' past experience of an inaccessible action on a similar webpage or task affects their judgment of which websites to use or tasks to undertake.
It is clear from the results that VI users put some thought into either dividing the labour in a specific way or to find some other way to get around the issues encountered.
B. Awareness
In the co-located condition, the main method of maintaining awareness was verbal communication; in the distributed condition, the only methods were email and instant messaging. To keep their partners aware of their activities while performing the task, participants constantly reported their current actions, enriching group awareness in the absence of a dedicated awareness tool in either condition. In fact, pairs who completed more of the task in both conditions communicated more information about their activities. In the distributed condition, the more information communicated to avoid duplication of effort, the higher the performance, which indicates that making this type of information available between distributed collaborators might enhance their ability to complete tasks efficiently. This was not the case in the co-located condition, where the sessions with the lowest and highest performance reported the same amount of communication relating to duplication of effort. However, pairs who performed well in the co-located sessions communicated more information about the actions they were performing that was not essential for their partner to know. This indicates that facilitating the appropriate type and amount of awareness information in each condition is crucial to team performance and can increase team productivity [6].
C. Search Results exploration and management
Collaboration occurred mainly in two stages of the information seeking process: the results exploration and results management. In the results exploration stage, collaboration was triggered when VI participants viewed large amounts of information with their sighted partner's assistance or by both partners deciding to explore search results together. The average number of search results viewed collaboratively was higher than the average number of search results viewed by VI participants alone. Screen readers' presentation of large volumes of data imposes a number of challenges such as short term memory overload and a lack of contextual information [3]. This stage is highlighted as one of the most challenging stages faced by VI users during the IS process [4]. The amount of retrieved information kept by sighted users is nearly double the amount of information kept by VI users. The reasons for this were twofold: (1) Sighted users viewed more results than their VI partners. (2) The cognitive overhead that VI users experience when switching between the web browser and an external application used to take notes. This increased cognitive load is likely to slow down the process. The effect of this is more apparent in the distributed condition where VI users are required to switch between three applications: the email client or instant chat application, the web browser and the note taking application.
IV. Scenario-based interviews findings
The scenario-based interview is a tool that allows exploration of the context of the task; a scenario narrative provides a natural way for people to describe their actions in a given task context. The interviews revealed that collaborative searching is quite a common practice, as all the participants could relate the given scenario to similar activities they had undertaken in the past. We found that ad hoc combinations of everyday technologies, rather than dedicated solutions, are typically used to support this activity. There were clear instances of the use of social networks such as Twitter and Facebook to share retrieved results, by both VI and sighted interviewees. Individual and cross-modal challenges were also extensively mentioned by VI interviewees, as current screen readers fall short of conveying information about spatial layout and of helping users form a mental model of web pages congruent with that of their sighted partners. The VI participants interviewed were fully aware of the drawbacks that the serial nature of screen readers imposes on their web search activities; indeed, these challenges led them to choose to perform some web search activities collaboratively when that was an option. In the interviews, sighted users tended to use more complex structures for storing retrieved information, such as headings or multi-level lists, while VI users tended to use simpler flat or linear lists.
V. Implications
The studies we carried out highlighted the challenges encountered when VI and sighted users perform a collaborative web search activity. In this section we propose a number of design implications for the design of CCIS systems. Due to space limitations in this paper, we only present three.
1. Overview of Search Results
Developing a mechanism that provides VI group members with an overview of search results and the ability to focus on particular pieces of information of interest could help in increasing VI participants' independence in CCIS activities. VI web searchers are likely to perform the results exploration stage more effectively and efficiently if they could firstly get a gist of results retrieved and could then drill down for more details as required. This will advantage both individual and collaborative information seeking activities.
2. Cross-modal Shared workspace
Having a common place to save and review retrieved information can enhance both awareness and the sense-making process, and reduce the overhead of using multiple tools, especially for VI users, who do not have sight of the whole screen at one time. The system should support a cross-modal representation of all changes made by collaborators in the shared workspace. Just as changes in a visual interface can be represented by colours, changes in an audio interface might be represented by a non-speech sound or by modifying one or more properties of the speech sound, for example its timbre or pitch.
3. Cross-modal representation of collaborators Search Query Terms and Search Results
Allowing collaborators to see their partner's query terms and the results viewed will inform them about their partner's progress during a task. Additionally, having a view of a partner's search results allows sighted users to collaborate with their VI partners while going through search results.
VI. Conclusion
This paper discussed CCIS, an area that has not previously been explored in research. The studies presented here are part of a project that aims to support the CCIS process. The next part of the project is to investigate the validity of the proposed design implications in supporting CCIS and their effect on collaborators' performance and engagement.
References
[1] Morris, M. R. (2008). A survey of collaborative web search practices. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, New York, USA. ACM.
[2] Golovchinsky, G., Pickens, J., and Back, M. (2009). A taxonomy of collaboration in online information seeking. In JCDL Workshop on Collaborative Information Retrieval.
[3] Stockman, T., and Metatla, O. (2008). The influence of screen readers on web cognition. Proceedings of Accessible design in the digital world conference. York, United Kingdom.
[4] Sahib, N. G., Tombros, A., and Stockman, T. (2012). A comparative analysis of the information-seeking behavior of visually impaired and sighted searchers. Journal of the American Society for Information Science and Technology.
[5] Vigo, M., and Harper, S. (2013). Coping tactics employed by visually disabled users on the web. International Journal of Human-Computer Studies.
[6] Shah, C., and Marchionini, G. (2010). Awareness in collaborative information seeking. Journal of the American Society for Information Science and Technology.
Geography of Solidarity: Spatial and Temporal Patterns
Authors: Noora Alemadi, Heather Marie Leson, Ji Kim Lucas and Javier Borge-Holthoefer
We would like to propose a panel which discusses this paper and includes special guests from the international and Qatar humanitarian community to talk about the future of humanitarian research in the MENA region. Abstract: In the new era of data abundance, procedures for crisis and disaster response have changed. As relief forces struggle with assistance on the ground, digital humanitarians step in to provide a collective response at unprecedentedly short time scales, curating valuable bits of information through simple tasks: mapping, tagging, and evaluating. This hybrid emergency response leaves behind detailed traces, which inform data scientists about how far and how fast calls for action reach volunteers around the globe. Among the few consolidated platforms in the digital humanitarian technology arena, we find MicroMappers. It was created at QCRI, in partnership with UN-OCHA and the Standby Task Force, as part of a tool set that combines machine learning and human computing: Artificial Intelligence for Disaster Response (AIDR) [1]. MicroMappers is activated during natural disasters to tag short text messages and to evaluate ground and aerial images. Thus, MicroMappers can also be viewed as a valuable data repository, containing historical data from past events in which it was activated. To perform our study, we rely on rich datasets from three natural disasters, in the Philippines (Typhoon Hagupit, 2014) [2], Vanuatu (Cyclone Pam, 2015) [3] and Nepal (earthquake, 2015) [4]. Each event rendered thousands of digital records from the labor inputs of the crowd. We focus particularly on IP addresses, which can be conveniently mapped to a specific location, and timestamps, which describe the unfolding of the collective response in time. The anonymity of each contributor is preserved at all times throughout the project.
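The temporal side of this analysis can be illustrated with a small sketch that bins contribution timestamps into hourly counts, tracing how fast the collective response unfolds after an activation. This is a hypothetical illustration of the approach, not the project's actual pipeline:

```python
from collections import Counter
from datetime import datetime

def hourly_activity(timestamps):
    """Bin ISO-8601 contribution timestamps into per-hour counts,
    ordered in time, to trace the unfolding of the response."""
    hours = (datetime.fromisoformat(ts).replace(minute=0, second=0, microsecond=0)
             for ts in timestamps)
    return dict(sorted(Counter(hours).items()))
```

The same grouping idea applies to the spatial dimension, replacing the hour key with the region a contributor's IP address geolocates to.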
Detecting And Tracking Attacks in Mobile Edge Computing Platforms
Authors: Abderrahmen Mtibaa, Khaled A. Harras and Hussein Alnuweiri
Device-to-device (d2d) communication has emerged as a solution promising high bit rates, low delay and low energy consumption, key enablers for novel technologies such as Google Glass, S Beam, and LTE-Direct. Such d2d communication has enabled computational offloading among collaborative mobile devices for a multitude of purposes: reducing overall energy consumption, balancing resources across devices, reducing execution time, or simply executing applications whose computing requirements transcend what can be accomplished on a single device. While this novel computation platform offers convenience and multiple other advantages, it also introduces new security challenges and mobile network vulnerabilities. We anticipate challenging future security attacks resulting from the adoption of collaborative mobile edge cloud computing platforms, such as MDCs and FemtoClouds. In typical botnet attacks, "vertical communication" between a botmaster and infected bots enables attacks that originate from outside the network. Intrusion detection systems typically analyze network traffic to detect anomalies, honeypots are used to attract and detect attackers, and firewalls are placed at the network periphery to filter undesired traffic. However, these traditional security measures are not as effective in protecting networks from insider attacks such as MobiBots, mobile-to-mobile distributed botnets. This shortcoming is due to the mobility of the bots and the distributed coordination that takes place in MobiBot attacks. In contrast to classical network attacks, these attacks are difficult to detect because MobiBots adopt "horizontal communication" that leverages frequent contacts amongst entities capable of exchanging data and code. In addition, this architecture does not rely on any pre-established command and control (C&C) channels between a botmaster and its bots.
Overall, such mobile device infections circumvent classical security measures, ultimately enabling more sophisticated and dangerous attacks from within the network. We propose HoneyBot, a defense technique that detects, tracks, and isolates malicious device-to-device insider attacks. HoneyBots operate in three phases: detection, tracking, and isolation. In the detection phase, the HoneyBot operates in a vulnerable mode in order to detect lower-layer and service-based malicious communication. We adopt a data-driven approach, using real-world indoor mobility traces, to evaluate the impact of the number of HoneyBots deployed and their placement on detection delay. Our results show that utilizing only a few HoneyBot nodes helps detect malicious infection in no more than 15 minutes. Once the HoneyBot detects malicious communication, it initiates the tracking phase, which consists of disseminating control messages to help "cure" the infected nodes and trace back the infection paths used by the malicious nodes. We show that HoneyBots can accurately track the source(s) of the attack in less than 20 minutes. Once the source(s) of the attack is/are identified, the HoneyBot activates the isolation phase, which aims to locate the suspect node. We assume the suspect node is uncooperative and tries to hide its identity by ignoring all HoneyBot messages; therefore, the HoneyBot requests wireless fingerprints from all nodes that encountered the suspect node in a given time period. These fingerprints are used to locate those nodes and narrow down the suspect's location. To evaluate localization accuracy, we first deploy an experimental testbed in which HoneyBots accurately localize the suspect node to within 4 to 6 m². HoneyBots can operate efficiently in small numbers, as few as 2 or 3 nodes, while improving detection, tracking, and isolation by a factor of 2 to 3.
We also assess the scalability of HoneyBots using a large-scale mobility trace with more than 500 nodes. We consider, in the attached figure, the scenario of a corporate network consisting of 9 vulnerable devices labeled 1 to 9. Such a network is attacked by one or more botmaster nodes using d2d MobiBot communication. Note that the attacks propagate horizontally, bypassing all firewall and intrusion detection techniques deployed by the corporate network administrators. In this scenario, we identify four main actors: the botmaster (red hexagon), the HoneyBot (green circle), the infected bot (red circle), and the cured or clear node (blue circle). We assume that the 9 nodes shown in the figure represent only the vulnerable d2d nodes in this corporate network. We propose detection, tracking and isolation techniques that aim to accurately and efficiently defend networks from insider d2d malicious communication.
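The trace-back step of the tracking phase can be illustrated with a toy model: given a time-ordered log of infecting contacts, walk the first-infection parents back from a detected bot to the suspected source. This sketch is a simplified assumption of ours, not the HoneyBot implementation:

```python
def trace_back(contact_log, detected):
    """Trace an infection path back from a detected bot to its suspected
    source. contact_log is an iterable of (time, src, dst) infection events;
    in this simplified model, a node is infected by the first contact
    that reaches it."""
    infected_by = {}
    for t, src, dst in sorted(contact_log):
        if dst not in infected_by:          # first infecting contact wins
            infected_by[dst] = src
    path = [detected]
    while path[-1] in infected_by:
        parent = infected_by[path[-1]]
        if parent in path:                  # guard against cycles in the log
            break
        path.append(parent)
    return list(reversed(path))             # source ... detected bot
```

In a real deployment the log would be assembled from the control messages the HoneyBot disseminates while curing nodes, and several detected bots would be traced back jointly to corroborate a single source.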
-
-
-
Computational Calculation of Midpoint Potential of Quinones in the A1 Binding Site of the Photosystem I
Authors: Yasser H.A. Hussein, Velautham Sivakumar, Karuppasamy Ganesh and Gary Hastings
Quinones are oxidants, colorants, and electrophiles, and are involved in the electron transfer processes of important biological functions such as photosynthesis, respiration, and phosphorylation. On Earth, photosynthesis is the main biological process that converts solar energy into chemical energy. By producing oxygen and assimilating carbon dioxide, it supports the existence of virtually all higher life forms. It is driven by two protein complexes, photosystems I and II (PSI and PSII). In PSI, light induces the transfer of an electron from P700, a pair of chlorophyll a molecules, via a series of protein-bound pigment acceptors (A0, A1, FeS) to ferredoxin. In PSI, phylloquinone (PhQ, 2-methyl-3-phytyl-1,4-naphthoquinone) acts as the secondary acceptor, termed A1. In the menB mutant of PSI, a gene that codes for a protein involved in PhQ biosynthesis has been deleted, and plastoquinone-9 (PQ9) occupies the A1 site instead. Recent literature reveals that PQ9 is weakly bound in the A1 binding site of menB PSI and can easily be replaced by different quinones, both in vitro and in vivo. The efficiency of light-induced electron transfer of a quinone is related to its midpoint potential (Em) in the A1 binding site. For native PSI, the estimated Em of PhQ is -682 mV. The estimated Em value of PQ9 in menB PSI is -754 mV, and for the incorporated quinone 2-methyl-1,4-naphthoquinone it has been reported to be -718 mV. Interestingly, in the case of 2,3-dichloro-1,4-naphthoquinone (DCNQ) incorporated into menB PSI, no forward electron transfer is observed; so far, DCNQ is the quinone with the most positive redox potential incorporated into the A1 site of menB PSI. Keeping these reported Em values and the directionality of electron transfer in mind, we intend to find the Em of substituted 1,4-naphthoquinones that can be incorporated into the A1 binding site.
Computational calculations were performed at the B3LYP/aug-cc-pVTZ level of theory using the Gaussian 09 software on a Linux platform. The high-performance computing cluster VELA (512 GB RAM per node, 40 cores per node, with Turbo Boost up to 2.4 GHz) at Georgia State University, Atlanta, was accessed remotely from Qatar University. First, the electron affinities (EA) of substituted 1,4-naphthoquinones (NQs) were calculated. From the calculated EAs, we were able to calculate the redox potentials of the NQs in a solvent and their Em values in the A1 binding site. In order to understand the electronic and structural effects, NQs with electron-releasing (CH3, OCH3) and electron-withdrawing (Cl, Br) substituents were used in these calculations. Results show that, of the seven NQs studied, 2-methoxy-1,4-naphthoquinone has the most negative Em (-850 mV) and DCNQ the most positive (-530 mV) in the A1 binding site. Our calculated Em of DCNQ is in line with the blocking of forward electron transfer reported previously. Our Em values can be used to explain the directionality of electron transfer reactions past A1 and to predict the forward electron transfer kinetics when these NQs are incorporated into the A1 binding site experimentally.
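The step from a calculated EA to an Em in the A1 site can be sketched with a simple one-point calibration: shift each quinone's EA relative to a reference quinone whose Em in the site is known experimentally. This is an illustrative scheme, not necessarily the authors' exact protocol (which also includes a solvent-phase redox step), and the EA values below are placeholders, not the computed B3LYP numbers.

```python
# Reference: experimental Em of phylloquinone (PhQ) in the A1 site.
EM_PHQ_MV = -682.0   # mV (from the abstract)
EA_PHQ_EV = 1.81     # hypothetical reference EA for PhQ, in eV (placeholder)

def em_in_a1(ea_ev):
    """Estimate a quinone's Em (mV) in the A1 site from its calculated EA (eV).

    A higher EA means the quinone is easier to reduce, so Em shifts
    positive by the EA difference (1 eV per electron = 1000 mV)."""
    return EM_PHQ_MV + (ea_ev - EA_PHQ_EV) * 1000.0

print(em_in_a1(1.81))  # PhQ itself reproduces -682.0 mV by construction
print(em_in_a1(1.96))  # a placeholder EA 0.15 eV higher shifts Em to -532.0 mV
```

The one-point shift absorbs the (assumed constant) protein-environment contribution into the PhQ reference, which is why only EA differences, not absolute EAs, enter the estimate.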
-
-
-
Effective High-level Coordination Programming for Decentralized and Distributed Ensembles
Authors: Edmund Lam and Iliano Cervesato
Programming and coordinating decentralized ensembles of computing devices is extremely hard and error-prone. With cloud computing maturing and the emerging trend of embedding computation into mobile devices, the demand for building reliable distributed and decentralized systems is becoming increasingly common and complex. Because of these growing technical challenges, solutions for effective programming and coordination of decentralized ensembles remain elusive. Most mainstream programming methodologies offer only a node-centric view of programming, where a programmer specifies distributed computations from the perspective of each individual computing node (e.g., MPI, transactional memory, the Actor model, Linda tuple spaces, graph processing frameworks). When programming distributed computations in this style, programmers experience minimal paradigm shifts, but such concurrency primitives offer minimal support for the coordination problem. However, as systems grow in complexity and sophistication, maintaining code in this node-centric style often becomes costly, as the lack of concurrency abstraction means that programmers assume all the responsibility for avoiding concurrency pitfalls (e.g., deadlocks and race conditions). Because of this, ensemble-centric concurrency abstractions are now growing in popularity. In this style of programming, programmers specify complex distributed computations from the perspective of entire collections of computing nodes as a whole (e.g., MapReduce, Google Web Toolkit, choreographic programming), making implementations of distributed computations more concise and even making large classes of concurrency pitfalls syntactically impossible. However, programming distributed computations in this style typically requires programmers to adopt a new perspective of computation. At times, such abstractions are overly restrictive and hence not applicable to a wider range of distributed coordination problems.
Our work centers on developing a concurrency abstraction to overcome the above challenges by (1) providing a high-level ensemble-centric model of coordinating distributed computations, and (2) offering a clean and intuitive integration with traditional mainstream imperative programming languages. This framework orthogonally combines a high-level concurrency abstraction with established lower-level mainstream programming methodologies, maintaining a clean separation between the ensemble-centric concurrency model and the underlying sequential computation model, yet allowing them to interact with each other in a symbiotic manner. The benefit of this separation is twofold. First, a clear distinction between the elements of the coordination model and those of the computation model helps lower the learning curve of this new programming framework: developers familiar with the underlying mainstream computation model can incrementally build their technical understanding of the framework by focusing solely on its coordination aspects. Second, by building the coordination model on top of an underlying mainstream computation model, we inherit all existing libraries, optimizations, and programming expertise available to it. We have addressed several key challenges of developing such an ensemble-centric concurrency model. In particular, we have developed a choreographic transformation scheme that transforms our ensemble-centric programs into node-centric encodings, and a compilation scheme that converts such node-centric encodings into lower-level imperative code that can be executed by individual computing nodes. Finally, we proved the soundness of this choreographic compilation scheme by showing a two-step correspondence from ensemble-centric specifications to node-centric encodings, and then to node-centric compilations.
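The choreographic transformation above can be illustrated with a toy example. The sketch below uses a hypothetical rewrite rule and runtime, not CoMingle's actual syntax: an ensemble-centric rule "[X]has(D), [Y]wants(D) --o [Y]got(D)" (when node X holds has(D) and node Y holds wants(D), Y ends up with got(D)) is projected into node-centric code in which X matches its local facts, consumes the matched facts, and explicitly sends the result to Y.

```python
class Node:
    """Node-centric encoding of one ensemble participant."""
    def __init__(self, name, network):
        self.name, self.network = name, network
        self.facts = set()   # local multiset of facts, e.g. ("has", 42)

    def send(self, dest, fact):
        # Explicit message send: the coordination the ensemble rule hid.
        self.network[dest].facts.add(fact)

    def step(self):
        # Projection of [X]has(D), [Y]wants(D) --o [Y]got(D) onto node X:
        # match locally, consume both facts, deliver got(D) to the peer.
        for fact in list(self.facts):
            kind, d = fact
            if kind != "has":
                continue
            for peer in self.network.values():
                if ("wants", d) in peer.facts:
                    self.facts.remove(("has", d))
                    peer.facts.remove(("wants", d))
                    self.send(peer.name, ("got", d))
                    return

net = {}
net["a"] = Node("a", net)
net["b"] = Node("b", net)
net["a"].facts.add(("has", 42))
net["b"].facts.add(("wants", 42))
net["a"].step()
print(net["b"].facts)   # {('got', 42)}
```

The toy cheats by letting X inspect Y's facts directly; a real compilation would replace that lookup with a message exchange, which is exactly the bookkeeping the ensemble-centric rule spares the programmer.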
We have implemented an instance of this distributed programming framework for coordinating decentralized ensembles of Android mobile devices. This system, called CoMingle, is built to integrate with Java and the Android SDK. The ensemble-centric nature of this programming abstraction simplifies the coordination of multiple Android devices, and we demonstrate how the clean integration with Java and the Android SDK allows local computations within each device to be implemented in a traditional manner, leveraging an Android programmer's existing expertise rather than forcing them to work in an entirely new programming environment. As a proof of concept, we have developed a number of distributed mobile Android applications. CoMingle is open-source and available for download at https://github.com/sllam/comingle.
-