Biomedical Journal of Scientific & Technical Research (BJSTR) is a multidisciplinary, scholarly open access publisher focused on genetic, biomedical, and remedial research in relation to technical knowledge.
Author: biomedicalopenaccessjournals
The motto of Biomedical Journal of Scientific & Technical Research (BJSTR) Publishers is to accelerate the publication of scientific and technical research papers. Considering the importance of technology and human health, and the several emergency medical and clinical issues associated with them, key attention is given to biomedical research, asserting the need for a common, enriched information-sharing platform for eager readers.
BJSTR is a unique platform for accumulating and publicizing scientific knowledge in science and related disciplines. This multidisciplinary open access publisher provides a global podium for professors, academicians, researchers, and students of the relevant disciplines to share their scientific excellence in the form of original research articles, review articles, case reports, short communications, e-books, video articles, etc.
Examining the Spatial and Temporal Variability of Soil Moisture in Kentucky Using Remote Sensing Data
Introduction
Soil moisture plays a key role in controlling the exchange of carbon, water, and energy fluxes between the land surface and atmosphere at local, regional, and global scales. It is an important term for accurate prediction of runoff, infiltration, drainage, soil evaporation, and other important variables [1]. It thus affects near-surface climate by changing soil properties such as albedo and soil thermal capacity. Soil moisture plays an important role in constraining plant respiration, transpiration, and photosynthesis in many regions of the world [2]. Soil moisture is also involved in a number of feedbacks at different spatial scales and plays a major role in climate change projections [3]. For instance, it has been found that up to 40% of heat wave anomalies can be explained by the interannual and seasonal variability of soil moisture, owing to its significant effect on air temperature [4].
Although soil moisture plays an important role in weather, climate, and ecosystem functions, there is a lack of adequate long-term observations, which challenges efforts to predict soil moisture accurately. Soil moisture measurements have been taken using field samples and, more recently, soil moisture sensors, but these methods are labor intensive and offer limited spatial coverage. Such point-based observations do not represent the high temporal and spatial variability in soil moisture and are insufficient for regional or global analysis. Meanwhile, satellite imagery is an important tool for soil moisture studies because of its continuous spatial and temporal coverage. Several studies have used remote sensing derived surface temperature and reflectance indices to estimate soil moisture [5,6]. The validation of such techniques with ground measurements is important for improving remote sensing soil moisture retrieval.
Since the launch of the Advanced Microwave Scanning Radiometer (AMSR-E), global soil moisture data have been available at a 25 km spatial resolution. Soil moisture from AMSR-E has been calibrated and validated at different sites, and good agreement has been found between AMSR-E soil moisture and ground data at sites ranging from natural vegetation to crops [7-10]. This consistent level of agreement over a range of meteorological and surface conditions offers promise for applying AMSR-E soil moisture data to areas with different environmental conditions. The primary objective of this paper is to investigate the spatial and temporal variability in remote sensing soil moisture for the State of Kentucky. In particular, the spatial pattern of soil moisture during the vegetation growing season and its relation to vegetation type is explored. The strategy is to compare and analyze any observed trends in satellite soil moisture estimates for different Kentucky biome types. The goal is to evaluate the trends in Kentucky soil moisture and whether these trends are possibly linked to climate change. To answer these questions, the growing season soil moisture for 10 years is analyzed.
Methods
Satellite Data
Daily soil moisture data from the Advanced Microwave Scanning Radiometer (AMSR-E) are downloaded for the State of Kentucky from the National Snow and Ice Data Center archive for years 2003-2011 [11-14]. Moderate Resolution Imaging Spectroradiometer (MODIS) combined Aqua and Terra land cover data (MCD12) are downloaded for years 2004-2011 from Earth Explorer. Land cover data are used to subgroup the AMSR-E data for the major Kentucky biomes: forest, cropland, and mixed forest/crop. All land cover classes designated as forest (e.g. evergreen, deciduous) are combined to create a new land cover class named "forest". Since MODIS combined land cover data are available starting in 2004, land cover data for 2004 are used to extract Kentucky biomes for the 2003 AMSR-E data. All satellite images are mosaicked and subset to the State of Kentucky borderline. Image processing is done in Erdas Imagine (Hexagon Geospatial), and statistical analysis and plotting are done in R (R statistical group).
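As an illustration of the land cover subgrouping step, the following minimal Python sketch averages soil moisture over the pixels of one biome; it assumes the AMSR-E and MCD12 grids have been resampled to a common 2-D array, and the class codes are hypothetical placeholders rather than values from the study.

import numpy as np

# Hypothetical class codes for illustration; actual MCD12 values depend on
# the classification scheme used and are not stated in the paper.
FOREST_CLASSES = {1, 2, 3, 4, 5}   # evergreen/deciduous/mixed forest classes
CROP_CLASS = 12                    # cropland
MIXED_CLASS = 14                   # cropland/natural vegetation mosaic

def biome_mean(soil_moisture, land_cover, classes):
    """Mean soil moisture over pixels in the given land cover classes.

    Both inputs are assumed to be co-registered 2-D arrays covering
    Kentucky, with NaN marking pixels AMSR-E could not retrieve.
    """
    mask = np.isin(land_cover, list(classes))
    return np.nanmean(np.where(mask, soil_moisture, np.nan))

For example, biome_mean(sm_grid, lc_grid, FOREST_CLASSES) would yield the daily forest average, and the same call with {CROP_CLASS} the cropland average.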
Statistical Analysis
Daily AMSR-E data are averaged to obtain monthly soil moisture for years 2003-2011. Then, to generate seasonal average soil moisture, the monthly soil moisture data are averaged to produce nine-year averages for January-February-March (Jan-Feb-Mar), April-May-June (Apr-May-Jun), July-August-September (Jul-Aug-Sep), and October-November-December (Oct-Nov-Dec).
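This temporal aggregation can be sketched in a few lines of Python with pandas; this is a minimal sketch assuming a hypothetical date-indexed series of daily Kentucky-average soil moisture, not code from the study.

import pandas as pd

def seasonal_means(daily: pd.Series) -> pd.Series:
    """Collapse daily soil moisture into four multi-year seasonal means."""
    monthly = daily.resample("MS").mean()        # daily -> monthly means
    quarterly = monthly.resample("QS").mean()    # Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec
    # Average each calendar quarter across all years of the record.
    return quarterly.groupby(quarterly.index.quarter).mean()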
Results and Discussion
The spatial variability in AMSR-E soil moisture is illustrated in Figure 1. Lower soil moisture in Appalachian Kentucky can be related to errors in the AMSR-E algorithm/retrieval due to elevation and surface terrain, resulting in fewer data collected by AMSR-E. Otherwise, the spatial variability in soil moisture tracks the seasonal variability in precipitation. It is not clear how the spatial variability of AMSR-E soil moisture is related to vegetation types, but some patterns can still be detected. For example, soil moisture in Western Kentucky (dominated by agricultural fields) increases from January to September and then decreases afterward.
Figure 1: Spatial variability of the 10-year average AMSR-E soil moisture in Kentucky.
This can be related to irrigation, which is used during the crop growing season and ceases before harvest, typically around September/October depending on the crop type. It is important to remember that AMSR-E soil moisture is for the top 1 cm of the soil, which is highly correlated with precipitation amount and variability. Temporal variability in soil moisture reflects the temporal variability in precipitation, with higher soil moisture during the winter and spring seasons and lower during the summer season (Figure 2). AMSR-E data show a decreasing trend in Kentucky soil moisture after 2009, which can be related to a decrease in precipitation. What is driving this decrease in soil moisture? Is it changes in winter, spring, or summer precipitation? To answer this question, the growing season soil moisture (April-September) is averaged for each of the 10 years (data not shown).
Figure 2: Time series of AMSR-E soil moisture for years 2002-2011.
Analysis reveals that the apparent decreasing trend in Kentucky soil moisture during the vegetation growing season is due to a decrease in July-September AMSR-E soil moisture (Figure 3). This decreasing trend is consistent with the observed decrease in precipitation duration (not intensity) during the growing season in Kentucky, particularly from July to September. Since the shift in precipitation is toward higher rates over shorter time periods, runoff increases, leading to a decrease in water infiltration and thus soil moisture. Soil moisture is higher in crops than in forest ecosystems by about 20% (Figure 4). The seasonal variability in forest soil moisture is similar throughout the nine-year study period. Higher soil moisture is detected by AMSR-E during the dormant season for forest ecosystems (Figure 4).
Figure 3: Time series of average AMSR-E soil moisture for the months of July-September.
Figure 4: Time series of AMSR-E soil moisture for forest (top), crop (middle) and mixed crop/forest (bottom) biomes in Kentucky.
In contrast, lower soil moisture can be detected during the vegetation growing season, reflecting the use of soil water by plants for transpiration. Water interception by leaves can also impact soil moisture by decreasing the amount of precipitation reaching the forest floor. Crop soil moisture does not show a distinct seasonality, most probably due to human activities such as irrigation (Figure 4). For the mixed crop/forest biome, the soil moisture signal resembles the crop signal more closely in that there is no apparent seasonal cycle in the soil moisture estimates (Figure 4). Nevertheless, AMSR-E soil moisture data are capable of detecting the temporal variability in soil moisture for the major Kentucky biomes.
To check the ability of AMSR-E soil moisture data to capture the variability in soil moisture, two sites in Kentucky (an agricultural and a forest site) are compared to AMSR-E soil moisture data. Results show that AMSR-E underestimates observed soil moisture and shows better agreement for the months of July to September (Figure 5). AMSR-E is able to capture soil moisture during the dry period for both sites (Figure 5). To a great extent, AMSR-E represents the trend rather than the actual values. The lower variability in AMSR-E soil moisture compared to site data is likely the result of different measurement depths (~1 cm for AMSR-E vs. 5 cm for site data). Lower moisture is expected in the top 1 cm of the soil, as it dries more quickly due to evaporation and infiltration than soil at 5 cm depth. Another reason for this difference is the coarse resolution of AMSR-E (25 km), at which the variability in soil moisture due to texture, vegetation cover, and topography is not well captured. It is important to note that soil moisture varies at very fine spatial scales that can be captured at the site level but are harder to detect using a satellite signal that represents the average of heterogeneous areas (a mix of different land cover types, topography, soil types, etc.).
Figure 5: Field soil moisture measurements at 5 cm depth (connected dots) for Mammoth Cave (top) for years 2009-2011 and Princeton field station (bottom) for year 2010. Point data represent AMSR-E soil moisture.
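The site comparison described above can be summarized with simple agreement statistics; a minimal Python sketch follows, assuming the satellite and field series have already been matched by date (function and variable names are illustrative, not from the study).

import numpy as np

def bias_rmse(satellite, ground):
    """Mean bias and RMSE between co-located AMSR-E and field data.

    A negative bias indicates the satellite underestimates the ground
    observations, consistent with the comparison described above.
    """
    satellite = np.asarray(satellite, dtype=float)
    ground = np.asarray(ground, dtype=float)
    ok = ~(np.isnan(satellite) | np.isnan(ground))
    diff = satellite[ok] - ground[ok]
    return diff.mean(), np.sqrt(np.mean(diff ** 2))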
This work presents the capability of remote sensing to capture the spatial and temporal variability in soil moisture and its relation to vegetation types in Kentucky. AMSR-E soil moisture data are capable of detecting the spatial and temporal variability in soil moisture for Kentucky biomes. The temporal variability in soil moisture for forest ecosystems reflects continuous drying during the growing season, which is more apparent toward the end of the growing season due to decreased precipitation amounts. The nine-year record revealed a decrease in soil moisture, but continued availability of satellite soil moisture is needed before such a trend can be significantly related to climate change. Evaluating satellite soil moisture products is important for improving our understanding of the spatial variability in vegetation carbon and water cycles.
Strengthening Early Warning and Early Action Strategies for Urban Food Security in Kenya
Executive Summary
Kenya is witnessing rapid population growth in its urban centers, estimated at 4.4% per year. It is projected that the number of people living in urban areas will exceed those in rural areas within the next two decades, with the majority of the urban population (60%) living in informal settlements. Due to diverse physiographic conditions, urban areas are more exposed than rural areas to various types of risks, which are likely to worsen due to climate change. An increasing concentration of population, coupled with extreme events, results in heavy damage to assets, interruptions in business continuity, loss of lives, and displacement of populations, further compounded by economic and social vulnerability. Informal settlements in urban areas face serious threats from such emergencies, with food insecurity foremost among the crises facing them.
Those living in the slums earn low wages that are often uncertain and insufficient to meet their needs. The majority of dwellers, whose earnings range from KS 6,666 to KS 12,845, spend between 80% and 100% of their income on food. Unlike those living in rural areas, these groups are seriously affected by global fluctuations in food prices. Additionally, most of them exhibit a shift in diet from the foods consumed in rural areas. Moreover, food security indicators, thresholds, and coordination mechanisms are weak and not as well developed as those in rural areas such as the Arid and Semi-Arid Lands (ASALs), spearheaded by NDMA. This depressing and uncertain state of affairs justified the development of the IDSUE project, spearheaded by Concern Worldwide under the START consortium, with the following objectives:
a) To determine indicators for early detection of humanitarian emergency situations and coping strategies;
b) To develop surveillance systems for detection of early warning signs of a humanitarian emergency/crisis; and
c) To identify thresholds and triggers for action that define when a situation has reached an emergency/crisis stage.
Despite the impressive work done by the START consortium to develop these indicators and thresholds for food security, a number of challenges may hinder full adoption of this mechanism for addressing urban emergencies. First, policies and strategies are quite unfriendly to the urban poor and have not integrated food security issues, with agriculture budgets at 6%, below the recommended 10%; secondly, limited resources are dedicated to monitoring food security issues in urban areas by the NCCG and stakeholders; thirdly, there appears to be a weak coordination mechanism without a dedicated agency with strong convening and coordinating power similar to NDMA; and finally, the tools and approaches developed by Concern Worldwide may need further testing and research to validate them. It is on these grounds that this policy brief seeks to galvanize the efforts of policy makers, including the NCCG, national government, donors, private sector, and CBOs, to ensure the good work by Concern Worldwide and partners is fully adopted in support of urban food security. Such efforts will ensure that mechanisms and strategies for the urban poor and slum dwellers are supported to cope with the crises in urban areas and the challenges associated with climate change.
Background to Urbanization and Disaster Risks
Humanity is now half urban and is expected to be nearly 70 percent urban by 2050 [1]. Urbanization is predominantly taking place in cities in developing countries, most notably in Africa and Asia. In Kenya, urbanization is increasing rapidly at about 4.4 percent annually, with an estimated 60% of the urban population living in informal settlements [2]. High population densities, soaring crime, poor sanitation, and inadequate health care services are contributing to higher disease burdens than among rural populations [3]. Most of these slum dwellers are powerless and lack personal security, tenure to their land, and access to stable income sources [4]. Additionally, sub-standard infrastructure, including housing, and inherent socio-economic inequalities increase the susceptibility of urban residents to a variety of emergencies, which seriously undermine their social capital and life expectancy [5]. Such rapidly growing urban settlements could gradually turn into crucibles of death from natural and man-made disasters unless targeted policies and programmes are enacted to enhance their resilience.
The common shocks that urban dwellers are exposed to include fires, floods, security risks, water shortages, and rising food prices. Mortality and malnutrition levels are routinely used to detect when a disaster situation has entered a crisis phase and to trigger humanitarian response in rural settings. However, using these indicators and their corresponding thresholds poses challenges in urban settings due to high population densities and different livelihoods and coping strategies. It is imperative, therefore, to develop a surveillance system with indicators that reveal stress in the early stages of a crisis to help humanitarian and development agencies activate early action [6]. Hunger, food insecurity, and negative coping strategies have been noted as strong outcomes of slow onset emergencies in informal settlements. Urban crises disproportionately affect the poorest, particularly female-headed households, youth, children, marginalized groups, people living with AIDS, the elderly, and stigmatized ethnic groups [7].
Urbanization and Food Security
Food and nutritional security are basic needs not only for human survival but also for economic productivity and human development. It is widely accepted that there are four dimensions to food security: availability, access, utilization, and stability. While each dimension is necessary for overall household food security, they may carry different weightings, particularly in urban settings. The main trends impacting urban food security include demographic changes, diversification of diets, high cost of farm inputs, poor investment in agriculture, natural disasters, climate change, and unfriendly policies and legislation toward farmers [8]. Promoting food security directly contributes to the realization of human rights and fundamental freedoms as provided by the Kenya Constitution 2010. It also contributes directly to the achievement of two Sustainable Development Goals (SDGs):
a) SDG 1: ending poverty in all its forms everywhere; and
b) SDG 2: ending hunger, achieving food security and improved nutrition, and promoting sustainable agriculture.
For many years, emergency interventions focusing on food security and nutrition have had a predominantly rural focus, and there is a need to reverse this approach to accommodate the needs of fast-growing urban populations. The 2007-2008 global food crises that occurred in most cities across the world highlighted and exposed the vulnerability of the urban food system [9]. In Kenya, the 2007-2008 post-election violence contributed further to the deterioration of living conditions for residents in urban areas, underpinning the need to develop and strengthen food security surveillance systems in urban set-ups. These crises revealed that the urban poor and other minorities were hardest hit, making it difficult for them to maintain a decent standard of living [10] (Figure 1).
The urban food security environment presents several challenges that differentiate it from a rural context. First, urban dwellers purchase almost all their food requirements from markets. Secondly, the majority of urban dwellers exhibit a shift in staple foods from maize, sorghum, and traditional foods to rice and wheat. This exposes the urban poor to international markets more than their rural counterparts, as their diet is controlled more by global markets and by many actors (Figure 1). Urban residents are also exposed to changes in global events such as Foreign Direct Investment (FDI), which can influence the remittances that some urban dwellers depend on. Lastly, the income sources of the urban poor are casual, insecure, uncertain, and low paying, posing a huge risk to food access for residents. Thus, when emergencies strike, urban dwellers are impacted even more than their rural counterparts, resorting to negative coping strategies including theft.
Figure 1: Illustration of food supply chain in urban setups (Adapted from Teng et al., 2010).
Urban Humanitarian Response
Humanitarian response to urban emergencies is not new. However, the approaches and tools are relatively new and continue to evolve. Recent experiences in urban humanitarian response emphasize the need for engagement through a strong coordination mechanism with a wide spectrum of stakeholders such as the national and local/county governments, private sector, civil society, donor community, the media, multilateral organizations, CBOs, and local communities. A key challenge affecting the effectiveness of urban humanitarian response has been the lack of consensus on indicators for detecting when chronic poverty has tipped into a crisis [11]. A clear methodology and indicators appropriate for urban set-ups, to monitor the situation and trigger timely humanitarian response once a certain threshold has been reached, are essential. Such a mechanism should consider urban peculiarities, including the high population densities that can obscure the actual humanitarian situation (Figure 2).
Figure 2: Hidden crisis in urban areas.
Equally essential is putting in place a credible people-centered early warning system (EWS) [12] to support early humanitarian response in the event of urban crises. The EWS should generate and disseminate timely and meaningful information to allow populations at risk and stakeholders sufficient lead time to take appropriate actions to mitigate the impacts of disaster [13]. For EW information to be effective, a prepositioned contingency fund and response plan should be rolled out as soon as the indicators reach a certain threshold. An effective multi-stakeholder coordination mechanism to champion delivery of the various interventions by the government and humanitarian actors should also be established. This is vital to enhance the efficiency and effectiveness of humanitarian operations, limit duplication of efforts, and clarify the roles and responsibilities of the various actors. Coordination, particularly during emergencies, has often proved to be a challenge, leading to unacceptable delays and inefficient delivery of interventions.
The humanitarian IDSUE research project has made remarkable progress in developing surveillance systems, strengthening institutional capacities, and developing indicators and thresholds for urban emergencies [14]. Overall, the study found that the majority of community members living in the informal settlements are food insecure, with up to 64 percent of sampled households being food insecure [15]. This pioneering work needs to be scaled up by the NCCG, national government, private sector, donors, and other stakeholders as a mechanism for tackling food insecurity in urban areas. The study has contributed to the development of indicators and thresholds for phase classification of urban crises, as shown in Table 1 below.
Table 1: Thresholds for early warning indicators (Source: IDSUE survey data, 2015).
Legal & Policy Framework for Food Security in Kenya
A number of legislative and policy frameworks exist at global, regional, and national levels in support of urban food security and agriculture. At county and national levels, the IDSUE project spearheaded by Concern Worldwide under the START Network, the NCCG Disaster & Emergency Management Act, the National Disaster Management Bill, and the National Food and Nutrition Security Policy are driving the agenda to ensure the right to food is achieved. However, in the event of emergencies, particularly in urban set-ups, these policy and legislative frameworks do not clearly spell out mechanisms for ensuring the nutritional and food security needs of the urban poor are realized.
At the regional level, efforts supporting food security include the IGAD Drought Disaster Resilience and Sustainability Initiative (IDDRSI) [16], a regional initiative for reducing vulnerability and building resilience led by IGAD, the Comprehensive Africa Agriculture Development Programme (CAADP) Framework [17], and Agenda 2063. The Africa Risk Capacity (ARC) [18] of the AU is supporting agricultural insurance mechanisms, thus promoting food security and agricultural productivity. Agenda 2063, which builds on existing continental initiatives, is a strategic framework of the AU to accelerate socio-economic transformation, including food security and agriculture. The CAADP committed countries to increasing agricultural productivity by raising public investment in agriculture to at least 10% of their national budgets by 2008. The national and county governments need to capitalize on these initiatives to ensure appropriate programs and strategies are developed to combat urban food insecurity.
The 10% CAADP commitment on urban agriculture and food security issues is not integrated into the NCCG legislation, as the current agriculture budget allocation is only 6% of the total. These targets need to be cascaded to the city counties of Nairobi, Mombasa, and Kisumu to ensure agricultural budgets are at least 10 percent of the allocation from the national government in support of food security (Figure 3). However, there are critical gaps in these strategies and legal frameworks. Generally, there are limited provisions in these documents for engaging with urban communities. The policy frameworks also assume homogeneity in the populations being targeted; yet even within informal settlements, urban populations are not homogeneous, and there are huge differences among residents. Additionally, there seems to be no concrete plan for funding the implementation of these strategies and policy frameworks. There is also limited formal engagement with political leaders and decision makers, which makes it hard to get urban emergencies and food insecurity issues high on the political agenda.
Figure 3: A glance at policies, legislation and strategies in support of urban food security.
Cost Effectiveness of Early Humanitarian Response
Most slow onset emergencies, such as drought and food insecurity, are generally predictable. The major problem has been the long time taken to mobilize humanitarian aid from donors and governments to respond to the crisis. Late humanitarian response often leads to loss of lives, damage to livelihoods, depletion of assets, and erosion of coping capacity, leaving affected households destitute. For instance, it is estimated that the 1984 famine in Ethiopia caused half a million deaths, while the 2011 drought in the Horn of Africa resulted in 50,000 to 100,000 deaths [19]. When food aid is provided in time, mortality rates decline by up to 40% [20], particularly among children under 5. With regard to economic impacts, a drought every 5 years lowers GDP by up to 4% per year. Reduced food intake, common among low-income households during shocks, is associated with increased mortality and contributes to 33% of childhood deaths among children under 5 in Africa. In the absence of rapid early response, mortality rates increase substantially.
Further studies by ARC in Africa have shown that a total cost of USD 81 million is lost in the event of a high-magnitude drought, a slow onset disaster; this is equivalent to USD 221 M over 5 years, or USD 44 M per year [21]. In urban settings the loss may be much lower due to the nature of livelihoods in slums and peri-urban areas. With regard to food aid alone, it is estimated that the humanitarian community spends about USD 54 per person per year on high-impact slow onset emergencies. At the macroeconomic level, the ARC assessment indicates that slow onset emergencies have an adverse impact on GDP of 4 percent per year for a 1-in-10-year drought [22]. The study concluded that US $1 spent early saves $4.40 spent after the crisis. At the household level, ARC analysis reveals that nearly US $1,300 is lost per household over three months as a result of slow onset emergencies in the HoA. Thus, early intervention is critical for saving lives, protecting against economic downturn, and preventing the distress sale of assets at the household level.
Implications, Conclusions and Recommendations
The pioneering research work by Concern Worldwide under the START consortium on developing indicators and thresholds and strengthening surveillance systems for urban emergencies is showing promising results. It is highly relevant and strategic to the changing livelihoods and demographic needs of urban dwellers. However, there is a need to move this work forward to focus more on the socio-economic contexts of slow onset emergencies for the urban poor. The tools developed may need further testing and refinement by academia and research organizations to suit evolving issues. Comparing the performance of these tools and approaches with similar research work undertaken in Ethiopia would also help validate them. Further studies need to be pursued on urban vulnerability and hazard assessment and on the linkages between weather forecasting and contingency planning to enhance early response mechanisms within a DRR framework.
Conclusion
Kenya is witnessing rapid growth in urban centers, and the urban population is projected to exceed the rural population by 2050. The majority of these urban dwellers live in informal settlements characterized by inhuman and deplorable living conditions without adequate access to food, health care, water and sanitation, and housing, and they are at greater risk of food insecurity and climate change related disasters. A wide range of humanitarian and development actors in urban areas operate in an environment without clear terms of reference, impeding their effectiveness and the delivery of their assistance. Besides, the existing NCCG Disaster and Emergency Act lacks a policy and a plan to operationalize and support its full implementation. The coordinating structures are unclear and need urgent review to enhance the effectiveness of EWEA mechanisms.
The current IDSUE work carried out in the slums of Nairobi, Mombasa, and Kisumu is shifting the humanitarian context from the familiar rural emergency response to emerging urban response mechanisms. This groundbreaking, innovative approach has yielded good results in strengthening systems and developing tools, indicators, thresholds, and approaches tailored to the peculiar livelihoods and demography of informal settlements. Such initiatives should be integrated into stakeholder actions and policy frameworks in order to safeguard the lives and livelihoods of residents of the informal settlements and support progress toward the achievement of the SDGs and Vision 2030. The current state of affairs in monitoring slum crises may not be sustainable, as it is donor funded, with few resources from the NCCG and national government to sustain this important work. The role of the private sector in the urban food supply chain [23] (Figure 1) has also not been fully exploited, although it plays an important role in the urban food security system and in building resilience.
Recommendations
Think and Act Ahead of the Needs of Rapidly Growing Urban Populations: While there is strong interdependency between rural and urban areas, the urban food security system merits distinct and greater attention from governments, the private sector, and donors due to rapid urbanization. There is a need to strengthen policies and strategies supporting the urban food system that address the growing problem of food insecurity in informal settlements. This will involve reviewing the NCCG Disaster Act and other urban agricultural policies in the city counties of Mombasa, Nairobi, and Kisumu to comply with the 10% agriculture budget requirement, which currently stands at about 6%, in support of urban food security. Experiences and lessons learnt from many years of drought response in the ASALs reveal slow and sometimes unacceptable delays in timely response to emergencies as a result of slow mobilization of aid and inefficient mechanisms.
Promote a Strong DRM Coordination Mechanism, Strategies and Policy Frameworks as Key to Enhancing Early Response to Urban Emergencies: Robust coordination mechanisms and institutional arrangements that are well resourced and work collaboratively with various stakeholders in all the city counties are critical to managing slow onset emergencies. In addition, there is a need to develop a relevant policy and plans to operationalize the NCCG Emergency and Disaster Act, as rightly pointed out in Part III, Article 8 of the Bill. The Disaster Act should be reviewed to firmly integrate food security issues and EWEA mechanisms, reinforce coordination arrangements, and clarify the roles and responsibilities of the many actors.
Support Periodic Monitoring, Information Management and EWS to Enhance Early Response Mechanisms in Urban Areas: There is an urgent need to adopt a tested and credible early action framework to guide early response to slow onset humanitarian crises in urban areas, especially slums, modeled on the initiatives being spearheaded by Concern Worldwide under the START consortium. These mechanisms need to be integrated into the city counties' planning and budgeting processes to ensure they are fully embedded in county business processes.
Adapt Tools and Approaches from the UEWEA Project to Strengthen Urban EWEA Systems: During emergencies, humanitarian agencies have often focused on rural areas, leading to well established tools and systems for monitoring rural crises. Adoption of UEWEA tools and approaches by stakeholders through an aggressive campaign is essential. However, these tools require further refinement and testing through research, which calls for continued support from donors and increased commitment from both national and county governments.
Mobilize Financial Resources to Strengthen Urban Food Security Systems Including Monitoring of Risks: Adequate financial resources and human capacity are needed to periodically collect, analyze, and forecast early warning information for prompt action before a crisis turns into an emergency. The city counties of Mombasa, Nairobi, and Kisumu should create a specific budget line for this important activity, clearly spelt out in the Act and the NCCG budget.
Building Resilience and Poverty Reduction Initiatives is Key to the Sustainability of Urban Areas: Urban emergencies are a serious setback to progress in attaining Vision 2030 and the SDGs, including the Sendai Framework and regional initiatives. Climate change is expected to escalate the impacts of slow onset emergencies on the urban poor. It is essential for the county and national governments, supported by the private sector and the vibrant civil society, to invest in long-term resilience building efforts, which are more cost effective than humanitarian actions.
The prevalence of obesity has increased immensely around the world in both adults and children. In fact, the World Health Organization (WHO) estimates that at least 1 billion people are overweight, and three hundred million of these are obese [1]. The rising prevalence of obesity merits the need for accurate methods of assessing adiposity. There are now many measures of obesity, anthropometric and otherwise [2]. Evidence from recent epidemiological studies has led to the advocacy of WC (waist circumference) and BMI (Body Mass Index) as easy-to-use, low-cost, yet reliable measures of obesity [3,4]. BMI provides a simple numerical scale for body status that is often applied in population studies. BMI in the medical literature is reported as a variable (dependent or independent) and is also used for the description and classification of groups or populations. However, some studies apply BMI and WC based on self-reported body weight and height without any valid protocol; this subjective approach may potentially lead to inaccurate data [5]. I am therefore writing this letter to brief those involved in research on my own study, in which body weight changes following a single meal session could affect BMI and WC values.
We conducted a cross-sectional study on 120 students of Golestan University of Medical Sciences (GOUMS) in 2015; all participants were healthy college students, and their mean age was 19±3 years. Body weight and waist circumference were measured before and after a meal according to the standard guideline of the World Health Organization [6]. BMI was calculated as mass (kg) divided by the square of height (m) and compared using a paired Student's t-test. We found statistically significant differences between body weight, WC, and BMI values before and after a meal (P<0.05): eating increases body weight, and measuring BMI and even WC immediately afterward causes these indices to be overestimated. We also observed that where the plastic tape is placed on the body is crucial to a correct WC outcome; we performed the WC measurement at three different points on the body (below, at, and above the anterior superior iliac spine (ASIS)) and obtained three different values.
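For readers who wish to reproduce this kind of check, a minimal Python sketch of the BMI calculation and paired comparison follows; the array names are hypothetical, and the original analysis was not necessarily performed this way.

import numpy as np
from scipy import stats

def bmi(weight_kg, height_m):
    """BMI = mass (kg) / [height (m)]^2."""
    return np.asarray(weight_kg, dtype=float) / np.asarray(height_m, dtype=float) ** 2

# Hypothetical per-participant arrays measured before and after a meal:
# t, p = stats.ttest_rel(bmi(weight_before, heights), bmi(weight_after, heights))
# p < 0.05 would mirror the significant pre/post-meal difference reported above.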
In most studies applying BMI and other anthropometric indices, the fasting condition for body weight measurement is ignored, or at least not clearly reported, during BMI calculation, because its effect is assumed to be too small to bias BMI categories. We suggest that the site of measurement and the time since the last meal be standardized in a protocol for BMI and WC measurement. Changes around cut-off points, though slight, can move BMI and WC values into another category, causing analysis mismatches and remarkable changes in the final results that lead to interpretation bias. Because such anthropometric indices are generally used in population studies, very small changes may produce very profound consequences and thus inconclusive and wrong interpretations.
When provided with educational materials on hand hygiene, is there an increase in knowledge among healthcare professionals in acute care settings?
Problem Statement
According to the World Health Organization (WHO), health care associated infections (HAIs) cause major problems for patient safety and health promotion [2]. The impacts of HAIs include prolonged hospital stays, long-term disabilities, increased resistance of microorganisms to antimicrobial treatments, financial burdens, high costs for patient care that are not reimbursable by insurance agencies, and mortality [2]. The risk of patients developing HAIs is universal and exists in every healthcare facility and system worldwide [2]. Studies estimate that 1.4 million patients throughout the world are affected by an HAI at any one time [2]. In the United States (U.S.) alone, Haverstick et al. [3] report on research revealing that about 1 in 25 patients in the acute-care setting will develop an HAI during their stay. Data from 2011 show that about 700,000 patients had an HAI in the U.S., and 10% of those patients died from complications related to their HAIs. Many studies completed by numerous organizations, including the National Institutes of Health (NIH) [4], show that lack of hand hygiene among healthcare professionals contributes most to HAIs in patients [2]. Prior studies have shown that hand hygiene compliance can be hindered by unsafe patient-to-nurse ratios, skin irritation from antimicrobial agents, and lack of knowledge. Persistent staff education regarding hand hygiene practices can be a critical measure in reducing HAIs [5]. Therefore, the purpose of this study is to provide educational materials to healthcare professionals on acute care units in hospitals, with the expectation that this will increase knowledge of the importance and proper technique of hand hygiene to reduce HAIs.
Florence Nightingale
Florence Nightingale is one of the most recognized names in nursing. Nightingale was born in 1820 in Italy to a wealthy British family, who, in 1844, were angry when she told them of her decision to become a nurse. Nightingale became known for her work in the field during the Crimean War. She became known as the "Lady with the Lamp" because of her night rounds tending to wounded soldiers [6]. Nightingale started a Sanitary Commission after she pointed out that the unsanitary conditions of the soldiers were a major cause of death. Her work reduced death rates from 42% to 2% [6]. Hand hygiene compliance among health care professionals today remains important in preventing the spread of infections.
Literature Review
A comprehensive literature review was conducted through multiple databases and search engines, including Google Scholar, EBSCO Host, and the CINAHL nursing database. The inclusion criteria restricted collection to articles published between 2012 and 2017. Search terms included "hand washing, prevention of illness, flu season, and hand washing compliance." With infections today adapting and becoming stronger, the need for protection in the hospital setting is at an all-time high. Hand hygiene is an easy and very efficient way of preventing the spread of infections.
Hand Hygiene: Past to Present
Ignaz Semmelweis was a Hungarian physician of ethnic-German ancestry, known as the "father of hand hygiene". Semmelweis, along with other colleagues, established that hospital acquired infections were transmitted through the hands during activities with patients [2]. Semmelweis discovered this in 1847 after witnessing an alarmingly high mortality rate from puerperal fever at an obstetric clinic in which he held the position of house officer. Semmelweis proposed that all healthcare professionals wash their hands with a chlorinated lime solution before all patient contact, and as a result the mortality rate at the obstetrics clinic fell dramatically to three percent and subsequently remained low [2]. Since Semmelweis's initial hand hygiene implementation, hundreds of studies and investigations have been conducted to provide evidence based practice recommendations on hand hygiene and the prevention of hospital acquired infections.
In the 1980s the first national guidelines for hand hygiene were published, and in 1995 and 1996 the Centers for Disease Control and Prevention (CDC)/Healthcare Infection Control Practices Advisory Committee (HICPAC) recommended that either antimicrobial soap or a waterless antiseptic agent be used for cleansing hands by all healthcare professionals entering and exiting patient rooms, to decrease the spread of multidrug-resistant pathogens [2]. In 2002, HICPAC released guidelines that define alcohol-based hand rubbing as the standard of care for hand hygiene practices in healthcare settings, with hand washing reserved for specific situations only. The current CDC recommendations for hand hygiene in healthcare settings include decontaminating hands with healthcare facility approved antiseptics when hands are visibly soiled, routinely before direct contact with a patient, before donning gloves and after removing gloves, and when performing any healthcare related procedure on a patient.
Hand Hygiene Barriers
Although 100% hand hygiene compliance may seem like a straightforward and effortless task, a variety of challenges have been recognized as hindrances to accomplishing this objective. WHO [2] presented common barriers to hand hygiene including "skin irritation caused by hand hygiene agents, inaccessible hand hygiene supplies, interference with HCW [health care worker]–patient relationships, patient needs perceived as a priority over hand hygiene, wearing of gloves, forgetfulness, lack of knowledge of guidelines, insufficient time for hand hygiene, high workload and understaffing," among other issues. Kirk et al. [7] performed a cross-sectional examination of health care workers, utilizing a survey to inquire about their knowledge, attitude, and self-reported practice of point-of-care hand hygiene. A convenience sample of 200 health care providers in the United States and 150 health care providers in Canada was chosen for the study. Half of the respondents were physicians and half were nurses. Forty-one percent of the respondents listed "dispensers/sinks not in a convenient location", 36% reported "being busy", and 32% reported "products dry our hands" as barriers. Increased workload and crowding were recognized as main factors in low hand hygiene compliance in an observational study performed over 22 months in a 40-bed emergency department located in Toronto, Ontario, Canada [8].
Although that study is limited to a single emergency department with the potential bias of an observational study, the theme of increased workload as a barrier to hand hygiene is valid. In a society involving technology in all facets of the health care system, it is very important to comment on the use of mobile phones as a potential barrier to proper hand hygiene. A study performed by Mark et al. [9] at a hospital in Northern Ireland involved swabbing 50 mobile phones for bacterial growth and simultaneously administering a questionnaire investigating cell phone usage among staff. Sixty percent of the phones yielded bacterial growth. The results from the questionnaire found that 45% of the participants never wash their hands after cell phone usage and 63% report never decontaminating their phone. In addition, 57% stated that if their phone were proven to be contaminated, this would change their hand hygiene practice when using their device [9]. Despite these barriers, is it possible to educate hospital staff about the overwhelming importance of proper hand hygiene so as to increase hand hygiene compliance?
Methodology
A 13-question quality improvement survey was developed to assess healthcare professionals' baseline knowledge regarding hand hygiene practices. The survey included 10 knowledge questions and three demographic questions. This tool was developed using various resources, including the National Institutes of Health (NIH) [4], Healthy People 2020, the CDC, and the WHO. The survey was used as a pre-test to assess the baseline knowledge of healthcare personnel. The surveys were sent via email to nurses, physicians, mid-level providers, and nursing assistants working on medical-surgical units and/or intensive care units. The research took place in three different locations: Bennington, Vermont; Norfolk, Virginia; and Washington D.C./Maryland. After obtaining permission from the nurse manager in each region, an email was sent to approximately 75 people at each location [10]. This email contained information on the quality improvement project, a consent letter, a link to the survey tool, and contact information for the researchers. Finally, the participants were asked to complete this pre-test in anticipation of receiving an educational PDF file (poster) and a post-test to be sent out in the near future.
An email containing a one-page educational PDF (poster) attachment and a link to the post-test was sent to the same sample population. The educational PDF file (poster) contained brief sentences with visual cues displaying information regarding hand hygiene. This file was developed to provide facts to the participants to aid in the completion of the post-test and to improve participants' baseline knowledge. See Appendix B for the educational PDF file (poster). The post-test contained the same 10 knowledge questions and three demographic questions as the pre-test. The pre- and post-tests were constructed using SurveyMonkey, allowing for analysis of results from both surveys.
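As one way the two survey rounds could be compared, the following minimal Python sketch applies an independent-samples test; the variable names are hypothetical, and the project itself reports only average scores, so this is an illustration rather than the study's actual analysis.

from scipy import stats

# pre_scores and post_scores: hypothetical arrays of percent-correct scores
# on the 10 knowledge questions. Because responses were anonymous and cannot
# be paired across tests, Welch's independent-samples t-test is used.
def compare_knowledge(pre_scores, post_scores):
    return stats.ttest_ind(pre_scores, post_scores, equal_var=False)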
Results
The pre-test survey was completed by 96 healthcare professionals: two doctors, three nurse practitioners (NPs), 74 registered nurses (RNs), and 17 certified nursing assistants (CNAs). The post-test was completed by 45 healthcare professionals: two doctors, one NP, 37 RNs, and five CNAs. Sixty pre-test participants worked in medical-surgical units and 36 worked in ICUs; 28 post-test participants worked in medical-surgical units and 17 worked in ICUs. The total participants from Maryland and Washington, D.C. were 35 for the pre-test and 13 for the post-test; from Vermont, 39 for the pre-test and 20 for the post-test; and from Virginia, 22 for the pre-test and 13 for the post-test. Given that the research demonstrated an enhancement in hand hygiene knowledge among healthcare providers, one can assume that these individuals will be more motivated to be hand hygiene compliant (Figure 1).
Figure 1: Number of individuals who participated in each test.
Limitations
Several limitations were noted after analyzing the data. One limitation was that there was no way to track whether the same participants completed both the pre- and post-survey. Another limitation is that when a participant completed the post-survey, there was no guarantee of a secure screen; the participant could have referred directly to the educational materials while completing the post-survey, skewing the results. Finally, another limitation was participant attrition: 96 individuals completed the pre-test, but only 45 individuals completed the post-test (Figure 2).
After analyzing our data, it became obvious that there was a lack of physician and mid-level provider participation. To include more of these providers, a different approach to recruiting them should be considered. The surveys were sent out via email; perhaps most physicians and mid-level providers do not frequently check their email, limiting their participation. An approach to recruiting physicians and mid-level providers in future studies might include utilizing text messages or approaching these providers directly. Furthermore, this research study only analyzed hand hygiene knowledge improvement among healthcare professionals. Therefore, future studies should focus on identifying whether an increase in knowledge results in improved hand hygiene compliance.
Conclusion
Hand hygiene is an inexpensive and effective way of preventing the spread of infections and promoting the safety and health of our patients. The purpose of this quality improvement project was to increase knowledge of the importance and proper technique of hand hygiene to reduce HAIs. When provided with educational materials on hand hygiene, healthcare professionals in acute care settings showed an increase in knowledge. After pre- and post-test surveys analyzing the hand hygiene knowledge of healthcare professionals before and after educational material was provided, the data showed an increase in knowledge regarding hand hygiene: the pre-test average score of 51% and the post-test average score of 75% represent a substantial increase in knowledge of the importance of hand hygiene. The goal of this project was to raise awareness of the importance of hand hygiene in preventing the spread of nosocomial infections.
Bioactivities of Extracts from Different Marine Organisms around the World (2000 to Present)
Introduction
Marine ecosystems cover 70% or more of the earth's surface. These ecosystems are habitat to a great diversity of marine organisms that produce highly structurally diverse metabolites as defense mechanisms [1,2]. Among these marine organisms, invertebrates produce bioactive natural products that may be useful for developing and producing novel drugs [3,4]. Marine invertebrates include species across different taxa, including Porifera, Annelids, Coelenterates (Cnidaria and Ctenophora), Mollusks, and Echinoderms [3,5]. Several studies have already been conducted to determine natural products from marine invertebrates and their biological activities, along with biosynthetic studies that have led to revision and modification of their structures [5]. In a study conducted to determine the bioactivity of sterols isolated from marine organisms, it was found that sterols exhibit anti-inflammatory, antimicrobial, anti-HIV, and anticancer activities [6].
Sponges are the most studied marine invertebrates in terms of secondary metabolites. More than 5,300 different chemically potent and bioactive substances have been discovered in sponges, and many of these substances have pharmaceutical activities against human diseases such as malaria, AIDS, and cancer. These substances are classified as alkaloids, lipids, steroids, and terpenoids, and some of these compounds exhibit cytotoxic activities, such as the polyacetylenic lipid derivatives, glycerol ethers, and linear alcohols. Secondary metabolites from sponges have also been found to be cytotoxic to human gastric tumor cells and to KB-16 human, ovarian sarcoma, pancreatic cancer, and colorectal adenocarcinoma cell lines [4,7]. Several secondary metabolites have also been identified that have anti-inflammatory action in human neutrophils and possess inhibitory activities against α-glucosidase, which is important in controlling glucose concentration [7]. Sponges have also been found to contain natural products with anti-inflammatory, antioxidant, and immunomodulatory activities [8].
Other marine invertebrates have also been studied and found to have antimicrobial, antioxidant, anti-cancer, anti-inflammatory, and cytotoxic activities. Natural products from the extracts of soft corals in Vietnam were found to have significant inhibitory effects against T. brucei; growth inhibition and cytotoxicity were evaluated in terms of EC50 values, relative to pentamidine, and against human cell lines [2]. Several secondary metabolites were also isolated and elucidated from the marine crinoid Colobometra perspinosa, which demonstrated non-selective anti-cancer activity [9]. Dolabellanin, a 33-amino-acid-residue peptide from a sea hare (a mollusk) from Japan, also exhibits a broad spectrum of antimicrobial activity [5]. Secondary metabolites isolated from the digestive glands and mantles of nudibranchs were found to have mild toxicity against brine shrimp and very potent antimicrobial activity [5]. Extracts from the bivalve P. viridis showed antibacterial activity, with the maximum zone of inhibition against V. cholerae [10]. The same study also showed that the crude protein extract of P. viridis had potential antioxidant activity.
New marine natural products have also been identified and isolated from marine invertebrates of the phylum Echinodermata, as reported in the review by [3]. Over the last two decades, the majority of new marine natural products from this phylum came from the sub-phylum Asterozoa (54.9%), followed by the sub-phylum Echinozoa (33.7%). The classes Asteroidea and Holothuroidea of the phylum Echinodermata accounted for 91.7% of the new marine natural products, with 529 and 213 new marine natural products respectively [3]. It was also shown that since 1990, the majority of research on new marine natural products has focused on less than 1% of the recognized marine invertebrate biodiversity; 7.4% of new products came from species of the phylum Porifera while only 2.1% were yielded by the phylum Echinodermata. Although only a few have been identified and isolated, echinoderms are considered an exceptional source of polar steroids with vast structural diversity and a wide range of bioactivities [11].
Since 1990, 7.2 new natural products have been discovered per species among the 1,849 valid species of the class Asteroidea, and 74% of the species of this class have yielded new natural products [3]. Several studies also report the isolation of novel marine natural products from this class with various bioactivities. Secondary metabolites from sea stars are a rich source of activity against microbes: crude, fractionated, ethanolic, n-butanol, and methanolic extracts of the sea stars Luidia maculata, Stellaster equestris, Astropecten indicus, Protoreaster lincki, and Pentaceraster regulus demonstrate antibacterial and antifungal activity against human pathogens [11]. Crude extracts of the species Astropecten polyacanthus also possess inhibitory activities against inflammatory components [12]. Other sea stars, such as Tremaster novaecaledoniae, Asterias amurensis, Styracaster caroli, and Echinaster brasiliensis, contain sterol compounds that exhibit inactivation activity against HIV [6].
Other sea stars also exhibit cytotoxicity against cancer cells. Novel marine natural products elucidated from the species Leptasterias ochotensis demonstrated cytotoxic activities against the cancer cell lines RPMI-7951 and T-47D [13]. Saponins isolated from the sea star Culcita novaeguineae showed cytotoxic activity against human carcinoma cell lines by inducing apoptosis [14,15]. The sea star species Asterina pectinifera exhibits anticancer activity by preventing the initiation of enzymes involved in carcinogenesis in human colon cancer and breast cancer cell lines [12,16]. Cytotoxic activity against a human cervical cancer cell line and a mouse epidermal cell line was also reported for asterosaponin metabolites from the species Archaster typicus [17]. The cytotoxicity of Certonardoa semiregularis against a small panel of human solid tumor cell lines was evaluated, and several saponin compounds were found to be active against five different cell lines [6,18]; two saponins were also active against 20 clinically isolated strains. In another study, the carotenoids of Marthasterias glacialis were evaluated for cytotoxicity against a rat basophilic leukemia cancer cell line, and toxicity was found to be low, as desired for an anti-cancer lead substance [19].
Antioxidant activities of extracts and isolated marine natural products from sea stars have also been recorded. In one study [20], extracts of the sea star L. maculata showed potential antioxidant activity in all four assays used. The study further established the presence of steroidal glycosides and glucocerebrosides, which may be the agents responsible for the antioxidant activity. In another study, antioxidant activities and a neuroprotective effect were reported for polysaccharides extracted from the starfish Asterias rollestoni [21]; the mannoglucan sulfate had the highest antioxidant activity among all polysaccharides tested. This handful of studies on the bioactivities of extracts from different marine organisms was mostly conducted in temperate and polar regions, signifying the need to advance interest in marine organisms from biodiversity-rich tropical regions. Conservation efforts must be given priority for these marine organisms, given the threats of extinction posed by climate change, destructive anthropogenic activities, and pollution [22-24].
Unusual Locked Trigger Finger Due to Tophaceous Infiltration of Wrist Flexor Tendon
Introduction
Gout is an inflammatory arthritis caused by the cellular deposition of monosodium urate crystals [1]. Tophi are chalky, gritty accumulations of monosodium urate crystals that build up in the soft tissue of an untreated gouty joint [1]. The peak incidence of gout is between the ages of 30 and 50, with prevalence increasing with age [1]. Gout is five times more common in men [1]. Tophi can present as a first sign of hyperuricemia, but reports of involvement of the flexor tendons of the hand are rare [2]. Although tuberculous tenosynovitis is a more common etiology for swelling over the wrist, gout must be among the differentials irrespective of hyperuricemic status. It can present clinically as tendon rupture, nerve compression, or digital stiffness and can be complicated by infection [1]. Because of its low frequency, gouty involvement of the flexor tendons is not often considered in the differential diagnosis of tenosynovitis [2]. We report a rare case in which intratendinous tophaceous gout was found within the flexor digitorum superficialis tendon at the wrist, causing the patient to have trigger finger-like symptoms.
Case Report
A 37-year-old male presented to our orthopaedic clinic with an inability to extend his left ring finger. He had previously been diagnosed with a left trigger ring finger and had undergone surgical treatment to relieve the symptoms at another hospital 8 months before presenting to our hospital. Post-operatively he was still unable to extend the finger (Figure 1). On physical examination, a 2 x 2 cm swelling was found over the volar aspect of the forearm, just proximal to the flexor retinaculum, when the patient flexed his fingers; on finger extension, the mass disappeared. Due to the prolonged inability to extend his left ring finger, he had developed a flexion contracture. Our working diagnoses were incomplete release of the left 4th finger A1 pulley, TB tenosynovitis, and soft tissue neoplasm, and surgical exploration was subsequently performed through the previous surgical scar over the A1 pulley of the left ring finger. We noted that the A1 pulley had been completely released; however, the finger remained in the flexed position. We therefore made an extended carpal tunnel incision to explore the mass proximal to the flexor retinaculum and noted whitish chalky infiltration of the Flexor Digitorum Superficialis (FDS) tendon, synovial adhesion to the other tendons, and hypertrophy of the flexor tendon (Figures 2 & 3). Synovectomy and excision of the chalky infiltration of the FDS tendon were performed. Histopathological evaluation confirmed the diagnosis of gout. Post-operatively, the patient was able to extend his left ring finger with some degree of stiffness, but after 2 months of intensive physiotherapy he regained full range of movement in the finger.
Figure 1: (Volar view) The ring finger in a flexed position.
Figure 2: Classical carpal tunnel release performed; note the whitish chalky infiltration of the FDS tendon, synovial adhesion to the other tendons, and hypertrophy of the flexor tendon.
Figure 3: The arrow shows tophaceous gout in the FDP of the left ring finger.
Discussion
Gout is a disorder of purine metabolism that predisposes to hyperuricaemia, leading to monosodium urate crystal deposition in joints [3]. The underlying metabolic disorder is hyperuricemia [4]. The most common primary cause is renal under-excretion (90%) [5]. Some patients suffer from enzyme defects that lead to overproduction of uric acid [5]. Dietary causes such as high consumption of alcohol or purine-rich foods may also lead to hyperuricemia [5]. Other causes are rare genetic disorders, medical disorders (metabolic syndrome, renal failure, hemolytic anemia), and medication use [5]. Approximately 10% of patients with elevated blood levels of uric acid develop gout at some point in their life [5]. In patients in whom the disease has been neglected, tophaceous destruction of musculoskeletal structures may be helped by carefully selected surgical procedures [4]. The pathological process involves deposition of urates, causing destruction of the skin, ligaments, tendons, and cartilage [4]. This process provokes an inflammatory response at the site of involvement, producing symptoms. Lesions may be encapsulated in bursae and subcutaneous tissue and infiltrative in skin, tendon, and bone [4].
The exact trigger of an acute attack of gout is poorly understood; however, predictors of the development of gout in hyperuricemic individuals have been identified. These include increasing uric acid level, alcohol consumption, hypertension, use of diuretic drugs (thiazides and loop diuretics), increased body mass index, and a family history of gout [3]. In addition to joints, uric acid crystals are reported to deposit in soft tissues such as tendons, the median nerve, bursae, and the intrinsic muscles. Gouty arthritis of the wrist is uncommon, although gout itself is the most common inflammatory arthritis in older patients [3]. Gout presenting initially at the wrist occurs in 0.8% to 2% of all gout cases, and untreated gout patients have a 19-30% chance of developing gout in the wrist during their lifetime [1]. Gouty tenosynovitis can induce flexion contracture of the digits through involvement of the flexor tendons at the wrist, as in our patient, or at the digital canal [2]. Gouty tophi presenting as a mass are uncommon and often mistaken for a neoplasm [1].
These nodules may not be recognized as tophi because the clinical diagnosis of gout is, in many instances, not straightforward [3]. In past reports, all intratendinous infiltrations of tophaceous gout occurred at the wrist and coexisted with carpal tunnel syndrome [1]. We present an uncommon and unusual case of gout in the flexor tendon of the forearm that occurred in isolation, in a patient with no prior documented history of the disease. Measurement of the serum uric acid level in chronic tophaceous gout may or may not be conclusive of hyperuricemia, as some patients with diabetes, and even alcoholics, can have normal to low levels. Surgical interventions such as tenotomy or tenosynovectomy are required to debulk tophaceous deposits, restore smooth gliding of the tendon, and decompress nerves, but medical management of the gout itself remains the gold standard [2]. Short-term outcomes are consistently good, but the risk of rupture or recurrence remains if medical control is not achieved.
Conclusion
This is an uncommon and unusual case of tophaceous infiltration of the flexor digitorum superficialis of the ring finger. It demonstrates several issues that clinicians should keep in mind when assessing patients with a history of gout. Early diagnosis, based on a high index of suspicion, is paramount to the initiation of proper surgical management.
Giant Umbilical Cord with Pericentric Inversion of Chromosome 9: A Case Report
Introduction
A giant umbilical cord is a rare anomaly of the umbilical cord that can easily be diagnosed on a prenatal scan. This malformation can present as a cyst in the umbilical cord antenatally, whereas the most common postnatal sign is leakage of urine from the umbilicus. Although it is rare, operative exploration must be performed to repair the associated urachal remnant [1]. Here, we report the case of a newborn with a diffuse giant umbilical cord and pericentric inversion of chromosome 9.
Case Report
A male infant weighing 2,420 g was born at 36 weeks of gestation by cesarean section to a 31-year-old mother. The Apgar scores were 9 and 10 at 1 and 5 minutes, respectively. Routine ultrasonography conducted in the 16th week of gestation showed cystic changes in the umbilicus, and chromosomal examination of amniotic fluid conducted in the 20th week of gestation showed 46, XY, inv (9) (p12q13). Fetal development nevertheless progressed normally. At delivery, the infant presented with a diffuse giant umbilical cord measuring 25 cm in length and cm in diameter, with a glistening surface and hydropic consistency (Figure 1). No abdominal contents were noted within the cord. The cord was clamped approximately 30 cm from the abdominal wall, where it became thinner. Ultrasonography conducted when the infant was 2 days old showed a probable connection between the umbilicus and bladder, which was confirmed by a fistulogram.
Figure 1: a. Prenatal ultrasonography image in the 34th week of gestation showing the dilated umbilical cord with edema; b. Postnatal findings of the neonate with a diffuse giant umbilical cord.
The dried umbilical stump detached after 14 days, but a granulomatous structure remained, and persistent fluid loss from the clamped umbilicus indicated urine leakage. Operative exploration was conducted via an infraumbilical incision when the infant was 16 days old. The umbilical cord was contiguous with a urachal remnant (Figure 2). Excision and repair of the urachal remnant were completed. Histological examination of the umbilical cord confirmed the presence of focal edema with no epithelial lining. On postoperative day 9, a fistulogram showed no evidence of leakage in the bladder. The infant was discharged in good health, and all follow-up examinations were normal.
Figure 2: a. Ultrasonography showing a connection between the umbilicus and bladder; b. Surgical resection performed on the 16th day of life (dissected and everted patent urachus).
Discussion
A review of the literature showed that a giant umbilical cord is a pathognomonic sign of a patent urachus, which requires surgical intervention; only a few related case series have been published thus far [1-6] (Table 1). The exact etiology of the giant umbilical cord, however, remains unknown. One hypothesis suggests that reflux of fetal urine into the umbilical cord via the patent urachus results in swelling of Wharton's jelly. In a supporting experiment, Wharton's jelly of human umbilical cords was infused with distilled water, 0.9% saline, 3% saline, or 10% saline, and enlargement occurred in the cords infused with distilled water or 0.9% saline. Tsuchida and Ishida concluded that prolonged reflux of fetal urine into the umbilical cord via a patent urachus caused the umbilical cord swelling [4]. A patent urachus arises from incomplete regression of the connection between the cloaca, the future bladder, and the allantois, the extraembryonic urinary bladder [6].
Table 1: Summary of prior reports associated with giant umbilical cords.
Therefore, close clinical observation is necessary, since continuous urinary loss from the umbilicus serves as a clinical indicator of a persistent urachus. To our knowledge, a giant umbilical cord with pericentric inversion of chromosome 9 has not been previously reported. Pericentric inversion in the heterochromatic region of chromosome 9 [inv (9), inv (9) (p11q13), or inv (9) (p12q13)] is the most common inversion found in the human karyotype [7]. Although it is categorized as a minor chromosomal rearrangement not correlated with abnormal phenotypes, this inversion has often been reported in association with mental retardation or multiple congenital anomalies [8,9], and a high frequency of inv (9) (p12q13) has been detected in children with dysmorphic features and congenital anomalies [10]. From our experience with this rare anomaly, we recommend that chromosomal examination, along with immediate operative exploration, be conducted for infants born with a giant umbilical cord. Imaging studies for a patent urachus are also essential.
Successful Use of Ustekinumab in a Patient with Psoriasis, Psoriatic Arthritis and Systemic Lupus Erythematosus: A Case Report and Review of Literature
Introduction
Interleukins (IL) are involved in the pathogenesis of several autoimmune disorders such as rheumatoid arthritis, Crohn's disease, and psoriasis (Ps), and specific IL blockade has been used successfully to treat patients with these diseases [1-2]. The role of the p40 subunit shared by IL-12 and IL-23 is increasingly being explored in the pathogenesis of systemic lupus erythematosus (SLE) [3-7]. Ustekinumab is a human monoclonal antibody that binds and inhibits the p40 subunit of interleukins 12 and 23, which are involved in the TH-17 signaling pathway, and it is approved for the treatment of moderate to severe plaque psoriasis [8]. While there have been isolated case reports of its successful use in refractory cutaneous lupus (non-SLE), suggesting that targeting the TH-17 pathway may be useful in this condition, its use in SLE remains poorly described [9-10]. Here, we report the successful use of Ustekinumab in a patient with active SLE and psoriasis (Ps).
Case Report
Our patient is a 48-year-old man with a history of psoriasis (diagnosed in 2001) and autoimmune hemolytic anemia (AIHA) (diagnosed in 2006 and treated with rituximab the same year), who presented to the rheumatology clinic in 2008 for management of psoriatic arthritis (PsA). His psoriasis had only partially responded to topical isotretinoin, PUVA, and etanercept. Physical examination showed erythematous, scaly plaques involving the lower extremities, lower back, and the extensor surfaces of both elbows. His left knee was warm, tender, and non-erythematous with a moderate-sized effusion. Synovial fluid analysis showed 18,000 WBC/μL (87% neutrophils) without crystals; the Gram stain was negative. Laboratory evaluation showed thrombocytopenia (52,000 cells/μL), a positive anti-nuclear antibody (ANA) (1:640), anti-double stranded DNA antibody (anti-dsDNA) (42 IU), anti-SS-B antibodies at 1.18 (≥ 1.10 is considered positive), and lupus anticoagulant (LAC). Anti-ribonuclear protein (RNP), anti-Smith (Sm), anti-histone, anti-SS-A, anti-β-2 glycoprotein, and anti-cardiolipin antibodies were negative. His white cell count was 9,200 cells/μL and hemoglobin was stable at 14 g/dL.
Comprehensive metabolic panel, complement levels, and urinalysis were within normal range. A knee radiograph revealed mild narrowing of the lateral patello-femoral compartment, and MRI of the right knee showed mild enthesopathy of the anterior patella. A diagnosis of psoriasis with psoriatic arthritis (an oligo-articular asymmetric arthritis pattern with inflammatory synovial fluid) was made. The patient was started on etanercept 50 mg subcutaneously every week (5/2010). His skin disease and arthritis did not improve after 3 months on this medication, so he was switched to infliximab 500 mg every 8 weeks. In retrospect, he was diagnosed with SLE clinically based on the positive ANA, anti-dsDNA, and lupus anticoagulant, the AIHA that had responded to rituximab, and the thrombocytopenia. Concern for drug-induced lupus was low because the AIHA preceded any biological agent administration and the thrombocytopenia preceded infliximab administration. Infliximab was nevertheless discontinued as a precaution, and he was switched to mycophenolate mofetil. The patient's skin lesions, knee synovitis, and platelet counts did not improve after 6 months of therapy with mycophenolate 500 mg twice a week and prednisone 10 mg daily, so he was switched to cyclosporine 300 mg daily along with prednisone.
This regimen improved his psoriatic skin lesions (Figure 1), but he had persistent arthritis, recurrent inflammatory knee effusions, and low platelets. We initiated ustekinumab 90 mg subcutaneously at weeks 0 and 4 and then every 12 weeks, and cyclosporine was stopped. After three doses of ustekinumab, there was complete resolution of the skin lesions and normalization of the platelet count (200,000 cells/μL). Methotrexate 10 mg every week was added for the persistent arthritis. After four months of therapy with ustekinumab and methotrexate, prednisone was tapered off. The patient has remained on this regimen for the last five years without recurrence of thrombocytopenia, AIHA, psoriatic skin lesions, or arthritis, and no medication-related adverse effects have been noted. The timeline of events is depicted in Figure 2.
Figure 1: Panel A showing psoriatic plaques on elbow and leg prior to treatment with Ustekinumab. Panel B showing complete resolution of skin lesions after treatment with Ustekinumab.
Figure 2: Timeline for the events.
Methods and Materials
This systematic review was conducted according to the PRISMA guidelines. A computer-assisted literature search of the PubMed and Ovid search engines was conducted. To increase the sensitivity of our search, we used free-text words and MeSH terms with and without Boolean operators ("AND" and "OR"). Search keywords to identify different types of systemic and cutaneous lupus included "SLE", "systemic lupus erythematosus", "discoid lupus", "acute cutaneous lupus erythematosus", "subacute cutaneous lupus erythematosus", and "chronic cutaneous lupus erythematosus". Combining these terms with the "OR" Boolean operator returned 85,347 records covering different types of lupus. We then combined these 85,347 records with "ustekinumab", "IL-12 inhibitor", and "IL-23 inhibitor", which yielded 12 studies. Of these 12, only 5 fulfilled the inclusion criteria and were included in the study (Figure 3).
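To make the search strategy concrete, the sketch below shows one way such a Boolean query string could be assembled programmatically before being pasted into PubMed or Ovid. This is a minimal illustration, not the authors' actual workflow; the term lists simply mirror the keywords reported above.

```python
# A minimal sketch (not the authors' actual code) of assembling the
# Boolean search string described in the Methods. The term lists mirror
# the keywords reported in this review.

lupus_terms = [
    "SLE",
    "systemic lupus erythematosus",
    "discoid lupus",
    "acute cutaneous lupus erythematosus",
    "subacute cutaneous lupus erythematosus",
    "chronic cutaneous lupus erythematosus",
]
drug_terms = ["ustekinumab", "IL-12 inhibitor", "IL-23 inhibitor"]

def or_block(terms):
    # Quote each term and join with OR, e.g. ("SLE" OR "discoid lupus")
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the two OR blocks with AND, narrowing the lupus records to
# those that also mention an IL-12/23 inhibitor.
query = f"{or_block(lupus_terms)} AND {or_block(drug_terms)}"
print(query)
```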
No language restrictions were enforced. The inclusion criteria for our systematic review were deliberately liberal: we included all human studies that enrolled patients with any kind of lupus treated with Ustekinumab, with the additional requirements of age greater than 18 years and a diagnosis of SLE, CCLE, or DLE. Exclusion criteria were review articles, basic science research, animal studies, and irrelevant articles in which either the patient did not have any kind of lupus or Ustekinumab was not used for treatment. The following data were extracted and compared for all studies: lupus type and subtype, age, gender, ethnicity, lupus clinical manifestations, ANA antibody levels, treatment duration, and adverse reactions (Figure 3 and Table 1).
Figure 3: Search strategy for literature search.
Discussion
Psoriasis (Ps) is a relatively common disease, affecting 1% to 3% of the US population, whereas SLE is significantly less common, with reported prevalence rates in the United States ranging from 20 to 150 cases per 100,000 [11-12]. Cases of coexistent Ps and SLE are uncommon [13]. In one of the largest studies to date, cutaneous lupus erythematosus (CLE), with or without SLE, occurred in 0.69% of patients with psoriasis, and 1.1% of patients with CLE (with or without SLE) had concomitant psoriasis. Of the patients who had both, about 50% had SLE, of whom 64% also had cutaneous lupus manifestations as part of their SLE [14-15]. Three case reports of the successful use of Ustekinumab in cutaneous lupus erythematosus without systemic involvement suggest that the Th-17/IL-23 axis may be involved in the pathogenesis of CLE [9,10,16]. Unlike our patient, who had ongoing active SLE manifestations (thrombocytopenia), Chyuan et al. reported a case of well-controlled SLE with biopsy-proven lupus nephritis that developed resistant psoriasis, in which Ustekinumab was used successfully and tolerated well, indicating its potential role in SLE [17].
In a recent retrospective chart review of 96 patients with concomitant psoriasis and CLE with or without SLE (90% of them had SLE), 5 patients were treated with Ustekinumab [15]. Four were maintained successfully on the drug, and one experienced loss of efficacy. Two of these patients had SLE with chronic plaque psoriasis, 2 had SLE, chronic plaque psoriasis, and psoriatic arthritis, and 1 had discoid lupus erythematosus (DLE) with chronic plaque psoriasis. Improvement of cutaneous psoriasis, cutaneous SLE symptoms (malar and discoid rashes), lupus arthritis, oral ulceration, and hematologic abnormalities (thrombocytopenia or anemia) was reported among these patients. Although this study could not clearly discriminate between improvement in psoriatic versus lupus arthritis, it showed that patients on Ustekinumab had better outcomes. Although psoriasis and SLE have relatively different pathophysiologic mechanisms, both share upregulation of the Th17 immune pathway, with elevated levels of interleukin (IL)-17, IL-23, and IL-12 [18]. IL-17 and IL-23 have been associated with the pathogenesis of rheumatoid arthritis (RA), inflammatory bowel disease, ankylosing spondylitis, and SLE [1-2]. Although not observed in all studies, serum IL-17 levels have been found to be elevated in SLE compared with controls [19].
Correlations between serum IL-17 levels and both SLE disease activity and anti-double stranded DNA (anti-dsDNA) antibody levels have previously been reported in some studies [5-6]. IL-12, which is elevated in SLE, stimulates the production of interferon-gamma by TH-17 lymphocytes, contributing to glomerulonephritis and vasculitis. Serum p40 titers (the IL-12 subunit) have been reported to be significantly higher in SLE patients than in healthy subjects and patients with RA [20-21], and in SLE patients serum p40 levels correlated positively with the SLE disease activity index (SLEDAI) [21]. Ustekinumab, which binds and inhibits the p40 subunit of interleukins 12 and 23 involved in TH-17 signaling, is therefore a potentially useful drug for the treatment of psoriatic arthritis with CLE or SLE. This case highlights the therapeutic challenges of managing a patient with concurrent psoriasis and SLE. Ustekinumab has been used frequently in the treatment of psoriasis and psoriatic arthritis; however, its use in lupus is largely limited to a few case reports in patients with either CLE alone or psoriasis with coexistent CLE. Herein, we report a patient with active psoriasis, active psoriatic arthritis, and active SLE who responded well to ustekinumab, thereby suggesting the TH-17/IL-23 pathway as a potential therapeutic target in the treatment of cutaneous (non-SLE) lupus and SLE. Further clinical studies are warranted to investigate the role of Th-17-blocking agents in SLE.
Table 1: Overview of all existing case reports of patients treated with Ustekinumab for cutaneous lupus or SLE with psoriasis, or for cutaneous lupus or SLE alone.
Abbreviations: M: Male; F: Female; W: White; B: Black; H: Hispanic; N/A: Not Available.
Implications of Trainer and Trainee based Dynamics Affecting Farmers Training Meetings
Introduction
Agriculture is the most important sector of Pakistan's economy, although some diversification has been occurring alongside it over many years. The cultivated cropped area covers 22 million hectares and contributes about 21.0% of GDP. About 45.0% of Pakistan's employed labor force is engaged in agriculture, and about 66.0% of the rural population depends directly or indirectly on agriculture for its livelihood (Govt. of Pak., 2008). The country's foreign exchange earnings are largely derived from agriculture. As an agricultural country, a large share of Pakistan's population depends on natural resources such as forestry, fisheries, agriculture, and livestock [1].
Agricultural extension has always been a key element in building farmers' capacity, keeping marginalized, small, and poor rural communities aware of the latest agricultural technologies through its services. As agriculture is the mainstay of Pakistan, every agricultural extension approach has focused on developing agriculture and ultimately raising the living standards of poor farming communities. The succession of extension programs, such as the Village-AID Program, the Basic Democracy System (BDS), the Integrated Rural Development Program (IRDP), and the Training and Visit (T&V) Program, clearly indicates that these programs did not achieve their respective objectives, and the relevant authorities found it expedient to abolish each running program and replace it with another (Ali et al.) [2-5].
To address the shortcomings of previous extension modalities, the Government of Punjab launched innovative agricultural extension modalities: a decentralized extension system, farmer field schools, and the agricultural extension "Hub Program" (Ali et al. 2011) [6-8]. The Agricultural Extension "Hub Program", a technology transfer program, was introduced in 2008 to overcome the weaknesses of past programs and to meet the challenges of changing circumstances. In this modality, Field Assistants (FAs) were given greater responsibilities, including frequent demonstrations on agricultural plots, maintaining strong connections with hub farmers, keeping higher authorities aware of farmers' problems, and obtaining feedback from research institutes with recommended solutions to those problems [9].
Methodology
Study Area: Punjab province was selected as the study area because it is the most populous and the most literate province of the country, consisting of 36 districts. Faisalabad district was selected because it is the second most populous district of the province and the third most populous in the country. It consists of 6 tehsils: Faisalabad City, Faisalabad Sadar, Chak Jhumra, Samundri, Jaranwala, and Tandlianwala. Faisalabad Sadar and Jaranwala were purposively selected because they have larger farming communities.
Sampling Procedure and Selection of Study Respondents: A simple random sampling technique was used to select the sample. As time and resources were limited, a total of 120 women (60 from each tehsil) were selected as respondents.
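As a concrete illustration of this sampling step, the sketch below draws 60 respondents at random from each of the two selected tehsils. It is a minimal sketch only: the sampling frames and respondent identifiers are hypothetical placeholders, not the study's actual lists.

```python
# A minimal sketch (hypothetical sampling frames, not the study's data)
# of simple random sampling: 60 respondents per tehsil, 120 in total.
import random

sampling_frame = {
    "Faisalabad Sadar": [f"FS-respondent-{i}" for i in range(1, 501)],
    "Jaranwala": [f"JW-respondent-{i}" for i in range(1, 501)],
}

random.seed(42)  # fixed seed so the illustration is reproducible
sample = {
    tehsil: random.sample(frame, 60)  # simple random sample without replacement
    for tehsil, frame in sampling_frame.items()
}

total = sum(len(chosen) for chosen in sample.values())
print(f"Total respondents selected: {total}")  # 120, as in the study
```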
Research Instrument for Data Collection: An interview schedule and focus group discussions were used as the research tools, and personal observation was also used to evaluate the collected data. The interview schedule was pre-tested before final data collection, and the reliability and validity of the research instrument were checked. Respondents were interviewed in person to ensure accurate data acquisition. A five-point Likert scale was used to assess the extent of each factor.
Data Analysis: Collected data were analyzed using the Statistical Package for the Social Sciences (SPSS) to tabulate results and draw conclusions and recommendations. Means and standard deviations were also computed for better understanding.
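The descriptive analysis itself is straightforward, and the sketch below reproduces it outside SPSS using hypothetical Likert responses (coded 1 = very low to 5 = very high, as in the scale used here). The factor names and response values are illustrative placeholders, not the study's data.

```python
# A minimal sketch (hypothetical responses, not the study's dataset) of
# the descriptive analysis: per-factor mean and standard deviation of
# five-point Likert scores, ranked from most to least severe.
from statistics import mean, stdev

responses = {
    "Poor psychological assessment of local talent": [4, 5, 3, 4, 4, 5],
    "Biased selection of the focal farmer": [3, 4, 3, 4, 3, 4],
    "Lack of technical skill among EFS": [2, 1, 2, 2, 1, 2],
}

summary = {factor: (mean(vals), stdev(vals)) for factor, vals in responses.items()}

# Rank the factors by mean score, highest (most severe) first.
ranked = sorted(summary.items(), key=lambda item: item[1][0], reverse=True)
for rank, (factor, (m, sd)) in enumerate(ranked, start=1):
    print(f"{rank}. {factor}: mean = {m:.2f}, SD = {sd:.2f}")
```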
Results and Discussion
The data in Table 1 depict that poor psychological assessment of local talent was the most significant trainer-based element impeding the return on the state's investment of time and money in training farmers. The qualitative data reveal that the extension field staff (EFS) normally select a resource-rich person, such as a feudal lord or Choudhary, as the focal person. When the agriculture department selects a biased focal farmer who is not respected by the entire village farming community, that hub farmer does not invite all the farmers to the training meetings. The respondents rated the inspection and neglect of small farmers between the low and medium categories, tending toward low. Poor extension services and the lack of technical skill among EFS fell between the low and very low categories and were therefore ranked 5th and 6th, respectively.
Responses: 1 = Very Low, 2 = Low, 3 = Medium, 4 = High, 5 = Very High, X = No Response.
The data in Table 2 show that farmers' lack of interest and reliance on conventional knowledge was the main factor affecting the training meetings. Illiteracy and the focal person's poor sharing of information with the rest of the community fell between the medium and high categories, tending toward high, and were ranked the 2nd and 3rd most serious elements affecting the success of farmer training meetings. Farmers' lack of technical knowledge was the 4th most deleterious element, with a mean value of 2.68, followed by the communication gap between resource-rich and resource-poor farmers and by farmers' personal conflicts. The remaining three factors, casteism, farmers' poor attendance at meetings, and lack of resources to purchase the recommended technology, ranged between low and very low, inclining toward very low. Qualitative data illustrate that the multidimensional stratification of rural society, i.e., by caste, landholding, political affiliation, and so on, is the major reason behind farmers' poor participation in training meetings. There is a dire need to integrate rural society, not only for the welfare of the state but also for the farmers themselves.
Conclusion
The above discussion reveals that both trainer- and trainee-based factors affect the training meetings. Poor assessment of local talent is the most significant trainer-based factor; it not only harms outcomes directly but also triggers the other factors. Trainees' reliance on their previous knowledge reduces farmers' interest in training meetings, indirectly defeating the very purpose of the exercise. It is therefore suggested that the agriculture department select focal persons on merit and on an unbiased basis. Farmers, in turn, should join the meetings with receptive minds so that the goal of agricultural prosperity can be realized.
The Undrainable Post-Traumatic Right Massive Haemothorax
Introduction
Diaphragmatic injuries related to thoraco-abdominal trauma are rare, with an incidence of 0.8-5% [1]. Owing to coexisting injuries, small herniations, and the silent nature of diaphragmatic ruptures, the diagnosis can be missed in the acute phase and may present later with obstructive symptoms due to organs incarcerated in the diaphragmatic defect [2]. Right-sided diaphragmatic herniation is infrequent because of protection by the liver and the congenitally stronger right hemi-diaphragm [1,3]. This case report discusses an adult patient who was diagnosed with right-sided diaphragmatic rupture and hepatothorax acutely following a road traffic accident.
Case Report
A 26-year-old man was admitted to the accident and emergency department following a road traffic accident. On admission the patient was distressed, dyspnoeic, and hypotensive. The initial primary survey revealed a right massive haemothorax, a pelvic fracture, a right femoral fracture, and a left tibial fracture (Figure 1). Despite initial resuscitation and chest drain insertion, the patient remained in respiratory distress with a puzzling and seemingly undrainable haemothorax. Placement of a second intercostal chest drain was queried because of the position of the first drain above the massive haemothorax. Before this, the patient was reviewed to confirm or refute the diagnosis of a massive haemothorax.
Figure 1: Chest radiograph showing marked elevation of the right hemi-diaphragm with an in-situ intercostal chest drain (red arrows).
This evaluation revealed that the trachea was central and not displaced, and an ultrasound scan of the right side of the chest revealed a hepatothorax. A diagnosis of traumatic right diaphragmatic rupture with herniation of the liver was therefore proposed, and a second chest drain insertion, with its potentially catastrophic consequences, was avoided. He underwent emergency laparotomy, which confirmed the diagnosis; the liver was reduced back into the abdominal cavity and the diaphragmatic defect was closed. He was subsequently admitted to the ITU for airway management and optimization before further orthopaedic intervention.
Learning Points/Take Home Messages
i. A thorough examination and patient reassessment are crucial in a major trauma setting, especially if the initial diagnosis does not fit the clinical picture.
ii. Traumatic diaphragmatic rupture should be considered and a high index of suspicion maintained in patients with multiple injuries and an abnormal chest x-ray.
iii. Radiological investigations are helpful in reaching a clear diagnosis.