Journal on Animal and Avian Sciences

Estimating and Quantifying the Production Outcomes and Lifestyle Changes for Small- to Medium-Sized Dairy Farms When Transitioning From Conventional to Automatic Milking Systems in the Northeast Region: A Case Study Report

Introduction

Primary reasons for the lack of expansion of small- to medium-sized dairies in the Mid-Atlantic region are the high cost of land, low profits, and limited labor availability. As herd size continues to increase globally, new technology that allows farmers to remain sustainable is greatly desired. Automatic milking systems (AMS) represent the most recent such technology, offering improved management and production efficiency, quality of life, and attractiveness to successors. However, the financial investment is substantial. Although there is a growing body of data on production impacts for European farmers, this technology is fairly new to the U.S. In turn, U.S. farmers lack information from independent sources regarding production performance and animal health associated with the transition from conventional milking systems to AMS on U.S. dairy operations. Results from a survey of dairy farmers in the Mid-Atlantic region of the U.S. (Moyes et al. [1]) indicated that improving herd management and personal flexibility were among the most important factors driving interest in AMS. Only 18.0% of farmers said they had access to information regarding changes in animal health and personal flexibility, and producers stated that more information on these topics would be helpful when considering a transition to AMS. The objective of this study was to estimate and quantify the animal health, productivity, and lifestyle changes for small- to medium-sized dairy farms transitioning from conventional milking systems to AMS in the Mid-Atlantic region. Economic impact (including cash flow and labor) is not reported here.

Materials and Methods

Four dairy herds (n = 286 ± 154 milking cows/herd) in the Mid-Atlantic region were used for this study. A general survey covering geographic information, herd management, and personal time commitments was conducted for each farm before and after its transition to AMS. Monthly herd summaries covering the two years on either side of the transition were obtained from either the Dairy Herd Improvement Association (DHIA) or AgSource Cooperative Services to generate monthly average herd production, reproduction, disease, and culling information (i.e., cull rate and reasons culled), where treatment was either conventional (CON; before transition) or automatic (AMS; after transition) milking systems. All numeric data were analyzed by herd using the PROC MIXED procedure of SAS (SAS/STAT version 9.3; SAS Institute Inc., Cary, NC).

Results and Discussion

Results from the survey indicated that all cows fully transitioned to AMS within a few weeks. One herd was pasture-based, whereas the other herds (n = 3) were housed in free stalls with access to a partial mixed ration. One herd did not continue with the monthly DHIA service, and its results were therefore not used. Herds implemented DeLaval, Lely, or Galaxy-Astrea robots, a decision based primarily on the location of the dealer service. For all herds, fresh and sick cows were milked using the conventional parlors. Producers observed improved personal flexibility after transitioning to AMS (as measured by family and vacation time). Daily robot maintenance was minimal (~1 hour/day). No change in herd size or cull rate was observed. Regarding reproductive traits, calving interval (12.9 ± 0.15 months CON; 13.13 ± 0.18 months AMS) and number of days open (121 ± 6.0 days CON; 128 ± 5.0 days AMS) were greater for AMS than for CON, which may be partly attributed to the greater reliance on naturally occurring heat detection for AMS than for CON. Regarding animal health, the reasons for culling shifted, with low milk production becoming the main reason animals were removed from the herd.

No change was observed in monthly milk SCC, while increases in milk yield were observed when producers transitioned to AMS (Table 1). Milk yield increased in all herds, most likely due to the increased milking frequency that is commonly observed when transitioning to AMS. In conclusion, producers were happy with their decision to transition, primarily because of the labor reduction (not reported here) and improved personal time. Daily maintenance of robots is minimal. Cull rate does not seem to be impacted when transitioning to AMS. Milk yield, calving interval, and days open increased for AMS when compared to CON. Animal health (based on SCC) did not change for the herds enrolled, but previous research indicates clinical mastitis can improve when producers transition to AMS (Tse et al. [2]). Automatic milking systems may improve animal productivity and lifestyle, but may not impact animal health or reproduction for small- to medium-sized dairy farms in the Northeast region.

Table 1: Monthly milk somatic cell count (SCC1) and milk yield for herds that transitioned to automatic milking systems (AMS; ± 2 years relative to transition).

a) 1SCC based on weighted averages. Data reported as SCC/μL.

b) 2CON = conventional milking system.

c) 3No after AMS data available for this herd.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Journal on Gastroenterology

Esophagitis Dissecans Superficialis – Impressive Endoscopic Appearance of a Benign Condition

Cases

Case 1

A 15-year-old male diagnosed with cerebral palsy and a seizure disorder was referred for dysphagia to solid food, progressing over 3 months to dysphagia to both solids and liquids. His medications included lamotrigine for his seizure disorder, supplemental iron for iron-deficiency anemia, and lansoprazole once daily for reflux disease. Esophagogastroduodenoscopy (EGD) showed yellowish plaque-like material in the lower third of the esophagus (Figure 1a), which on removal revealed sloughed and friable mucosa (Figure 1b), as well as a partial stricture at the distal esophagus. A 9 mm scope could not pass the distal esophagus due to the partial stricture, but a 5 mm slim scope was passed. Histopathology showed esophageal mucosa with extensive ulceration and granulation tissue. There were no viral inclusions, granulomas, dysplasia, or malignancy. Staining for Periodic Acid-Schiff (PAS), Herpes Simplex Virus (HSV), and Cytomegalovirus (CMV) was negative. He was prescribed twice-daily proton pump inhibitor (PPI) therapy for 2 months. Upon clinical improvement, the patient was prescribed once-daily PPI therapy for the next 4 months. Repeat EGD after 6 months of PPI therapy showed normal, completely healed esophageal mucosa with no evidence of the previously noted partial stricture (Figure 1c).

Figure 1:

Case 2

A 35-year-old male known to have colon cancer, status post total hemicolectomy and on chemotherapy, was referred for persistent dysphagia and recurrent vomiting. Medication history was significant for a FOLFIRI-cetuximab chemotherapy regimen, with his last cycle received two weeks earlier. EGD revealed sloughed lower esophageal mucosa with whitish cast-like material adherent to it (Figure 2a). After flushing the casts with a water jet, the underlying mucosa showed ulcerations and sloughing (Figure 2b). Histopathology revealed ulceration and inflammation with no granulomas, metaplasia, or dysplasia. Stains for CMV and HSV were negative. His chemotherapy was continued, and twice-daily PPI therapy was added. After 4 months, a repeat EGD showed complete healing and normal esophageal mucosa except for a small hiatus hernia (Figure 2c). The dysphagia and vomiting subsided, with mild residual reflux symptoms at 6 months.

Figure 2:

Case 3

A 55-year-old male known to have diabetes and chronic reflux symptoms not responding to once-daily PPI was referred for endoscopy. His medications included metformin, glimepiride, sitagliptin, aspirin, and esomeprazole. EGD was remarkable for whitish membranes in the lower esophagus resembling 'gift wrap ribbons', suggestive of esophagitis dissecans superficialis (Figures 3a & 3b). Histopathology showed inflammation without evidence of granulomas, dysplasia, or malignancy. Staining for PAS, HSV, and CMV was negative. The patient was maintained on once-daily PPI therapy. He had minimal reflux symptoms at his follow-up 1 year after first presentation. A repeat EGD was not pursued in this patient.

Figure 3:

Case 4

An 80-year-old female with a background of end-stage renal disease on hemodialysis, diabetes, and hypertension presented with hematemesis. Her medication history included multiple drugs, among them carvedilol, hydralazine, amlodipine, and pantoprazole, along with iron and phosphate supplementation. EGD showed whitish membranes adherent to the lower esophagus (Figure 4a). When these were washed away with a water jet, underlying sloughing of the mucosa at the lower end of the esophagus was observed (Figure 4b). Histopathology of the esophageal mucosa showed inflammatory and necrotic debris with fibrin, suggestive of sloughing esophagitis. There was no evidence of granulomas, dysplasia, or malignancy. PAS staining for fungi was negative. She was prescribed twice-daily PPI therapy. The patient did not suffer any further hematemesis or vomiting after 2 weeks. A repeat endoscopy to assess mucosal healing was not pursued at that stage due to her advanced age.

Figure 4:

Discussion

Esophagitis dissecans superficialis (EDS), also referred to as sloughing esophagitis in the literature, is an endoscopic diagnosis characterized by sloughing of the esophageal mucosa with overlying casts or membranes. Various clinical presentations, endoscopic appearances, and associations with systemic diseases have been described in previous case reports. We provide a review of the literature from such reports and compare it with the cases described in the current series.

Associations: EDS has been considered idiopathic and unexplained in most recognized cases [1]. A few case reports have suggested a strong association with desquamating esophageal disorders, in particular pemphigus vulgaris [2,3]. Rao et al. evaluated 42 patients with vesiculobullous dermatosis and found esophageal involvement in 27 patients (67%) and the classical appearance of EDS in 2 patients (5%) [4]. Other reports have described associations with bullous systemic lupus erythematosus (SLE) [5] and with celiac disease [6]. It is also believed that EDS occurs in an older population on multiple medications. Purdy et al. evaluated thirty-one patients with the endoscopic appearance of necrotic superficial squamous epithelium presenting as white plaques or membranes, i.e. sloughing esophagitis [7].

Compared with controls, these patients were older and more likely to be taking five or more medications. No particular cause could be identified for the condition, but the authors concluded that stasis and contact injury due to multiple medications may lead to it. Other case reports have described associations with medications, typically bisphosphonates [8], and with sclerotherapy of varices [9]. One of the four patients we describe here was over 60 years of age, and one was female. None of our four patients suffered from vesiculobullous disease. Pre-existing comorbid conditions were a consistent factor among all the cases: cerebral palsy in case 1; treated GI malignancy in case 2; diabetes mellitus in cases 3 and 4; and ESRD in case 4. All patients in the series were on multiple medications. Two of our patients were on oral iron therapy (cases 1 and 4). Three of the four patients (cases 2, 3 and 4) were on five or more medications, as described by Purdy et al. [7].

Clinical Presentation: Patients with EDS may present with a variety of symptoms, ranging from dysphagia and odynophagia to more dramatic presentations such as vomiting of casts [1]. The presentations in our case series were diverse: dysphagia in cases 1 and 2, chronic reflux in case 3, vomiting in cases 2 and 3, and hematemesis in case 4.

Endoscopic Appearance: The endoscopic appearance of EDS is typically described as peeling of the esophageal mucosa with linear plaques or membranes [1,10], or as vertical strips of sloughing esophageal mucosa creating a remarkable 'gift wrap ribbons' appearance [11]. Yellowish material causing luminal occlusion, which on removal revealed extensive sloughing of the underlying mucosa, has also been described [12]. Our cases too had varied endoscopic appearances: yellowish plaques in case 1, whitish plaques in case 2, and whitish membranes in cases 3 and 4. One of our cases (case 3) had the classical gift wrap ribbon appearance on endoscopy. A partial stricture was observed in case 1. Histopathological examination shows sloughing and flaking of superficial squamous epithelium with occasional bullous separation of the layers, parakeratosis, and varying degrees of acute or chronic inflammation [1]. Histopathology has not been well described [10], but may be most useful for excluding other differentials of sloughing esophagitis, such as fungal or viral infections and eosinophilic esophagitis.

Differential Diagnosis: The differential includes conditions that cause an endoscopic appearance of sloughing of the esophageal mucosa, such as eosinophilic esophagitis, corrosive injury of the esophagus, and infections of viral or fungal etiology.

Management: In spite of its impressive endoscopic appearance, EDS remains a benign condition without lasting pathology [1]. Acid suppression and discontinuation of any precipitating factors can help mucosal healing. Steroids are needed if EDS is associated with a bullous dermatosis. Indeed, all our patients responded symptomatically to acid suppression with a PPI. Complete endoscopic healing was demonstrated in two cases (cases 1 and 2). No comment could be made on endoscopic healing in cases 3 and 4 due to the lack of a repeat endoscopy. A symptomatic response to 3- to 6-month courses of PPI therapy was seen in all of our patients.

Conclusion

a) EDS is a benign condition that can occur at any age with varying clinical presentation related to upper GI tract.

b) EDS is usually associated with an underlying comorbid illness. Patients on multiple medications are at increased risk.

c) Diagnosis is made endoscopically, with findings of yellowish or whitish plaques or membranes with underlying sloughed mucosa.

d) Histopathology is needed to rule out other etiologies of similar endoscopic findings.

e) EDS responds to correcting the underlying condition, reducing the number of offending drugs, and acid suppression with proton pump inhibitor therapy.


Open Access Medical Journal

Introduction

Homeothermy (corresponding to a rectal temperature that remains between 36.5 and 37.5°C) is optimal for extra-uterine survival. Body temperature control is particularly important for preterm and/or low-birth-weight newborns because their thermoregulatory processes are inefficient and their body heat losses to the environment are greater. Despite extensive research, the incidence of hypothermia among neonates is still high in developing countries, and remains a significant cause of short- and long-term morbidity and mortality. To prevent hypothermia during resuscitation and during transport from the delivery room to the neonatal intensive care unit, warming mattresses can be used to provide the neonate with conductive heat. This method of heat supply is often combined with other warming techniques (such as radiant warmers and closed incubators). It is important to ascertain the effectiveness and safety of warming mattresses because some cases of hyperthermia have been reported.

The greater the contact surface area between an infant’s skin and the mattress, the greater the conductive heat exchange. However, conductive warming only occurs when the mattress’s surface temperature is higher than that of the infant’s skin. The mattress temperature must be maintained at between 35°C and 40°C; at these temperatures, no cases of burns have been reported. A warming mattress can also supply radiant heat to skin surfaces that are not in direct contact with it. In a study of a black-painted copper manikin representing a small-for-gestational-age neonate (body surface area: 0.086 m2; simulated bodyweight: 900 g) lying in a convectively heated closed incubator (ISIS+ from Médipréma, Tauxigny, France), Décima et al. [1] showed that the radiant energy provided by the mattress accounted for 42.9% of the body’s radiant heat loss as a whole. The mattress can also generate a microclimate by increasing the temperature of the air above the infant, which reduces convective and evaporative heat losses. A warming mattress can be combined with a conventional incubator, a transport incubator and/or a radiant warmer. The addition of clothing and/or bedding increases the mattress’s ability to warm a hypothermic infant but can also lead to hyperthermia if the clothing provides too much thermal insulation.
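The conduction term described above can be illustrated with a simple steady-state calculation. This is a minimal sketch with hypothetical values: the contact conductance `h_w_per_m2k` and contact area `area_m2` are illustrative assumptions, not measurements from the studies cited here.

```python
# Steady-state conductive heat flow between mattress and skin:
#   Q = h * A * (T_mattress - T_skin), positive Q = heat gained by the infant.
# h (contact conductance) and A (contact area) are hypothetical values.

def conductive_heat_w(t_mattress_c, t_skin_c, area_m2, h_w_per_m2k):
    """Return conductive heat flow in watts (positive = warming the infant)."""
    return h_w_per_m2k * area_m2 * (t_mattress_c - t_skin_c)

# Example: mattress at 37.0 °C, skin at 36.5 °C, 0.03 m^2 contact area,
# assumed conductance of 100 W/(m^2*K) -> 1.5 W gained by conduction.
q = conductive_heat_w(37.0, 36.5, 0.03, 100.0)

# A mattress cooler than the skin cools the infant instead (negative Q),
# consistent with conduction only warming when the mattress is warmer:
q_cooling = conductive_heat_w(35.0, 36.5, 0.03, 100.0)
```

The sign convention makes the point in the text explicit: the mattress only warms the infant while its surface is warmer than the skin it touches.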

The Evaluation of Warming Mattresses

A review of the literature in this field reveals that two types of studies have been performed. Firstly, anthropometric thermal manikins (i.e. physical models of the same size and shape as infants) have been used to precisely quantify the various heat exchanges involved (via conduction, convection and radiation; evaporative heat loss cannot be measured with this method). Secondly, a number of clinical studies of hypothermic or normothermic newborns of various ages have focused on either transport between the delivery room and the care department or the use of warming protocols in intensive care units. These experiments enable researchers to not only characterize a mattress’s thermal performance but also to define the devices’ conditions of use and requisite safety measures.

Experiments on Thermal Manikins

In 1984, LeBlanc [2] assessed the heat provided by a warming mattress with a rectangular polymethyl methacrylate tunnel placed above a heated manikin lying in a double-walled transport incubator (Model T167-13, Air Shields, Hatboro, WI, with the air temperature servo-control mode set to 35.6°C) or in a single-walled transport incubator (airVac, Ohio Medical Products, Madison, WI, regulated in air mode at 36.1°C). A water-filled manikin (simulating a 1000 g newborn) made of thin plastic was heated so that it had an internal temperature of 37.0°C and a surface temperature of 36.8°C. The room temperature ranged between 20 and 24°C. The use of a warming mattress (Porta-Warm Mattress, Kay Laboratories, San Diego, CA, whose surface temperature did not exceed 40°C) enabled a 3-5% reduction in the incubator’s heating power and an approximately 3°C reduction in the incubator’s air temperature.

LeBlanc commented that the incubator’s heating system had to be turned off 30 minutes after the warming mattress had been turned on, in order to avoid an excessively rapid increase in the manikin’s temperature. 150 minutes after the start of the experiment, the incubator’s air temperature had to be adjusted manually in order to stabilize the manikin’s temperature at values similar to those measured during the baseline reference period. By extrapolating these observations to the infant, LeBlanc indicated that the use of a warming mattress makes it difficult to stabilize the body temperature. Hence, the use of an incubator with a skin servo-control operating mode is preferable when a warming mattress is employed. In view of the interventions needed to stabilize the manikin’s temperature, LeBlanc recommended that a warming mattress should only be used for hypothermic infants and/or when the incubator’s heating system, the transport vehicle or the infant’s clothing insulation are ineffective.

Sarman et al. [3] studied a plastic foam manikin (surface temperature set to 36.5°C) as a model of a 1003 g infant wearing a nappy and a cotton vest and covered with two quilts. The manikin was placed on a water-filled mattress (Kanthal Medical Heating AB, Stockholm, Sweden, heated to four different surface temperatures between 35°C and 38°C) inside a single-walled incubator (Isolette C100, Air Shields, Pennsylvania) with an air temperature set to 30, 32, 34 or 36°C. All the experiments were performed in a climatic chamber at two different air temperatures (25°C and 15°C; in the latter condition, a bonnet was placed on the manikin’s head). The results showed that the manikin’s heat loss was between 20 and 40 W/m2 with a mattress temperature of between 36°C and 37°C and an air temperature of 25°C. At 15°C, the same heat loss was only observed when the manikin was wearing a bonnet and was covered with a quilt. When the difference between the skin surface temperature and the mattress temperature was 4°C, Sarman et al. estimated that the conductive heat gain was 1.6 W. The researchers considered that this was a non-negligible heat gain, since it corresponded to 53% of the 3 W of metabolic heat produced by an infant weighing 1500 g.
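Sarman et al.'s percentage can be checked directly from the figures quoted above (both values are taken from the text; nothing here is new data):

```python
# Figures quoted from Sarman et al. [3]:
conductive_gain_w = 1.6   # estimated conductive heat gain at a 4 °C
                          # skin-to-mattress temperature difference
metabolic_heat_w = 3.0    # metabolic heat production of a 1500 g infant

fraction = conductive_gain_w / metabolic_heat_w
print(f"Conductive gain covers {fraction:.0%} of metabolic heat production")
# prints "Conductive gain covers 53% of metabolic heat production"
```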

Clinical Studies

Clinical studies of warming mattresses have varied greatly with regard to the body weight of the participating infants (from 800 g to 3000 g), the incubator’s temperature control mode (air- or skin-servo-controlled, or sometimes not even specified) and the country (including developing countries in which nursing care differs from that in developed countries). The physiological parameters measured also differed from one study to another (e.g. oxygen consumption, body weight, and the axillary, rectal, and abdominal skin surface temperatures). The risks of hypothermia or hyperthermia were assessed with regard to the latter temperature values and (in some cases) the difference between rectal and mean skin temperatures (considered to be a marker of cold stress) [4]. Different warming mattresses are often used as an additional source of heat for infants at birth, regardless of whether or not they are hypothermic. The mattress is often warmed up prior to use. The degree of clothing thermal insulation also differs from one study to another but is rarely specified. Thus, the data from these various studies are not comparable – making the results difficult to interpret.

Most of the literature data show that the warming mattress is an effective device for preventing hypothermia in preterm neonates. A notable exception is the study by Boo et al. [5], in which 71 out of 119 initially hypothermic neonates (axillary temperature < 36.5°C) treated with a heated water-filled mattress (KanMed, Bromma, Sweden, kept at a constant temperature of 37°C; room air temperature: 20°C) remained hypothermic. Nevertheless, the literature data have highlighted conditions in which the use of a warming mattress can be dangerous for the infant, and cases of moderate hyperthermia are often reported [4,6-9]. L’Hérault et al. [6] reported that a greater proportion of neonates transported to hospital on a gel mattress (Prism Technologies, San Antonio, TX) had a rectal temperature of ≥37.5°C (and even 39°C in four cases) on arrival. Gray et al. [7] also reported that the proportion of individuals with a high (>37.5°C) axillary temperature was greater for neonates nursed on a heated, water-filled mattress (KanMed Baby Warmer) in a cot than for those nursed in an air-heated incubator. In a resuscitation setting, Singh et al. [9] showed that the incidence of hyperthermia (axillary temperature > 37.5°C) increased in neonates nursed on a gel mattress (Drager, Hemel Hempstead, UK) and wrapped in a food-standard plastic bag (Lakeland Plastic, Windermere, UK) compared with traditional care including a radiant warmer (+27%) and with neonates covered by the bag alone (24%). The harmful effects of hyperthermia mean that this risk must be taken into consideration. Hence, continuous measurement of the rectal, abdominal or axillary temperature is highly advisable for monitoring changes in an infant’s thermal status.

Implications for Practice and for Research

When used alone, a warming mattress is not highly effective because little heat is gained by conduction. An older study [10] found that, in a cool environment, a warming mattress alone could not warm an unclothed, premature newborn. Researchers have shown that a heated, water-filled mattress is not only effective for warming an infant dressed in a single cotton shirt and a diaper (room air temperature: 24.6°C) but can also reduce resting oxygen consumption (-3.1%) and heart rate (-3 bpm) [11]. Several publications have assessed the thermal performance of a mattress used with the neonate clothed and covered by blankets and placed in a cot [5,7,12-14] or in a sleeping bag [15]. The warming mattress has very often been combined with other heating devices used in routine thermal care, including closed incubators [6,10,16], hats, radiant warmers, plastic bags and plastic heat shields [8,9,17-19].

Few publications have assessed the thermal performance of a mattress used alone on covered neonates [4,20]. When the infant is covered (e.g. by clothing or a plastic heat shield), a microclimate is created between the covering and the infant’s skin. The resulting increases in air temperature and relative humidity reduce radiant, convective and evaporative heat losses and thus accentuate the risk of hyperthermia. A mattress is of course more effective when the infant is covered, although the quantification of the thermal insulation provided by clothing and/or blankets is also an ongoing research topic. In the absence of values that enable one to assess the reduction in dry and latent heat exchanges between the infant and its environment, the use of a mattress to warm a covered infant is therefore imprecise. In any case, this procedure can never achieve the level of thermal control obtained in a closed incubator.

In order to improve the thermal management of these patients, other factors must be taken into account. Simon et al. [16] assumed that a cool delivery room temperature might have been associated with the inability to prevent hypothermia in 7 out of 17 neonates (41%) nursed on an exothermic mattress and in 13 out of 19 neonates (68%) wrapped in an occlusive polyethylene bag. The mother’s body temperature [18] and, in particular, the infant’s gestational age [6] should also be taken into account, although their influence on the occurrence of hyperthermia is subject to debate. It also appears to be necessary to standardize care procedures and warming techniques with regard to the neonate’s body temperature measured at birth. Lastly, L’Hérault et al. [6] suggested that the warming rate should also be investigated; although it is sometimes stated that warming up the infant very rapidly is an advantage, the physiological effects of this procedure have not been characterized.

Conclusion

In view of the diverse range of settings and physiological measurements studied to date, one can conclude that the warming mattress is an easy-to-use means of warming ill or low-birth-weight newborns and is less costly than an incubator. The mattress creates less of a barrier between the mother and her baby than a closed incubator does [14]; this might reinforce mother-baby bonding and might therefore explain (at least in part) the increase in mother’s milk production following discharge from hospital [14]. However, this hypothesis is subject to much debate [21], and needs to be analyzed on a broader scale in the future. Carers should be aware that some cases of hyperthermia have been linked to the use of a warming mattress – notably when the latter has been combined with other heating systems and/or the degree of clothing insulation is too high. Many researchers advise against the use of a warming mattress with thermally unstable or low-birth-weight infants [4,6-11,13,16,21].

The use of a warming mattress should thus always be accompanied by continuous monitoring of the infant’s body temperature. Caring for premature newborns in open cots might also be associated with potential risks, such as the risk of nosocomial infection (due to easier access and greater handling by nursing staff and mothers). Although no cases of infection have been reported in the literature, Gray et al.’s meta-analysis [21] indicated that further research on this topic is necessary. Although a warming mattress is a good option for the care of hypothermic infants in centers where warming techniques are limited, the impact of this device’s long-term use remains to be determined. There is also a need to harmonize practice across care centers and thus ensure greater safety in use.


Journal on Food Technology and Biodiversity

Physico-Chemical Characterization of Arbutus unedo L. From the Kabylian Region (Northern Algeria)

Introduction

The arbutus (Arbutus unedo L.) is a typical Mediterranean tree whose fruit is generally not consumed fresh but after processing [1]. Like other plants, which are equipped with remarkable defense systems based on various bioactive compounds [2], the berries are also used in folk medicine as an antiseptic, diuretic and laxative [3]. Moreover, Ruiz-Rodríguez et al. [4] earlier suggested that the high antioxidant potential of arbutus berries may be due to the activity of various bioactive components, including vitamin C. Since any herbal or botanical material containing vitamins and minerals can be considered a dietary ingredient, arbutus berries may be classified as a dietary supplement. Arbutus unedo (Ericaceae) is part of the range of Algerian medicinal plants [5]. The arbutus fruit (FA) is poorly exploited, not well known from a nutritional or industrial point of view by the Algerian population, and its consumption remains seasonal. In this context, the main purpose of the present work was the physico-chemical study of freeze-dried powder (LP) of Algerian FA (Arbutus unedo L.).

Materials and Methods

Ripe arbutus berries were picked in the Kabylian region (northern Algeria) in 2017. The fruit was freeze-dried at 109 K (4.5 Pa) for 2 days. The dried product was then ground and sieved (sieve of type Euromatest-Sintoo, NFX11-501) to obtain a homogeneous powder (LP), which was kept in a closed glass flask at 277 K. The general chemical parameters of LP from A. unedo berries, namely crude fiber [6], titratable acidity (with 0.1 N NaOH), pectin [7], ash and acid-insoluble ash [8], were evaluated. The electrical conductivity of a 20% LP solution in distilled water was measured at 20°C (mS cm-1); lipid content was determined using a Soxhlet apparatus. The X-ray diffraction (XRD) pattern of LP was recorded using a diffractometer (Panalytical X’Pert Pro®).
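The titratable acidity determined above by NaOH titration is conventionally computed as follows. This is a hedged sketch of the standard calculation, not the authors' exact protocol: the milliequivalent weight (here citric acid, 0.064 g/meq) and the sample and titrant values are illustrative assumptions.

```python
# Standard titratable-acidity calculation for a 0.1 N NaOH titration.
# TA (%) = V_NaOH (mL) * normality * meq weight of acid (g/meq) * 100 / sample mass (g)
# The choice of reference acid (citric, 0.064 g/meq) is an assumption.

def titratable_acidity_pct(v_naoh_ml, normality, meq_weight_g, sample_g):
    """Titratable acidity as percent acid (w/w) in the sample."""
    return v_naoh_ml * normality * meq_weight_g * 100.0 / sample_g

# Hypothetical example: 6.0 mL of 0.1 N NaOH neutralizes a 10 g sample,
# expressed as citric acid -> 0.384 % titratable acidity.
ta = titratable_acidity_pct(6.0, 0.1, 0.064, 10.0)
```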

Results and Discussion

The different quality parameters of LP are summarized in Table 1. The crude fiber content of LP is comparable to that reported by Ruiz-Rodríguez et al. [4] and lower than that reported by Özcan and Hacıseferogulları [9] for fresh strawberry tree fruits (6.4 g/100 g of cellulose and 2.93 g/100 g of soluble fibers, respectively). The titratable acidity is close to the value of 0.4% indicated in the literature [10]; on the other hand, it is lower than the ranges given by Celikel et al. [11] (0.48-1.24 and 0.8-1.59%, respectively) for Turkish varieties. The electrical conductivity is greater than that calculated by Ulloa et al. [12] (0.643 mS cm-1) for strawberry tree (Arbutus unedo L.) honey. The XRD pattern of the LP powder is presented in Figure 1. A broad band with very weak peaks, characteristic of amorphous forms, is observed in the pattern, indicating the presence of amorphous sugars obtained by freeze-drying the berries. Similar amorphous characteristics have been reported for different dried mango powders [13] and for fluidized-dried gum extracted from the fresh fruits of Abelmoschus esculentus [14] (Table 1 & Figure 1).

Figure 1: X-ray diffraction patterns of powders freeze dried arbutus berries.

Table 1: Physicochemical characterization of LP.

Conclusion

The results showed that all physicochemical parameters were comparable to those reported in the literature.


Journals on Physiology

Pathophysiological Mechanism of Intestinal Gas Production

Introduction

The presence of hydrogen and methane in the intestinal lumen was first suggested in 1816, when Magendie hypothesized that these gases were present in the intestines of guillotined convicts [1]. Eventually, reports of explosions during colonic surgery supported the notion that the gut may contain combustible gases [2]. Colonic gas explosion, although rare, is one of the most alarming iatrogenic complications of colonoscopy with electrocautery [3]. The explosion results from the accumulation of colonic gases at explosive concentrations and may be averted by scrupulous bowel preparation prior to the procedure [2].

Sources of Intestinal Gas Production

Aerophagia: Frequently swallowing large amounts of air (aerophagia) may lead to continuous and repeated belching. Air swallowing is the major source of gas in the stomach and intestine. It is normal to swallow a small amount of air when eating, drinking, and swallowing saliva. Vast amounts of air may be swallowed when rapidly eating food, gulping liquids, chewing gum, and smoking [1].

Bacterial Production of Intestinal Gases: The colon provides a habitat for billions of largely harmless bacteria, some of which support bowel health. Carbohydrates are normally digested by enzymes in the small intestine. However, certain carbohydrates are incompletely digested there, allowing bacteria in the colon to ferment them. The byproducts of this bacterial digestion include odorless gases such as carbon dioxide, hydrogen, and methane [4]. Minor components of flatus (gas expelled through the anus) have an unpleasant odor, including trace amounts of sulfur-containing gases liberated by bacteria in the large intestine. Some carbohydrates, such as raffinose, are incompletely digested and therefore cause increased amounts of gas. Vegetables containing raffinose, such as cabbage, Brussels sprouts, asparagus and broccoli, and some whole grains, tend to cause more gas and flatulence [5]. Some people are unable to digest certain carbohydrates at all. A classic example is lactose, the major sugar in dairy products; consuming large amounts of lactose may lead to increased gas production, along with cramping and diarrhea [1].

Diseases Associated With Increased Intestinal Gas Production

Certain diseases also result in excessive bloating and intestinal gas formation. For instance, people with diabetes or scleroderma may, over time, develop slowing of the peristaltic activity of the small intestine. This may lead to bacterial overgrowth within the bowel, with poor digestion of sugars and other nutrients [6]. Carbohydrate malabsorption can occur in people with celiac disease (intolerance to gluten, a protein), short bowel syndrome, and rare primary deficiencies of the enzymes needed to digest specific forms of carbohydrates [7].

Characterization of Intestinal Gases

Among the first complete reports characterizing the content of intestinal gases is the important work of Levitt and Kirk. They identified five major components of intestinal gas and estimated their concentration ranges [6]:

i. Nitrogen – N2 (23 to 80%)

ii. Oxygen – O2 (0.1 to 2.3%)

iii. Hydrogen – H2 (0.06 to 47%)

iv. Methane – CH4 (0 to 26%)

v. Carbon dioxide – CO2 (5.1 to 29%)

Hydrogen and methane are the two major combustible gases contained in the normal colon [8]. They arise from several sources: fermentation of nonabsorbable (lactulose, mannitol) or incompletely absorbed (lactose, fructose, sorbitol) carbohydrates by the colonic flora; air swallowing (note the absence of the gastric bubble in subjects with advanced achalasia); CO2 produced by the interaction of bicarbonate and acid in the duodenum; and diffusion of gases across the mucosa of the gastrointestinal tract (CO2 diffuses much more rapidly than H2, CH4, N2, and O2) [9]. Since 1974 it has been known that no mammalian cell is capable of producing H2 or CH4; bacteria produce them by fermentation of appropriate substrates under anaerobic conditions [1].

In this light, 64 strains of intestinal bacteria were cultured under anaerobic conditions in lactulose-containing media to assess their ability to ferment lactulose. Some organisms were unable to metabolize the disaccharide, while others, for example clostridia and lactobacilli, metabolized lactulose extensively. Intestinal gases, however, are not the only metabolites originating from bacterial fermentation of indigestible carbohydrates. Qualitative analyses of the fermentation products in vitro indicated that the major nongaseous metabolites were acetic, lactic and butyric acids, which are characteristically produced by clostridia. Bacteroides predominantly metabolized lactulose to acetic and succinic acids, but produced smaller quantities of higher fatty acids during lactulose fermentation than with basal medium alone. Hydrogen and carbon dioxide were the only gases detected [10]. From these findings, it is easy to understand that hydrogen and methane are just two products of the complex metabolic activity of the gut microbiota acting on the "indigestible" carbohydrates that are part of the human diet [6].

Pathophysiological Mechanism of Intestinal Gas Production

Hydrogen Gas: In 1974, Newman et al. [1] found that after feeding baked beans to volunteers, H2 appeared in exhaled breath, and the rise in breath H2 concentration paralleled the subjects' abdominal discomfort. In vitro studies further demonstrated that fecal or ileal flora, incubated with various substrates, produced striking amounts of CO2 and H2. When stachyose, a sugar abundant in baked beans, was incubated with ileal or colonic flora, as much CO2 and H2 evolved as when glucose, galactose, or other common sugars were incubated. This was of particular interest because stachyose is an oligosaccharide hydrolyzed by an enzyme not present in the human intestine but possessed by enteric bacteria, which are able to split stachyose into fermentable monosaccharides [1]. It is likely that the gas-producing potential of a food is related to its content of nonabsorbable fermentable substrates, most probably oligosaccharides and fiber. Regarding diet, it is common folklore, verified by old studies, that apple, grape and prune juices, all-bran cereals (more than refined wheat or bland formula diets), soya beans and lima beans are all gas-inducing foods; in contrast, orange, apricot, pineapple and peanuts are poor gas inducers in humans [5].

Studies from the same period showed that a minority of people display excessive gas production because of carbohydrate malabsorption (lactose malabsorption or celiac disease) [9]; these studies led, over time, to the definition of carbohydrate malabsorption. Both Levitt and Calloway reported an excellent correlation between lactose tolerance tests and breath H2 measurements after lactose ingestion. Levitt showed that as little as 5 g of lactose was followed by a rise in breath H2 in severely hypolactasic subjects, while Calloway established that a rise in breath H2 greater than 20 ppm after ingestion of 0.5 g lactose/kg was as accurate as a lactose tolerance test in diagnosing lactose malabsorption. In addition, the amount of lactose absorbed was dose dependent, and there was no detectable H2 in the breath of some lactase-deficient subjects when the test dose was halved, though, as shown by Levitt, some subjects were exquisitely intolerant to the sugar [10].

Methane (CH4) Gas: The world's population may be classified into CH4 'producers' and CH4 'non-producers', with some familial tendency towards CH4 production but no evidence that spouses share the propensity. Producers of CH4 usually exhale a concentration of more than 23 ppm, while non-producers exhale less than 3 or 4 ppm. CH4 production never begins before the age of 2 [10]. The pattern of CH4 exhalation is fairly constant in CH4 producers over the course of a 24-hour day, thus apparently not depending on an exogenous substrate; indeed, it was hypothesized and then demonstrated that CH4 is generated under strictly anaerobic conditions by the reduction of CO2 with H2 arising from the fermentative action of bacteria [1]. The main CH4-producing organism in humans is Methanobrevibacter smithii, but other microorganisms in the human gut, such as certain Clostridium and Bacteroides species, are capable of producing CH4 [4]. The conversion of hydrogen to methane is associated with a clear reduction of intestinal gas volume: 4 moles of hydrogen and 1 mole of CO2 are metabolized to produce 1 mole of methane and 2 moles of water. Conversely, if H2 is not metabolized, the volume of gas accumulating in the gut will be substantially greater than if CH4 is produced [11].
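The volume arithmetic behind that last point can be made explicit. A minimal sketch, assuming the product water condenses to liquid at gut conditions so that only CH4 remains in the gas phase:

```python
# Stoichiometry of methanogenesis: 4 H2 + 1 CO2 -> 1 CH4 + 2 H2O
# Water is liquid at body conditions, so 5 moles of gas are consumed
# for every 1 mole of gas produced.
moles_gas_in = 4 + 1      # H2 + CO2
moles_gas_out = 1         # CH4 (the 2 H2O condense)
reduction = 1 - moles_gas_out / moles_gas_in
print(f"{reduction:.0%} reduction in gas volume")  # -> 80% reduction in gas volume
```

This is why methanogenesis, when it occurs, markedly shrinks the net gas volume compared with leaving H2 unconsumed.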

Catabolic Pathways of Hydrogen in the Gut

Hydrogen can be metabolized not only to methane by gut bacteria but also through a variety of other pathways, including sulphate reduction and acetogenesis [11]. An interesting paper assessed, in vitro, factors associated with these different catabolic activities. Stools were taken from 30 healthy subjects and incubated as 5% (w/v) slurries with Lintner's starch. On the basis of methanogenesis rates and the numbers of sulphate-reducing bacteria (SRB) in faeces, the subjects were divided into two groups: group A had fewer than 10^7 SRB/g dry weight of faeces and group B had more than 10^7 SRB/g. Most subjects (group A; n=23) shared high rates of fecal methanogenesis, and 21 of the 23 had methane in the breath. None of the subjects in group B (n=7) had methane in the breath or produced methane in vitro; instead, they had high rates of sulphate reduction in faeces and higher concentrations of sulphide. Considerable methane production occurred only when sulphate-reducing bacteria were not active [9].
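The grouping rule used in that study is a simple threshold. The sketch below restates it as code; the subject counts in the example are invented, and only the 10^7 SRB/g cutoff comes from the study described above:

```python
SRB_THRESHOLD = 1e7  # SRB per g dry weight of faeces, the study's grouping cutoff

def classify_subject(srb_per_g):
    """Group A: low SRB counts (methanogenesis tends to dominate).
    Group B: high SRB counts (sulphate reduction tends to dominate)."""
    return "A" if srb_per_g < SRB_THRESHOLD else "B"

# Hypothetical counts for three subjects
print([classify_subject(c) for c in (5e5, 2e8, 9e6)])  # -> ['A', 'B', 'A']
```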

The SRB were found to use lactate as a source of carbon and energy, and their counts showed a strongly positive association with H2S concentrations in faeces. Sulphate reduction and methanogenesis therefore seem to be mutually exclusive in the colon, probably owing to sulphate availability [12]. When sulphate is available, SRB, which are known to have a higher substrate affinity for hydrogen, predominate and H2S is produced. Under conditions of low sulphate availability, methanogenic and acetogenic bacteria are able to combine H2 with CO2 to form methane and acetate, respectively [13]. Bjorneklett and Jenssen have shown that subjects who produce methane during fermentation produce appreciably less H2 in breath in response to a standard dose of lactulose. Furthermore, if H2 is not further metabolized, fermentation may be incomplete and intermediates such as lactate, succinate, and ethanol are likely to accumulate [12]. D-lactate, produced by colonic bacteria, is only partially metabolized in humans and can cause severe metabolic disturbance in certain situations. The end products of these terminal oxidative reactions differ in their toxicity: methane is a harmless gas, readily expelled; acetate is absorbed and metabolized by peripheral tissues such as muscle; but H2S is highly toxic and may poison colonic epithelial cells if not oxidized rapidly after absorption [13].

The capacity for high rates of H2S production exists in some people, and SRB may play a part in the etiology of some intestinal and extraintestinal disorders [13]. Indeed, disorders of the H2 and CH4 pathways, with or without intestinal symptoms, have been detected in several diseases, including endocrinological (thyroid disease, diabetes), neurological (Parkinson's disease), autoimmune (psoriasis), infectious and iatrogenic (chemotherapy or surgery) conditions [12]. Recent studies suggest that enteric bacteria play a crucial role in disordered H2 metabolism. H2 breath tests are more frequently altered in subjects with irritable bowel syndrome (IBS), who also display several alterations in gut microbiota composition. This concept was initially based on the common finding of an abnormal lactulose breath test, suggesting the presence of small intestinal bacterial overgrowth in IBS patients [14]. A meta-analysis by Shah showed that an altered breath test is more common in IBS patients than in control subjects, and the prevalence of abnormal breath tests was even more significant when examining high-quality age- and sex-matched studies [15]. The abnormal fermentation timing and dynamics of the breath test findings support a role for an abnormal intestinal bacterial distribution in IBS. However, many bacteria in the gut, including methanogens and SRB, use hydrogen as their energy source, and their presence can significantly impair the accurate detection of hydrogen [16].

Pathophysiological Implications of Gastro Intestinal Gas Production

The volume of each gas within the intestinal lumen reflects the balance between the input and output of that gas. Input may result from swallowing, chemical reactions, bacterial fermentation, and diffusion from the blood, whereas output involves belching, bacterial consumption, absorption into the blood, and anal evacuation [17]. Measurements of intestinal gas volume, originally obtained using a body plethysmograph and later using a washout technique, indicate that the volume of intestinal gas in healthy subjects is approximately 200 mL [9]. Similar data have been reported using a specifically designed and validated computed tomography (CT) technique [8]. In the fasting state, the healthy gastrointestinal tract contains about 100 mL of gas, distributed almost equally among six compartments: stomach, small intestine, ascending colon, transverse colon, descending colon, and distal (pelvic) colon. Postprandially, the volume of gas increases by 65%, primarily in the pelvic colon [17]. Gas enters the stomach primarily via air swallowing, and a sizable fraction is eructated; some of the oxygen in swallowed air diffuses into the gastric mucosa.

The reaction of acid and bicarbonate in the duodenum yields copious CO2, which diffuses into the blood, while N2 diffuses into the lumen down the gradient established by CO2 production. In the colon, bacterial metabolism of fermentable substrates releases CO2, H2, and CH4, as well as a variety of trace gases. Fractions of these bacteria-derived gases are absorbed and then metabolized or excreted in expired air. In addition, a large proportion of H2 is consumed by other bacteria to reduce sulfate to sulfide, CO2 to acetate, and CO2 to CH4, thereby reducing the net volume of gas derived from bacterial metabolism. N2 and O2 diffuse from the blood into the colonic lumen down a gradient created by bacterial gas production. Gas ordinarily is propelled through the gastrointestinal tract and excreted per rectum [4]. The net result of these processes determines the volume and composition of intestinal gas [18].

Symptoms commonly attributed to too much gas, such as abdominal bloating and distention, are among the most frequently encountered gastrointestinal complaints. Bloating refers to the subjective sensation of a swollen abdomen, full belly, abdominal pressure, or excess gas. Abdominal distention refers to an objective increase in girth. Distention usually develops following meals or at the end of the day and resolves after an overnight rest. Some IBS patients, particularly those with rectal hypersensitivity, however, complain of bloating in the absence of objective distention. A major question is to what extent subjective bloating and objective distention are associated with, or caused by, an increased rate of production or volume of intestinal gas [17]. The role of intestinal gas in functional abdominal pain has been studied since 1975. Using a washout technique with intestinal infusion of an inert gas mixture in 12 fasting patients with chronic complaints, the volume of excess gas did not differ significantly from that of 10 controls. Similarly, there was no difference in the composition or accumulation rate of intestinal gas. However, more gas tended to reflux back into the stomach in patients who complained of abdominal pain [17].

Bowel habit is strictly dependent on intestinal transit time [19]. In particular, three independent studies reported slower intestinal transit times in subjects with known methane production compared with non-methane producers [17]. Among IBS patients, methane production on the lactulose breath test is highly associated with constipation. The role of methane in slowing transit was shown by Pimentel et al. using an interesting and well-characterized canine model. Briefly, two chronic small intestinal fistulas were created surgically, at 10 cm distal to the bile and pancreatic ducts and at 160 cm from the pylorus (mid-gut fistula). To test the effect of gas on transit, room air or methane was delivered into the distal half of the gut. Luminal methane infusion reduced radioactive marker recovery in all dogs, by an average of 59% compared with room air [19]. If methane modifies gastrointestinal transit time, it is also true, according to other reports, that gastrointestinal transit time can influence methane and gas production. El Oufir et al., in fact, investigated the relations between transit time, fermentation products and hydrogen-consuming flora in healthy humans [7].

Eight healthy volunteers, four methane excretors and four non-methane excretors, were studied over three-week periods during which they received a controlled diet alone and then the same diet with cisapride or loperamide. At the end of each period, mean transit time (MTT) was estimated and an H2 lactulose breath test was performed. Cisapride and loperamide induced changes in MTT but did not affect the number of viable anaerobes per gram of faeces. Cisapride administration induced a significant decrease in MTT and a significant increase in breath H2 excretion, while methane excretion was significantly reduced [17]. With loperamide administration, no significant effect on H2 excretion was observed, but methane excretion changed significantly. The authors concluded that MTT was inversely related to the volume of H2 excreted in the breath test after lactulose ingestion. Methane excretion in breath was higher during loperamide administration, while the volume of exhaled H2 was hardly reduced [7].

Methods for Measurement of Intestinal Gases

Three methods are currently available for the measurement of intestinal gases in clinical settings:

a. in vivo by analyzing rectal air;

b. in vitro by fecal culturing and

c. ex vivo by breath analysis [4].

Breath analysis has a number of advantages compared with the other two. The diffusivity of a gas across the mucosa of the gastrointestinal tract depends on its solubility in water; for a given partial pressure difference, CO2 diffuses much more rapidly than H2, CH4, N2, and O2. The rate and direction of diffusion of each gas is a function of its diffusivity, the partial pressure difference between lumen and blood, and the exposure of the gas to the mucosal surface. H2 and CH4 absorbed from the bowel are not metabolized and are therefore excreted in expired air; breath analysis thus provides a simple means of assessing the volume of these gases in the gastrointestinal tract, because their rate of excretion equals their rate of absorption [20]. Breath H2 excretion is determined by the alveolar ventilation rate and the alveolar H2 concentration. Over the last few years, breath test analysis has attempted to interpret several gases and products not mentioned in this review, but the lack of standardized sampling systems has made the results difficult to interpret.

H2, CO2 and CH4, by contrast, are commonly measured through relatively well standardized procedures and technical instrumentation. Correct measurement of these gases, however, requires consideration of pulmonary physiology, and in particular the assumption that the blood concentration of each gas, which is in equilibrium with its intestinal concentration, is also in equilibrium with its alveolar concentration. Exhaled air is a mixture of alveolar air and ambient air retained in the respiratory dead space. Alveolar air is the part of exhaled air that has been in contact with blood inside the alveoli. Dead space is the volume of inhaled air that does not take part in gas exchange, either because it remains in the conducting airways (anatomical dead space) or because it reaches alveoli that are not perfused or are poorly perfused (physiological dead space) [7]. This volume is approximately 2 mL/kg of body weight; with a normal tidal volume of about 500 mL/breath, roughly the first one third of the exhaled volume is dead space air. Because of the laminar pattern of air flow through the major airways, roughly twice that volume should be exhaled before all of the dead space air is washed out. The problem is even greater in neonates, in whom dead space can represent up to 50% of the tidal volume [9].
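The sampling arithmetic above can be sketched in a few lines. The 70 kg body weight is a hypothetical example; the 2 mL/kg dead-space rule and the "exhale roughly twice the dead-space volume before sampling" heuristic come from the text:

```python
def dead_space_ml(body_weight_kg):
    # Anatomical dead space is roughly 2 mL per kg of body weight
    return 2.0 * body_weight_kg

def discard_volume_ml(body_weight_kg):
    # Because of laminar flow in the major airways, roughly twice the
    # dead-space volume should be exhaled before pure alveolar air is sampled
    return 2.0 * dead_space_ml(body_weight_kg)

weight = 70.0  # kg, hypothetical adult
tidal = 500.0  # mL, typical tidal volume per breath
ds = dead_space_ml(weight)
print(ds, ds / tidal)            # -> 140.0 0.28 (about a third of one breath)
print(discard_volume_ml(weight)) # -> 280.0 (mL to discard before sampling)
```

For a neonate the same functions show why sampling is harder: dead space can approach half the tidal volume.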

Conclusion

Intestinal gas is produced primarily by resident bacteria, by aerophagia, and in gastrointestinal diseases associated with gas production. Nitrogen, hydrogen, methane, oxygen and carbon dioxide are the principal gases within the gut lumen. Diet, species, genetics and individual idiosyncrasies are the chief factors determining the amount of intestinal gas produced by different individuals. A better understanding of the pathophysiology of intestinal gases is essential for improving current strategies to reduce the quantity of intestinal gas and to treat the various diseases associated with bloating and with intestinal gas production and accumulation.


Journals on Nursing

Effect of Nursing Intervention on the Knowledge and Short- Term Utilization of Quality Time Activity by Parents of Children with Behavioral Problems

Introduction

Children are the most precious possession of mankind. They should be nurtured with the utmost care and affection. The greatest gift that parents can give their children is a sense of personal worth. The self-esteem of a child should be more valuable to a parent than achievements in studies, sports or any other field [1]. Behavioural disorders among children are universal, and recent studies indicate a high prevalence rate. The prevalence of behavioural problems in the Western literature has been reported to vary between 5-10% [2]. In India, the prevalence of behavioural problems has been explored by different authors: 36% by Bassa, 9% by Chacko, 10.6% by Raju, and 4.6% by Singh and Guptha (1970) [3]. Sarita Bhargava et al. [4] from Ajmer reported 38.1%, Bhatia et al. [5] from Delhi reported 20%, and Indira Guptha et al. from Ludhiana reported 36.5%.

The home today is smaller. Many mothers have taken up careers to supplement the family income. The flat system in the cities confines the child within four walls and offers little opportunity for companionship and peer groups. Because children have fewer people with whom to share their experiences, parents must work harder to make the home a place where there is fun, activity and a variety of things to do together. In India, children constitute about 40% of the total population. Behaviour disorders are among the most common childhood disorders and can hinder the normal development of children. The present study aimed to find out the effectiveness of a nursing intervention on the knowledge and short-term utilization of quality time activity by parents of children with behavioural problems.

Methodology

A quasi-experimental, one-group pre-test and post-test design was adopted. In this design, a single test group was selected, and knowledge and utilization of quality time activities were measured before the introduction of the intervention. A teaching program on quality time activities was then delivered in four sessions and its effectiveness measured. Differences attributable to the experimental program were determined by comparing pre-test and post-test scores. The sample consisted of parents of children with behavioural problems aged 4-15 years admitted to a child psychiatry center. Either the father or the mother, or both, staying with the child at the time of the study were chosen as the sample. Purposive sampling was used to select subjects on the basis of the inclusion criteria. Participants signed written informed consent after the risks and benefits of the study were explained. Privacy was provided and confidentiality was maintained throughout the study.

Description of Research Tools

a) Socio-demographic and clinical profile

b) A Quality Time assessment questionnaire, prepared for the study to assess parents' knowledge of quality time activity

c) A recording sheet on Quality time activities to record the interactional activities of parents and children

Quality-Time Assessment Questionnaire:

This questionnaire consisted of 44 items divided into three sections to assess parents' knowledge of, and quality time activity with, their children. Section A consisted of 11 explorative questions through which information was collected from the parents regarding quality time. Section B consisted of nine statements to assess the parents' knowledge of quality time activity. Section C consisted of twenty-four statements describing activities that parents normally do with their child.

Description of Nursing Intervention

Each parent attended four sessions of the educational program on alternate days, and each session lasted one hour. In addition, the researcher observed parent-child interaction and encouraged parents to have more fun and other enjoyable activities with their children. The activities carried out included playing indoor and outdoor games, storytelling, discussing general topics and the children's activities with them, and listening to their feelings and interests. Further, parents were instructed to record their activities in detail, mentioning the date, time, duration, etc., in the recording sheet.

Results

Among the subjects (n=30), the majority of fathers (53.4%) belonged to the age group of 36-45 years, and the majority of mothers (46.7%) to the age group of 25-35 years. Most were graduates (fathers 46.6%, mothers 43.3%); 66.7% of subjects were from urban areas, 83.3% had non-consanguineous marriages and 53.3% belonged to nuclear families. The study found that 43.3% of the children were in the age group of 13-15 years, 30% between 10-12 years, 13.4% in the age group of 7-9 years and 13.4% in the age group of 4-6 years. Male children (76.7%) outnumbered female children (23.3%). Before the intervention, 20 (66.7%) subjects said that they had no idea about quality time, whereas 3 (10%) described it as time spent with children in a better and more productive way. After the intervention, 11 (36.7%) subjects said that quality time is time spent with their child in mutually enjoyable activities, while 5 (16.7%) described it as having fun together with their child. There was a statistically significant increase in knowledge (p<0.01) and in quality time activities (p<0.01) following the nursing intervention program. A domain-wise comparison of scores is shown in Table 1.

Table 1: Domain-wise comparison of pre-test and post-test scores.

Discussion

The present study was an attempt to determine the effectiveness of a nursing intervention on the knowledge and short-term utilization of quality time activities by parents of children with behavioural problems, and to develop a package on quality time activities. In India, this is the first study on quality time. A summary of similar studies conducted in this area follows. Bryant & Zick [6] found that dinner conversations were important for the child's development. In their study, mothers spent 44 minutes per day sharing household work with their children and fathers spent about 34 minutes. Bradley and Caldwel [7] emphasized the parents' socio-emotional investment in children. They suggested that the quality of this investment should be manifest in the amount of joy, expressions of affection toward a child, sensitivity and responsiveness to the child's needs, and consistent choices on the parent's part to act in the best interest of the child.

Marsiglio [8] found that paternal engagement activities, that is, time spent in one-to-one interaction with a child in activities such as private talks and playing together, influenced the quality of father-child interaction. Cooksey & Fondell [9] examined the frequency with which parents spent time with their children in general. Fathers were asked how often they spent time with the children in the following activities:

a) Leisure activities away from home

b) At home working on a project or playing together

c) Having private talks

d) Helping with reading or homework.

Results indicated that fathers ate just over half of their breakfasts and dinners with their children and engaged in leisure activities several times per month, but in fewer activities at home. The desired outcome of the present study was achieved by a combination of factors, such as the availability of parents in the ward and their freedom from household or office work. Routine activities carried out in the inpatient unit, such as recreational activities, permission for weekend outings and the picnics arranged by the multidisciplinary team, also promoted positive parent-child interaction. Physical facilities, like the play area and the pleasant atmosphere of the child psychiatry centre, further enhanced the quality of parent-child interaction [10].

Conclusion

Behaviour disorders are among the most common childhood disorders and can affect children's mental health development. The present study showed that planned, structured teaching with the parents of children with behavioural disorders increases their knowledge of quality time and quality time activities. Nurses have ample opportunities to extend health teaching services to parents to improve their knowledge of various issues related to mental health promotion.

Journal on Geo Sciences

Examining the Spatial and Temporal Variability of Soil Moisture in Kentucky Using Remote Sensing Data

Introduction

Soil moisture plays a key role in controlling the exchange of carbon, water, and energy fluxes between the land surface and the atmosphere at local, regional and global scales. It is an important term for the accurate prediction of runoff, infiltration, drainage, soil evaporation and other key variables [1]. It thus affects near-surface climate by changing soil properties such as albedo and thermal capacity. Soil moisture also plays an important role in constraining plant respiration, transpiration, and photosynthesis in many regions of the world [2]. In addition, soil moisture is involved in a number of feedbacks at different spatial scales and plays a major role in climate change projections [3]. For instance, it has been found that up to 40% of heat wave anomalies can be explained by the interannual and seasonal variability of soil moisture, owing to its significant effect on air temperature [4].

Although soil moisture plays an important role in weather, climate and ecosystem function, there is a lack of adequate long-term observations, which challenges efforts to predict soil moisture accurately. Soil moisture measurements have been made using field samples and, more recently, soil moisture sensors, but these methods are labor intensive and capture limited spatial variability. Such point-based observations do not represent the high temporal and spatial variability of soil moisture and are insufficient for regional or global analyses. Satellite imagery, by contrast, is an important tool for soil moisture studies because of its continuous spatial and temporal coverage. Several studies have used remote-sensing-derived surface temperature and reflectance indices to estimate soil moisture [5,6]. The validation of such techniques against ground measurements is important for improving remote sensing soil moisture retrieval.

Since the launch of the Advanced Microwave Scanning Radiometer (AMSR-E), global soil moisture data have been available at a 25 km spatial resolution. Soil moisture from AMSR-E has been calibrated and validated at different sites, with good agreement found between AMSR-E soil moisture and ground data at sites ranging from natural vegetation to crops [7-10]. This consistent agreement between AMSR-E soil moisture and ground observations over a range of meteorological and surface conditions is promising for applying AMSR-E soil moisture data to areas with different environmental conditions. The primary objective of this paper is to investigate the spatial and temporal variability in remotely sensed soil moisture for the State of Kentucky. In particular, the spatial pattern of soil moisture during the vegetation growing season and its relation to vegetation type is explored. The strategy is to compare and analyze any observed trends in satellite soil moisture estimates for different Kentucky biome types. The goal of this paper is to evaluate the trends in Kentucky soil moisture and whether these trends have a possible link to climate change. To answer these questions, growing season soil moisture for 10 years is analyzed.

Methods

Satellite Data

Daily soil moisture data from the Advanced Microwave Scanning Radiometer (AMSR-E) are downloaded for the State of Kentucky from the National Snow and Ice Data Center archive for years 2003-2011 [11-14]. Moderate Resolution Imaging Spectroradiometer (MODIS) combined Aqua and Terra land cover data (MCD12) are downloaded for years 2004-2011 from Earth Explorer. Land cover data are used to subgroup the AMSR-E data for the major Kentucky biomes: forest, cropland, and mixed forest/crop. All land cover classes designated as forest (e.g. evergreen, deciduous) are combined to create a new land cover class named "forest". Since MODIS combined land cover data are available starting in 2004, land cover data for 2004 are used to extract Kentucky biomes for the 2003 AMSR-E data. All satellite images are mosaicked and subset to the State of Kentucky border. Image processing is done in Erdas Imagine (Hexagon Geospatial) and statistical analysis and plotting are done in R (R Core Team).
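The biome subgrouping described above can be sketched as follows. This is an illustrative Python/NumPy equivalent, not the processing chain used in the study (which was done in Erdas Imagine); the function names are invented, and the class codes follow the MODIS IGBP legend (forest classes 1-5, croplands 12, cropland/natural mosaic 14) as an assumption.

```python
import numpy as np

# Assumed MODIS IGBP class codes (MCD12): forest classes 1-5,
# croplands = 12, cropland/natural vegetation mosaic = 14.
FOREST_CLASSES = [1, 2, 3, 4, 5]
CROP_CLASS = 12
MIXED_CLASS = 14

def regroup_land_cover(lc):
    """Collapse a MODIS land-cover grid into three biome codes:
    1 = forest, 2 = crop, 3 = mixed crop/forest, 0 = other."""
    out = np.zeros_like(lc)
    out[np.isin(lc, FOREST_CLASSES)] = 1   # merge all forest subclasses
    out[lc == CROP_CLASS] = 2
    out[lc == MIXED_CLASS] = 3
    return out

def biome_mean(soil_moisture, biomes, code):
    """Mean soil moisture over the pixels of one biome, ignoring gaps."""
    return np.nanmean(np.where(biomes == code, soil_moisture, np.nan))
```

The same masks can then be applied to each daily AMSR-E grid to build the per-biome time series analyzed below.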

Statistical Analysis

Daily AMSR-E data are averaged to obtain monthly soil moisture for years 2003-2011. To generate growing season averages, the monthly soil moisture data are then averaged into nine-year means for January-February-March (Jan-Feb-Mar), April-May-June (Apr-May-Jun), July-August-September (Jul-Aug-Sep), and October-November-December (Oct-Nov-Dec).
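The two averaging steps can be sketched in a few lines. This is an illustrative Python/pandas equivalent (the study's own analysis was done in R), with synthetic daily values standing in for the AMSR-E series:

```python
import numpy as np
import pandas as pd

# Synthetic daily state-average soil moisture, 2003-2011 (values invented).
rng = pd.date_range("2003-01-01", "2011-12-31", freq="D")
daily_sm = pd.Series(np.random.default_rng(0).uniform(0.1, 0.4, len(rng)),
                     index=rng)

# Step 1: daily -> monthly means.
monthly_sm = daily_sm.resample("MS").mean()

# Step 2: monthly -> three-month seasons (Jan-Mar, Apr-Jun, Jul-Sep,
# Oct-Dec), then average each season across the nine years.
quarterly_sm = monthly_sm.resample("QS-JAN").mean()
seasonal_avg = quarterly_sm.groupby(quarterly_sm.index.quarter).mean()
```

`seasonal_avg` then holds one nine-year mean per three-month season, matching the four seasonal composites described above.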

Results and Discussion

The spatial variability in AMSR-E soil moisture is illustrated in Figure 1. Lower soil moisture in Appalachian Kentucky can be related to errors in the AMSR-E algorithm/retrieval caused by elevation and surface terrain, resulting in fewer data collected by AMSR-E. Otherwise, the spatial variability in soil moisture tracks the seasonal variability in precipitation. It is not clear how the spatial variability of AMSR-E soil moisture is related to vegetation types, but some pattern can still be detected. For example, soil moisture in Western Kentucky (dominated by agricultural fields) increases from January to September and then decreases afterward.

Figure 1: Spatial variability of 10 years average AMSR-E soil moisture in Kentucky.

This can be related to irrigation, which is used during the crop growing season and ceases before harvest, typically around September/October depending on the crop type. It is important to remember that AMSR-E soil moisture represents the top 1 cm of the soil, which is highly correlated with precipitation amount and variability. Temporal variability in soil moisture reflects the temporal variability in precipitation, with higher soil moisture during the winter and spring seasons and lower during the summer season (Figure 2). AMSR-E data show a decreasing trend in Kentucky soil moisture after 2009, which can be related to a decrease in precipitation. What is driving this decrease in soil moisture? Is it changes in winter, spring, or summer precipitation? To answer this question, the growing season soil moisture (April-September) is averaged for each of the 10 years (data not shown).

Figure 2: Time series of AMSR-E soil moisture for years 2002-2011.

Analysis reveals that the apparent decreasing trend in Kentucky soil moisture during the vegetation growing season is due to a decrease in July-September AMSR-E soil moisture (Figure 3). This decreasing trend is consistent with the observed decrease in precipitation duration (not intensity) during the growing season in Kentucky, particularly from July to September. Since the shift in precipitation is toward higher rates over shorter time periods, runoff is increasing, leading to a decrease in water infiltration and thus soil moisture. Soil moisture is higher in crops compared to forest ecosystems by about 20% (Figure 4). The seasonal variability in forest soil moisture is similar throughout the nine-year study period. Higher soil moisture is detected by AMSR-E during the dormant season for forest ecosystems (Figure 4).

Figure 3: Time series of average AMSR-E soil moisture for the months of July-September.

Figure 4: Time series of AMSR -E soil moisture for forest (top), crop (middle) and mixed crop/forest (bottom) biomes in Kentucky.

In contrast, lower soil moisture can be detected during the vegetation growing season, reflecting the use of soil water by plants for transpiration. Water interception by leaves can also reduce soil moisture by decreasing the amount of precipitation reaching the forest floor. Crop soil moisture does not show a distinct seasonality, most probably due to human activities such as irrigation (Figure 4). For the mixed crop/forest biome, the soil moisture signal more closely resembles the crop signal in that there is no apparent seasonal cycle (Figure 4). Nevertheless, AMSR-E soil moisture data are capable of detecting the temporal variability in soil moisture for the major Kentucky biomes.

To check the ability of AMSR-E soil moisture data to capture the variability in soil moisture, two sites in Kentucky (an agricultural and a forest site) are compared to AMSR-E soil moisture data. Results show that AMSR-E underestimated observed soil moisture, with better agreement for the months of July to September (Figure 5). AMSR-E is able to capture soil moisture during the dry period at both sites (Figure 5). To a great extent, AMSR-E represents the trend rather than the actual values. The lower variability in AMSR-E soil moisture compared to site data is likely the result of different measurement depths (~1 cm for AMSR-E vs. 5 cm for site data). Lower moisture is expected in the top 1 cm of the soil, as it dries more quickly due to evaporation and infiltration than soil at 5 cm depth. Another reason for this difference is the coarse resolution of AMSR-E (25 km), at which the variability in soil moisture due to texture, vegetation cover, and topography is not well captured. It is important to note that soil moisture varies at very fine spatial scales that can be captured at the site level but are harder to detect in a satellite signal that represents the average of heterogeneous areas (a mix of different land cover types, topography, soil types, etc.).
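The kind of agreement described here, a negative mean bias alongside a preserved trend, can be quantified with two simple statistics. The values below are invented for demonstration, not the Mammoth Cave or Princeton measurements:

```python
import numpy as np

# Invented co-located series: in-situ probe (5 cm) vs. satellite (~1 cm).
site = np.array([0.30, 0.28, 0.22, 0.18, 0.15, 0.17])   # site data
amsre = np.array([0.20, 0.19, 0.15, 0.12, 0.10, 0.11])  # retrieval

bias = np.mean(amsre - site)        # negative value -> underestimation
r = np.corrcoef(site, amsre)[0, 1]  # Pearson r -> trend agreement

print(f"bias = {bias:.3f}, r = {r:.2f}")
```

A strongly negative bias with a high correlation is exactly the pattern reported for AMSR-E against the 5 cm site data: the level is too low, but the drying and wetting sequence is tracked.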

Figure 5: Field soil moisture measurement at 5 cm depth (connected dots) for Mammoth Cave (top) for years 2009-2011 and Princeton field station (bottom) for year 2010. Point data represent AMSR-E soil moisture.

This work demonstrates the capability of remote sensing to capture the spatial and temporal variability in soil moisture and its relation to vegetation types in Kentucky. AMSR-E soil moisture data are capable of detecting the spatial and temporal variability in soil moisture for Kentucky biomes. The temporal variability in soil moisture for forest ecosystems reflects continuous drying during the growing season, which is more apparent toward the end of the growing season due to a decrease in precipitation amounts. The nine-year record revealed a decrease in soil moisture, but a longer continuous record of satellite soil moisture is needed before such a trend can be significantly related to climate change. Evaluating satellite soil moisture products is important for improving our understanding of the spatial variability in vegetation carbon and water cycles.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Journal on Science and Technology

Strengthening Early Warning and Early Action Strategies for Urban Food Security in Kenya

Executive Summary

Kenya is witnessing rapid population growth in its urban centers, estimated at 4.4% per year. It is projected that the number of people living in urban areas will exceed those in rural areas within the next two decades, with the majority of the urban population (60%) living in informal settlements. Due to diverse physiographic conditions, urban areas are even more exposed to various types of risk than rural areas, and these risks are likely to worsen due to climate change. An increasing concentration of population, coupled with extreme events, results in heavy damage to assets, interruptions in business continuity, loss of lives and displacement of populations, which is further amplified by economic and social vulnerability. Informal settlements in urban areas face serious threats from such emergencies, with food security top among the crises facing them.

Those living in the slums earn low wages that are often too uncertain and unreliable to meet their needs. The majority of dwellers, whose earnings range from KSh 6,666 to KSh 12,845, spend between 80% and 100% of their income on food. Unlike those living in rural areas, these households are seriously impacted by global fluctuations in food prices. Additionally, most of them exhibit a shift in diet from the foods consumed in rural areas. Furthermore, food security indicators, thresholds and coordination mechanisms are weak and not as well developed as those in rural areas such as the Arid and Semi-Arid Lands (ASALs), spearheaded by NDMA. This depressing and uncertain state of affairs justified the development of the IDSUE project, spearheaded by Concern Worldwide under the START consortium, with the following objectives:

a) To determine indicators for early detection of humanitarian emergency situations and coping strategies;

b) To develop surveillance systems for detection of early warning signs of a humanitarian emergency/crisis; and

c) To identify thresholds and triggers for action for defining when a situation has reached an emergency/crisis stage.

Despite the impressive work done by the START consortium to develop these indicators and thresholds for food security, a number of challenges may hinder full adoption of this mechanism for addressing urban emergencies. First, policies and strategies are quite unfriendly to the urban poor and have not integrated food security issues, with agriculture budgets at 6%, below the recommended 10%. Secondly, limited resources are dedicated to monitoring food security issues in urban areas by the NCCG and stakeholders. Thirdly, there appears to be a weak coordination mechanism, without a dedicated agency with strong convening and coordinating power similar to NDMA. Finally, the tools and approaches developed by Concern Worldwide may need further testing and research to validate them. It is on these grounds that this policy brief seeks to galvanize the efforts of policy makers, including the NCCG, national government, donors, private sector and CBOs, to ensure the good work by Concern Worldwide and partners is fully adopted in support of urban food security. Such efforts will ensure that mechanisms and strategies for the urban poor and slum dwellers are supported to cope with crises in urban areas and the challenges associated with climate change.

Background to Urbanization and Disaster Risks

Humanity is now half urban and is expected to be nearly 70 percent urban by 2050 [1]. Urbanization is predominantly taking place in cities in developing countries, most notably in Africa and Asia. In Kenya, urbanization is increasing rapidly at about 4.4 percent annually, with an estimated 60% of the urban population living in informal settlements [2]. High population densities, soaring crime, poor sanitation and inadequate health care services contribute to disease burdens higher than those of rural populations [3]. Most of these slum dwellers are powerless and lack personal security, tenure to their land and access to stable income sources [4]. Additionally, sub-standard infrastructure, including housing, and inherent socio-economic inequalities increase the susceptibility of urban residents to a variety of emergencies that seriously undermine their social capital and life expectancy [5]. Such rapidly growing urban settlements could gradually turn into crucibles of death from natural and man-made disasters unless targeted policies and programmes are enacted to enhance their resilience.

The common shocks that urban dwellers are exposed to include fires, floods, security risks, water shortages and rising food prices. Mortality and malnutrition levels are routinely used to detect when a disaster situation has entered a crisis phase and to trigger humanitarian response in rural settings. However, the use of these indicators and corresponding thresholds poses challenges in urban settings due to high population densities and different livelihoods and coping strategies. It is imperative therefore to develop a surveillance system with indicators that reveal stress in the early stages of a crisis, to help humanitarian and development agencies activate early action [6]. Hunger, food insecurity and negative coping strategies have been noted as strong outcomes of slow onset emergencies in informal settlements. Urban crises disproportionately affect the poorest, particularly female-headed households, youth, children, marginalized groups, people living with AIDS, the elderly and stigmatized ethnic groups [7].

Urbanization and Food Security

Food and nutritional security are basic needs not only for human survival but also for economic productivity and human development. It is widely accepted that there are four dimensions to food security: availability, access, utilization and stability. While each dimension is necessary for overall household food security, they may have different weightings, particularly in urban settings. The main trends impacting urban food security include demographic changes, diversification of diets, high cost of farm inputs, poor investment in agriculture, natural disasters, climate change and unfriendly policies and legislation toward farmers [8]. Promoting food security directly contributes to the realization of human rights and fundamental freedoms as provided by the Kenya Constitution 2010. It also contributes directly to the achievement of two Sustainable Development Goals (SDGs):

a) SDG 1: End poverty in all its forms everywhere; and

b) SDG 2: End hunger, achieve food security and improved nutrition and promote sustainable agriculture.

For many years, emergency interventions focusing on food security and nutrition have had a predominantly rural focus, and there is a need to broaden this approach to accommodate the needs of fast-growing urban populations. The 2007-2008 global food crisis that occurred in most cities across the world highlighted and exposed the vulnerability of the urban food system [9]. In Kenya, the 2007-2008 post-election violence contributed further to the deterioration of living conditions of residents in urban areas, underpinning the need to develop and strengthen food security surveillance systems in urban set-ups. These crises revealed that the urban poor and other minorities were hardest hit, making it difficult for them to maintain a decent standard of living [10] (Figure 1).

The urban food security environment presents several challenges that differentiate it from a rural context. First, urban dwellers purchase almost all their food from markets. Secondly, the majority of urban dwellers exhibit a shift in staple foods from maize, sorghum and traditional foods to rice and wheat. This exposes the urban poor to international markets more than their rural counterparts, as their diet is controlled more by global markets and by many actors (Figure 1). Urban residents are also exposed to changes in global events, such as Foreign Direct Investment (FDI), which influence the remittances that some urban dwellers depend on. Lastly, the income sources of the urban poor are casual, insecure, uncertain and low paying, posing a huge risk to food access. Thus, when emergencies strike, urban dwellers are impacted even more negatively than their rural counterparts, resorting to negative coping strategies including theft.

Figure 1: Illustration of food supply chain in urban setups (Adapted from Teng et al., 2010).

Urban Humanitarian Response

Humanitarian response to urban emergencies is not new. However, the approaches and tools are relatively new and continue to evolve. Recent experiences in urban humanitarian response emphasize the need for engagement through a strong coordination mechanism with a wide spectrum of stakeholders, such as the national and local/county governments, private sector, civil society, donor community, the media, multilateral organizations, CBOs and local communities. A key challenge affecting the effectiveness of urban humanitarian response has been the lack of consensus on indicators for detecting when chronic poverty has tipped into a crisis [11]. A clear methodology, with indicators appropriate for urban set-ups to monitor the situation and trigger timely humanitarian response once a certain threshold is reached, is essential. Such a mechanism should consider urban peculiarities, including the high population densities that can obscure the actual humanitarian situation (Figure 2).

Figure 2: Hidden crisis in urban areas.

Equally essential is putting in place a credible people-centered early warning system (EWS) [12] to support early humanitarian response in the event of urban crises. The EWS should generate and disseminate timely and meaningful information to allow populations at risk and stakeholders sufficient lead time to take appropriate actions to mitigate the impacts of disaster [13]. For the early warning information to be effective, a prepositioned contingency fund and response plan should be rolled out as soon as the indicators reach a certain threshold. An effective multi-stakeholder coordination mechanism to champion delivery of the various interventions by the government and humanitarian actors should also be established. This is vital to enhance the efficiency and effectiveness of humanitarian operations, limit duplication of efforts and clarify the roles and responsibilities of the various actors. Often, coordination, particularly during emergencies, has proved to be a challenge, leading to unacceptable delays and inefficient delivery of interventions.

The humanitarian IDSUE research project has made remarkable progress in developing surveillance systems, strengthening institutional capacities and developing indicators and thresholds for urban emergencies [14]. Overall, the study found that the majority of community members living in the informal settlements are food insecure, with up to 64 percent of sampled households being food insecure [15]. This pioneering work needs to be scaled up by the NCCG, national government, private sector, donors and other stakeholders as a mechanism for tackling food insecurity in urban areas. The study has contributed to the development of indicators and thresholds for phase classification of urban crises, as shown in Table 1 below.

Table 1: Thresholds for early warning indicators (Source: IDSUE survey data, 2015).

A number of legislative and policy frameworks exist at global, regional and national levels in support of urban food security and agriculture. At county and national levels, the IDSUE project spearheaded by Concern Worldwide under the START Network, the NCCG Disaster & Emergency Management Act, the National Disaster Management Bill and the National Food and Nutrition Security Policy are driving the agenda to ensure the right to food is achieved. However, in the event of emergencies, particularly in urban set-ups, these policies and legislative frameworks do not clearly spell out mechanisms for ensuring the nutritional and food security needs of the urban poor are realized.

At the regional level, efforts supporting food security include the IGAD Drought Disaster Resilience and Sustainability Initiative (IDDRSI) [16], a regional initiative for reducing vulnerability and building resilience led by IGAD; the Comprehensive Africa Agriculture Development Programme (CAADP) Framework [17]; and Agenda 2063. The Africa Risk Capacity (ARC) [18] of the AU is supporting agricultural insurance mechanisms, thus promoting food security and agricultural productivity. Agenda 2063, which builds on existing continental initiatives, is a strategic framework of the AU to accelerate socio-economic transformation, including food security and agriculture. The CAADP committed countries to increase agricultural productivity by raising public investment in agriculture to at least 10% of their national budgets by 2008. The national and county governments need to capitalize on these initiatives to ensure appropriate programs and strategies are developed to combat urban food insecurity.

The 10% CAADP commitment is not integrated into NCCG legislation on urban agriculture and food security, as the current agriculture budget allocation is only 6% of the total. These targets need to be cascaded to the city counties of Nairobi, Mombasa and Kisumu to ensure agricultural budgets are at least 10 percent of the allocation from the national government to support food security (Figure 3). However, there are critical gaps in these strategies and legal frameworks. Generally, there are limited provisions in these documents for engaging with urban communities. The policy frameworks also assume homogeneity in the populations being targeted, yet even in informal settlements urban populations are not homogeneous; there are huge differences among residents. Additionally, there seems to be no concrete plan for funding the implementation of these strategies and policy frameworks. There is also limited formal engagement with political leaders and decision makers, which makes it hard to place urban emergencies and food insecurity high on the political agenda.

Figure 3: A glance at policies, legislation and strategies in support of urban food security.

Cost Effectiveness of Early Humanitarian Response

Most slow onset emergencies, such as drought and food insecurity, are generally predictable. The major problem has been the long time taken to mobilize humanitarian aid from donors and governments to respond to the crisis. Late humanitarian response often leads to loss of lives, damage to livelihoods, depletion of assets and erosion of coping capacity, leading to destitution of affected households. For instance, it is estimated that the 1984 famine in Ethiopia caused half a million deaths, while the 2011 drought in the Horn of Africa resulted in 50,000 to 100,000 deaths [19]. When food aid is provided in time, mortality rates decline by up to 40% [20], particularly among children under 5. With regard to economic impacts, a drought every 5 years lowers GDP by up to 4% per year. Reduced food intake, common among low-income households during shocks, is associated with increased mortality and contributes to 33% of childhood deaths among under-5s in Africa. In the absence of rapid early response, mortality rates increase substantially.

Further studies by ARC in Africa have shown that a total cost of USD 81 million is lost in the event of a high-magnitude drought, a slow onset disaster. This is equivalent to USD 221 million over 5 years, or USD 44 million per year [21]. In urban settings the cost may be much lower due to the nature of livelihoods in slums and peri-urban areas. With regard to food aid alone, it is estimated that the humanitarian community spends about USD 54 per person per year on a high-impact slow onset emergency. At the macroeconomic level, the ARC assessment indicates that slow onset emergencies reduce GDP by 4 percent per year for a 1-in-10-year drought [22]. The study concluded that every US$ 1 spent early saves US$ 4.40 spent after the crisis. At the household level, ARC analysis reveals that nearly US$ 1,300 is lost per household in three months as a result of slow onset emergencies in the Horn of Africa. Early intervention is thus critical for saving lives, protecting against economic downturn and preventing distress sale of assets at the household level.

Implications, Conclusions and Recommendations

The pioneering research work by Concern Worldwide under the START consortium on developing indicators and thresholds and strengthening surveillance systems for urban emergencies is showing promising results. It is highly relevant and strategic given the changing livelihoods and demographic needs of urban dwellers. However, there is a need to move this work forward to focus more on the socio-economic contexts of slow onset emergencies for the urban poor. The tools developed may need further testing and refinement by academia and research organizations to suit evolving issues. Comparing the performance of these tools and approaches with similar research work undertaken in Ethiopia would also help validate them. Further studies are needed on urban vulnerability and hazard assessment and on the linkages between weather forecasting and contingency planning to enhance early response mechanisms within a DRR framework.

Conclusion

Kenya is witnessing rapid growth in urban centers, whose population is projected to exceed the rural population by 2050. The majority of these urban dwellers live in informal settlements characterized by inhuman and deplorable living conditions without adequate access to food, health care, water and sanitation, and housing, and are at greater risk of food insecurity and climate change related disasters. A wide range of humanitarian and development actors operate in urban areas without clear terms of reference, impeding their effectiveness and the delivery of their assistance. Moreover, the existing NCCG Disaster and Emergency Act lacks a policy and a plan to operationalize and support its full implementation. The coordinating structures are unclear and need urgent review to enhance the effectiveness of EWEA mechanisms.

The current IDSUE work carried out in the Nairobi, Mombasa and Kisumu slums is shifting the humanitarian context from the familiar rural emergency response to an emerging urban response mechanism. This ground-breaking, innovative approach has yielded good results in strengthening systems and developing tools, indicators, thresholds and approaches tailored to the peculiar livelihoods and demography of informal settlements. Such initiatives should be integrated into stakeholder actions and policy frameworks in order to safeguard the lives and livelihoods of residents of informal settlements and support progress toward achievement of the SDGs and Vision 2030. The current state of affairs in monitoring slum crises may not be sustainable, as it is donor aided with little resourcing from the NCCG and national government to sustain this important work. The role of the private sector in the urban food supply chain [23] (Figure 1) has also not been fully exploited, although it plays an important role in the urban food security system and building resilience.

Recommendations

Think and Act Ahead of the Needs of Rapidly Growing Urban Populations: Whereas there is strong interdependency between rural and urban areas, the urban food security system merits distinctly greater attention from governments, the private sector and donors due to rapid urbanization. There is a need to strengthen policies and strategies supporting the urban food system that address the growing problem of food insecurity in informal settlements. This shall involve reviewing the NCCG Disaster Act and other urban agricultural policies in the city counties of Mombasa, Nairobi and Kisumu to comply with the 10% agriculture budget requirement, which currently stands at about 6%, to support urban food security. Experiences and lessons learnt from many years of drought response in the ASALs reveal slow and sometimes unacceptable delays in timely response to emergencies as a result of slow mobilization of aid and inefficient mechanisms.

Promote a Strong DRM Coordination Mechanism, Strategies and Policy Frameworks as Key to Enhancing Early Response to Urban Emergencies: A robust coordination mechanism and institutional arrangements that are well resourced and work collaboratively with various stakeholders in all the city counties are critical to managing slow onset emergencies. In addition, there is a need to develop a relevant policy and plans to operationalize the NCCG Emergency and Disaster Act, as rightly pointed out in Part III, Article 8 of the Bill. The Disaster Act should be reviewed to firmly integrate food security issues and EWEA mechanisms, reinforce coordination arrangements, and clarify the roles and responsibilities of the many actors.

Support Periodic Monitoring, Information Management and EWS to Enhance Early Response Mechanisms in Urban Areas: There is an urgent need to adopt a tested and credible early action framework to guide early response to slow onset humanitarian crises in urban areas, especially slums, modeled on the initiatives being spearheaded by Concern Worldwide under the START consortium. These mechanisms need to be integrated into the city counties' planning and budgeting processes to ensure they are fully embedded in county business processes.

Adapt Tools and Approaches from the UEWEA Project to Strengthen Urban EWEA Systems: During emergencies, humanitarian agencies have often focused on rural areas, leading to well-established tools and systems for monitoring rural crises. Adoption of UEWEA tools and approaches by stakeholders through an aggressive campaign is essential. However, these tools require further refinement and testing through research, which calls for continued support from donors and increased commitment from both national and county governments.

Mobilize Financial Resources to Strengthen Urban Food Security Systems, Including Monitoring of Risks: Adequate financial resources and human capacity are needed to periodically collect, analyze and forecast early warning information for prompt action before a crisis turns into an emergency. The city counties of Mombasa, Nairobi and Kisumu should create a specific budget line for this important activity, clearly spelt out in the Act and the NCCG budget.

Building Resilience and Poverty Reduction Initiatives is Key to the Sustainability of Urban Areas: Urban emergencies are a serious setback to progress in attaining Vision 2030 and the SDGs, including the Sendai Framework for Action and regional initiatives. Climate change is expected to escalate the impacts of slow onset emergencies on the urban poor. It is essential for the county and national governments, supported by the private sector and the vibrant civil society, to invest in long-term resilience building efforts, which are more cost effective than humanitarian actions.


Journal on Medicine

Body Mass Index Variation by Even a Single Meal!

Opinion

The prevalence of obesity has increased immensely around the world in both adults and children. The World Health Organization (WHO) estimates that at least 1 billion people are overweight, and 300 million of these are obese [1]. The rising prevalence of obesity merits the need for accurate methods of assessing adiposity. There are now many measures of obesity, anthropometric and otherwise [2]. Evidence from recent epidemiological studies has led to the advocacy of waist circumference (WC) and Body Mass Index (BMI) as easy-to-use, low-cost, yet reliable measures of obesity [3,4]. BMI provides a simple numerical scale for body status that is often applied in population studies. In the medical literature, BMI is reported as a variable (dependent or independent) and is also used for description and classification of groups or populations. However, some studies apply BMI and WC based on self-reported body weight and height without any valid protocol; this subjective approach may lead to inaccurate data [5]. Here, I am writing this letter to brief those involved in research about my own study, in which body weight changes following a single meal affected BMI and WC values.

We conducted a cross-sectional study on 120 students of Golestan University of Medical Sciences (GOUMS) in 2015; all participants were healthy college students with a mean age of 19 ± 3 years. Body weight and waist circumference were measured before and after a meal according to the standard World Health Organization guideline [6]. BMI was calculated as mass (kg)/[height (m)]² and compared using a paired Student's t-test. We found statistically significant differences in body weight, WC and BMI values before and after a meal (P<0.05): eating increases body weight, so measuring BMI, and even WC, immediately afterward overestimates these indices. We also observed that the placement of the measuring tape on the body is crucial to a correct WC reading; measuring WC at three different points (below, on, and above the anterior superior iliac spine (ASIS)) yielded three different values.
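The BMI calculation and paired comparison described above can be sketched as follows; note that the weights, height, and sample size below are hypothetical illustrations, not the study's data:

```python
from math import sqrt
from statistics import mean

def bmi(weight_kg, height_m):
    """Body Mass Index: mass (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def paired_t(before, after):
    """Paired Student's t statistic for matched before/after samples."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    d_bar = mean(diffs)
    # sample standard deviation of the paired differences
    sd = sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (sd / sqrt(n))

# Hypothetical pre-/post-meal weights (kg) for three subjects of height 1.70 m
pre_kg = [60.0, 72.5, 80.0]
post_kg = [60.6, 73.1, 80.5]
t_stat = paired_t([bmi(w, 1.70) for w in pre_kg],
                  [bmi(w, 1.70) for w in post_kg])
```

Because every post-meal weight is slightly higher, every paired BMI difference is positive and the t statistic is positive; in practice a library routine such as `scipy.stats.ttest_rel` would also supply the P-value.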

In most studies applying BMI and other anthropometric indices, the fasting condition for body weight measurement appears to be ignored, or at least not clearly reported, during BMI calculation, presumably because its effect is assumed to be too small to bias BMI categorization. We suggest that the measurement site and the time since the last meal be standardized in any protocol for BMI and WC measurement. Changes, although slight, relative to cut-off points can move BMI and WC values into another category, cause analysis mismatches, and remarkably change the final results, leading to interpretation bias. Because such anthropometric indices are generally used in population studies, very small changes can produce profound consequences and thus inconclusive or wrong interpretations.

Journal on Medical Sciences

Hand Hygiene: A Quality Improvement Project

Research Question

When provided with educational materials on hand hygiene, is there an increase in knowledge among healthcare professionals in acute care settings?

Problem Statement

According to the World Health Organization (WHO), health care-associated infections (HAIs) cause major problems for patient safety and health promotion [2]. The impacts of HAIs include prolonged hospital stays, long-term disabilities, increased resistance of microorganisms to antimicrobial treatments, financial burdens, high costs of patient care not reimbursable by insurance agencies, and mortality [2]. The risk of patients developing HAIs is universal and exists in every healthcare facility and system worldwide [2]. Studies estimate that 1.4 million patients throughout the world are affected with an HAI at any one time [2]. In the United States (U.S.) alone, Haverstick et al. [3] report on research revealing that about 1 in 25 patients in the acute-care setting will develop an HAI during their stay. Data from 2011 show that about 700,000 patients had an HAI in the U.S. and 10% of those patients died from complications related to the HAIs. Many studies, completed by numerous organizations including the National Institutes of Health (NIH) [4], have shown that lack of hand hygiene among healthcare professionals contributes to most HAIs in patients [2]. Prior studies have shown that hand hygiene compliance can be hindered by unsafe patient-to-nurse ratios, skin irritation from antimicrobial agents, and lack of knowledge. Persistent staff education regarding practices can be a critical measure in reducing HAIs [5]. Therefore, the purpose of this study is to provide educational materials on hand hygiene to healthcare professionals on acute care units in hospitals, with the expectation that this will increase knowledge of the importance and proper technique of hand hygiene and thereby reduce HAIs.

Florence Nightingale

Florence Nightingale is one of the most recognized names in nursing. Nightingale was born in 1820 in Italy to a wealthy British family, who, in 1844, were angry when she told them of her decision to become a nurse. Nightingale became known for her work in the field during the Crimean War. She became known as the "Lady with the Lamp" because of her night rounds tending to wounded soldiers [6]. Nightingale started a Sanitary Commission after she pointed out that the unsanitary conditions of the soldiers were a major cause of death. Her work reduced death rates from 42% to 2% [6]. Hand hygiene compliance among health care professionals today remains important in preventing the spread of infections.

Literature Review

A comprehensive literature review was conducted through multiple databases and search engines, including Google Scholar, EBSCO Host, and the CINAHL nursing database. Inclusion criteria limited the collection to articles published between 2012 and 2017. Search terms included "hand washing, prevention of illness, flu season, and hand washing compliance." With infections today adapting and becoming stronger, the need for protection in the hospital setting is at an all-time high. Hand hygiene is an easy and very efficient way of preventing the spread of infections.

Hand Hygiene: Past to Present

Ignaz Semmelweis, a Hungarian physician of ethnic-German ancestry, is known as the "father of hand hygiene." Semmelweis, along with other colleagues, established that hospital-acquired infections were transmitted through the hands during activities with patients [2]. He made this discovery in 1847 after witnessing alarmingly high mortality from puerperal fever at an obstetric clinic where he held the position of house officer. Semmelweis proposed that all healthcare professionals wash their hands with a chlorinated lime solution before all patient contact; as a result, the mortality rate at the obstetrics clinic fell dramatically to three percent and subsequently remained low [2]. Since Semmelweis's initial hand hygiene implementation, hundreds of studies and investigations have been conducted to provide evidence-based practice recommendations on hand hygiene and the prevention of hospital-acquired infections.

In the 1980s the first national guidelines for hand hygiene were published, and in 1995 and 1996 the Centers for Disease Control and Prevention (CDC)/Healthcare Infection Control Practices Advisory Committee (HICPAC) recommended that either antimicrobial soap or a waterless antiseptic agent be used to cleanse the hands of all healthcare professionals entering and exiting patient rooms, to decrease the spread of multidrug-resistant pathogens [2]. In 2002, HICPAC released guidelines defining alcohol-based hand rubbing as the standard of care for hand hygiene practices in healthcare settings, with hand washing reserved for specific situations only. The current CDC recommendations for hand hygiene in healthcare settings include decontaminating hands with healthcare facility-approved antiseptics when hands are visibly soiled, routinely, before direct contact with a patient, before donning and after removing gloves, and when performing any healthcare-related procedure on a patient.

Hand Hygiene Barriers

Although 100% hand hygiene compliance may seem like a straightforward and effortless task, a variety of challenges have been recognized as hindrances to accomplishing this objective. WHO [2] presented common barriers to hand hygiene including "skin irritation caused by hand hygiene agents, inaccessible hand hygiene supplies, interference with HCW [health care worker]–patient relationships, patient needs perceived as a priority over hand hygiene, wearing of gloves, forgetfulness, lack of knowledge of guidelines, insufficient time for hand hygiene, high workload and understaffing," among other issues. Kirk et al. [7] performed a cross-sectional examination of health care workers using a survey inquiring about knowledge, attitudes, and self-reported practice of point-of-care hand hygiene. A convenience sample of 200 health care providers in the United States and 150 in Canada was chosen for the study; half of the respondents were physicians and half were nurses. Forty-one percent of respondents listed "dispensers/sinks not in a convenient location," 36% reported "being busy," and 32% reported "products dry our hands" as barriers. Increased workload and crowding were recognized as main factors in low hand hygiene compliance in an observational study performed over 22 months in a 40-bed emergency department in Toronto, Ontario, Canada [8].

Although that study is limited to a single emergency department, with the potential bias of an observational design, the theme of increased workload as a barrier to hand hygiene is valid. In a society involving technology in all facets of the health care system, it is important to comment on mobile phone use as a potential barrier to proper hand hygiene. A study performed by Mark et al. [9] at a hospital in Northern Ireland involved swabbing 50 mobile phones for bacterial growth while simultaneously administering a questionnaire investigating cell phone usage among staff. Sixty percent of the phones yielded bacterial growth. The questionnaire found that 45% of participants never wash their hands after cell phone usage and 63% report never decontaminating their phone. In addition, 57% stated that if their phone were proven to be contaminated, this would change their hand hygiene practice when using their device [9]. Despite these barriers, is it possible to educate hospital staff about the overwhelming importance of proper hand hygiene so as to increase hand hygiene compliance?

Methodology

A 13-question quality improvement survey was developed to assess healthcare professionals' baseline knowledge of hand hygiene practices. The survey included 10 knowledge questions and three demographic questions. This tool was developed using various resources, including the National Institutes of Health (NIH) [4], Healthy People 2020, the CDC, and the WHO. The survey served as a pre-test to assess the baseline knowledge of healthcare personnel. It was sent via email to nurses, physicians, mid-level providers, and nursing assistants working on medical-surgical units and/or intensive care units. The research took place in three locations: Bennington, Vermont; Norfolk, Virginia; and Washington, D.C./Maryland. After permission was obtained from the nurse manager in each region, an email was sent to approximately 75 people at each location [10]. This email contained information on the quality improvement project, a consent letter, a link to the survey tool, and contact information for the researchers. Finally, participants were asked to complete the pre-test in anticipation of receiving an educational PDF file (poster) and a post-test to be sent out in the near future.

An email containing a one-page educational PDF (poster) attachment and a link to the post-test was sent to the same sample population. The educational PDF file (poster) contained brief sentences with visual cues displaying information on hand hygiene. The file was developed to provide facts to participants to aid completion of the post-test and to improve their baseline knowledge. See Appendix B for the educational PDF file (poster). The post-test contained the same 10 knowledge questions and three demographic questions as the pre-test. The pre- and post-tests were constructed using SurveyMonkey, allowing for analysis of results from both surveys.

Results

The pre-test survey was completed by 96 healthcare professionals: two doctors, three nurse practitioners (NPs), 74 registered nurses (RNs) and 17 certified nursing assistants (CNAs). The post-test was completed by 45 healthcare professionals: two doctors, one NP, 37 RNs and five CNAs. In the pre-test, 60 participants worked in medical-surgical units and 36 in ICUs; in the post-test, 28 worked in medical-surgical units and 17 in ICUs. Participants from Maryland and Washington, D.C. totaled 35 for the pre-test and 13 for the post-test; participants from Vermont totaled 39 and 20, and participants from Virginia totaled 22 and 13, respectively. Because this research demonstrated an enhancement in hand hygiene knowledge among healthcare providers, one may assume that these individuals will be more motivated to be hand hygiene compliant (Figure 1).

Figure 1: Number of individuals who participated in each test.

Limitations

Several limitations were noted after analyzing the data. One limitation was that there was no way to track whether the same participants completed both the pre- and post-survey. Another is that when a participant completed the post-survey, there was no guarantee of a secure screen; the participant could have referred directly to the educational materials while completing the post-survey, skewing the results. Finally, there was participant attrition: 96 individuals completed the pre-test, but only 45 individuals completed the post-test (Figure 2).
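The attrition noted above can be quantified directly from the reported counts; a minimal sketch:

```python
# Participant counts reported in the results (pre-test and post-test)
pre_n, post_n = 96, 45

completion_rate = post_n / pre_n      # fraction retained through the post-test
attrition_rate = 1 - completion_rate  # fraction lost between the two surveys

print(f"completion: {completion_rate:.1%}, attrition: {attrition_rate:.1%}")
# prints: completion: 46.9%, attrition: 53.1%
```

An attrition rate above 50% is large enough that the post-test sample may not be representative of the pre-test sample, which reinforces the limitation described above.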

Figure 2: 96 individuals completed the pre-test; 45 individuals completed the post-test.

Recommendations for Future Research

After analyzing our data, it became obvious that physician and mid-level provider participation was lacking. To include more of these providers, a different recruitment approach should be considered. The surveys were sent out via email; perhaps most physicians and mid-level providers do not frequently check their email, limiting their participation. An approach to recruiting physicians and mid-level providers in future studies might include using text messages or approaching these providers directly. Furthermore, this study analyzed only hand hygiene knowledge improvement among healthcare professionals; future studies should therefore focus on identifying whether an increase in knowledge results in improved hand hygiene compliance.

Conclusion

Hand hygiene is an inexpensive and effective way of preventing the spread of infections and promoting the safety and health of our patients. The purpose of this quality improvement project was to increase knowledge of the importance and proper technique of hand hygiene in order to reduce HAIs. When provided with educational materials on hand hygiene, healthcare professionals in acute care settings showed an increase in knowledge. After a pre- and post-test survey analyzing the hand hygiene knowledge of healthcare professionals before and after the educational material was provided, the data showed an increase in knowledge regarding hand hygiene: the average score rose from 51% on the pre-test to 75% on the post-test, a substantial gain in knowledge of the importance of hand hygiene. The goal of this project was to raise awareness of the importance of hand hygiene in preventing the spread of nosocomial infections.
