Open Access Journals on Nursing

Psychosocial Stressors among Suicide Attempters Attending JIPMER Hospital Puducherry

Introduction

Living life to the fullest is a significant challenge faced by most people in this world. The value of life is determined by how people give meaning to it; each person is a part of life [1]. All human beings periodically experience psychological burdens, pain, and stressors during their lifetime. Having transient thoughts of wanting to die may be a natural response to emotional pain, and in the midst of psychological pain, suicide can become a gripping and viable means of escape [2]. Suicide may then be considered both a coping mechanism and a failure to cope [3]. Suicide is the act of purposefully killing oneself: in broad terms, an act is a suicide if a person deliberately brings about his or her death in a situation where others do not coerce him or her into the action [4]. Suicide is not a diagnosis or a disease; it is a behaviour that should alert us to an underlying problem, difficulty, or disorder [5].

According to WHO estimates, approximately one million people die by suicide globally each year, a rate of 16 per 100,000, or one death every 40 seconds. Suicide attempts are up to 20 times more frequent than completed suicides; notably, a prior suicide attempt is the single most potent risk factor for suicide death in the general population. Suicide is one of the three leading causes of death in the 15-44 age group [6]. Although suicide rates have been highest among elderly males, rates among young people have been growing [7]. Mental health disorders (particularly substance abuse and depression) are associated with 90% of all cases of suicide. Death results from many complicated socio-cultural factors and is more likely to occur during periods of individual, family, and socioeconomic crisis [8]. The highest number of suicide deaths (16,927) was reported in Tamil Nadu in 2012, accounting for 12.5% of total suicide deaths in the country.

Pondicherry reported the highest rate of suicide (36.8), followed by Sikkim (29.1), Tamil Nadu (24.9), and Kerala (24.3). The male-to-female ratio of suicide victims for the year 2012 was 66.2:33 [9]. Suicide is a complex phenomenon involving biological, social, psychological, cultural, and environmental factors. These stress factors vary from person to person and from time to time, and their early identification is the key to reducing deaths due to suicide. A suicide attempt is a cry for help to the environment; repeat attempts are reported in more than 42% of cases [10]. According to a London study by the Royal College of Psychiatrists, the primary triggers for self-harm were social and family issues and relationship break-ups [11]. A study conducted in Fiji among clients who had attempted suicide found that the major stressor was interpersonal loss (69%), followed by family instability (36%) [12]. A study done in Belgium observed that 42% of the victims had academic failures, and 36.8% of them had criminal offences [13].

A multicentre WHO study in Europe concluded that 94.1% of suicide attempters had a history of relationship conflicts, 79.2% reported a death or loss, and 65.5% reported stress due to physical abuse [14]. Suicide is a preventable phenomenon; many of the psychosocial stressors contributing to a suicide attempt can be mitigated with timely intervention [15], and understanding these stressors contributes substantially to future prevention strategies. Studies assessing stressful life events and suicidal intent have been done nationally and internationally [16-19], but they mainly focus on one aspect of the stressors. The present study focuses on stressful life events as well as day-to-day stressors, attempting to assess all areas of psychosocial stress in suicidal clients.

Methodology

This cross-sectional descriptive survey aimed to assess the psychosocial stressors among suicide attempters. The study was conducted in the medical wards and the crisis intervention clinic of Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER) hospital, Puducherry. The main objective was to assess the psychosocial stressors and suicide intent among suicide attempters and to identify any association of these with demographic and clinical variables. The study population comprised patients admitted to the medical ward or attending the crisis intervention clinic in the psychiatric OPD after a suicide attempt at JIPMER hospital, Puducherry. The sample consisted of 50 suicide attempters, selected through convenience sampling from those who fulfilled the inclusion and exclusion criteria during the 6-week study period. Patients above 18 years of age with an attempted suicide, of either sex, who could comprehend and speak English or Tamil were included.

Medically unstable patients were excluded from the study. The research instruments used were the Presumptive Stressful Life Events Scale [20], the Revised Daily Hassles Scale [21], and Beck's Suicide Intent Scale [22]. Data were collected through direct interviews. All three instruments have been validated in the Indian population and standardized in international and national settings. The reliability of the Presumptive Stressful Life Events Scale was assessed in the Indian population and found to be satisfactory (0.8). Reliability studies of Beck's Suicide Intent Scale showed that the internal consistency of the total score was acceptable (0.81). The Daily Hassles Scale was found to have good internal reliability (Cronbach's alpha 0.88). Approval to conduct the study was obtained from the JIPMER Scientific Committee as well as the Institute Ethics Committee (Human Studies), JIPMER (PGMRC/MHN1/2014). The investigator approached the study subjects with a brief self-introduction, and written informed consent was procured from the subjects.

Psychosocial stressors were assessed using the Presumptive Stressful Life Events Scale and the Revised Daily Hassles Scale. The scales were explained to the subjects in the language convenient to them, and assistance was given to those who required it. Suicide intent was assessed with Beck's Suicide Intent Scale, a clinician-rated scale, through the direct interview method. Patients' privacy was safeguarded during data collection, and confidentiality was maintained throughout the research process. Statistical analysis was done using SPSS 20. Demographic and clinical variables were analysed with descriptive statistics, as were patients' mean stress scores and mean suicide intent score. The chi-square test and the independent t-test were used to find associations between socio-demographic and clinical variables and psychosocial stressors.
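The association analyses described above (chi-square for categorical variables, independent t-test for group comparisons of stress scores) can be sketched as follows. This is a minimal illustration using made-up numbers, not the study dataset, and assumes SciPy is available in place of SPSS:

```python
# Illustrative sketch of the analyses described above, using SciPy.
# The numbers below are hypothetical, NOT the study data.
import numpy as np
from scipy import stats

# Chi-square test of association between a demographic variable
# (e.g., sex) and suicide intent category (low/medium/high),
# expressed as a contingency table of counts.
contingency = np.array([
    [10, 12, 0],   # male:   low, medium, high intent
    [13, 14, 1],   # female: low, medium, high intent
])
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

# Independent t-test comparing mean stressor scores between two
# groups (e.g., employed vs. unemployed attempters).
employed_scores = [180, 210, 165, 195, 220]
unemployed_scores = [240, 255, 230, 270, 248]
t, p = stats.ttest_ind(employed_scores, unemployed_scores)
print(f"t = {t:.2f}, p = {p:.3f}")
```

With real data, each row of the contingency table would be the observed counts for one level of the demographic variable, and the score lists would hold per-subject PSLE or DHS-R totals.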

Results

Among the 50 participants, females accounted for 56% (n=28) of the sample. Most were from nuclear families (58%) and were married (58%). Unemployed suicide attempters accounted for 58% of the sample, and 74% hailed from rural areas (Table 1). Alcohol use was reported by 28% of the subjects, and only 10% had any history of psychiatric illness. Chemical poisoning was the mode of attempt in the majority of the attempters, and only 10% had a previous history of suicide attempt (Table 2). The most frequently reported life event was family conflict, 31 (62%). The second most common event was financial loss or problems, 25 (50%), followed by alcohol or drug use by a family member, 20 (40%), and marital conflict, 19 (38%). Change in sleeping pattern was a stressful event for 18 (36%) subjects, while 14 (28%) reported self or family unemployment as a stressful event. Conflict with in-laws was a stressful event for 8 (16%) subjects, and 6 (12%) identified childlessness as a stressful event (Figure 1). Use of alcohol, 9 (18%), was the most frequently reported daily hassle. Troubling thoughts about the future, physical illness, and overload with family responsibilities were equally common hassles, each reported by 6 (12%). At least 5 (10%) of the sample reported hassles related to responsibilities, lack of money for food, television, and menstrual problems (Figure 2).

Figure 1: Life events.

Figure 2: Daily hassles.

Table 1: Socio-demographic data of sample.

Table 2: Clinical variables of sample.

Regarding suicide intent, the majority of the sample, 26 (52%), had medium suicide intent, 23 (46%) had low suicide intent, and only 1 (2%) had high suicide intent (Figure 3, Table 3). On the association of psychosocial stressor scores with suicide intent severity (N=50), Spearman's correlation coefficient showed no significant correlation between the mean number of stressors obtained from the Presumptive Stressful Life Events Scale or the Daily Hassles Scale-R and the suicide intent score. The association of demographic variables with suicide intent severity was examined using the chi-square test; the results showed no significant association between demographic variables and severity of suicide intent. For the association of PSLE stress scores with demographic variables, independent t-test results showed that employment status was associated with the life events score (p=.030); none of the other demographic variables were significantly associated with psychosocial stressors. Table 3 shows the association of PSLE scores with the demographic variables. Independent t-test results elicited no significant association between psychosocial stressors and clinical variables except medical history (p=.053). The association of the DHS-R with clinical variables, also assessed with the independent t-test, showed no significant association between DHS-R stress scores and clinical variables except medical history (p=.05).
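The correlation step above can be illustrated with SciPy's `spearmanr`. The numbers are made up for illustration and are not the study data:

```python
# Sketch of the Spearman rank correlation between the number of
# stressors per subject and the Beck Suicide Intent score.
# Hypothetical data, chosen to show a non-significant result.
from scipy.stats import spearmanr

n_stressors = [4, 7, 2, 9, 5, 3, 6, 8]          # per-subject stressor counts
intent_score = [12, 9, 15, 14, 10, 13, 11, 16]  # Beck Suicide Intent scores

rho, p = spearmanr(n_stressors, intent_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A p-value above 0.05, as reported in the study, indicates no
# statistically significant monotonic association.
```

Spearman's test is appropriate here because the intent score is ordinal and no assumption of normality is needed.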

Figure 3: Suicidal intent.

Table 3: Association of PSLE score with demographic variables N=50.

*significant

Discussion

Supporting the current study findings, various national and international studies also report increased suicide attempts among females. A Chinese study [23] reported that females are at higher risk of attempting suicide. Similar demographic descriptions were found in an Indian study [24] done by Mathew in Southern India: the majority of the suicide attempters were female, 62 (62%), and 84 (84%) were from nuclear families. Among the attempters, 70 (70%) were Hindus; 80 (80%) had high-school education and 55 (55%) were unemployed. A history of suicide attempt was found in 8 (8%), and 4 (4%) had physical illness. Another Indian study [25] found that the male-to-female ratio of suicide attempters was close to one (1:1.4). A higher proportion, 80 (54.3%), of the suicide attempters were married, which coincided with the findings of the current study, suggesting that marriage is a possible risk factor for suicide attempts. The majority, 146 (98.6%), of the attempters were Hindu, as in our study. Some findings contradicted the current study results. In the current study, the majority, 29 (58%), belonged to nuclear families, most, 42 (84%), were literate, and very few subjects, 5 (10%), had a past history of suicide attempt or psychiatric illness. A study from Orissa [26] reported that the majority, 129 (87%), of suicide attempters belonged to extended families, and most of them were educated.

A South Indian study [27] reported that 13 (13.3%) suicide attempters had a family history of suicide attempts and that the majority, 78 (78%), had a history of psychiatric illness. However, there are also reports suggesting that psychiatric illness is very uncommon among suicide attempters from Asia [18]. Family conflicts accounted for the major stressful life event in 62% of the study subjects, while 50% had financial loss or problems. Excessive alcohol or drug use by a family member was a source of stress for 40% of the subjects. Marital conflicts and change in sleeping pattern accounted for stress in 38% and 36% of the study subjects, respectively. The mean stressor score of the 50 study subjects on the Presumptive Stressful Life Events Scale was 206.8, with a standard deviation of 88.2.

The results of a Belgian study [28] on the life events of suicide attempters showed that 8 (42.1%) had relational problems, but the majority had academic failures, since most of the study subjects were adolescents. Another international study [29] also supported the present results: 44 (72.1%) of the study subjects had family-related conflicts and 39 (63.9%) had marital conflicts. The reviewed studies have given varied results. An international study done in the UK reported a high suicide intent score in 63% of males and 79% of females. Another study [30] done among 229 suicide attempters in Finland found that 40% had severe suicide intent, followed by another 40% with moderate suicide intent; mild suicide intent was present in 20% of the attempters. Indian studies on suicide intent have tried to classify high- and low-intent attempters; one such study was done by Kumar et al.

The present study assessed the association of suicide intent with the psychosocial stressors of suicide attempters. No significant association was elicited between psychosocial stressors and suicide intent: neither the stress arising from major life events nor that arising from daily hassles was associated with suicide intent. A study done in the same setting [31] as the present study also failed to observe an association of suicide intent with any demographic or clinical variable except a significant (p=0.04) association with psychiatric illness. A significant association was found between the PSLE score and employment status; being unemployed may indirectly increase stress by adding financial burden and family conflict. Other studies have not commented on this, and no studies with a similar result were found. A study by Pompili [32] reported that psychosocial stressors are significantly higher in repeaters than in first-time attempters. Another study from Iran found a significant difference (p<0.001) in the stress scores of male and female suicide attempters. The study had several limitations: the sample was limited to 50 because of the inclusion and exclusion criteria; the study period was limited to 6 weeks; convenience sampling was used; and some sensitive issues such as emotional distress, anxiety, and familial conflicts were explored, and asking about them through questionnaires always raises concerns about the truthfulness of participants' responses.

Acknowledgement

The authors thank all the clients who participated in the study as well as JIPMER hospital for giving permission and support to conduct the study.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Journals on Forensic Medicine

Child Abuse and Dental Practice: Finding the Nexus

Introduction

Many reports indicate that children are abused every day worldwide, and in many different ways [1]. The literature suggests that the number of reported abuses against children is a mere tip of the iceberg; there are many unreported cases, some hidden or covered up. Thus, child abuse becomes a social problem. Many seem to ignore child abuse, and others will even justify it. Admittedly, there are a few who will sweep such abuses under the carpet or turn a blind eye. In some societies child abuse has become part of accepted cultural practice, for example corporal punishment or female genital mutilation. Fortunately, there are some in society who will speak out against the abuse of children. These social responses to child abuse are reflected in clinical settings in similar ways: some clinicians will ignore abuse while others may not care. The problem is aggravated in clinical scenarios, especially when the clinician is not trained to identify possible child abuse. In this context, this paper seeks to find a connection between clinical dental practice and child abuse. In short, I will argue and demonstrate that presentations of child abuse are not uncommon to the dental clinician, but that they are often presented with alternate histories, so that they can easily be missed by the clinician unless viewed through a forensic lens.

Contextualising Child Abuse

According to international law, particularly the United Nations Convention on the Rights of the Child (UNCRC), any person below the age of eighteen years is considered a child. In this sense, any violence or abuse against a person below eighteen years can be considered child abuse. While child abuse can present in different forms, including physical, sexual, emotional, and neglect, the perpetrator is usually a family member, caregiver, custodian, neighbour, or at least a relative. Although child abuse is frequently seen as an ongoing pattern with multiple encounters over time, it can certainly occur as a single event. Further, child abuse can result from an action by the perpetrator or from inaction, for example not providing food or care. Given the serious impact it has on the child's physical and psychological development, abuse and violence against children should be identified early to avoid further harm.

Connecting the Dental Clinician

While injuries to the teeth and face are common among children, research indicates that a considerable number of them are due to abuse and violence [2]. Owing to their obvious appearance, with swelling, bleeding, and pain, the child will usually be presented to the dental clinician, often with a misleading history. Even when children are not brought to the clinician by parents or caregivers, outsiders such as school teachers or neighbours may notice the injuries and enquire. It is essential that the dental surgeon conduct the intraoral and perioral examination carefully, with a forensic eye, when dealing with a suspected case of child abuse. Apart from the history given by the guardian, a private interview with the child in the clinical setting may reveal important information. The possibility of child abuse has to be considered as a differential diagnosis when attending to a child with a history of a fall or accident, especially when the child is brought repeatedly. Where the clinician reasonably suspects child abuse, the case should be reported for medico-legal management, which is a legal requirement in almost all jurisdictions.

Presentations of Abuses

There have been instances of oral injuries inflicted with feeding instruments, for example feeding bottles, perhaps due to forced feeding. Common findings include fractures of teeth; contusions in and around the mouth and face; abrasions, especially from fingernails or other objects; fractures of the mandible or maxilla due to violent assault; fingertip contusions around the mouth; burn marks in and around the mouth, especially from cigarettes, firewood, or heated electric instruments; and scalds caused by hot water or other liquids. The lips, teeth, buccal mucosa, tongue, facial bones, gingiva, and alveolar bone are the anatomical sites most commonly reported to be affected [3]. Although oral sex is a common form of sexual abuse among children, more often than not the child shows no physical features. However, contusions may occur on the oral mucosa or in and around the mouth due to refusal by the child and the concomitant force applied by the assailant. Confirmation of oral or perioral gonorrhoea, or of lesions associated with HIV such as oral candidiasis, is highly likely to be associated with child sexual abuse.

Another frequent presentation is bite marks [4]. While it may be alleged that the bites were made by peers or siblings, or were self-inflicted, a careful forensic investigation may reveal whether or not the bites were made by an adult. In these scenarios it is important to distinguish a human bite mark from one of animal origin, and then a bite from a peer child from that of an adult. A swab taken from the site for DNA and a careful forensic bite mark investigation can reveal the perpetrator; taking photographs, taking a swab for DNA, and attending to the analysis in a timely manner are important for administering justice. Dental neglect is another common occurrence. According to the American Academy of Pediatric Dentistry, 'dental neglect is wilful failure of parent or guardian to seek and follow through with treatment necessary to ensure a level of oral health essential for adequate function and freedom from pain and discomfort'. If a child's oral diseases are left untreated without taking the child to a suitable clinician, this can constitute dental neglect. In countries like Sri Lanka, where all dental and medical health care is free, the guardian's responsibility to seek dental health care is heightened. Conversely, if the dental clinicians attending to children neglect them, either by not treating or not preventing dental diseases, that too can constitute dental neglect. In the author's forensic experience, the commonest presentations of child abuse in Sri Lanka are intraoral and perioral contusions, followed by fracture, dislocation, or exfoliation of teeth. The author has reported mandibular fractures and fractured teeth in cases of chronic physical child abuse [5]. Further, corporal punishment, especially by school teachers and parents, in which assault to the face leads to dental injuries, is common in Sri Lanka owing to the cultural acceptance of physical child control.

Conclusion

Child abuse is prevalent in every society, and the dental clinician should be aware and vigilant when managing children with suspicious dental or oral presentations. Because the focus is usually on clinical management of the presenting condition, a busy dental clinician can easily miss child abuse, especially when the history given by the parents or guardian is misleading. If the pattern of injuries is inconsistent with the history provided, or there is evidence of repeated oral and dental injuries at multiple time intervals, it is highly likely that the child is being abused. In such a suspicious situation, it is the responsibility of the dental clinician to initiate medico-legal management involving the authorities.


Journals on Nursing

Pregnancy with Myelomeningocele Foetus: A Case Study

Introduction

Neural tube defects are congenital malformations of the CNS resulting from defective closure of the neural tube during early embryogenesis, between the 3rd and 4th weeks of intrauterine life. They involve defects of the skull, vertebral column, spinal cord, and other portions of the CNS, and occur in about 1 to 5 per 1,000 live births; the risk in subsequent siblings is high. The exact cause is not known, but triggering factors include maternal radiation exposure, anticonvulsant drugs, exposure to chemicals, folic acid deficiency, and genetic determinants [1]. The most common neural tube defect is spina bifida, occurring in about one in 1,000 births. Spina bifida is a congenital defect of the spinal column due to failure of fusion of the vertebral arches, with or without protrusion of the meninges and dysplasia of the spinal cord [1]. It is classified as spina bifida occulta and spina bifida cystica. Myelomeningocele is the most common and severe form of spina bifida cystica, characterized by protrusion of the spinal cord through the open vertebrae into the amniotic fluid. The severity of disability varies according to the neurological level and the extent of intracranial abnormalities. The malformation is associated with significant lifelong disability, including motor and sensory deficits, neurogenic bowel and bladder, hindbrain herniation with associated hydrocephalus, orthopaedic abnormalities, and cognitive deficits [2].

Case Report

We report a case of a 25-year-old multigravida woman with 5 months of amenorrhea, carrying a meningomyelocele foetus, admitted to the labour room for termination. She had 6 years of married life in a non-consanguineous marriage, with a history of oral contraceptive use for four years after marriage. Regarding her obstetric history, she had undergone an induced abortion one and a half years earlier after the foetus was diagnosed as anencephalic at 12 weeks. The patient gave a history of regular menses. One year after the induced abortion she conceived again naturally. Ultrasonography done at 8 weeks detected no abnormality. After this she was not regular with her follow-up, but at 20 weeks she visited the OPD, and the USG report showed a lumbosacral meningomyelocele measuring 1.76 cm by 0.85 cm. According to the patient, she had not been taking folic acid supplements because of side effects. Genetic counselling was given to her and her husband, and termination of pregnancy was advised. Induction of labour was started as per the prostaglandin regimen, and she delivered a dead male foetus weighing about 520 g. The mother was counselled on preconception care and regular follow-up in future (Figure 1).

Figure 1: Dead male foetus weighing about 520 g, delivered after induction of labour with the prostaglandin regimen.

Discussion

Meningomyelocele results in injury to spinal cord tissue and requires lifelong support and rehabilitation. If a neural tube defect occurred in a woman's previous pregnancy, increased antepartum foetal surveillance is required for the current pregnancy [3]. This surveillance should include consultation with a geneticist and targeted foetal ultrasonography to assess the foetal spine and cranium. In addition, preconception supplementation with folic acid at 4 mg/day is recommended; this dosage is higher than that advised for a woman without such a history. A recent RCT, the Management of Myelomeningocele Study, showed that prenatal correction by fetoscopic or open surgery resulted in improved neurological outcomes. Prenatal meningomyelocele surgery is a complex procedure that requires an experienced multidisciplinary team with a dual focus on both the mother and the foetus. Despite its confirmed benefits, PPROM, oligohydramnios, uterine dehiscence, and preterm labour remain challenging outcomes [4]. Neurological defects in the foetus occur for many reasons; the identified risk factors in the present case were folic acid deficiency and a previous pregnancy with anencephaly. Women planning to become pregnant should avoid all alcohol consumption, smoking, and use of illegal drugs before and during pregnancy, because these may have serious deleterious effects on the foetus. It is also advisable for the prescribing provider to review all medications and supplements the woman is taking to assess for possible teratogenicity.

Conclusion

All women of childbearing potential should receive folic acid at a dosage of 0.4 mg/day, in accordance with the recommendation of the US Centers for Disease Control and Prevention (CDC). In this case, if the patient had come for regular check-ups and taken folic acid regularly, this outcome might have been prevented. It is therefore very important to stress folic acid supplementation and regular follow-up in antenatal advice.


Journal on Orthopedictics

Comparison of Various Cementless Femoral Stems in Total Hip Arthroplasty

Introduction

Total hip arthroplasty is becoming a routine procedure for various hip diseases, such as osteonecrosis of the femoral head, developmental dysplasia of the hip, and hip arthritis [1]. The evolution of femoral stem designs, fixation methods, sizes, and bearing surfaces for total hip replacement has led to considerable improvement in implant survival, in turn greatly improving patients' quality of life. Determining the optimal combination of total hip arthroplasty implants depends on factors such as age, bone quality, and even financial constraints. In primary total hip arthroplasty, cementless stems come in two types of prostheses: conventional stems, with a standard length of ~150 mm, and short stems, which are <120 mm in length [2].

Conventional cementless implants in total hip arthroplasty have shown excellent clinical results; however, it is unclear whether short-stem prostheses can obtain the same clinical and radiological outcomes. With conventional femoral stems, proximal stress shielding and thigh pain often occur after surgery. The advantages of short-stem prostheses include less resection of the femoral neck, a more physiological load pattern in the proximal femur, reduced stress shielding, and bone conservation. They are therefore beneficial for young patients, as they conserve bone mass, extend the service life of the prosthesis, and provide favourable conditions for revision. Short stems rely mainly on metaphyseal fixation. Authors who have conducted meta-analyses found strong evidence of no difference in HHS and WOMAC scores when comparing short stems with conventional stems after total hip arthroplasty.

From their studies, it was found that the short follow-up time (6 weeks) did not influence the heterogeneity of the pooled results for HHS and WOMAC. The meta-analysis also found no significant differences in femoral offset or leg-length discrepancy. Short-stem prostheses achieved the same clinical and radiological outcomes as conventional implants and were superior in reducing thigh pain, but whether this holds for postoperative thigh pain with 2nd-generation cementless prostheses still needs further large-scale multicentre studies with longer follow-up to confirm [3]. Short-stem hip arthroplasty (SHA) was designed to preserve bone stock and provide improved load transfer. To gain more evidence regarding load transfer, one review analysed the periprosthetic bone remodelling of SHA in comparison with standard hip arthroplasty. Periprosthetic bone remodelling is also present in SHA, with the main bone reduction observed proximally. However, certain SHA stems show more balanced remodelling compared with conventional total hip arthroplasty, arguing for a favourable load transfer. Also, the femoral length over which bone remodelling occurs is clearly shorter in SHA [4].

Another study, which compared bone quality using bone mineral density with a mean follow-up of 3.35 years in two groups, noted the following: bone mineral density was significantly increased in femoral zone 1 but slightly decreased in zone 7 in the short, metaphyseal-fitting stem group, whereas in the conventional metaphyseal- and diaphyseal-fitting stem group, bone mineral density was markedly decreased in both zones 1 and 7. Clinical and radiographic results were similar between the two groups, and no hip in either group required revision of the components [5]. One study on ultra-short stems reported the following findings: at follow-up into the second decade, ultra-short stems showed no differences from conventional cementless stems in terms of validated outcome scores or fixation, while showing slightly less stress shielding and less thigh pain, although this difference may not have been clinically important; in the conventional group, thigh pain was mostly mild, and there were no differences in hip scores. Reduction of stress shielding may reduce the long-term risk of periprosthetic fracture, but this was not shown here; future studies might document such a reduction [6].

In conclusion, in choosing the right femoral stem, the surgeon should aim for optimal stress distribution in the proximal femur and an implant design that maximally preserves bone without compromising stability and long-term survival. With the recent trend towards maximal bone preservation and cementless fixation, an array of stems shows differing results and implant survival. A great deal of observational data is presented in the large national registry reports, which are updated annually (e.g., the UK NJR, the Australian Registry, and the Swedish Registry) and contain data on important outcomes, including revision rates, for hundreds of thousands of patients who have received a variety of prostheses over a decade or more. These registries still have shortcomings, however, such as delays in reporting, misclassification of outcomes, and missing reports.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Journals on Pediatrics

Introduction

Obesity is considered the most common chronic metabolic disease and is associated with comorbidities such as diabetes mellitus and hypertension [1]. This public health problem results primarily from an energy imbalance whereby dietary energy intake exceeds energy expenditure. The dynamics of this imbalance are complex, and the mechanisms of appetite regulation in particular are not fully understood [2]. Nutrition in the early stages of growth may be essential in the development of obesity in adulthood, supporting the concept of “nutritional programming” [3,4]. However, the mechanisms that link nutrition with long-term obesity risk are not well defined. Several systematic reviews have linked breast milk intake with a protective effect against obesity and other metabolic diseases [5,6], and breastfeeding may play an important role in this “nutritional programming”. Human milk is a source of various bioactive growth factors, namely leptin, adiponectin, ghrelin, resistin and obestatin, which are involved in food intake regulation and energy balance. Some studies attribute this preventive function to the roles these bioactive factors play beyond nutrition [7,8]. In this review, we discuss the bioactive factors contained in human milk and their potential protective effect against obesity.

Breast-Feeding and Childhood Obesity

Rapid growth velocity during the early postnatal period, especially 0-3 months, has been associated with an increase in the number of adipocytes, a higher ratio of fat mass to lean mass, greater central fat deposition and insulin resistance, and consequently an increased risk of metabolic syndrome, notably obesity and type 2 diabetes. Systematic reviews have confirmed the link between greater growth acceleration and a later increased risk of obesity [9,10]. Formula feeding is known to be associated with greater weight and length gain after birth compared with breastfeeding. Various hypotheses have been proposed to explain how breastfeeding protects against faster weight gain and consequently against later obesity [11]. Breast-fed babies can control the amount of milk they consume, and so they may learn to self-regulate their energy intake better than formula-fed infants [12].

Furthermore, the difference in nutrient composition is an important factor determining a higher risk of later obesity. Another study showed that the mother’s pre-pregnancy BMI, the duration of breast-feeding and the timing of complementary food introduction are associated with infant weight gain from birth to 1 year of life [13]. The protective role of breast milk may be attributable not only to its nutritional composition but also to many bioactive factors. These bioactive factors in human milk, such as leptin, adiponectin and ghrelin, may control nutrient use, protect infants from pathogens and play a role in regulating metabolic pathways [14].

Hormones In Mother’s Milk

I. Leptin: Leptin can be produced by mammary epithelial cells. It exerts an anorexigenic effect by signaling satiety and decreasing the sensation of hunger [15]. Breast milk leptin is higher in colostrum than in transitional milk and decreases during the first 180 days, showing a significant inverse relation with the duration of lactation [16]. Schuster et al. and Fields et al. [17,18] demonstrated that the leptin concentration in milk was positively correlated with circulating leptin levels and maternal BMI, suggesting that breast-fed infants nursed by overweight/obese mothers might be exposed to higher amounts of leptin than infants nursed by lean mothers. Although the mechanisms are unknown, a higher concentration of circulating leptin has been found in infants fed breast milk than in infants fed formula [19]. The higher level of breast milk leptin may regulate appetite and exert a long-term effect on energy balance and body weight regulation. Dundar et al. [20] found that SGA infants grew more rapidly during the first 15 postnatal days than AGA and LGA infants, and that human milk leptin levels were significantly lower in the SGA group. However, there are some inconsistent findings: Wang et al. [21] found that the leptin level of human milk showed no significant difference between preterm and term groups and had no correlation with weight or length at the 42nd day.

II. Adiponectin: In humans, adiponectin regulates lipid and glucose metabolism, improves insulin sensitivity, is inversely related to the degree of adiposity and inhibits hepatic glucose production [8]. It is regulated by factors such as IGF-1, which stimulates its gene expression and secretion [22]. The presence of immunoreactive adiponectin in human breast milk was first reported in 2006; the authors also found that adiponectin levels in human milk were significantly higher than leptin levels and decreased with the duration of lactation [23].

This adipokine secreted in human milk can cross the intestinal barrier and may modify infant metabolism. The levels of this hormone in human milk correlate positively with the serum level and inversely with infant weight and anthropometry during the first months of life [14,24]. Andreas et al. reported that premature newborns have a lower concentration of adiponectin than term infants [25]. In a study of obese mothers, although serum adiponectin levels were low, their colostrum exhibited high levels of this hormone; maternal BMI was positively associated with serum adipokine levels and negatively correlated with colostrum adipokine levels [26]. The offspring of obese mothers are at risk of metabolic syndrome, and breast-feeding may be protective.

III. Ghrelin: Ghrelin is also produced in the mammary gland; it can influence glucose metabolism, energy balance, gastrointestinal motility, gastric acid secretion, and cardiovascular and immune system function [27]. It can stimulate food intake in rats and humans by acting primarily on the arcuate nucleus of the hypothalamus [28]. Ghrelin occurs in both term and preterm human breast milk; its level is higher in breast milk than in plasma, and higher in whole milk than in skim milk [29]. The ghrelin level increases gradually from colostrum through transitional to mature milk [30]. Cesur et al. [31] reported that the active ghrelin level in breast milk at the 4th month of lactation correlated significantly and positively with infant weight gain. In newborns, ghrelin levels were higher in SGA babies than in AGA babies. Reduced ghrelin suppression and higher postprandial ghrelin levels in SGA infants could result in a sustained orexigenic drive and could contribute to postprandial catch-up growth in these infants [32].

Savino et al. [33] observed significantly higher serum ghrelin levels in formula-fed compared to breast-fed infants. They suggested that formula-fed infants received a higher amount of ghrelin and thus possibly had a greater feeding stimulus than breast-fed infants; this correlates positively with greater infant weight gain, possibly with an influence on childhood growth. Resistin in human breast milk was first identified in 2008, and its levels decrease throughout lactation [14]. Its physiologic role in humans is still under debate, and very little is known in children. Resistin has been shown to be associated with insulin resistance in obese mice [34], suggesting that it could be involved in appetite regulation and in the metabolic development of infants. Moreover, it has been proposed that resistin plays a role in controlling body weight through effective regulation of adipogenesis by negative feedback. However, in humans, the role of resistin in fetal and infantile growth remains to be elucidated [35]. Other bioactive factors, such as apelin, obestatin and nesfatin-1, can be identified in breast milk. These substances may regulate food intake and metabolism; however, there is still no consensus on how these bioactive factors in breast milk affect childhood growth.

Conclusion and Perspective

Breast milk contains the nutrients and bioactive factors necessary for infant health. The composition of breast milk varies according to the stage of lactation and the nutritional requirements of the infant, an advantage that formula feeding cannot match. The bioactive factors may represent the link between breast-feeding and protection against obesity in later life, which large-scale, long-term cohort studies are needed to confirm.


Journal on Medical Sciences

Factors Associated with Low Back Pain Among Nurses in Critical Care Units, Hospital Universiti Sains Malaysia

Introduction

Low back pain (LBP) is one of the most serious health problems, of tremendous medical and socioeconomic dimension, and a major cause of disability. Low back pain can be defined as pain localized between the 12th ribs and the inferior gluteal folds, with or without leg pain. Nurses are known to be a high-risk group for occupational low back pain [1,2]. Direct care nursing personnel around the world report high numbers of work-related musculoskeletal disorders. The impact of LBP on nurses includes time off work, an increased risk of the pain becoming chronic, and associated personal and economic costs [3]. Nurses who suffer from chronic back pain are affected when standing up from sitting and when lifting patients. For direct care nursing staff, manual handling of patients, such as moving or repositioning a patient using their own body strength, is the major cause of these injuries [3]. Indeed, 80% of the general active population suffers from LBP at least temporarily [4]. A study of 350 employees showed that common LBP is the leading condition limiting professional activities before age 45 and the third, after respiratory and traumatic conditions, between ages 45 and 65. In western countries, many studies have investigated back pain as a common problem for nurses [5]. Physiotherapy records at Hospital Universiti Sains Malaysia (HUSM) show that the number of patients with back pain, including nurses, was 37 in 2007, 31 in 2008 and 26 in 2009. Thus, this study intends to identify factors associated with back pain among nurses in the critical care units at HUSM, Kubang Kerian, Kelantan. The general objectives of this study were to identify aspects of the nursing employment profile associated with LBP, determine personal factors of nurses related to LBP, and explore work-related factors associated with LBP.

Materials and Method

A cross-sectional study design was used to examine factors associated with LBP among nurses working in the critical care units (CCUs) of HUSM, using a self-administered questionnaire. The questionnaire consists of three sections: Section A covers demographic data (10 items), Section B covers nursing and LBP (25 items), and Section C covers treatment options (10 items). The questionnaire items were adopted from Branney and Newell [1]. The English version of the questionnaire was translated into Bahasa Malaysia and back-translated to English by two independent professional translators; the back-translation was found to be similar to the original. To ensure the validity of the items, a pilot study was done at Hospital Raja Perempuan Zainab II (HRPZ II), in which 30 nurses participated with informed consent. The questionnaire took approximately 15 to 20 minutes to complete. Cronbach’s alpha for the pilot study was 0.75, indicating reasonable internal consistency. This study was approved by the Human Research Ethics Committee USM, USMKK/PPP/JEPeM [246.4.(1.4)] and Jawatankuasa Etika & Penyelidikan Perubatan Kementerian Kesihatan Malaysia, (2) dlm KKM/NIHSEC/08/0804/P12-41.
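The Cronbach's alpha reported for the pilot study can be computed directly from item-level responses. A minimal sketch, using hypothetical Likert-type scores rather than the study's actual data:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists, all of equal length
    (one score per respondent). Returns Cronbach's alpha."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(var(item) for item in items)
    # total score for each respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

# Hypothetical 3-item questionnaire answered by 5 respondents:
scores = [[3, 4, 5, 2, 4],
          [3, 5, 4, 2, 3],
          [2, 4, 5, 3, 4]]
print(round(cronbach_alpha(scores), 2))
```

Values of roughly 0.7 and above are conventionally taken to indicate acceptable internal consistency, which is how the 0.75 above is interpreted.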

Results

The total population of nurses working in the five wards adds up to 180: 8 Selatan has 31 nurses, ICU 51, Kristal 24, CCU 25, and HDU 49. However, only 110 (81.5%) participated in this study in February 2012. The majority of the participants were female, 85 (77.3%), while 25 (22.7%) were male. Most participants were Malay, 101 (91.8%); there were six Chinese (5.5%) and three Indians (2.7%). Their qualifications differed: two (1.8%) held master’s degrees, 11 (10%) basic degrees, 94 (85.5%) diplomas and three (2.7%) school certificates. The participants fell into four age groups: 50 (45.5%) nurses aged 20-30 years, 46 (41.8%) aged 31-40 years, nine (8.2%) aged 41-50 years and five (4.5%) aged 51-60 years. The majority of the nurses, 90 (81.8%), were married while 20 (18.2%) were single. Most nurses, 51 (46.4%), have 1-3 children, 34 (30.9%) have none, 19 (17.3%) have 4-5 children and six (5.5%) have more than five children.

When the nurses were categorized by BMI, most, 57 (51.8%), were overweight, 47 (42.7%) had normal BMI and six (5.5%) were underweight. Table 1 shows the association between the employment profile of nurses and LBP. Working experience in the current ward and years of nursing experience were significantly associated with LBP, while current working ward, working time, total hours of work per week and total patients needing mobilizing were not (Table 1). Table 2 shows the association between individual factors and the occurrence of back pain. Chi-square tests of independence were carried out to determine individual factors related to LBP among nurses in the CCUs. Age, marital status, total number of children, height, weight, BMI, smoking and regular exercise or sport were not significantly associated with LBP (Table 2). After cross-tabulation, the Pearson chi-square test was used to determine the association between work-related factors and the occurrence of LBP. Only one factor, frequency of standing, had a significant association with low back pain (p=0.021). Factors such as frequency of lifting patients in bed during shifts, helping patients out of bed during shifts, poor body mechanics when lifting patients, frequent carrying of heavy medical equipment during shifts, frequent moving of beds during shifts, too much work, staff shortage in the ward and stress were not significantly associated with low back pain among nurses (Table 3).
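The Pearson chi-square test of independence used above can be sketched in a few lines. The 2x2 counts below are hypothetical, not the study's actual cross-tabulation:

```python
def chi_square(table):
    """table: list of rows of observed counts. Returns the chi-square
    statistic, to be compared against the critical value for the
    table's degrees of freedom (3.84 for a 2x2 table at p = 0.05)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand  # expected count
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical cross-tabulation: frequent standing (rows) vs. LBP (columns)
table = [[40, 10],   # frequent standing: 40 with LBP, 10 without
         [35, 25]]   # not frequent:      35 with LBP, 25 without
print(round(chi_square(table), 2))  # compare with 3.84 (df = 1, p = 0.05)
```

A statistic above the critical value for the table's degrees of freedom leads to rejecting independence, which is the logic behind the reported p=0.021 for frequency of standing.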

Table 1: Association between Employment Profile and LBP.

Table 2: Association between Individual Factors and LBP.

Table 3: Association between Work Related Factors and LBP.

*Significant difference at p<0.05.

Discussion

There was a significant difference in LBP between before entering nursing and since entering nursing (p<0.001). This study demonstrates that the prevalence of LBP among the nurses studied increased from 16.4% before nursing to 68.2% since entering nursing, which is rather close to studies done in western countries. Low back pain is a major problem in the nursing profession, and it has been reported that 30% or more of nurses experience low back pain in the course of one year [6]. Another report found that only 15.9% of nurses had LBP before nursing while 84.5% complained of LBP after entering nursing [7]. There was a six percent increased risk of LBP from the pre-nursing prevalence, while the cumulative lifetime prevalence of LBP increased from 31% at entry to 72% at the end of nursing school [4]. Working experience in the current ward and total years of nursing experience were related to LBP among nurses. The findings of this study show that nurses with more than 20 years of experience reported the most LBP (32.7%) whereas nurses working less than one year reported the least (4.5%). Occupational back pain and level of seniority were positively related [8].

In another study of those with LBP, 59.5% had more than five years of nursing experience and another 12% had more than 20 years [7]. Findings from this study indicate that nurses working in the Intensive Care Unit (ICU, 29.1%) and High Dependency Unit (HDU, 20.1%) were more likely to report current back pain than those in other units. In both the ICU and HDU, most patients are dependent, frail and need more help from nurses with daily activities and transfers than those in other wards [9]. Nurses working in the ICU experienced a higher rate of LBP than those in other CCUs. Similar results were obtained across 65 ICUs in 22 South Korean hospitals, where 90.3% of 1345 subjects had back pain [10]. In addition, nurses with 2-4 years of working experience in the ICU had the greatest probability of back pain and need for treatment. Although most hospitals allow a patient’s family to stay in the ward to help care for the patient, the availability of family members to provide this care was low in busy cities [9]. Reports have shown a 65% lifetime and 70% point prevalence of low back pain among nurses working in the orthopaedic unit, and a 58% lifetime and 75% point prevalence among those working in the intensive care unit [5]. More nurses with over 20 years of nursing experience, including year-three student nurses, had LBP (40%) compared to nurses with 1-10 years of working experience (24.5%). In another study, 12% of nurses with more than 20 years of working experience suffered LBP [7].

This study found that most nurses working shifts had LBP (64.5%). Comparing back pain prevalence across the AM, PM and night shifts, staff working the AM shift were most likely to experience back pain (28.3%). However, one study found a higher prevalence of back pain during the night shift (6.6%) than the PM shift (1.5%) [8], and a 64% increase in LBP among those who reported staff shortage and worked six or more night shifts per month [10]. Nurses working the AM shift were more likely to experience back pain: most of patients’ hygiene needs, such as bed sponging, assisted baths and treatment procedures, are carried out during the AM shift, which involves a lot of patient lifting and transferring. Nurses working a total of 31-40 hours per week had a higher occurrence of LBP (21.8%) while those working 10-20 hours per week had less (1.8%). Nurses working more than 50 hours per week had the most LBP (50.0%). One study showed that nurses working more than 20 hours per week had symptoms of LBP [11]. Nurses handling and mobilizing 1-5 patients per shift had the most LBP (47.3%) while those without such patients had the least (5.5%). A study reported LBP among 58.4% of nurses handling and mobilizing 1-5 patients but only 6.7% among nurses who did not have to [7].

Younger nurses aged 20-30 years (37.3%) had the highest LBP while older nurses aged 51-60 years had the least (4.5%). Studies have shown that nurses between the ages of 20 and 30 years had the highest prevalence of occupational back pain [8,12,13]. Junior nurses had a higher rate of back pain because they were more involved in manual work, while senior staff assumed more organisational and managerial roles. Junior nurses were also less knowledgeable about proper lifting techniques and body mechanics, whereas senior nurses may have developed effective coping strategies over time. Younger nurses also had more problems related to job stress than older nurses [8]. Another study showed that nurses aged 50-59 years were most affected by LBP [14]. However, there is the healthy worker effect: those who suffer from LBP tend to leave their hospital jobs, whereas healthy nurses stay [1,9]. Female nurses tend to experience more back pain [1,7,12,15-18]. Results of this study showed that 64.5% of female nurses and 20% of male nurses had LBP. More married women had LBP compared to unmarried women: 69.1% of the married women in this study had had LBP in their lifetime. Various studies report the same pattern, with as many as 85.8% of married women having LBP [4]. LBP is more common among nurses with multiple pregnancies; our study found that 34.5% of nurses with 1-3 children had LBP.

This is similar to the report that 51.5% of nurses with multiple pregnancies experienced LBP [4]. Female participants reported that their back pain was attributed to pregnancy and childbirth [19]. Obesity, one of the contributing factors for lumbar pain, leads to decreased abdominal muscle strength and increases lumbar lordosis. This is supported by this study, where 46.4% of those who were overweight had LBP. Studies showed that lifting, prior injury, and being overweight were risk factors for work-related low back injury (WLBI) among nurses [5]. Age, increased BMI and a disturbed psychological profile were among other individual factors shown to be related to increased risk of WLBI [4]. Smoking was cited in the literature as having a negative effect on the circulatory system: nicotine causes vasoconstriction that reduces blood flow to the muscles and intervertebral discs, predisposing smokers to low back injuries [5]. Increased coughing among smokers may also be related to an increased risk of low back injuries [4]. There was a strong relationship between smoking more than 20 cigarettes a day and having back pain and intervertebral disc degeneration, and a study done in Japan indicated that smoking was associated with LBP [7].

There was no significant relationship between smoking and back pain in this study; only 12.7% of the nurses smoked and had LBP while 71.8% had LBP but did not smoke. Smoking can also cause other illnesses in addition to back pain [20]. A smaller percentage of the nurses who exercised regularly (35.5%) had LBP compared to 49.1% of nurses who did not. Although there was no significant relationship between exercise and back pain, those who did not exercise regularly are at greater risk for back pain. However, exercise or sports did not play a protective role against LBP [7]; several factors can cloud these results, namely the level of competition, the nature of the sports activities, and the volume and intensity of the exercises. In this study, the professional factors chosen by nurses as causes of LBP were frequent lifting of patients in one shift (78.2%), helping patients to ambulate (80.0%), poor body mechanics (72.2%), frequent moving of the bed (72.2%), frequent standing (63.6%), too much work (61.8%), shortage of staff (51.8%), and stress (86.7%). Nursing work involves much helping, turning and lifting of patients from chair or bed [9]. This study shows that 78.2% of nurses who did frequent lifting and 80% of nurses who helped patients had LBP.

Considering that nurses often work 12-hour shifts, the amount of lifting adds up and the job can be very hard to manage physically [5]. Some studies suggest that positioning patients in bed leads to LBP more often than other manual patient transfer procedures conducted by nurses [21,22]. Nurses who handled patients more frequently had low back pain prevalence rates 3.7 times higher [23]. Among nurses who had LBP, 72.7% chose poor body mechanics as the factor. According to the National Institute for Occupational Safety and Health (NIOSH) lifting guidelines, the maximum recommended weight to be lifted by women in the 90th percentile of strength is 46 lbs. Nurses were commonly led to believe that the primary way to prevent back injuries was to always use proper body mechanics; however, some tasks are so stressful to the body that a back injury can result even with proper body mechanics [22]. Rooms in hospitals are often small, and nurses have to move the furniture around to do their jobs. Often, lifting devices would not even fit in these rooms; these are some causes of LBP [5]. Some patients may also be combative, contracted, or uncooperative, and any unpredictable movement or resistance from the patient may throw the nursing personnel off balance during the transfer, resulting in back injury [5].

In addition, fatigued muscles can no longer serve their protective function and may add to the risk of acute trauma [22]. Workplace guidelines should limit manual handling exposure in general or enable nurses to undertake reduced manual handling activities when in pain [12]. Staff shortage is also one of the factors contributing to LBP: this study indicated that 51.8% of nurses who suffered LBP chose staff shortage in wards as a factor. Surprisingly, low work support, low mood, and boring work tasks were not identified as MSD risk factors in this study. In this study, 59.1% of nurses who complained of stress had LBP while only 25.5% of those who did not complain of stress had LBP. This is supported by a study in which 57.3% of stressed nurses suffered from LBP [7]. These results suggest that psychosocial issues are fast becoming important MSD risk factors for nurses in Asia as elsewhere.

Conclusion

The results of this study demonstrate that the prevalence of low back pain among nurses at HUSM was only 16.4% before entering nursing but 68.2% after entering nursing, a significant difference (p<0.001). Furthermore, aspects of the nursing employment profile such as working experience in the current ward (p=0.004) and nursing experience (p=0.038) were significantly related to LBP. Current working ward, working time, total working hours per week and total patients needing mobilizing were not associated with LBP. None of the individual factors studied were significantly associated with LBP among the nurses. As for the work-related factors, frequency of standing during shifts was associated with LBP (p=0.021) while the other factors were not significantly associated with the occurrence of LBP.


Journal on Environmental Sciences

The Late Mangos: Is There Any Doubt Humans Are Inducing Climate Change?

Opinion

We are in the city of Belo Horizonte, Southeastern Brazil, and it is December, but unexpectedly, mango fruits (Mangifera indica) have not yet ripened. To our knowledge, we have never experienced a year in which November and December came along without mango fruits ripening in this region. Although native to South and Southeast Asia, mangos today are a staple and important cultural element throughout the tropics. Most humans, as well as other animals, above all birds such as parrots, macaws, and parakeets, appreciate this juicy and fleshy fruit [1]. Mango production depends on climatic stability, and extreme temperatures (above 36°C or below 10°C) can delay fruit development (Centro de Produções Técnicas [2], accessed December 3, 2017). Mango trees, which are adapted to warm and rainy weather, also need a marked dry season to reach their optimum production, and therefore, in very rainy regions, fruit development is delayed. The weather in Belo Horizonte during 2017 was rather unusual, not to say very strange. It was the coldest winter in the city since 1975 (Instituto de Meteorologia), and there was also some rainfall in this period, which is highly uncommon. This could be the reason the mangos are taking so long to ripen, since all of these climatic factors reduce and slow down fruit ripening and production.

Springtime without mangos reminds us of a few things:

I. Climate is already changing, partially due to anthropogenic activities;

II. As a chaotic system, climate and its changes are nonlinear, which explains why Belo Horizonte experienced its coldest winter in times of global warming;

III. Climate changes naturally and has displayed chaotic behavior long before humans ever existed, nevertheless, not acknowledging the links between human activity and climate change acceleration and intensification is unwise, and may hinder efforts for mitigating and possibly reversing such premature climate change. Over the last decade, there has been an unusual increase in extreme weather events, such as heat waves and precipitation [3].

NASA and NOAA data show 2016 as the warmest year on record. The World Meteorological Organization, a panel of international climate experts, reported that in 2016 the earth’s temperature was 1.1°C higher than in the pre-industrial age, glaciers were about 4 million square kilometers smaller than average, and there was a significant rise in sea level, as well as unusually severe droughts and floods [4]. Additionally, global ocean heat was the second highest on record in 2016, contributing to coral bleaching and mortality in tropical waters. In many places, such as California [5], 2017 was the hottest year on record, which explains why Los Angeles is burning in forest and city fires at the very moment these words are being written. However, if the earth’s surface is becoming warmer and drier, why did Belo Horizonte experience its unusually cold and rainy winter? The climate-warming concept is adequate at larger scales: increasing overall temperature, as opposed to regular linear warming, may lead to more unpredictable patterns. This can result in hotter and drier weather in some places and colder, more humid weather in others, such as Belo Horizonte’s unusual cold and rainy winter of 2017.

According to the Brazilian National Institute of Meteorology [6], from September to November 2017 the mean temperature anomaly was +3°C in the northern regions of the country and -2°C in some of the southeastern regions. Technically, the weather has always been strange. Nevertheless, this does not discard the fact that humans are intensifying and accelerating climatic variations. The ideas of chaos and “strange attractors” were born in the meteorological studies of Edward Lorenz [7]. As pointed out by Lorenz, climate is a complex deterministic nonlinear system with cyclical patterns that show temporary stability: the flap of a butterfly’s wings in Brazil could set off a tornado in Texas, the so-called butterfly effect. According to the World Bank database, in 1961, when global atmospheric measurements started, the world emitted three million kilotons of CO2, while by 2014 this figure had increased to twelve million kilotons. Emissions of methane, another greenhouse gas, increased from 5.3 million kilotons (CO2 equivalent) in 1970 to 8 million in 2012. All of these gases have been increasingly accumulating in the atmosphere. As a consequence of these multiple emissions, the atmosphere has undergone significant changes over the last century, and especially over the last few decades.
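Lorenz's sensitivity to initial conditions is easy to demonstrate numerically. A minimal sketch, using a simple forward-Euler integration of his 1963 system with the standard parameter values, shows two trajectories that start almost identically and end up far apart:

```python
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)   # a "butterfly flap" of a difference
for _ in range(3000):         # integrate 30 time units with dt = 0.01
    a = lorenz_step(*a)
    b = lorenz_step(*b)

gap = abs(a[0] - b[0])
print(gap)  # the two trajectories have long since decorrelated
```

The tiny initial separation grows roughly exponentially until it saturates at the scale of the attractor itself, which is exactly why long-range weather prediction is limited even though the governing equations are deterministic.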

Pre-industrial levels of carbon dioxide in the atmosphere varied from 260 to 290 ppm (parts per million). In 1958 this value was 315 ppm, increasing to 410 ppm in 2017, a concentration unseen in 50 million years [8]. Methane and nitrous oxide increased by roughly 90% and 8%, respectively, from the 1600s to the late 1980s [9]. According to the United States Environmental Protection Agency [10], current concentrations of CO2, CH4, and N2O are unprecedented when compared with the past 800,000 years. Nitrous oxide (N2O), the dominant ozone-depleting gas emitted by humans [11], increased from around 280 ppb over the last 800,000 years to 328 ppb in 2015 [10]. N2O is produced mainly through the fertilization of intensive agricultural systems. Finally, considering the well-known relation between these gases and the mean temperature of the planet [12], there is no doubt that humans are significantly accelerating and intensifying background natural climate change. Furthermore, major scientific agencies and specialist panels, such as the Intergovernmental Panel on Climate Change, the International Assessment of Agricultural Knowledge, Science and Technology for Development, NASA, and the Scripps Institution of Oceanography, agree that humans are significantly contributing to climate change.
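The relative increases implied by these concentration figures are easy to check. A quick sketch, using 280 as a representative pre-industrial value (within the 260-290 ppm range quoted above):

```python
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return 100.0 * (new - old) / old

co2 = pct_increase(280, 410)   # ppm: representative pre-industrial vs. 2017
n2o = pct_increase(280, 328)   # ppb: pre-industrial vs. 2015
print(round(co2, 1), round(n2o, 1))
```

This gives an increase of roughly 46% for CO2 and 17% for N2O over their pre-industrial baselines, consistent in magnitude with the figures cited in the text.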

Such human-induced climate change is driving declines in animal and plant populations, such as those of the golden and harlequin toads [13,14]. To cope with climate change, species will have to undergo large distribution shifts towards suitable areas [15]. Many of them are predicted to go extinct before the end of the century due to these changes. Climate change is also changing people’s lives. Food production will fall due to climate change, especially in already poor and food-insecure regions [16]. According to the Food and Agriculture Organization, after steadily declining for over a decade, global hunger appears to be on the rise again [17], and climate may be contributing to this increase. Despite the consistency of the evidence showing that humans are changing climate locally and globally, about 3% of the scientific community doubts that humans are inducing climate change, while the other 97% is convinced of the opposite. These few “skeptics” are becoming more vocal, and policy and decision makers, the general public, and part of the mass media are starting to believe that there is not enough evidence to prove humans are inducing climate change.

Some of these scientists are becoming “celebrity skeptics” of sorts, such as Professor Ricardo Augusto Felício, a Brazilian Antarctic-climatology specialist who appeared on the show of Jô Soares, a famous talk-show host on Brazilian television, stating that humans are not responsible for the climate shift. In the US, some major TV channels and news agencies support the idea that science is uncertain about the effects of human activity on climate.

Among those who deny anthropogenic global warming is Roy Spencer, a well-known climate scientist who is funded by institutions linked to large oil companies [18]. In his book Climate Confusion, Spencer states that small natural changes in atmospheric conditions can have huge impacts on climate through feedback loops.

For example, he points out that a very small decrease in oceanic cloudiness would let in more light, warming up the ocean and leading to increased temperature and humidity. Another argument often used by those who deny anthropogenic global warming involves the natural variations in the earth’s climate caused by Milankovitch cycles. Eccentricity is one of these cycles: it describes the changing shape of the earth’s orbit from less to more elliptical, occurs on a cycle of approximately 100 thousand years, and modulates the amount of radiation received at the earth’s surface. Another Milankovitch cycle is axial tilt, the inclination of the earth’s axis relative to its plane of orbit around the sun, which varies on a cycle of approximately 41 thousand years. All of these aspects affect the earth’s climate through glacial and interglacial cycles.
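The interplay of these orbital cycles can be illustrated with a toy superposition of two sinusoids; this is our own simplified sketch with arbitrary amplitudes, not an actual orbital-forcing model:

```python
import math

# Toy sketch (arbitrary amplitudes, not a real orbital-forcing model):
# superimpose two idealized Milankovitch periodicities to show how
# independent natural cycles combine into a signal that is bounded
# yet not simply periodic on short windows.
ECCENTRICITY_PERIOD_KYR = 100  # changing shape of the orbit
AXIAL_TILT_PERIOD_KYR = 41     # inclination of the rotation axis

def toy_forcing(t_kyr: float) -> float:
    """Sum of the two cycles at time t (in thousands of years)."""
    return (math.sin(2 * math.pi * t_kyr / ECCENTRICITY_PERIOD_KYR)
            + 0.5 * math.sin(2 * math.pi * t_kyr / AXIAL_TILT_PERIOD_KYR))

# Sample 200 kyr of the combined signal at 10-kyr steps:
samples = [toy_forcing(t) for t in range(0, 200, 10)]
print(f"range of toy forcing: {min(samples):.2f} to {max(samples):.2f}")
```

The point of the sketch is only that purely natural periodicities produce irregular-looking variation, which is the background on which anthropogenic forcing is now superimposed.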

Additionally, most skeptics put forward the fact that climate has always been somewhat unstable, even long before humans existed, and argue that humans therefore cannot be held responsible for the current variations [19]. Although we disagree with Spencer (2010) when he states that environmentalists are senseless alarmists, we agree that non-linear natural climate dynamics, such as the Milankovitch cycles and the effects of ocean cloudiness, have operated with and without humans. Nevertheless, if these natural and anthropogenic causes are combined, we expect climate to change even faster, leading to more unpredictable extreme events. Is this not exactly what we are seeing in 2017? Climate change is inevitable, but human activities have the power to hasten and intensify these changes. To us, these skeptics’ arguments are analogous to saying there is no need for hospitals, as we are all going to die someday [20].

We should not wait for these 3% of skeptical scientists to be convinced that climate has changed, and will change even more with human interference, before taking strong climate mitigation action. Meanwhile, humans will have to prepare for and adapt to climatic variations and uncertainties, and so will the macaws, parrots, and parakeets, which depend on a large amount of sweet fruits, such as mangos, to reproduce and care for their nestlings during this time of year.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Journals on Microbiology

Bioremediation

Introduction

The degradation of pollutants can be achieved through many physical, chemical, and biological methods. Using biological agents, especially microorganisms, to achieve this is called bioremediation. There are thus two sides: a process catalyzed by a living organism, especially a microorganism (biodegradation), and a specific technology that exploits this process in application (biotechnology). The basics of microbial degradation provide examples and advances that require well-planned strategies to translate this knowledge into acceptable levels of application in the various fields related to pollutant degradation and elimination, in order to keep the environment safe.

Environmental Biotechnology

Our environment is now polluted by various types of molecules, both natural and man-made [1]. The microbiological science of biodegradation provides the foundation for the biotechnology of environmental cleanup: bioremediation [2]. Millions of natural and synthetic organic chemical substances are present in both soil and aquatic environments. Toxicity and/or persistence determine the polluting potential of these substances. The biological responses to these pollutants include accumulation and degradation [3].

Bioremediation

Bioremediation is a managed process in which biological (especially microbiological) catalysis acts on pollutants and thereby remedies or eliminates environmental contamination. Natural and genetically modified organisms, including microbes (mainly bacteria, but also protozoa, fungi, algae, and even viruses), flora (i.e., large plants), and fauna (e.g., earthworms), degrade pollutants into simpler, less toxic forms. Bioremediation thus exploits the basis of biodegradation in practice. A basic understanding of the biotechnological and microbiological foundations of pollutant degradation can be built through a series of concepts:

i. Review the origins and major sources of pollutants and the behavior of organic compounds in the environment.

ii. Review the microbiological basis of biodegradation, since microorganisms are the main degraders in the environment.

iii. Review the different aspects of bioremediation as an effective biotechnology that helps sustain the environment. Characteristic aspects include the biochemodynamics of bioremediation, the different bioremediation technologies (such as biostimulation, bioaugmentation, and phytoremediation), and the techniques involved, covering both in situ and ex situ treatment methods.

iv. Review plant–bacteria partnerships in the remediation of soil and water polluted with hydrocarbons, and study the role of enzymes involved in the biodegradation of toxic organic pollutants, especially aromatic hydrocarbons.

v. Review recent advances and applications in this field using biodegradation databases and projects to best link biodegradation and bioremediation.
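As a concrete illustration of the kinetics underlying much of this biodegradation work, pollutant removal is often approximated by first-order decay. The sketch below uses hypothetical values and is not drawn from the article:

```python
import math

# Minimal sketch (hypothetical values, an illustrative assumption, not a
# model from the article): first-order biodegradation kinetics,
# C(t) = C0 * exp(-k * t), a common rough description of microbial
# pollutant removal.
def remaining_concentration(c0: float, k: float, t: float) -> float:
    """Pollutant concentration after time t, given initial c0 and rate k (1/day)."""
    return c0 * math.exp(-k * t)

def half_life(k: float) -> float:
    """Time for the concentration to fall to half its initial value."""
    return math.log(2) / k

c0, k = 100.0, 0.1  # e.g. 100 mg/L degraded at 0.1 per day (hypothetical)
t_half = half_life(k)
print(f"half-life: {t_half:.1f} days; "
      f"remaining at half-life: {remaining_concentration(c0, k, t_half):.1f} mg/L")
```

Fitting a rate constant k of this kind to field measurements is one way such "acceptable levels of application" can be quantified for a given site and organism.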

Conclusion

Bioremediation is an alternative to traditional physicochemical techniques for the remediation of organic pollutants at contaminated sites. Microorganisms with suitable and stable genetic traits, together with efficient and effective biodegradation processes, would be helpful for a clean and green environment.


Journal on Animal and Avian sciences

Estimating and Quantifying the Production Outcomes and Lifestyle Changes for Small-to Medium Sized Dairy Farms When Transitioning From Conventional to Automatic Milking Systems in the Northeast Region: A Case Study Report

Introduction

The primary reasons for the lack of expansion of small- to medium-sized dairies in the Mid-Atlantic region are the high cost of land, low profits, and labor availability. As herd size continues to increase globally, new technology allowing farmers to remain sustainable is greatly desired. Automatic milking systems (AMS) represent the most recent technology available, offering improved management and production efficiency, quality of life, and attractiveness to successors. However, the financial investment is substantial. Although there is growing data on production impacts for European farmers, this technology is fairly new to the U.S. In turn, U.S. farmers lack information from independent sources regarding the return on production performance and animal health associated with the transition from conventional milking to AMS for U.S. dairy operations. Results from a survey of dairy farmers in the Mid-Atlantic region of the U.S. reported that improving herd management and personal flexibility were among the most important factors regarding their interest in AMS (Moyes et al. [1]). Only 18.0% of farmers said they had access to information regarding changes in animal health and personal flexibility, and producers stated that more such information would be helpful when considering a transition to AMS. The objective of this study was to estimate and quantify the animal health, productivity, and lifestyle changes for small- to medium-sized dairy farms transitioning from conventional milking to AMS in the Mid-Atlantic region. Economic impact (including cash flow and labor) is not reported here.

Materials and Methods

Four dairy herds (n = 286 ± 154 milking cows/herd) in the Mid-Atlantic region were used for this study. A general survey, covering geographic information, herd management, and personal time commitments, was conducted for each farm before and after its transition to AMS. Monthly herd summaries using data from the two years on either side of the transition were obtained from either the Dairy Herd Improvement Association (DHIA) or AgSource Cooperative Services to generate monthly average herd production, reproduction, disease, and culling information (i.e., cull rate and reasons culled), where treatment was either the conventional (CON; before transition) or the automatic (AMS; after transition) milking system. All numeric data were analyzed by herd using the PROC MIXED procedure of SAS (SAS/STAT version 9.3; SAS Institute Inc., Cary, NC).

Results and Discussion

Results from the survey indicated that all cows fully transitioned to AMS within a few weeks. One herd was pasture-based, whereas the other herds (n = 3) were housed in free stalls with access to a partial mixed ration. One herd did not continue with the monthly DHIA service, and its results were therefore not used. Herds implemented DeLaval, Lely, or Galaxy-Astrea robots, a decision based primarily on the location of the dealer service. For all herds, fresh and sick cows were milked using the conventional parlors. Producers observed improved personal flexibility after transitioning to AMS (as measured by family and vacation time), and daily robot maintenance was minimal (~1 hour/day). No change in herd size or cull rate was observed. Regarding reproductive traits, calving interval (12.9 ± 0.15 months CON; 13.13 ± 0.18 months AMS) and number of days open (121 ± 6.0 days CON; 128 ± 5.0 days AMS) were greater for AMS than for CON, which may be partly attributed to reliance on more naturally occurring heat detection for AMS than for CON. Regarding animal health, the reasons for culling shifted, with low milk production becoming the main reason animals were removed from the herd.

No change was observed in monthly milk SCC, even as increases in milk yield were observed when producers transitioned to AMS (Table 1). Milk yield increased in all herds, most likely due to the increased milking frequency that is commonly observed when transitioning to AMS. In conclusion, producers were happy with their decision to transition, primarily because of the labor reduction (not reported here) and improved personal time. Daily maintenance of robots is minimal. Cull rate does not seem to be impacted by transitioning to AMS. Milk yield, calving interval, and days open increased for AMS when compared to CON. Animal health (based on SCC) did not change for any of the herds enrolled, but previous research indicates clinical mastitis can improve when producers transition to AMS (Tse et al. [2]). Automatic milking systems may improve animal productivity and lifestyle, but AMS may not impact animal health or reproduction for small- to medium-sized dairy farms in the Northeast region.
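The before/after design described above amounts to a paired, within-herd comparison. The sketch below uses hypothetical yield numbers (the study's actual values are in Table 1 and are not reproduced here) purely to illustrate the calculation:

```python
from statistics import mean

# Hypothetical, illustrative numbers only (not the study's data): daily milk
# yield (kg/cow) for the same herds before (CON) and after (AMS) transition.
con_yield = {"herd_A": 30.1, "herd_B": 28.4, "herd_C": 31.0}
ams_yield = {"herd_A": 32.5, "herd_B": 30.2, "herd_C": 33.1}

# Paired, within-herd change, mirroring the study's before/after design:
changes = {herd: ams_yield[herd] - con_yield[herd] for herd in con_yield}
for herd, delta in changes.items():
    print(f"{herd}: {delta:+.1f} kg/cow/day")
print(f"mean within-herd change: {mean(changes.values()):+.2f} kg/cow/day")
```

Pairing each herd with itself, rather than comparing group averages, is what allows a small sample of four herds to reveal a consistent direction of change.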

Table 1: Monthly milk somatic cell count (SCC1) and milk yield for herds that transitioned to automatic milking systems (AMS2; ± 2 years relative to transition).

1 SCC based on weighted averages; data reported as SCC/μL.

2 CON = conventional milking system.

3 No post-AMS data available for this herd.
