Open Access Journals on Microbiology Research

Nano Packaging and its Potential Application: A Review

Introduction

Food products are packed and hygienically transported to protect and preserve them from any unacceptable change in quality before reaching the end consumer. Food packaging continues to evolve alongside advances in materials science and technology, as well as in light of consumer demand. Contemporary consumers in a competitive economy demand food with natural quality, assured safety, minimal processing, extended shelf life, and ready-to-eat convenience. Innovative packaging systems not only ensure reliable preservation and effective distribution but also facilitate communication with the consumer. Nanotechnology manipulates and creates materials at the nanometer scale, of commercial and scientific relevance. Nanoscience has been prioritized in the food packaging industry because of safety and stability concerns. Furthermore, nanotechnology-based packaging can inform consumers about the real-time quality of a food product.

Methods of Manufacturing

Nanomaterials can be produced through two broad approaches, known as top-down and bottom-up. The top-down approach is generally used for producing inorganic materials through traditional processes such as milling, grinding, and chemical reactions. Homogenization is a representative top-down technique that uses pressure to reduce the size of materials such as fat globules. Milling mechanically reduces the size of materials to improve their functionality. The bottom-up approach involves the assembly of smaller particles through self-organization, resulting in supramolecular structures that possess novel functionalities. Solvent evaporation and layer-by-layer deposition are representative bottom-up techniques, commonly employed in food applications using materials such as phospholipids.

Different Nanoparticles in Nanopackaging

i) Silver Nanoparticles

Ag nanoparticles can attach to the surface of the cell and damage lipopolysaccharides, thus forming pits in the cell membrane. The incorporation of Ag nanoparticles into plastic polymers for packaging can be achieved in several different ways. For example, silver ions can be deposited or trapped in porous zeolite, and these materials can then be applied to plastics. Silver nanoparticles have antimicrobial, anti-yeast, antifungal, and antiviral activities because they have a larger surface area per unit mass than micro-scale silver particles or bulk silver. Active nanocomposites combined with heat treatment have proved to be effective antimicrobial nanomaterials.

ii) Nano Clay

The pattern of nanoclay dispersion in the polymer matrix is classified into three categories: tactoid, intercalated, or exfoliated. Bio-composite films prepared with agar and nanoclay (Cloisite-Na+) have been described, demonstrating that nanoclay greatly influences the tensile strength, water vapor permeability, and hydrophobic behaviour of agar films. Nanoclay enhances the physical properties of plastic packaging by preventing the accumulation of gas and moisture inside the packaging. Nanoclay composites take an intercalated or exfoliated form depending on the degree of nanoparticle diffusion in the polymer medium. Exfoliated nanoclay clusters readily lose their tendency to agglomerate and separate into single flakes, thereby facilitating dispersion of the nanoparticles in the polymer matrix. The intercalated form achieves only a limited degree of dispersion of clay into the polymer matrix. Therefore, exfoliated nanoparticles exhibit superior barrier and reinforcing properties.

iii) Nano Zinc Oxide

Zinc oxide nanoparticles also exhibit diverse morphologies and show robust inhibition of the growth of a broad spectrum of bacterial species. The antibacterial activity of zinc oxide nanoparticles against Salmonella typhimurium and Staphylococcus aureus in ready-to-eat poultry meat has been reported, and these nanoparticles can potentially protect food from bacterial contamination. Studies have found that zinc oxide nanoparticles act better against Pseudomonas species, Shewanella putrefaciens, Acinetobacter and Moraxella than other metal oxides. Compared to silver nanoparticles, zinc oxide nanoparticles are particularly attractive for packaging applications because they are more affordable and less toxic to animals and humans. Furthermore, zinc oxide nanoparticles can produce significant amounts of hydrogen peroxide when exposed to UV irradiation, resulting in oxidative stress in bacterial cells, which underlies their antibacterial activity.

iv) Titanium Nanoparticles

Titanium nanomaterials tend to alter the properties of biodegradable films and are considered inexpensive, light-stable and non-hazardous. Although TiO2 is inert and non-toxic to humans, it can oxidize the unsaturated poly-phospholipid component of a microbial cell membrane, thereby producing a biocidal effect. Several studies on the antimicrobial effects of TiO2 have suggested that, under sunlight or ultraviolet light, TiO2 damages the cell membranes of microorganisms by producing reactive oxygen species (ROS) such as H2O2, superoxide anions and hydroxyl radicals.

v) Nano Silica

Nano-silica is primarily used in hydrophobic coatings, particularly for self-cleaning materials. A non-adhesive coating of nano-silica can keep food and food products free-flowing inside their containers. Products that benefit from this technology include alcoholic beverages and instant soup mixes. Superhydrophobic paperboard developed by coating with Aerosil silica nanoparticles exhibited outstanding water repellency, the result of a lotus-like surface created by the nanosized silica. Some articles have specifically analysed a number of nanomaterials that are currently used or researched for food packaging [1-8].

Conclusion

Through an improved understanding of nanomaterials and the realization of their potential in the food industry, the influx of nanotechnology into foods will provide solutions to persistent food-related problems and offer long-term economic benefits. Nations will benefit from increased food productivity with cost-effective returns, innovative products with tunable properties that deliver smarter and healthier foods, and equally intelligent packaging systems with enhanced storage properties for better food protection. In conclusion, nanomaterials in foods will have a major impact on sustainability and will bring health and environmental benefits if regulated properly.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Medical Research

Assessment of Public Knowledge and Attitude towards Chronic Kidney Disease by Using a Validated Questionnaire: An Observational Study

Introduction

According to Kidney Disease: Improving Global Outcomes (KDIGO), chronic kidney disease is defined as any abnormality of kidney structure or function, present for greater than 3 months, with implications for health; the definition requires one of two criteria, documented or inferred for greater than 3 months: either GFR <60 ml/min/1.73 m2 or albuminuria (AER ≥30 mg/24 hours; ACR ≥30 mg/g [≥3 mg/mmol]), which is considered a marker of kidney damage. Chronic kidney disease affects a large proportion of the population: its prevalence consistently ranges from 11% to 13%, with the majority in stage 3 (eGFR 30-59): 7.6% (95% CI: 6.4-8.9). The estimated worldwide gender-specific prevalence of CKD is 10.4% in men (95% CI: 9.3 to 11.9) and 11.8% in women (95% CI: 11.2 to 12.6). It is implicit that future research must focus on evaluating strategies that can help prevent the progression of chronic kidney disease and improve cardiovascular disease outcomes [1]. Chronic kidney disease increases the risk of cardiovascular morbidity and premature mortality among patients, which substantially decreases quality of life. This risk increases as chronic kidney disease advances to higher stages, with worsening excretory function (usually manifest as declining glomerular filtration rate (GFR) and increasing proteinuria) [2]. Chronic kidney disease is also associated with age-related decline in renal function, accelerated by hypertension, diabetes, obesity and primary renal disorders [3]. CKD also shares an appearance of glomerulosclerosis, vascular sclerosis and tubulointerstitial fibrosis, which suggests a common final pathway of progressive injury [4]. One of the major challenges of chronic kidney disease is that it is asymptomatic in its early stages, progresses to end-stage renal disease over a period of several years, and is often diagnosed late.
Therefore, strategies to reduce the progression to end-stage renal disease require effective methods of screening early in the disease process [5]. Compared with patients who had early evaluation, the risk of death is greater among patients evaluated late [6]. Early screening and detection thus help physicians to structure and implement a treatment strategy well suited to reducing the progression of the disease and its comorbidities [7]. Early referral has resulted in cost savings and improved patient survival, along with more life-years free of RRT and fewer hospital inpatient days [8].
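The two-part KDIGO definition quoted above reduces to a simple rule. The following sketch shows the criteria as code (a purely illustrative check with our own function and argument names, not taken from any clinical software):

```python
def meets_ckd_criteria(gfr_ml_min, acr_mg_g, duration_months):
    """Illustrative check of the KDIGO CKD definition: a marker of
    kidney damage or reduced function present for more than 3 months."""
    if duration_months <= 3:
        return False               # abnormality must persist > 3 months
    low_gfr = gfr_ml_min < 60      # GFR < 60 ml/min/1.73 m2
    albuminuria = acr_mg_g >= 30   # ACR >= 30 mg/g, a kidney damage marker
    return low_gfr or albuminuria

# Example: GFR of 45 persisting for 6 months meets the definition
print(meets_ckd_criteria(45, 10, 6))  # True
```

Either criterion alone suffices once the 3-month duration requirement is met, which is why early albuminuria can establish CKD even with preserved GFR.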

According to a report from the National Health Service (NHS) India, treating kidney disease costs more than skin, lung, and breast cancer combined. Early diagnosis of CKD thus helps to minimize the economic burden on the patient [9]. In India, there is one doctor for every 1,445 people, based on the country's current population estimate of 135 crores, which falls short of the WHO's prescribed norm of one doctor per 1,000 people. Public awareness is an important determinant of the uptake of screening programs; however, there is a scarcity of data on public knowledge. Public awareness can play a significant role in the early detection and diagnosis of chronic kidney disease, which can save an enormous amount of healthcare spending. Against this backdrop, this study aims to assess the knowledge and attitude towards chronic kidney disease among the general public in the northern part of India.

Materials and Methods

Study Design

This cross-sectional study employed a self-administered questionnaire to assess knowledge of and attitudes towards chronic kidney disease among the general public. The questionnaire consists of a total of 30 questions pertaining to knowledge and attitudes towards chronic kidney disease, and the complete assessment took about 6 minutes to read and answer.

Data Collection

This study was conducted in full compliance with the ethical standards provided by the Indian Council of Medical Research for carrying out such studies. The experimental protocol was approved by the institutional ethics committee (IEC/43/2019). The study was explained to participants verbally and in writing, and written informed consent was obtained prior to enrolment. The study was conducted from 22nd April 2020 to 8th April 2021 among Indian adults. Data were collected through various electronic platforms as well as offline. The questionnaire included an introduction covering the objectives, the voluntary nature of participation, and declarations of confidentiality and anonymity. The inclusion criteria were Indian nationality, age of 18 years or above, and willingness to participate in the study.

Measures

The questionnaire consists of four sections: demographics (age, gender, height (cm), weight (kg), residence, education, marital status, occupation and family history), knowledge, and attitudes towards chronic kidney disease. The knowledge section consists of 13 multiple-choice questions that could be answered "Yes", "No" or "I don't know". Every correct answer was awarded 1 mark, and an incorrect or unknown answer was assigned 0 marks. The total knowledge score therefore ranged from 0 to 13, with a higher score denoting better knowledge of chronic kidney disease and vice versa. The attitudes section consists of 10 multiple-choice questions that could be answered "agree", "disagree" or "I don't know". All responses were recorded and coded accordingly. The final data were used to assess internal consistency reliability using Cronbach's coefficient; the value of Cronbach's alpha for the questionnaire was 0.87, well above the acceptable threshold for internal consistency. The overall average chronic kidney disease knowledge score was computed from the data received.
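The scoring scheme and reliability check described above can be sketched in a few lines. This is a toy illustration with a made-up answer key and a small 3-item score matrix (the actual questionnaire had 13 knowledge items), not the authors' analysis code:

```python
from statistics import pvariance

def knowledge_score(responses, answer_key):
    # 1 mark per correct answer; wrong or "I don't know" answers score 0
    return sum(1 for r, k in zip(responses, answer_key) if r == k)

def cronbach_alpha(item_scores):
    # item_scores: one row of 0/1 item marks per participant
    k = len(item_scores[0])                       # number of items
    item_vars = sum(pvariance(col) for col in zip(*item_scores))
    total_var = pvariance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - item_vars / total_var)

key = ["Yes", "No", "Yes"]                        # hypothetical answer key
print(knowledge_score(["Yes", "No", "I don't know"], key))  # 2
```

Cronbach's alpha compares the sum of per-item variances with the variance of the total score; highly correlated items inflate the total-score variance, pushing alpha towards 1.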

Statistical Analysis

Descriptive statistical methods were mainly employed to summarize the data on demographic characteristics and responses to questions concerning knowledge of and attitudes towards chronic kidney disease. Data were summarized as frequencies (n) and percentages (%) for categorical variables. Knowledge of chronic kidney disease was assessed by calculating a total cumulative knowledge score for each participant. A mean chronic kidney disease knowledge score with standard deviation was computed for each demographic characteristic.

Multiple linear regression analysis was conducted to identify factors associated with knowledge; the demographic variables were the independent variables and the knowledge score was the outcome variable. To identify factors associated with attitudes, multinomial logistic regression analyses were used. Unstandardized regression coefficients (β) and their 95% confidence intervals (CIs) quantified the associations between variables and attitudes. Likewise, to identify factors associated with practices, binary logistic regression analyses were used, with odds ratios (ORs) and their 95% confidence intervals (CIs) quantifying the associations between variables and practices. Factors were selected with a backward selection procedure in a stepwise regression analysis. Data analyses were performed using SPSS (Statistical Package for the Social Sciences) version 25.0, with p<0.05 considered statistically significant.

Results

A total of 507 participants completed the online and offline questionnaire-based cross-sectional study. The majority of participants were male (67.7%), lived in rural areas (53.5%), had either a bachelor's or master's degree (77.4%), and did not have a family history of kidney stones (91.9%). Demographic characteristics and mean chronic kidney disease knowledge scores of participants are shown in Table 1. The mean chronic kidney disease knowledge score of all study participants was 16.49 (SD = ± 7.0), with scores ranging from 0 to 29. The mean knowledge score of participants from the state of Jammu & Kashmir was nearly 40% lower than that of participants from other states of India. Participants with a master's degree or above had the highest mean chronic kidney disease knowledge score (20.0) among all demographic variables. Table 2 presents the percentage of correct responses by study participants to individual items in the knowledge section of the questionnaire. Most participants (81.3%) were aware that a person can lead a normal healthy life with only one kidney, while 20.1% of participants believed chronic kidney disease cannot be effectively treated with herbal supplements. The majority of participants knew about kidney functions such as urine formation (83.2%), blood cleansing (78.3%), and maintaining blood pressure (71.4%); however, only a few responded that kidneys keep the bones healthy (35.1%).


Table 1: Demographic characteristics and mean CKD knowledge scores of participants (N=507).

Note: Values in bold are above the overall mean chronic kidney disease knowledge score.
N, number of participants; CKD, chronic kidney disease; SD, standard deviation; ROI, rest of India.


Table 2: Percentage of correct responses by study participants to individual items on the knowledge section of the questionnaire.

Note: *True items.

Many participants identified that a urine test (85.4%) or a blood test (62.9%) is used to determine the health of the kidneys. Only 6.5% of participants responded that high blood pressure is a risk factor for chronic kidney disease, and only 10.84% knew that excess stress is not a risk factor, whereas diabetes was recognised as a risk factor by 71.2% of participants. Most participants had knowledge of the disease progression of chronic kidney disease (77.5%) and its fatality (77.9%); nearly 80% knew that smoking, alcohol intake and high blood pressure worsen kidney disease in diabetic people. Table 3 shows the percentage of different responses by study participants to individual items in the attitude section of the questionnaire. Two-thirds of the participants understood that a kidney function test is necessary even when there are no signs of chronic kidney disease. Nearly 42% of participants had no idea about the cost of a kidney screening test. According to the majority of participants (87.4%), people with diabetes and kidney disease should stringently adhere to the medical advice provided to them by doctors. Most participants (83.3%) were interested in knowing the health status of their kidneys. As Figure 1, which shows the distribution of CKD knowledge scores among the study participants, suggests, nearly 65% of participants had a knowledge score in the range of 17 to 24.


Table 3: Percentage of different responses to individual items on the attitude section of questionnaire by study participants.


Figure 1: Distribution of CKD knowledge scores among the participants of study (N=507).

This suggests that most participants had a good amount of knowledge regarding chronic kidney disease. A multiple linear regression was run to predict CKD knowledge score from gender, education, working outside the home, state, and family history of kidney failure. These variables significantly predicted CKD knowledge score, F(8, 498) = 94.787, p < 0.001, R2 = 0.604. All five variables added statistically significantly to the prediction, p < 0.05. Table 4 shows the results of the standard multiple regression analysis between CKD knowledge score and participant characteristics. The multivariate analysis shows higher knowledge scores associated with a higher level of education, such as holding a postgraduate degree, a bachelor's degree, or completing school. Unemployed participants had a significantly lower level of CKD knowledge than those employed. Participants who were single/never married had significantly higher knowledge scores than participants who were married. Participants from the state of Jammu & Kashmir had a significantly lower knowledge score for chronic kidney disease compared to the rest of India. No difference was found in the knowledge scores of participants with and without a family history of kidney stones.


Table 4: Multiple regression analysis between participant characteristics and CKD knowledge score.

Note: aReference; P values in bold are statistically significant P values (p < 0.05).
CKD, chronic kidney disease; OR, odds ratio; CI, confidence interval; ROI, rest of India.

Discussion

Conservative management as a treatment alternative to dialysis and kidney transplantation is gaining recognition in the United States. Kidney Disease: Improving Global Outcomes (KDIGO) strongly advocates conservative management as supportive care in chronic kidney disease and as a priority for improving patient-centered care. The results of this study demonstrate a good amount of knowledge regarding chronic kidney disease. Participants with a master's degree or above had the highest mean CKD knowledge score (20.0) among all demographic variables; our findings are aligned with those of Stanifer et al. [10] and Khalil et al. [11], which also showed that education level is a key determinant of awareness of chronic kidney disease. The overall knowledge score for chronic kidney disease and its risk factors was higher in this study than in the Ethiopian study by Kumela et al. [12]; however, the sample size in that study was smaller than ours. This study found that the kidney's function of keeping the bones healthy had a lower knowledge score, while other kidney functions had comparatively better knowledge scores (Table 4). This study also suggests that the majority of participants believed people with diabetes and kidney disease should stringently adhere to the medical advice provided to them by doctors. Most participants were also interested in knowing the health status of their kidneys, which confirms that the attitude towards chronic kidney disease and related outcomes is significantly good. The management of chronic kidney disease is very challenging because patients are mostly asymptomatic during the early stages of the disease, progression to higher stages is inevitable, and diagnosis is often late. Therefore, strategies to reduce the incidence of end-stage renal disease require effective methods of screening early in the disease process [5].
Early detection of chronic kidney disease allows implementation of treatments and strategies that can influence both the progression of kidney disease and cardiovascular health [7]. The cost of treatment for chronic kidney disease in India consumes a great deal of capital [13]. High blood pressure as a risk factor was associated with a low knowledge score. The economic burden imposed by hypertension on the Indian population is quite high and adds to lifelong expenditure on antihypertensive drugs [14]. A similar gap in knowledge related to blood pressure was found in a study in Saudi Arabia [15]. Earlier detection and improved knowledge of chronic kidney disease and its risk factors will significantly delay the progression of the disease and save a great deal of out-of-pocket expenditure in India.

Conclusion

To conclude, our findings suggest that Indian adults demonstrated good knowledge and a positive attitude regarding chronic kidney disease, which is important for prevention and early detection of the disease. However, knowledge was lower among older adults and less educated groups; improving knowledge in these groups will improve outcomes, delay progression, and reduce the costs associated with treatment. The study findings will help healthcare providers to understand the extent of knowledge and attitudes towards CKD and thus provide relevant education to patients and family members. The kidney's function of keeping the bones healthy had a lower knowledge score, while other kidney functions had comparatively better knowledge scores. The multivariate analysis found higher knowledge scores associated with a higher level of education.


Journals on Microbiology Research

Prevalence of Gastrointestinal Tract Nematodes Parasites in Sheep in Hawasa Town, Southern Ethiopia

Introduction

Sheep and goats are the most numerous of man's domesticated livestock and are especially important in the more extreme climates of the world. Over two-thirds of the total population of sheep and goats occur in less developed countries, where they often make a major contribution to farming enterprises [1]. In Ethiopia, sheep are the second most important livestock species next to cattle, and the country ranks second in Africa and sixth in the world in sheep population [2]. With its great variation in climate and topography, the country possesses one of the largest livestock populations in the world, managed by smallholder farmers under an extensive, low-input traditional management system as an adjunct to crop production [3]. Ethiopia's sheep population, estimated at 29.3 million head, is widely distributed across the diverse agro-ecological zones of the country [4]. Approximately 75% of the sheep are kept on small-scale mixed farms in the highland regions, while the remaining 25% are found in the lowlands. Sheep are traditionally kept in small holdings and are associated with small-scale, resource-poor livestock keepers [5].

In Ethiopia, sheep and their products provide direct cash income through the sale of live sheep and skins. Sheep are a living bank for their owners, serving as a source of immediate cash and as insurance against crop failure, especially where land productivity is low and erratic rainfall, severe erosion, frost and waterlogging are problems. Thus, sheep in the smallholder farming system provide a continuous service to the economic stability and effective operation of the crop production system [6]. Under Ethiopian conditions, livestock are maintained as a principal activity in pastoral production systems, where 50% of household revenue comes from livestock or more than 20% of household food energy is derived directly from livestock. In recent years, pastoralists have shown an increasing interest in keeping larger numbers of sheep and goats, with milk and meat as the two main outputs. Pastoralists depend on their livestock not only for their income but also for their survival; consequently, risk avoidance is very important to them. Livestock management is therefore directed towards risk minimization, which tends to reduce productivity [7].

In spite of the numerical and economic importance of sheep in Ethiopia, productivity is generally low due to inadequate nutrition, poor reproductive performance, disease and poor management. The recurrent loss in productivity and income is often due to parasitic infections, particularly helminth infections, which are common and considered one of the major problems in small ruminant production [8]. Gastrointestinal nematodes cause direct losses through mortality in heavily parasitized animals, especially young animals, as well as the costs incurred in treating and controlling nematode infection. It is well established that parasitized animals perform less efficiently, with feed conversion adversely affected, which reduces carcass quality and quantity and wool growth, resulting in further financial penalties [9]. Although estimation of the economic loss due to nematodes is limited by the lack of accurate estimates of disease prevalence, the rough estimate of annual losses due to endoparasites in Ethiopia is $83 million.

Such losses in sheep production should definitely be minimized through parasite control programs [10]. In some parts of Ethiopia, surveys have been made on the prevalence of gastrointestinal nematode parasites, but most of the information obtained is from abattoir surveys and animals managed on stations. Owing to the basic limitations in scope and coverage of most of the studies conducted in Ethiopia, a sound nematode control strategy has not yet been established in the country. Moreover, the epidemiology of gastrointestinal (GIT) parasites in livestock varies depending on local climatic conditions, such as humidity, temperature, rainfall, vegetation and management practices; these factors largely determine the incidence and severity of the various parasitic diseases [11]. Therefore, it is important to assess the type and level of parasitism in ruminant livestock, in order to determine the significance of parasite infestations and to recommend the most beneficial and economically acceptable control.

However, there is insufficient information on the prevalence of gastrointestinal nematodes of sheep in Hawasa town, and gastrointestinal nematode infection remains an important disease problem of sheep in this area. Therefore, the objectives of this study were to determine the prevalence of sheep GIT nematodes in Hawasa town, to identify the GIT nematode egg types involved and the level of infection in sheep in the town, and to identify risk factors associated with the occurrence of GIT nematodes.

Methods and Materials

Study Area

The study was conducted from November 2018 to April 2019 in Hawasa town, the capital of the Southern Nations, Nationalities and People's Region (SNNPR), located in the northern part of the region, 275 km south of Addis Ababa, the capital city of Ethiopia. Geographically, the area lies at 7°06′ North latitude and 38°48′ East longitude. The agricultural production system is mixed crop-livestock production. In rural areas, mainly local breeds are found, grazing on communal land. The area was selected on the basis of its livestock production potential and the presence of different livestock species and breeds, and it represents the typical mixed crop-livestock production system of the Rift Valley area of Ethiopia. The annual average rainfall of Hawasa town ranges from 800 to 1000 mm, the mean temperature ranges from 20-25°C, and the town is situated at an elevation of 1,697 meters above sea level [12].

Description of Study Population

The target population for this study was the entire sheep population of Hawasa town. The study animals were local breeds of sheep of all age groups and both sexes, selected from the target population.

Study Design

A cross-sectional study was conducted from November 2018 to April 2019 in Hawasa town to address the main objective of the study.

Sample Size and Sampling Methods

Systematic sampling was followed to select the animals, and feces were collected from the rectum of each sheep in the field. The sample size was determined using the formula described by [13], with a 95% confidence interval and 5% desired absolute precision, assuming an expected prevalence of GIT nematodes of 50% since there was no prior documentation of the prevalence in the district. The estimated sample size was 384 according to the following formula: n = Z² × Pexp(1 − Pexp) / d², where n is the required sample size, Z = 1.96 (for 95% confidence), Pexp is the expected prevalence (0.50) and d is the desired absolute precision (0.05).
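The sample size calculation above can be reproduced directly. This is a quick sketch of the standard prevalence-study formula with the parameters stated in the text, not the authors' own code:

```python
def required_sample_size(expected_prev, precision, z=1.96):
    # n = z^2 * p * (1 - p) / d^2  (standard formula for estimating
    # a prevalence with absolute precision d at confidence level z)
    p = expected_prev
    return z ** 2 * p * (1 - p) / precision ** 2

# 50% expected prevalence, 5% precision, 95% confidence
n = required_sample_size(0.50, 0.05)
print(round(n))  # 384
```

The exact value is 384.16; p = 0.5 maximizes p(1 − p), so assuming 50% prevalence gives the most conservative (largest) sample size when the true prevalence is unknown.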

Study Methodology

Fecal samples were collected directly from the rectum of each sheep, placed in screw-capped glass bottles (universal bottles) to prevent the feces from drying out, and brought to Hawassa University Veterinary Parasitology Laboratory for coprological investigation. Parameters such as sex, age, body condition score, conjunctival mucous membrane, management system of the sampled animal, date, season of sample collection and place of sample collection were recorded at the time of sampling, and the samples were labeled accordingly. The collected samples were subjected to qualitative flotation and quantitative McMaster egg-counting parasitological techniques using saturated sodium chloride (specific gravity 1.2) as the flotation fluid. The eggs of parasite species were identified using the keys given by [14]. Samples positive for gastrointestinal nematodes were subjected to egg counting to determine the number of eggs per gram of feces (EPG), performed according to the procedure described in [15]. The degree of infection was categorized as light, moderate or severe (massive) according to the EPG count: counts of 50-799, 800-1200 and over 1200 eggs per gram of feces were considered light, moderate and severe (massive) infection, respectively [16].
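The EPG grading thresholds quoted from [16] can be expressed as a small helper (our own sketch; the label for counts below 50, which the study does not grade, is an assumption):

```python
def classify_epg(epg):
    """Grade infection intensity from a faecal egg count (eggs per gram)
    using the study's thresholds: 50-799 light, 800-1200 moderate,
    over 1200 severe (massive)."""
    if epg < 50:
        return "below threshold"  # assumption: not graded in the study
    if epg <= 799:
        return "light"
    if epg <= 1200:
        return "moderate"
    return "severe"

print(classify_epg(950))  # moderate
```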

Data Management and Analysis

Data collected from each study animal and study area were entered into a Microsoft Excel spreadsheet and then analyzed with STATA-11 statistical software. The chi-square test was used to assess the association between the independent variables (sex, age, body condition score, management system, season of sample collection) and the result. An association between variables was considered statistically significant if the computed P-value was less than 0.05.
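The chi-square test of association used here can be sketched in plain Python for a 2x2 table. This is an illustration only: the study used STATA-11, and the male/female negative counts below are back-calculated from the reported positive counts and prevalence percentages, so they are approximate:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table of observed counts."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # expected cell count = row total * column total / grand total
    exp = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
           [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    obs = [[a, b], [c, d]]
    return sum((obs[i][j] - exp[i][j]) ** 2 / exp[i][j]
               for i in range(2) for j in range(2))

# Rows: male, female; columns: positive, negative. Negatives are
# estimated from the reported 59.7% and 44.2% prevalences, so that
# the row totals sum to the 384 sheep examined.
stat = chi_square_2x2([[105, 71], [92, 116]])
print(stat > 3.841)  # True: exceeds the 5% critical value for 1 df
```

The statistic is compared against the chi-square critical value for one degree of freedom (3.841 at p = 0.05), consistent with the significant sex difference reported in the Results.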

Results

Overall Prevalence of Gastrointestinal Nematodes of Sheep

Of the total of 384 sheep examined, 197 (51.3%) were found to be infected with different types of gastrointestinal nematodes. Of the positive cases, 139 (36.2%) were infected with strongyles, 10 (2.6%) with Strongyloides species, and 48 (12.5%) with both strongyles and Strongyloides species (Table 1).


Table 1: Prevalence of gastrointestinal nematodes of sheep encountered in the study area.

Prevalence of the GIT Nematodes Species by Different Age Groups

Of the 384 sheep examined coprologically for gastrointestinal nematode eggs, 197 (51.3%) were positive. The prevalence was 63 (43.8%) in young, 83 (52%) in adult and 105 (63%) in old animals. There was no significant difference (P>0.05) in the prevalence of gastrointestinal nematodes between the age groups (Table 2).


Table 2: Prevalence of GIT nematode based on different risk factors.

Prevalence of GIT Nematodes Species by Sex Groups

Of the 384 sheep examined coprologically for gastrointestinal nematode eggs, 197 (51.3%) were positive. The prevalence was 105 (59.7%) in males and 92 (44.2%) in females. There was a significant difference (P < 0.05) in the prevalence of gastrointestinal nematodes between the two sexes (Table 2).

Prevalence of the GIT Nematodes Species by Different Body Condition Score

Of the 384 sheep examined, 105, 202 and 77 were categorized as having poor, medium and good body condition scores, respectively. Infection prevalence was significantly higher in animals with poor body condition than in those with medium or good body condition scores (P<0.05). The infection prevalence by body condition grade was 74.2%, 47.5% and 29.9% for poor, medium and good condition, respectively (Table 2).

Prevalence of GIT Nematodes by Management System

By management system, the prevalence was 38 (33%) in intensively, 28 (33.7%) in semi-intensively and 135 (72.6%) in extensively managed animals. The difference in isolation rate of nematode eggs between management systems was statistically significant (P<0.05) (Table 2).

Prevalence of GIT Nematodes Based on Conjunctival Mucous Membrane

By conjunctival mucous membrane colour, the prevalence was 137 (50.7%) in animals with pink and 90 (78.9%) in animals with pale mucous membranes. The difference in isolation rate of nematode eggs by conjunctival mucous membrane colour was not statistically significant (P>0.05) (Table 2).

Prevalence of GIT Nematodes Based on Season

By season of sample collection, the prevalence was 99 (48.8%) in the dry season and 98 (54.1%) in the wet season. The difference in isolation rate of nematode eggs between seasons was not statistically significant (P>0.05) (Table 2).

Quantitative faecal examination of the 197 infected sheep using the modified McMaster technique showed 73 (37.1%), 57 (28.9%) and 67 (34%) with light, moderate and heavy infection, respectively. Most of the infected sheep had faecal egg counts in the range of 50 to 800 EPG or more (Table 3).


Table 3: Level of infestation with GIT nematodes.

Discussion

Gastrointestinal nematodes of sheep are among the important parasitic diseases that reduce the productivity of sheep raised by smallholders under traditional husbandry management in and around Hawassa. Coprological examination for this study using the direct faecal flotation method revealed an overall prevalence of GIT nematodes of sheep in Hawassa district of 51.3%. This result was lower than the 68.1% found in Asella [17] and the 76.3% in Meskan district, Gurage zone [18]. These relative differences in prevalence may arise from differences in climatic or environmental factors that support the survival and development of the infective larval stages of most nematodes [19]. The present study found 36.2% strongyles, 2.6% Strongyloides and 12.5% mixed strongyle and Strongyloides infection. This finding more or less agrees with a previous study conducted in Dembia district, northwest Ethiopia, which reported 41.9% strongyle species [20]. However, it disagrees with previous studies reporting 66.6% strongyle-type and 3.3% Trichuris species in Bedelle [21] and 70.2% strongyle-type and 4.5% Trichuris species in western Oromia [22], and it is much lower than the 97.03% strongyle-type, 45.22% Strongyloides and 30.25% Trichuris species reported in the eastern part of Ethiopia [23]. The current study also showed mixed infection, characterized by the presence of two or more nematode genera in the same sheep, in agreement with the findings of other researchers in some parts of the country [19,22-26].

This study showed that strongyles, which have a direct life cycle, were the most prevalent gastrointestinal nematode parasites of the animals. A significant difference was observed in nematode infection in relation to body condition, with a higher prevalence recorded in animals with poor body condition than in the other groups. This agrees with [27]; the poor body condition might be due to the current parasitic infection, which leads to a poor immunological response to the infective stage of the parasites. Body condition score was significantly associated (P< 0.05) with gastrointestinal nematode infection, with shedding of nematode eggs higher in animals with poor body condition (74.2%) than in those with good body condition (29.9%). This finding agrees with [28], who suggested that well-fed animals develop good immunity that suppresses the fecundity of the parasites. In the present study, when infection rate by age was analysed, old animals seemed to have a slightly higher prevalence of nematodes, which could be related to their higher susceptibility to infection than young animals, but the difference was not statistically significant (P>0.05).

This can be attributed to the failure of acquired immunity in old animals. A similar finding of higher intensity of infection in older sheep has been reported from a study in semi-arid parts of Kenya [29]. On the contrary, [30] documented that adult and old animals develop acquired immunity against helminth infections as they mature, owing to repeated exposure. The overall prevalence of nematodes was higher in male than in female sheep, and the difference was statistically significant (P < 0.05). This might be because male sheep move long distances in search of females for mating or in search of feed, which exposes them to larva-infested areas and contact with faeces. Sex is assumed to be a determinant factor influencing the prevalence of parasitism [31]. However, the present study does not agree with a previous report in which the prevalence of nematodes was higher in females than in males, females being more prone to parasitism during pregnancy and the peri-parturient period due to stress and decreased immune status [15].

The overall prevalence of nematodes by season was 98 (54.1%) of 181 samples in the wet season and 99 (48.8%) of 203 samples in the dry season. The present study agrees with previous reports of high parasite prevalence in the wet season around the world [32-35]. The wet season makes environmental conditions more favourable for the development and survival of parasitic stages, increasing the availability of infective larvae in the rainy and post-rainy seasons. It is well documented that gastrointestinal parasitism in grazing animals is directly related to the availability of larvae on pasture and seasonal pasture contamination [36]. The overall prevalence of nematodes by grazing system was higher under extensive (72.6%) than under intensive (33%) management, and the difference was statistically significant (P<0.05). The higher prevalence of nematode parasites under the extensive management system could indicate that animals are frequently exposed to communal grazing pasture contaminated by the faeces of infected animals.

This study disagrees with the report of [37] on an open grazing system, in which the low prevalence observed could indicate that animals grazing freely in extensive fields had less exposure to infective larvae on the pasture. Of the studied animals, 37.1% were lightly, 28.9% moderately and 34% severely infected. The present findings showed heavily infected sheep during the rainy season and under most traditional systems, where animals are kept extensively, faecal contamination and infective stages are spread over a large area, and heavy infections occur. This agrees with the idea reported by [38].

Conclusion and Recommendations

In general, a high prevalence of strongyle and Strongyloides species of nematode parasites was found in the study area, indicating that gastrointestinal nematodes are an important health problem of sheep. Strongyle and Strongyloides infections, combined with prominent risk factors such as management system, season, sex and body condition, lead to major problems that hamper efficient utilization of the available sheep resources, manifested by a reduced rate of reproduction as well as severe economic losses due to slower growth or death of infected animals.

Based on the above conclusive remarks, the following recommendations are forwarded:

a) Strategic anthelminthic deworming should be given at the beginning of wet season and at the end of dry season.

b) An intensive management system should be adopted in order to avoid communal grazing.

c) Definitive diagnosis should be made by clearly isolating and identifying the parasites involved through coprological examination in the study area.

d) Further studies should be carried out on anthelminthic efficacy and resistance.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Medical Research

Molecular Diagnosis of Theileriosis in Kundhi Buffaloes in Hyderabad, Sindh- Pakistan

Introduction

Ticks are a major problem for buffaloes in tropical nations like Pakistan because they cause health problems and act as vectors for haemoprotozoan diseases such as theileriosis, babesiosis, and anaplasmosis. Exotic (Bos taurus) and crossbred cattle and buffaloes are known to be extremely susceptible, whereas indigenous cattle have built-in disease resistance [1]. In Pakistan, cross-breeding programmes were implemented to improve the genetics of livestock, but the downside was a reduction in resistance to ticks and tick-borne diseases [2]. [3] estimated a global loss of US$ 800 million per year owing to tropical theileriosis. In Pakistan, the annual cost of T. annulata infection is estimated to be 384.3 million US dollars [4]. Theileriosis causes significant economic loss through decreased productivity and mortality [5]. Theileria are small protozoan parasites with round, ovoid, irregular, or bacilliform shapes that belong to the phylum Apicomplexa, order Piroplasmida, and family Theileriidae. Clinically, theileriosis has been linked to symptoms ranging from mild to deadly [6]. In acute cases, theileriosis is diagnosed from clinical symptoms and microscopic examination of stained thin blood smears [7]. However, both native and treated animals become long-term carriers with only a small percentage of infected erythrocytes [8], making parasite detection in blood smears challenging [9].

Long-term carriers are the primary source of infection transmitted by ticks [10]. Disease outbreaks can occur when carrier livestock are transported to non-endemic areas [11]. As a result, detecting piroplasms in carrier animals becomes more difficult. The lack of a clear morphological distinction between the several Theileria species in schizonts and piroplasms hampers species classification on blood slides [12]. Furthermore, the smear approach is prone to false negative results and has little sensitivity when it comes to detecting carrier livestock [13]. The introduction of molecular diagnostic procedures such as the polymerase chain reaction (PCR) has paved the way for more effective diagnosis than previous methods [14]. Based on research conducted on a variety of parasites, PCR has been determined to be more sensitive than traditional approaches [15]. For the identification of Theileria species, [16] developed PCR, reverse line blot assays, and DNA probes. T. annulata has been detected using four diagnostic tests: blood smear examination, lymph node biopsy smear examination, PCR of blood, and PCR of lymph node biopsy samples; among these, the PCR assay was shown to be more sensitive and accurate than microscopic examination [17]. Theileriosis can be diagnosed more accurately with the use of PCR [18]. The goal of this study was to use PCR to assess the presence of theileriosis in Kundhi buffaloes and to reliably diagnose the disease.

Materials and Methods

Collection of Blood Samples

A total of n = 200 blood samples from Kundhi buffaloes were collected, both from animals with clinical signs of theileriosis, such as anorexia, pyrexia, decreased milk production, tick infestation, lymph node enlargement, pale mucous membranes, suspended rumination, bilateral nasal discharge and lacrimation, and from animals that appeared to be healthy. Blood smears were made using blood taken from the jugular vein according to [18]. For PCR, 3 ml of blood was collected in EDTA-coated vacutainers.

Giemsa Staining

Thin blood smears were prepared and stained with Giemsa stain. The parasites were identified using the characters given by [19] as a guide.

Isolation Of DNA from Blood Samples

A DNA extraction kit (HiPura, Himedia) was used to extract genomic DNA from 200 μl of whole blood according to the manufacturer's instructions. The extracted DNA was quantified spectrophotometrically before being run on a 0.8% agarose gel. Extracted DNA aliquots were kept at -20°C until needed.

Polymerase Chain Reaction

The primers were designed from the coding sequence of T. annulata's major merozoite surface antigen (Tams1 gene). The forward and reverse primer sequences are 5′-CCAGGACCACCCTCAAGTTC-3′ and 5′-GCATCTAGTTCCTTGGCGGA-3′, respectively. In a total volume of 15 μl, the PCR reaction contained 30 ng of template DNA, 7.5 μl of Fermentas 2× master mix, 0.5 μl each of the forward and reverse primers (10 pmol/μl), and 5.5 μl of nuclease-free water. In a thermal cycler (Veriti), reactions were started at 95°C for 5 minutes, followed by 37 cycles of 95°C for 30 seconds, 55°C for 30 seconds and 72°C for 30 seconds, a final elongation step at 72°C for 5 minutes, and a final hold at 4°C. Each amplification run comprised a negative control (sterile water) and positive control DNA from T. annulata.
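As a quick sanity check on the reaction setup: the listed volumes sum to 14 μl, so the template DNA volume, which the text does not state, would be 1 μl to reach the 15 μl total. This inference can be sketched as:

```python
# Reaction components in microlitres. The template volume is NOT stated in
# the text; 1.0 ul is our inference from the 15 ul total reaction volume.
reaction = {
    "2x master mix": 7.5,
    "forward primer (10 pmol/ul)": 0.5,
    "reverse primer (10 pmol/ul)": 0.5,
    "nuclease-free water": 5.5,
    "template DNA (30 ng, inferred volume)": 1.0,
}
total = sum(reaction.values())
print(total)  # 15.0
```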

Agarose gel Electrophoresis

Amplified samples were examined by electrophoresis (120 V/208 mA) in a 1.5% agarose gel. Positive and negative controls were run alongside each round of amplification. The gel was stained with ethidium bromide (0.5 μg/ml), visualized on a UV transilluminator and documented (Gel Doc, Syngene).

Statistical Analysis

The agreement between blood smear examination and PCR was assessed using the chi-square test.

Results

Blood Smear Examination

The stained blood films revealed the presence of Theileria piroplasms, forms 0.5-1.5 μm in diameter, in 25 samples (12.50%) from Kundhi buffaloes (Figure 1).


Figure 1: Piroplasmic forms of T. annulata in a microscopic field, Giemsa staining method.

Polymerase Chain Reaction

PCR-based screening utilizing specific primers from the major merozoite-piroplasm surface antigen sequence of the Theileria annulata Tams1 gene revealed 115 samples (57.50%) positive for T. annulata.

Table 1 shows the prevalence of disease in Kundhi buffaloes by age when screened by PCR. The highest prevalence was found in buffaloes over the age of 5 years (75.58%), while the lowest was found in calves under the age of 1 year (20.83%). In T. annulata-positive samples, the expected product size of 430 bp was obtained (Figure 2). Table 2 shows the 2×2 contingency table for blood smear examination and PCR. In Figure 2, PCR products were run on a 1.5% agarose gel, with positive samples showing an amplified product of 430 bp: lane 1, sample 8 (positive); lane 2, sample 11 (positive); lane 3, sample 13 (positive); lane 4, sample 25 (positive); lane 5, sample 41 (positive); lane 6, sample 55 (positive); lane 7, sample 63 (positive); lane 8, positive control; lane 10, DNA ladder (mass ruler low range); lane 11, negative control.


Figure 2: PCR products run on 1.5 % agarose gel positive samples showing amplified product of 430 bp.


Table 1: Age-wise prevalence in Kundhi buffaloes by PCR.


Table 2: 2 x 2 Contingency table for blood smear and PCR.

Note: a = True Positive; b = True Negative.

Sensitivity of PCR = 100%

Specificity of PCR = 42.58%
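Sensitivity and specificity are derived from the 2×2 table as below. The counts in this sketch are hypothetical, chosen only to illustrate the arithmetic; the study's actual cell values are in Table 2:

```python
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple:
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

    Here the blood smear is treated as the gold standard, so "positive"
    means smear-positive and the index test is PCR.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only (not the study's cells)
sens, spec = sensitivity_specificity(tp=30, fp=20, fn=10, tn=40)
print(round(sens, 3), round(spec, 3))  # 0.75 0.667
```

A sensitivity of 100%, as reported above, corresponds to FN = 0: every smear-positive sample was also PCR-positive.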

Statistical Analysis

The chi-square statistic was 8.76. Because this exceeds the critical value of 3.841, the result is significant at the 5% level of significance, providing adequate grounds to conclude that PCR is more effective than microscopic examination in diagnosing theileriosis.

Discussion

Tropical theileriosis frequently results in severe and deadly infection. Anorexia, emaciation, decreased rumination, lacrimation, corneal opacity, nasal discharge, diarrhoea, terminal dyspnoea, and frothy nasal discharge are all symptoms of theileriosis in Kundhi buffaloes [20]. Because they are simple and inexpensive, clinical signs and microscopic examination are commonly used to diagnose piroplasmic infection. However, a lack of sensitivity and precision in the staining process can result in misdiagnosis. Although serological tests are used to diagnose latent infection, false positive and negative results can occur [21]. The PCR approach is more accurate than enzyme-linked immunosorbent assay (ELISA), the immunofluorescent antibody test (IFAT) and the indirect haemagglutination assay (IHA), as well as morphological identification of piroplasmic forms [22]. In a study conducted by [23], the prevalence of Theileria infection by PCR assay was 70% (21 out of 30), while the prevalence by the Giemsa staining method was 30% (9 out of 30). [24] also reported that the numbers of positive cases of theileriosis by PCR and the smear method were 22 (44%) and 8 (16%), respectively, out of 50 blood samples of native buffaloes. According to [25], PCR revealed that 68 of 150 carrier cattle (45.33%) were positive. Our findings show that PCR is more effective than conventional staining in detecting theileriosis, which is consistent with earlier research [26-28]. [29] created a multiplex PCR assay for simultaneous detection of T. annulata, Babesia bovis, and Anaplasma marginale, which they found to be a useful diagnostic tool.

The primers employed in this investigation amplify the Tams1 gene, which encodes the 30 kDa major T. annulata merozoite surface antigen [30]. [31] employed the 30 kDa merozoite surface protein gene as a target sequence, developing specific primers to amplify only T. annulata, similar to our target. T. annulata has been detected using Tams1 gene-specific primers, and the PCR technique was shown to be sensitive and specific [32]. [33] investigated the efficacy of two primer sets, primer set one (N516/N517) and primer set two (Tams1F/Tspm1R), in amplifying the 30 kDa major merozoite surface antigen gene for the diagnosis of T. annulata infection in buffaloes, and concluded that Tams1F/Tspm1R is the primer set of choice for T. annulata diagnosis. This explains why we used Tams1-specific primers in our investigation, and it demonstrates the efficiency of the PCR approach for theileriosis confirmation. [34] found a higher incidence of T. annulata in buffaloes aged 4-6 years, which is consistent with the current study, which found the highest prevalence in buffaloes aged above 5 years. Physiological variables such as oestrus, pregnancy, and lactation cause transient immune suppression, which leads to increased disease occurrence in adult animals, as stated by [35].

Antibodies to sporozoites, schizonts, and piroplasms have been found in immune cows' colostrum and in calves' serum, which protects calves from theileriosis. This could explain the low prevalence of theileriosis in calves younger than a year in our study [36]. The present investigation found that young animals were more resistant to T. annulata than adults, consistent with findings that young animals were more resistant than older buffaloes [37-39]. Age-related resistance in young cattle to most tick-borne protozoa has also been demonstrated [31,32]. The Giemsa-stained blood smears in our investigation gave false negatives on visual examination under a light microscope, indicating that this test has limited sensitivity. This could be due to a variety of factors, including visual errors made during slide examination, very low parasitaemia, hemolysis-induced destruction of piroplasmic forms in red blood cells, and thick, dirty, or inadequately stained blood smears [15]. Furthermore, piroplasms could not be detected microscopically in some samples that were positive on PCR tests. This demonstrates that PCR is superior to blood smear examination. At the 5% level, a statistical comparison of blood smear and PCR revealed a significant difference.

The sensitivity of the PCR method was 100% when blood smear examination was used as the gold standard assay. The results of this study accord with those of [2], who found that PCR had higher sensitivity and accuracy than blood smear examination in detecting T. annulata. In epidemiological studies, [29] confirmed the efficacy of this technology against conventional diagnostic procedures, finding that PCR was more sensitive for the identification of T. annulata infections. Drug therapy lowers parasitaemia levels, making correct diagnosis difficult [10]; under these conditions, PCR assays can be used to detect low parasitaemia in the blood of carrier animals. In the diagnosis of field cases of theileriosis, [31] reported that the PCR test had the highest overall comparative efficacy at 31.6%, followed by microscopic lymph node smear examination at 8.25% and microscopic blood smear examination at 6%. Thus, compared with microscopic blood and lymph node smear testing, the PCR test is more sensitive in detecting low-grade infections in carrier animals, making it more suitable for epidemiological surveys. [1] reported a similar observation, noting that tick-infested carriers are substantial contributors to the infection. Molecular techniques such as PCR overcome the challenges of conventional approaches in detecting and distinguishing Theileria piroplasms. According to [3], the great effectiveness and sensitivity of PCR make it an appealing method for diagnosing tick-borne illnesses [9]. As a result, our research clearly demonstrates that PCR can be used to accurately diagnose theileriosis and to detect carrier animals, which can serve as a possible source of infection to healthy populations via infected ticks.


Open Access Journals on Biomedical Research

Evaluation of Biliary Cholesterol, Biliary Ferritin and Expression of Apo B-100 Gene in Patients of Cholelithiasis

Introduction

Gallstone disease is a common disorder around the world. Choleliths are bile precipitates resulting from abnormalities in the metabolism of cholesterol, bilirubin and bile acids [1]. The prevalence of cholelithiasis in India ranges from 2-29%, being higher in North Indians and females [1-3]. Gallstones are classified according to their chemical composition into cholesterol stones, mixed stones and pigment stones [4]. In Western populations, cholesterol-rich stones are the predominant type (80%), but lower frequencies of cholesterol stones and higher frequencies of pigment stones are seen in many parts of Asia [5]. The pathogenesis of cholesterol stones primarily involves supersaturation of bile with cholesterol, accentuated by accelerated nucleation, with impaired gallbladder emptying among the other factors. The relative concentrations of cholesterol, bile salts and phospholipids in gallbladder bile determine precipitation [5]. Anaemia due to iron deficiency is associated with gallstone disease. Serum ferritin, a marker of iron stores, is low in iron-deficient patients. In a study of patients with cholelithiasis by Aggarwal et al. in 2019, serum ferritin levels were lower in cases of cholelithiasis than in the normal population [6]. Ferritin is also released into bile in haemochromatosis.

In that study, biliary iron concentration (per millimole of bile acid) was increased about twofold (p < 0.05) and biliary ferritin concentration about fivefold (p < 0.001) [7]. The levels of biliary ferritin in gallstone disease have not been studied so far. Genetic variation in cholesterol metabolism can be brought about by different isoforms of lipid transport and receptor molecules in the general population. Apo E and Apo B-100 are prominent transporters that exist in polymorphic states in humans, and certain of their isoforms help in better uptake and delivery of cholesterol [8]. Apo B-100 serves as a ligand for receptor-mediated endocytosis of LDL; its gene is located on chromosome 2. A high frequency of the X+ allele of the Apo B-100 gene has been reported in gallstone patients from China [9]. Heterozygosity (X+/X-) of Apo B-100 may be associated with gallstone disease and gallbladder cancer in India [10,11]. In a study in transgenic rabbits, Apo B-100 expression was associated with higher serum cholesterol and triglyceride levels [12]. In another study, rats fed an iron-rich diet had higher expression of the Apo B-100 gene and higher serum cholesterol and triglyceride levels [13]. Hence this study was undertaken to determine the levels of biliary ferritin and biliary cholesterol in patients with cholelithiasis and to examine the association with expression of the Apo B-100 gene in them.

Methodology

This observational cross-sectional study was conducted in the Department of Surgery and the Department of Biochemistry at UCMS & GTB Hospital, Delhi, from November 2018 to March 2020, with a sample size of 40 cases and an equal number of controls for the gene expression comparison. The research committee of the institute approved the study protocol. All consenting patients diagnosed with symptomatic cholelithiasis and willing to undergo laparoscopic/open cholecystectomy were included, except those with coexisting systemic or inflammatory illness or immunocompromise. For Apo B-100 expression, an equal number of healthy volunteers were included after ultrasound excluded cholelithiasis in them. All patients with cholelithiasis and all controls underwent history taking and radiological and biochemical investigations. A fasting blood sample was taken from patients for haematological evaluation, lipid profile, serum ferritin, and Apo B-100 gene expression; in the controls, only Apo B-100 gene expression was measured. Bile was obtained from the gall bladder of patients immediately after cholecystectomy for evaluation of biliary cholesterol and ferritin levels. Cholesterol and ferritin levels in both serum and bile were measured by ELISA. The reference range for serum ferritin was taken to be 29-248 ng/ml for males and 10-150 ng/ml for females [14].

Serum cholesterol levels less than 200 mg/dl were considered desirable, 200-239 mg/dl borderline high and 240 mg/dl or more high. Gene expression comprised three steps:
a) Extraction of RNA,
b) Synthesis of complementary DNA (cDNA) and
c) Quantification of Apo B-100 gene expression by quantitative Real Time PCR.

Statistical analysis was done using SPSS version 20.0. P-values were calculated using Student's t-test for normally distributed data; for non-normally distributed data, non-parametric tests were used. Correlation was calculated using Pearson's test. Pearson's coefficient (rho) values between 0 and 0.3 (0 and -0.3) indicate a weak positive (negative) linear relationship, and values between 0.3 and 0.7 (-0.3 and -0.7) indicate a moderate positive (negative) linear relationship.
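The interpretation bands described above can be encoded as a small helper (band labels per the text; the function name and the "strong" label for |rho| ≥ 0.7 are our extrapolation of the same scheme):

```python
def interpret_rho(rho: float) -> str:
    """Label a Pearson correlation coefficient by strength and direction.

    |rho| < 0.3: weak; 0.3 <= |rho| < 0.7: moderate; >= 0.7: strong.
    """
    direction = "positive" if rho >= 0 else "negative"
    magnitude = abs(rho)
    if magnitude < 0.3:
        strength = "weak"
    elif magnitude < 0.7:
        strength = "moderate"
    else:
        strength = "strong"
    return f"{strength} {direction}"

print(interpret_rho(-0.4))  # moderate negative
```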

Results

Patient Profile

The mean age of the 40 patients included in the study was 34.65 years, with most patients belonging to the age group of 21-30 years (16/40). The mean age of controls was 34.20 ± 10.38 years and did not differ significantly (p = 0.897). Females formed the majority of participants in both groups: 37 (92.5%) of cases and 33 (82.5%) of controls. The mean BMI (kg/m2) was 22.28 ± 2.81, with the majority (65%) falling in the normal range. The duration of illness ranged from 3 to 60 months, with a mean of 17.2 months.

Haemoglobin

The mean haemoglobin level of the patients was 11.46 g/dl. On the basis of the WHO classification, patients were divided into non-anaemic, mildly anaemic and moderately anaemic groups. Most patients (72.5%) were anaemic (Table 1). The mean age of patients with moderate anaemia was significantly higher than that of those with no or mild anaemia (p = 0.045) (Table 1).

Lipid Profile

Out of the 40 patients, 33 (82.5%) had raised serum cholesterol levels. The mean total cholesterol was 302.46 mg/dL. The mean HDL level was 40.70 mg/dl; based on the American Heart Association criteria, 32 patients had low HDL levels (i.e. <40 mg/dl for males and <50 mg/dl for females) [15]. The mean triglyceride level was 122.53 mg/dl, with 11/40 patients having raised triglyceride levels (>150 mg/dl) [16]. There was a significant moderate negative correlation between triglycerides (mg/dL) and HDL (mg/dL) (rho = -0.4, p = 0.010).

T. Cholesterol and Anaemia

The mean total cholesterol levels of the patients grouped on the basis of haemoglobin level did not vary significantly (p = 0.222), though patients without anaemia had a higher mean serum cholesterol level than those with anaemia (Table 1).


Table 1: Associations of severity of anaemia.

Biliary Cholesterol

The Biliary cholesterol (mg/dL) levels ranged from 17.01 to 311.68 with a mean of 121.29 mg/dL.

1. There was a significant moderate positive correlation between HDL (mg/dL) and biliary cholesterol (mg/dL) (rho = 0.38, p = 0.017). For every 1 unit increase in HDL (mg/dL), biliary cholesterol (mg/dL) increased by 3.18 units.
2. There was a significant moderate positive correlation between duration of illness (months) and biliary cholesterol (mg/dL) (rho = 0.49, p = 0.001). For every 1 unit increase in duration of illness (months), biliary cholesterol (mg/dL) increased by 2.22 units.

Serum Ferritin

The mean serum ferritin of the patients was 12.85 ng/mL. Of the 40 patients enrolled, 25 (62.5%) had serum ferritin within the normal range and 15 (37.5%) had low serum ferritin levels [14].

1. Serum ferritin and anemia
There was a weak positive correlation between hemoglobin (g/dL) and serum ferritin (ng/mL), and this correlation was not statistically significant (rho = 0.07, p = 0.673). There was a visible decrease in numerical value of serum ferritin as severity of anaemia increased but this was not statistically significant (p = 0.967) (Figure 1).
2. There was a significant strong positive correlation between T. Cholesterol (mg/dL) and serum ferritin (ng/mL) (rho = 0.67, p < 0.001). For every 1 unit increase in T. Cholesterol (mg/dL), serum ferritin (ng/mL) increased by 0.04 units.
3. There was a weak negative correlation between biliary cholesterol (mg/dL) and serum ferritin (ng/mL), and this correlation was not statistically significant (rho = -0.19, p = 0.231). Though insignificant (p=0.783), higher biliary cholesterol level was noted with low serum ferritin level (mean biliary cholesterol = 123.16 mg/dL) in comparison to patients with normal serum ferritin (120.16 mg.dL).

Biliary Ferritin

The mean biliary ferritin of the 40 patients was 5.99 ng/mL, with a minimum value of 1.41 ng/mL and a maximum value of 13.65 ng/mL.

1. There was a weak negative correlation between haemoglobin (g/dL) and biliary ferritin (ng/mL), and this correlation was not statistically significant (rho = -0.19, p = 0.249).

Biliary ferritin levels were higher in patients with moderate anaemia as compared to mildly and non-anaemic patients but this trend was not statistically significant (p=0.456). While the mean values of serum ferritin were seen to decrease with increasing severity of anaemia, the mean values of biliary ferritin were seen to increase (Figure 1).

2. There was no appreciable correlation between serum ferritin (ng/mL) and biliary ferritin (ng/mL) (rho = 0, p = 0.992). Mean serum ferritin (12.85 ng/mL) was found to be roughly twice the mean biliary ferritin (5.99 ng/mL).

3. There was a moderate positive correlation between biliary cholesterol (mg/dL) and biliary ferritin (ng/mL), and this correlation was statistically significant (rho = 0.52, p < 0.001). For every 1 unit increase in biliary cholesterol (mg/dL), biliary ferritin (ng/mL) increases by 0.03 units.


Figure 1: Levels and trends of serum and biliary ferritin in each anaemia group.

Apo B-100 Gene

1. On taking all 40 patients as cases and all 40 healthy volunteers as a control group, a 1.001-fold upregulation of the Apo B-100 gene was observed. This difference was not significant (p=0.740) (Figure 2a).
2. On taking the mean delta Ct value of the controls and evaluating the expression of the Apo B-100 gene in each patient against it, the gene was found to be upregulated in 20 patients and downregulated in 20 patients, an equal distribution at the individual level (Figure 2b).
3. The mean T. Cholesterol (mg/dL) in patients with upregulation of the Apo B-100 gene was 328.04, while that in patients with downregulation was 276.88 (Table 2). 18/20 (90%) patients with upregulation of the Apo B-100 gene had high cholesterol levels, compared with 12/20 (60%) of those with downregulation.


Figure 2: a. Fold change (Apo B-100 gene expression).
b. Individual fold change (Apo B-100 gene expression) of all patients.


Table 2: Mean value of parameters with gene expression.

Discussion

Cholelithiasis is a multifactorial disease with a complex interplay of metabolic, genetic and environmental factors affecting the disease process. In this study, the mean age of the 40 patients included was 34.65 years, with most patients in their 3rd and 4th decades of life. This age distribution was similar to the studies conducted by Prakash, et al. [3] and Kanth, et al. [2]. Most other studies of the epidemiology of cholelithiasis found the disease to be more common in a relatively higher age group [1,17,18]. We also found the mean age to be significantly higher in the moderately anaemic group, suggesting that anaemia becomes more common with increasing age. Since anaemia and increasing age have both been described as independent risk factors for cholelithiasis [1,6], an additive effect of both seems to be at work in causation of the disease. A majority of the patients in this study were females (37/40, 92.5%). It is well established in the literature that cholelithiasis occurs more commonly in the female population. Most of the patients in our study (26/40, 65%) had a BMI in the normal range, suggesting that cholelithiasis is not just a disease of the obese; this was also seen by Kharga, et al. [19].

Lipid Profile

As found by Miquel, et al. [20] and Loria, et al. [21], in this study too a majority (75%) of patients had raised total cholesterol, and 7.5% of patients had borderline raised cholesterol. Additionally, a majority (32/40, 80%) of the patients had low HDL levels, while only 11 out of the 40 patients (27.5%) had raised triglyceride levels. We found a significant negative correlation (rho = -0.4, p = 0.010) between HDL and triglyceride levels, indicating an increase in triglyceride levels as HDL levels fall. Thijs, et al. [22] and Hayat, et al. [23] described an association of low HDL and raised triglyceride levels with gallstone disease but did not find any significant association of serum cholesterol levels with cholelithiasis. Alexander, et al. [24] conducted a study on 80 patients of cholelithiasis and found that only 12 (15%) patients had hypercholesterolemia, while 79 (98.75%) patients had low HDL levels and 23 (28.7%) patients had raised triglyceride levels. In our study, we found raised cholesterol levels (82.5%) and low HDL levels (80%) in a majority of the patients.

These findings lead us to believe that while serum cholesterol plays a role in the pathogenesis of gallstone disease, low HDL levels may be a more relevant indicator for the disease. Da Luz, et al. observed in patients of coronary artery disease that the TG/HDL and T. cholesterol/HDL ratios are better indicators of disease than individual serum lipid values [25]. Jiang, et al. found a strong positive association of CAD with cholelithiasis [26]. In the present study, we also found that as serum HDL levels decreased, there was an increase in triglyceride levels. Based on these findings, we would like to postulate that it is the ratio of lipid parameters, and not the individual level of each parameter, that may serve as the better indicator for cholelithiasis. Alterations in these ratios could identify individuals prone to cholelithiasis, and an attempt could then be made at disease prevention through lifestyle modification and management of dyslipidemia.
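The composite indices discussed above are simple quotients of values already reported in the Results; a minimal sketch (the function name is illustrative, not from the cited studies):

```python
def lipid_ratios(total_chol: float, tg: float, hdl: float) -> dict:
    """Composite lipid indices discussed in the text (Da Luz, et al.):
    TG/HDL and total cholesterol/HDL, all inputs in mg/dL."""
    return {"TG/HDL": tg / hdl, "TC/HDL": total_chol / hdl}

# With the study's mean values (illustration only):
print(lipid_ratios(302.46, 122.53, 40.70))
```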

Biliary Cholesterol

In our study, there was a weak negative correlation between serum and biliary cholesterol (rho = -0.09), and this was not statistically significant (p=0.592, (Table 3)); this is in contrast to Kanwar, et al. [27], who found a positive correlation between serum cholesterol and biliary cholesterol (rho = 0.61, p=0.001). We expected higher biliary cholesterol levels based on the prevalent concept of cholesterol supersaturation leading to gallstone formation, but our findings were contrary to this. It has been shown by Nagi and Arora [28] that cholesterol stones (13.5%) are much less common in the North Indian population than mixed stones (80.7%). In a study conducted by Chandran, et al. [29], amongst 200 patients of cholelithiasis, 52 (26%) had cholesterol stones, 76 (38%) had mixed stones and 72 (36%) had pigment stones. Such a high proportion of pigment and mixed stones may explain the lower levels of biliary cholesterol in our population, as pigment stones are known to form mainly due to unconjugated bilirubin [30]. Since these studies had small sample sizes, further evaluation with a larger sample size along with stone analysis is warranted. We also found a significant moderate positive correlation between HDL and biliary cholesterol (rho = 0.38, p = 0.017, (Table 3)). HDL carries cholesterol in circulation and transports excess peripheral cholesterol to the liver for secretion into bile.


Table 3: Correlation of biochemical parameters (rho value, p value).

This process is termed reverse cholesterol transport. Biliary cholesterol can be derived from plasma lipoproteins like HDL and LDL, or it can be synthesized in the liver. The more relevant source of cholesterol secreted into bile is cholesterol derived from plasma lipoproteins; of these, high density lipoprotein (HDL) appears to be the preferential contributor to cholesterol secreted into bile [31,32]. We found a significant moderate positive correlation between duration of illness and biliary cholesterol (rho = 0.49, p = 0.001). Supersaturation of bile with cholesterol is an ongoing process that does not stop after stone formation, with biliary cholesterol levels progressively increasing over time [33,34]. This also corresponds to the increased chance of gallstone development as time progresses in predisposed individuals.

Serum Ferritin

In the present study, serum ferritin showed a numerical decrease with increasing severity of anaemia (p = 0.967) (Figure 1). A significant, strong positive correlation was seen between T. cholesterol and serum ferritin (rho = 0.67, p < 0.001, (Table 3)). A similar significant positive correlation was also found by Kim, et al. [35]. While we found no significant association between serum cholesterol and anaemia, the mean serum cholesterol level in non-anaemic patients (347.18 mg/dL) was higher than that found in anaemic patients (285.41 mg/dL). This was similar to the findings of Chowta, et al. [36] and Choi, et al. [37], who studied the relationship of anaemia with the lipid profile, and of Kumar, et al. [38], who also found lower serum cholesterol levels in anaemic patients of cholelithiasis. This is explainable through multiple factors such as increased erythropoiesis requiring more cholesterol, plasma dilution, and increased uptake of cholesterol by the reticuloendothelial system in anaemic patients [36]. Thus, anaemic patients tend to have lower ferritin levels and lower cholesterol levels, supporting the findings of the present study. Iron deficiency anaemia, low serum ferritin levels and raised serum cholesterol levels were expected to be associated with cholelithiasis [6,39,40]. We also found that as serum cholesterol levels increased, there was an increase in serum ferritin levels.

This contradiction suggests a complex interplay of ferritin, cholesterol and anaemia in cholelithiasis that cannot be explained simply by the individual levels of these factors. There was a higher biliary cholesterol level in patients with low serum ferritin than in patients with normal serum ferritin, though this was not significant (p=0.783), similar to a study conducted by Sahu, et al. [41]. Iron deficiency causes altered gallbladder motility, leading to biliary stasis and an increase in the cholesterol saturation of bile [42,43].

Biliary Ferritin

There was no appreciable correlation between serum ferritin and biliary ferritin (rho = 0, p = 0.992, (Table 3)). While the mean values of serum ferritin were seen to decrease with increasing severity of anaemia, the mean values of biliary ferritin were seen to increase. There is a lack of adequate literature on the excretion of ferritin in bile, and we could not find any studies examining the role of biliary ferritin in cholelithiasis. However, Hultcrantz, et al. [7], in 1989, conducted a study on 10 patients of idiopathic hemochromatosis and 2 groups of controls (14 patients of gallstone disease and 16 healthy individuals). They found that hepatic iron concentration was increased eightfold, biliary iron concentration twofold and biliary ferritin concentration about fivefold in patients of hemochromatosis as compared to the controls. They also found higher levels of biliary ferritin in gallstone patients than in healthy individuals. Hultcrantz, et al. further concluded that hepatocytes secrete ferritin into bile, since very little ferritin was detected in bile duct cells on electron microscopy, making them an unlikely source of biliary ferritin. Hepatocyte excretion of ferritin into bile has also been established in animal models earlier [44]. Verma, et al. [45] found that the levels of biliary iron were higher than those of serum iron in cases of cholelithiasis as well as controls.

A significantly higher amount of iron in pigment stones than in cholesterol stones was also found. Chandran, et al. [29] found a significantly higher amount of iron in mixed stones than in pigment and cholesterol stones; while the amount of iron was higher in pigment stones than cholesterol stones, this difference was not significant. An increase in biliary ferritin as anaemia increases, and the relatively lower levels of biliary cholesterol, suggest a more frequent occurrence of pigment and mixed stones in our population. Since we found increased biliary ferritin with increasing anaemia (an iron deficient state) in patients of cholelithiasis, the relationship between cholelithiasis and iron deficiency anaemia appears more complex than previously thought. Ferritin may have a larger role in the pathogenesis of cholelithiasis than merely being a marker of iron deficiency anaemia; it may not be just the decrease in gallbladder motility and resulting biliary stasis due to iron deficiency that causes cholelithiasis [46]. It is possible that the biliary route of excretion of ferritin is an incidental finding with no serious implications for gallstone formation. The concentration of ferritin in bile may also be due to fluid absorption from the gall bladder concentrating its contents, along with biliary stasis secondary to iron deficiency.

Based on our findings, we would like to hypothesize that ferritin gets concentrated in bile in the gall bladder after excretion from hepatocytes and delivers iron into the bile, which may play a role in the formation of gallstones. In our study, there was a significant moderate positive correlation between biliary cholesterol and biliary ferritin (rho = 0.52, p < 0.001, (Table 3)). It is known that cations in bile such as iron, calcium, copper and zinc promote cholesterol crystallization [45]. We would like to postulate that in patients with cholelithiasis, as biliary cholesterol increases, there is a tendency for these cations to increase as well, leading to crystallization of cholesterol and formation of gallstones. An increase in biliary ferritin levels suggests its role in delivering iron, which serves as a cation in cholesterol-rich bile, leading to cholelithiasis.

Apo B-100 Gene

On taking all 40 patients as cases and all 40 healthy volunteers as a control group, a 1.001-fold upregulation of the Apo B-100 gene was observed (p=0.740). On assessing individual cases, an equal distribution of up- and downregulation in patients of cholelithiasis was seen (Figure 2b). In 2005, Wang, et al. assessed intestinal cells of mice and found that absence of expression of intestinal Apo B-48, but not Apo B-100, reduces biliary cholesterol secretion and cholelithogenesis, possibly by decreasing intestinal absorption and hepatic bioavailability. While the findings for Apo B-100 expression were similar to those of the present study, they studied gene expression in intestinal cells of mice, whereas we studied it in blood from humans. Since we found a relatively higher expression of Apo B-100 in the patient group, though not statistically significant, Apo B-100 may have a role in the etiology of cholelithiasis.

Apo B-100 and Lipid Profile: In our study, we found that total cholesterol and serum triglyceride levels were much higher in patients with upregulation of the Apo B-100 gene than in patients who showed downregulation (Table 2). Even though our findings were not significant, they were in concordance with the findings of Fan, et al. [12], who found a threefold increase in serum triglyceride and serum cholesterol levels in transgenic rabbits carrying the human Apo B-100 gene as compared to non-transgenic controls. 90% of patients with upregulation of the gene had raised serum cholesterol levels, while only 60% in the downregulation group had raised serum cholesterol levels. While this difference is not significant (p = 0.081), it points to a higher frequency of raised cholesterol levels in the upregulation group than in the downregulation group and supports the association of raised serum cholesterol levels with an increased incidence of cholelithiasis. Since Apo B-100 upregulation is associated with higher cholesterol and triglyceride levels, assessment of gene expression may predict patients prone to develop hypercholesterolemia and hypertriglyceridemia and, further, an increased risk of gallstone formation.

Apo B-100 and Ferritin: In our study, we found higher levels of serum ferritin and biliary ferritin in patients with upregulation of the Apo B-100 gene than in patients with downregulation (Table 2). Silva, et al. [13] studied the effects of dietary iron on lipid metabolism in rats. They found that rats on an iron dextran diet had higher expression of Apo B-100 and higher levels of serum cholesterol and triglycerides than rats that did not have iron in their diet. Since ferritin stores iron, and higher levels of serum and biliary ferritin were found in patients with upregulation of the Apo B-100 gene, it is possible that the higher levels of ferritin led to the upregulation of the Apo B-100 gene, which further led to an increase in serum cholesterol and triglyceride levels, leading to an enhanced risk of cholelithiasis.

Conclusion

Findings in the present study indicate that elevated serum cholesterol and anaemia play a role in the pathogenesis of cholelithiasis. HDL is a major source of cholesterol in bile. HDL levels and ratios of lipid parameters may be better indicators of cholelithiasis. We found a positive correlation between biliary cholesterol and biliary ferritin, which seemed to correspond with a positive correlation between serum cholesterol and serum ferritin. We also found an increase in levels of biliary ferritin with increased severity of anaemia in patients of cholelithiasis. These findings suggest a role of ferritin in delivering iron to bile, leading to an increased risk of gallstone formation. The levels of biliary cholesterol were relatively low, and biliary ferritin levels increased with increasing anaemia. Iron levels are known to be higher in pigment stones, which may explain pigment stones being more common in our study population. We found a net upregulation of the Apo B-100 gene in patients of cholelithiasis; moreover, serum cholesterol and triglyceride levels in Apo B-100 upregulated patients were raised, suggesting a possible role of Apo B-100 in cholelithiasis. The higher levels of serum and biliary ferritin could have led to the upregulation of the Apo B-100 gene. This relationship between iron metabolism, lipid metabolism and cholelithiasis, which exists even at the genetic level, requires further studies for evaluation.


Evaluation of Biliary Cholesterol, Biliary Ferritin and Expression of Apo B-100 Gene in Patients of Cholelithiasis

Introduction

Gall stone disease is a common disorder around the world. Choleliths are bile precipitates resulting from abnormalities in the metabolism of cholesterol, bilirubin and bile acids [1]. The prevalence of cholelithiasis in India ranges from 2-29%, being higher in North Indians and females [1-3]. Gall stones are classified according to their chemical composition into cholesterol stones, mixed stones and pigment stones [4]. In western populations, stones rich in cholesterol are the predominant type (80%), but lower frequencies of cholesterol stones and higher frequencies of pigment stones are seen in many parts of Asia [5]. The pathogenesis of cholesterol stones primarily involves supersaturation of bile with cholesterol, with accelerated nucleation and impaired gallbladder emptying as contributing factors. The relative concentrations of cholesterol, bile salts and phospholipids in gallbladder bile determine precipitation [5]. Anaemia due to iron deficiency is associated with gall stone disease. Serum ferritin, a marker of iron stores, exhibits low levels in iron deficient patients. In a study done on patients with cholelithiasis by Aggarwal, et al. in 2019, serum ferritin levels were seen to be lower in cases of cholelithiasis as compared to those in the normal population [6]. Ferritin is also released in bile in hemochromatosis.

In the study reporting this, biliary iron concentration (per millimole of bile acid) was increased about twofold (p < 0.05) and biliary ferritin concentration about fivefold (p < 0.001) [7]. The levels of biliary ferritin in gallstone disease have not been studied so far. Genetic variation in cholesterol metabolism can be brought about by different isoforms of lipid transport and receptor molecules in the general population. Apo E and Apo B-100 are prominent transporters that exist in polymorphic states in humans, certain isoforms of which help in better uptake and delivery of cholesterol [8]. Apo B-100 serves as a ligand for receptor-mediated endocytosis of LDL; the gene is located on chromosome 2. A high frequency of the X+ allele in Apo B-100 genes has been reported in gallstone patients from China [9]. Heterozygosity (X+/X-) of Apo B-100 may be associated with gallstone disease and gallbladder cancer in India [10,11]. In a study in transgenic rabbits, it was found that Apo B-100 expression is associated with higher serum cholesterol and triglyceride levels [12]. In another study, rats fed an iron-rich diet had higher expression of the Apo B-100 gene and higher serum cholesterol and triglyceride levels [13]. Hence, this study was undertaken to determine the levels of biliary ferritin and biliary cholesterol in patients of cholelithiasis and to assess the expression of the Apo B-100 gene in them.

Methodology

This observational cross-sectional study was conducted in the Department of Surgery and the Department of Biochemistry at UCMS & GTB Hospital, Delhi, from November 2018 to March 2020, with a sample size of 40 cases and an equal number of controls for gene expression analysis. The research committee of the institute approved the study protocol. All consenting patients diagnosed with symptomatic cholelithiasis and willing to undergo laparoscopic/open cholecystectomy were included, except those with coexisting systemic or inflammatory illness or immunocompromise. For Apo B-100 expression, an equal number of healthy volunteers were included after ultrasound to exclude cholelithiasis in them. All patients of cholelithiasis and controls underwent history taking and radiological and biochemical investigations. A fasting blood sample of patients was taken for hematological evaluation, lipid profile, serum ferritin, and Apo B-100 gene expression. In the controls, only Apo B-100 gene expression was measured. Bile was obtained from the gall bladder of patients immediately after cholecystectomy for evaluation of biliary cholesterol and ferritin levels. Cholesterol and ferritin levels in both serum and bile were measured by ELISA. The reference range for serum ferritin was taken to be 29-248 ng/mL for males and 10-150 ng/mL for females [14].

Serum cholesterol levels less than 200 mg/dL were considered desirable, 200-239 mg/dL borderline high, and >240 mg/dL high. Gene expression analysis comprised three steps:
a) Extraction of RNA,
b) Synthesis of complementary DNA (cDNA), and
c) Quantification of Apo B-100 gene expression by quantitative real-time PCR.
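The text does not state which relative-quantification formula was used in step (c); the standard choice for qRT-PCR fold change is the 2^-ΔΔCt (Livak) method, sketched here as an assumption (the reference gene is not named in the text):

```python
def fold_change(ct_target_case: float, ct_ref_case: float,
                ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression by the 2^-ddCt (Livak) method (assumed, not stated in the text).

    dCt  = Ct(target gene) - Ct(reference gene), per group
    ddCt = dCt(cases) - dCt(controls)
    fold change = 2 ** -ddCt; values > 1 indicate upregulation in cases.
    """
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_case - d_ct_ctrl)
```

A fold change of about 1.001, as reported in the Results, corresponds to a ΔΔCt very close to zero, i.e. essentially identical mean expression in cases and controls.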

Statistical analysis was done using SPSS version 20.0. p-values were calculated using Student's t-test for normally distributed data; for non-normally distributed data, non-parametric tests were used. Correlation was calculated using the Pearson test. Coefficient (rho) values between 0 and 0.3 (0 and -0.3) indicate a weak positive (negative) linear relationship; values between 0.3 and 0.7 (-0.3 and -0.7) indicate a moderate positive (negative) linear relationship.
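The interpretation bands stated above can be encoded directly; a minimal sketch (treating |rho| > 0.7 as "strong" is an assumed extension, since the text only defines bands up to 0.7):

```python
def correlation_strength(rho: float) -> str:
    """Classify |rho| per the bands stated in the text:
    <0.3 weak, 0.3-0.7 moderate; >0.7 treated as strong (assumed extension)."""
    r = abs(rho)
    if r < 0.3:
        return "weak"
    if r <= 0.7:
        return "moderate"
    return "strong"
```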

Results

Patient Profile

The mean age of the 40 patients included in the study was 34.65 years, with most patients (16/40) in the 21-30 year age group. The mean age of controls was 34.20 ± 10.38 years and did not differ significantly (p = 0.897). Females formed the majority of participants in both groups: 37 (92.5%) of cases and 33 (82.5%) of controls. The mean BMI was 22.28 ± 2.81 kg/m², with the majority (65%) falling in the normal range. The duration of illness ranged from 3 to 60 months, with a mean of 17.2 months.

Haemoglobin

The mean haemoglobin level of the patients was 11.46 g/dL. On the basis of the WHO classification, patients were divided into non-anaemic, mildly anaemic and moderately anaemic groups. Most (72.5%) patients were anaemic (Table 1). The mean age of patients with moderate anaemia was significantly higher than that of patients with no or mild anaemia (p=0.045) (Table 1).

Lipid Profile

Out of the 40 patients, 33 (82.5%) had raised serum cholesterol levels. The mean of T. Cholesterol (mg/dL) was 302.46. The mean HDL level of the patients was 40.70 mg/dl. Based on the American Heart Association criteria, 32 patients had low HDL levels (i.e. <40 for males and <50 for females) [15]. The mean triglyceride level of the patients was 122.53 mg/dl with 11/40 having raised triglyceride levels (>150 mg/dl) [16]. There was a significant moderate negative correlation between Triglycerides (mg/dL) and HDL (mg/dL) (rho = -0.4, p = 0.010).

T. Cholesterol and Anaemia

The mean total cholesterol levels in the patients grouped on basis of hemoglobin levels did not vary significantly (p=0.222), though patients without anaemia had a higher mean serum cholesterol level than those with anaemia (Table 1).

biomedres-openaccess-journal-bjstr

Table 1: Associations of severity of anaemia.

Biliary Cholesterol

The Biliary cholesterol (mg/dL) levels ranged from 17.01 to 311.68 with a mean of 121.29 mg/dL.

1. There was a significant moderate positive correlation between HDL (mg/dL) and biliary cholesterol (mg/dL) (rho = 0.38, p = 0.017). For every 1 unit increase in HDL (mg/dL), the biliary cholesterol (mg/dL) was seen to increase by 3.18 units.
2. There was a significant moderate positive correlation between duration of illness (months) and Biliary cholesterol (mg/dL) (rho = 0.49, p = 0.001).For every 1 unit increase in duration of illness (months), the biliary cholesterol (mg/dL) was seen to increase by 2.22 units.

Serum Ferritin

The mean serum ferritin (ng/mL) of the patients was 12.85. Out of the 40 patients enrolled, 25 (62.5%) patients had serum ferritin within normal range and 15(37.5%) patients had low serum ferritin levels [14].

1. Serum ferritin and anemia
There was a weak positive correlation between hemoglobin (g/dL) and serum ferritin (ng/mL), and this correlation was not statistically significant (rho = 0.07, p = 0.673). There was a visible decrease in numerical value of serum ferritin as severity of anaemia increased but this was not statistically significant (p = 0.967) (Figure 1).
2. There was a significant strong positive correlation between T. Cholesterol (mg/dL) and serum ferritin (ng/mL) (rho = 0.67, p = <0.001). For every 1 unit increase in T. Cholesterol (mg/dL), the serum ferritin (ng/mL) increases by 0.04 units.
3. There was a weak negative correlation between biliary cholesterol (mg/dL) and serum ferritin (ng/mL), and this correlation was not statistically significant (rho = -0.19, p = 0.231). Though insignificant (p=0.783), higher biliary cholesterol level was noted with low serum ferritin level (mean biliary cholesterol = 123.16 mg/dL) in comparison to patients with normal serum ferritin (120.16 mg.dL).

Biliary Ferritin

The mean biliary ferritin (ng/mL) of the 40 patients was 5.99 with a minimum value of 1.41 ng/mL and a maximum value of 13.65 ng/mL.

1. There was a weak negative correlation between haemoglobin (g/dL) and biliary ferritin (ng/mL), and this correlation was not statistically significant (rho = -0.19, p = 0.249).

Biliary ferritin levels were higher in patients with moderate anaemia as compared to mildly and non-anaemic patients but this trend was not statistically significant (p=0.456). While the mean values of serum ferritin were seen to decrease with increasing severity of anaemia, the mean values of biliary ferritin were seen to increase (Figure 1).

2. There was a weak negative correlation between serum ferritin (ng/mL) and biliary ferritin (ng/mL), and this was not statistically significant (rho = 0, p = 0.992). Mean serum ferritin (12.85 ng/mL) was found to be twice the mean biliary ferritin (5.99 mg/dL).

3. There was a moderate positive correlation between biliary cholesterol (mg/dL) and biliary ferritin (ng/mL), and this correlation was statistically significant (rho = 0.52, p = <0.001). For every 1 unit increase in biliary cholesterol (mg/dL), the biliary ferritin (ng/mL) increases by 0.03 units.

biomedres-openaccess-journal-bjstr

Figure 1: Levels and trends of serum and biliary ferritin in each anaemia group.

Apo B-100 Gene

1. On taking all 40 patients as cases and all 40 healthy volunteers as a control group, a 1.001 fold change upregulation of Apo B-100 gene was observed. This difference was not significant (p=0.740) (Figure 2a).
2. On taking the mean delta ct value of controls and evaluating the expression of Apo B-100 gene in each patient against it, the result showed that the gene was upregulated in 20 patients and downregulated in 20 patients exhibiting equal distribution on the individual level (Figure 2b).
3. The mean T. Cholesterol (mg/dL) in patients with upregulation of Apo B-100 gene was 328.04 while that in patients with downregulation was 276.88 (Table 2). 18/20 (90%) patients with upregulation of Apo B-100 gene had high cholesterol levels and 12/20 (60%) with downregulation had high cholesterol levels.

biomedres-openaccess-journal-bjstr

Figure 2: a. Fold change (Apo B-100 gene expression).
b. Individual fold change (Apo B-100 gene expression) of all patients.

biomedres-openaccess-journal-bjstr

Table 2: Mean value of parameters with gene expression.

Discussion

Cholelithiasis is a multifactorial disease having a complex interplay of metabolic, genetic and environmental factors that affect the disease process. In this study, the mean age of the 40 patients included was 34.65 years, with most patients in their 3rd and 4th decades of life. This age distribution was similar to the study conducted by Prakash, et al. [3] and Kanth, et al. [2]. Most other studies which aimed at studying the epidemiology of cholelithiasis found the disease to be more common in a relatively higher age group [1,17,18]. We also found the mean age to be significantly higher in the moderately anaemic group suggesting that anaemia increases with increasing age. Since anaemia and increasing age have both been described as independent risk factors for cholelithiasis [1,6], an additive effect of both seems to be at work in causation of the disease. A majority of the patients in this study were females (37/40, 92.5%). It is a well-established fact in literature that cholelithiasis occurs more commonly in the female population. Most of the patients in our study (26/40, 65%) had a BMI in the normal range suggesting that cholelithiasis is not just a disease of the obese; also seen by Kharga, et al. [19].

Lipid Profile

As found by Miquel, et al. [20] and Loria, et al. [21], in this study also, a majority (75%) of patients had raised total cholesterol and 7.5% patients had borderline raised cholesterol. Additionally, majority (32/40, 80%) of the patients had low HDL levels while only 11 out of the 40 patients (27.5%) had raised triglyceride levels. We found a significant negative correlation (rho = -0.4, p = 0.010) between HDL and triglyceride levels indicating an increase in triglyceride levels as HDL levels fall. Thijs, et al. [22] and Hayat, et al. [23] described an association of low HDL and raised triglyceride levels with gallstone disease but did not find any significant association of serum cholesterol levels with cholelithiasis. Alexander, et al. [24] conducted a study on 80 patients of cholelithiasis and found that only 12 (15%) patients had hypercholesterolemia while 79 (98.75%) patients had low HDL levels, and 23 (28.7%) patients had raised triglyceride levels. In our study, we found raised cholesterol levels (82.5%) and low HDL levels (80%) in a majority of the patients.

These findings lead us to believe that while serum cholesterol plays a role in the pathogenesis of gallstone disease, low HDL levels may be a more relevant indicator for the disease. Da Luz, et al. observed in patients with coronary artery disease that the ratios TG/HDL and total cholesterol/HDL are better indicators of disease than individual serum lipid values [25]. Jiang, et al. found a strong positive association of CAD with cholelithiasis [26]. In the present study, we likewise found that as serum HDL levels decreased, triglyceride levels increased. Based on these findings, we postulate that the ratios of lipid parameters, rather than the individual levels of each parameter, may serve as better indicators for cholelithiasis. Alterations in these ratios can identify individuals prone to cholelithiasis, and an attempt can then be made at disease prevention through lifestyle modification and management of dyslipidemia.
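The lipid ratios discussed above are simple arithmetic on a standard lipid panel. The following sketch (in Python, with entirely hypothetical values, not study data) illustrates how the two proposed ratios would be derived from individual serum measurements:

```python
# Illustrative sketch only: computing the TG/HDL and total cholesterol/HDL
# ratios proposed as candidate indicators of cholelithiasis risk.
# All values below are hypothetical examples, not patient data.

def lipid_ratios(total_chol, hdl, tg):
    """Return (TG/HDL, TC/HDL) for serum values given in mg/dL."""
    return tg / hdl, total_chol / hdl

# Hypothetical patient: TC 240 mg/dL, HDL 35 mg/dL, TG 180 mg/dL
tg_hdl, tc_hdl = lipid_ratios(240, 35, 180)
print(f"TG/HDL = {tg_hdl:.2f}, TC/HDL = {tc_hdl:.2f}")
```

Any cut-off values that would flag a patient as "at risk" would have to come from larger studies; none are established by the data presented here.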

Biliary Cholesterol

In our study, there was a weak negative correlation between serum and biliary cholesterol (rho = -0.09), which was not statistically significant (p = 0.592, (Table 3)); this is in contrast to Kanwar, et al. [27], who found a positive correlation between serum and biliary cholesterol (rho = 0.61, p = 0.001). We expected higher biliary cholesterol levels based on the prevalent concept of cholesterol supersaturation leading to gallstone formation, but our findings were contrary to this. Nagi and Arora [28] have shown that cholesterol stones (13.5%) are much less common in the North Indian population than mixed stones (80.7%). In a study by Chandran, et al. [29] of 200 patients with cholelithiasis, 52 (26%) had cholesterol stones, 76 (38%) had mixed stones and 72 (36%) had pigment stones. Such a high proportion of pigment and mixed stones may explain the lower levels of biliary cholesterol in our population, as pigment stones are known to form mainly due to unconjugated bilirubin [30]. Since these studies had small sample sizes, further evaluation with a larger sample size along with stone analysis is warranted. We also found a significant moderate positive correlation between HDL and biliary cholesterol (rho = 0.38, p = 0.017, (Table 3)). HDL carries cholesterol in circulation and transports excess peripheral cholesterol to the liver for secretion into bile.
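All of the rho values quoted in this discussion are Spearman rank correlations. For readers unfamiliar with the statistic, a minimal pure-Python sketch follows (ranks with average-rank tie handling, then the Pearson correlation of the rank vectors); the data shown are hypothetical, not the study's:

```python
# Minimal sketch of the Spearman rank correlation (rho) used throughout
# this discussion. Pure Python; in practice scipy.stats.spearmanr would
# normally be used. Example data are hypothetical.

def _ranks(xs):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 0-based positions i..j, shifted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Perfectly monotone hypothetical data -> rho = 1.0
print(spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]))
```

Because Spearman rho depends only on ranks, it captures monotone associations (such as rising biliary ferritin with rising biliary cholesterol) without assuming linearity or normality.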


Table 3: Correlation of biochemical parameters (rho value, p value).

This process is termed reverse cholesterol transport. Biliary cholesterol can be derived from plasma lipoproteins such as HDL and LDL, or it can be synthesized in the liver. The more relevant source of cholesterol secreted into bile is cholesterol derived from plasma lipoproteins; of these, high density lipoprotein (HDL) appears to be the preferential contributor [31,32]. We found a significant moderate positive correlation between duration of illness and biliary cholesterol (rho = 0.49, p = 0.001). Supersaturation of bile with cholesterol is an ongoing process that does not stop after stone formation, with biliary cholesterol levels progressively increasing over time [33,34]. This also corresponds to the fact that the chance of gallstone development increases as time progresses in predisposed individuals.

Serum Ferritin

In the present study, serum ferritin showed a numerical decrease with increasing severity of anaemia (p = 0.967 (Figure 1)). A significant, strong positive correlation was seen between total cholesterol and serum ferritin (rho = 0.67, p < 0.001, (Table 3)). A similar significant positive correlation was also found by Kim, et al. [35]. While we found no significant association between serum cholesterol and anaemia, the mean serum cholesterol level in non-anaemics (347.18 mg/dL) was higher than that found in anaemics (285.41 mg/dL). This was similar to the findings of Chowta, et al. [36] and Choi, et al. [37], who studied the relationship of anaemia with lipid profile, and of Kumar, et al. [38], who also found lower serum cholesterol levels in anaemic patients with cholelithiasis. This is explainable through multiple factors such as increased erythropoiesis requiring more cholesterol, plasma dilution, and increased uptake of cholesterol by the reticuloendothelial system in anaemics [36]. Thus, anaemics tend to have lower ferritin levels and lower cholesterol levels, supporting the findings of the present study. Iron deficiency anaemia, low serum ferritin levels and raised serum cholesterol levels were all expected to be associated with cholelithiasis [6,39,40]. We also found that as serum cholesterol levels increased, there was an increase in serum ferritin levels.

This contradiction suggests a complex interplay of ferritin, cholesterol and anaemia in cholelithiasis that cannot be explained simply by the individual levels of these factors. There was a higher biliary cholesterol level in patients with low serum ferritin than in patients with normal serum ferritin, though this was not significant (p = 0.783), similar to a study by Sahu, et al. [41]. Iron deficiency causes altered motility of the gall bladder, leading to biliary stasis and an increase in the cholesterol saturation of bile [42,43].

Biliary Ferritin

There was a negligible correlation between serum ferritin and biliary ferritin (rho ≈ 0, p = 0.992, (Table 3)). While the mean values of serum ferritin decreased with increasing severity of anaemia, the mean values of biliary ferritin increased. There is a lack of adequate literature on the excretion of ferritin in bile, and we could not find any studies on the role of biliary ferritin in cholelithiasis. However, Hultcrantz, et al. [7], in 1989, conducted a study in 10 patients with idiopathic hemochromatosis and 2 control groups (14 patients with gallstone disease and 16 healthy individuals). They found that, compared to the controls, hepatic iron concentration was increased eightfold, biliary iron concentration twofold, and biliary ferritin concentration about fivefold in the patients with hemochromatosis. They also found higher levels of biliary ferritin in gallstone patients than in healthy individuals. Hultcrantz, et al. further concluded that hepatocytes secrete ferritin into bile, since very little ferritin was detected in bile duct cells on electron microscopy, making them an unlikely source of biliary ferritin. Hepatocyte excretion of ferritin into bile has also been established earlier in animal models [44]. Verma, et al. [45] found that the levels of biliary iron were higher than those of serum iron in cases of cholelithiasis as well as in controls.

A significantly higher amount of iron in pigment stones than in cholesterol stones was also found. Chandran, et al. [29] found a significantly higher amount of iron in mixed stones than in pigment and cholesterol stones; while the amount of iron was higher in pigment stones than in cholesterol stones, this difference was not significant. An increase in biliary ferritin as anaemia worsens, together with relatively lower levels of biliary cholesterol, suggests a more frequent occurrence of pigment and mixed stones in our population. Since we found increased biliary ferritin with increasing anaemia (an iron deficient state) in patients with cholelithiasis, the relationship between cholelithiasis and iron deficiency anaemia appears more complex than previously thought. Ferritin may have a larger role in the pathogenesis of cholelithiasis than merely being a marker of iron deficiency anaemia, and it may not be just the decrease in gallbladder motility and resulting biliary stasis due to iron deficiency that causes cholelithiasis [46]. It is possible that the biliary route of excretion of ferritin is an incidental finding with no serious implications for gallstone formation. The concentration of ferritin in bile may also result from fluid absorption in the gall bladder concentrating its contents, along with biliary stasis secondary to iron deficiency.

Based on our findings, we hypothesize that ferritin becomes concentrated in the bile within the gall bladder after excretion from hepatocytes and delivers iron into the bile, which may play a role in the formation of gallstones. In our study, there was a significant moderate positive correlation between biliary cholesterol and biliary ferritin (rho = 0.52, p < 0.001, (Table 3)). It is known that cations in bile such as iron, calcium, copper and zinc promote cholesterol crystallization [45]. We postulate that in patients with cholelithiasis, as biliary cholesterol increases, there is a tendency for these cations to increase as well, leading to crystallization of cholesterol and formation of gallstones. The increase in biliary ferritin levels suggests a role in delivering iron, which serves as a cation in cholesterol-rich bile, leading to cholelithiasis.

Apo B-100 Gene

Taking all 40 patients as cases and all 40 healthy volunteers as the control group, a 1.001-fold upregulation of the Apo B-100 gene was observed (p = 0.740). On assessing individual cases, an equal distribution of up- and downregulation was seen in patients with cholelithiasis (Figure 2b). In 2005, Wang, et al. assessed intestinal cells of mice and found that absence of expression of intestinal Apo B-48, but not Apo B-100, reduces biliary cholesterol secretion and cholelithogenesis, possibly by decreasing intestinal absorption and hepatic bioavailability. While their findings on Apo B-100 expression were similar to the present study, they studied gene expression in intestinal cells of mice, whereas we studied it in human blood. Since we found a relatively higher expression of Apo B-100 in the patient group, though not statistically significant, Apo B-100 may have a role in the etiology of cholelithiasis.
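The study does not state how the fold change was computed. A common approach for relative gene-expression quantification from qPCR data is the 2^(-ΔΔCt) method; the sketch below shows that calculation with entirely hypothetical Ct values, and is not a description of this study's actual protocol:

```python
# Hedged sketch: one common way to obtain a gene-expression fold change
# from qPCR data is the 2^(-ΔΔCt) method. The study does not specify its
# quantification method; all Ct values here are hypothetical.

def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """2^(-ΔΔCt): ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(case) - ΔCt(control)."""
    d_ct_case = ct_target_case - ct_ref_case
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_case - d_ct_ctrl)

# Hypothetical Ct values: target barely lower in cases -> fold change near 1
print(round(fold_change_ddct(24.99, 18.0, 25.0, 18.0), 3))  # ≈ 1.007
```

A fold change of about 1.0, as reported here (1.001, p = 0.740), corresponds to essentially no net difference in expression between the groups, which is consistent with the equal split of up- and downregulation among individual patients.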

Apo B-100 and Lipid Profile: In our study, the levels of total cholesterol and serum triglycerides were much higher in patients with upregulation of the Apo B-100 gene than in patients who showed downregulation (Table 2). Even though our findings were not significant, they were in concordance with those of Fan, et al. [12], who found a threefold increase in serum triglyceride and serum cholesterol levels in transgenic rabbits carrying the human Apo B-100 gene compared with non-transgenic controls. 90% of patients with upregulation of the gene had raised serum cholesterol levels, while only 60% in the downregulation group did. While this difference is not significant (p = 0.081), it points to a higher frequency of raised cholesterol levels in the upregulation group and supports the association of raised serum cholesterol levels with an increased incidence of cholelithiasis. Since Apo B-100 upregulation is associated with higher cholesterol and triglyceride levels, assessment of gene expression may identify patients prone to develop hypercholesterolemia and hypertriglyceridemia and, further, an increased risk of gallstone formation.

Apo B-100 and Ferritin: In our study, we found higher levels of serum ferritin and biliary ferritin in patients with upregulation of the Apo B-100 gene than in patients with downregulation (Table 2). Silva, et al. [13] studied the effects of dietary iron on lipid metabolism in rats and found that rats on an iron dextran diet had higher expression of Apo B-100 and higher levels of serum cholesterol and triglycerides than rats without iron in their diet. Since ferritin stores iron, and higher levels of serum and biliary ferritin were found in patients with upregulation of the Apo B-100 gene, it is possible that the higher ferritin levels led to upregulation of the Apo B-100 gene, which in turn increased serum cholesterol and triglyceride levels, leading to an enhanced risk of cholelithiasis.

Conclusion

Findings in the present study indicate that elevated serum cholesterol and anaemia play a role in the pathogenesis of cholelithiasis. HDL is a major source of cholesterol in bile, and HDL levels and ratios of lipid parameters may be better indicators of cholelithiasis. We found a positive correlation between biliary cholesterol and biliary ferritin, which seemed to correspond with the positive correlation between serum cholesterol and serum ferritin. We also found an increase in levels of biliary ferritin with increased severity of anaemia in patients with cholelithiasis. These findings suggest a role of ferritin in delivering iron to bile, leading to an increased risk of gallstone formation. The levels of biliary cholesterol were relatively low, and biliary ferritin levels increased with increasing anaemia; since iron levels are known to be higher in pigment stones, this may explain why pigment stones are more common in our study population. We found a net upregulation of the Apo B-100 gene in patients with cholelithiasis; serum cholesterol and triglyceride levels were also raised in Apo B-100 upregulated patients, suggesting a possible role of Apo B-100 in cholelithiasis. The higher levels of serum and biliary ferritin could have led to the upregulation of the Apo B-100 gene. This relationship between iron metabolism, lipid metabolism and cholelithiasis, which exists even at the genetic level, requires further study for evaluation.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Microbiology Research

The Importance of Involving Student Volunteers and Nursing Students During the COVID-19 Pandemic

Covid-19 infection has spread rapidly, with consequences for the health of the population as well as burdening the country's health system and the social well-being of its citizens. It was declared a global pandemic in March 2020 [1]. Significant changes were evident during the Covid-19 pandemic in the education, employment, and health sectors [2]. As the workload of health workers is great, I believe it is necessary to include as many volunteers as possible who can help take care of patients in the information sector and support families and patients, especially the vulnerable population. The education of nurses in Montenegro takes place at the Medical Faculty of the University of Montenegro, and the education of health technicians in vocational medical schools. Since March 2020, students have not attended hospitals for exercises and practical classes but instead use school offices and classrooms, while nursing students use the faculty office and "online distance learning" [3].

I believe it is necessary to adopt a strategy of involving medical students and nursing students in helping to vaccinate the population and in other activities where extensive experience is not needed, in order to reduce the pressure on the health sector, with supervision by experienced nurses. School and faculty management should devise a strategy to involve students in volunteer work in a way that motivates them but never forces them to work, because young people can make a great contribution to the fight against the Covid-19 pandemic. Since the beginning of the Covid-19 pandemic, a total of 185 health workers with secondary education have volunteered at the Clinical Center of Montenegro, but that number is not enough because it refers to only one health institution, the Clinical Center of Montenegro in Podgorica. Volunteering has not reached all Covid center hospitals in our country, yet volunteers could relieve the nursing staff and ease the work in those hospitals from which nurses were mobilized to help in Covid centers in other hospitals in the area.

It is important to emphasize that the Clinical Center does not take nursing students as volunteers, but rather graduate health professionals who have not yet been employed, so volunteer work will help them gain new knowledge and experience and earn strong recommendations that help them find employment and meet the conditions for taking the internship exam. We hope that the Covid-19 pandemic will end soon and that we will draw conclusions from it in both professional and private life.


Open Access Journals on Biomedical Research

Aggregation-Induced Emission Characteristics in Carbon Dots for Detection of Nitroaromatic Explosives

Prospective

Detection of nitroaromatic explosives such as Trinitrotoluene (TNT), Dinitrotoluene (DNT) and Picric Acid (PA) is of great concern for homeland security, battlefield protection, and industrial and environmental safety control [1,2]. These nitroaromatic explosives are primary constituents of many unexploded landmines worldwide [3]. Among them, TNT is a widely used explosive found to be poisonous and carcinogenic, and it can adversely affect male fertility [4-6]. Soil and groundwater in war zones can contain toxic levels of these NACs as well as their degradation products; thus, these nitroaromatic explosives are environmental contaminants toxic to living organisms. On the other hand, the explosive power of the nitroaromatic compound PA is higher than that of TNT, and PA is highly reactive as it complexes with metals to generate shock-sensitive explosive metal salts [7]. The use of PA in the manufacture of rocket fuel, fireworks, and matches releases a large quantity of it into the environment [8,9]. In view of this, detection of explosives is very important for combating terrorism, maintaining national security and providing environmental safety. Various methods are available for the detection of these nitroaromatics, such as GC-MS, Ion-Mobility Spectroscopy (IMS), surface enhanced Raman spectroscopy and various other spectroscopic techniques [10]. However, these methods are difficult to use in the field due to their high cost and limitations in selectivity and sensitivity.

In this context, fluorescence signaling is one of the first choices due to its high detection sensitivity and selectivity [11]. However, the emission of various fluorescent probes is often quenched at high concentrations or in an aggregate state, a phenomenon known as Aggregation-Caused Quenching (ACQ). In contrast to ACQ, some molecules behave differently: they are nonluminescent in the solution state but become strongly emissive when aggregated, and these are termed "Aggregation-Induced Emission" (AIE) active materials [12]. Fluorescent Carbon Quantum Dots (CDs) are environmentally friendly, easy to synthesize, and easy to modify by doping, which makes them an excellent candidate for applications in optoelectronics, sensors, solar cells, bioimaging and so on [13]. These properties give CDs an edge over other fluorescent materials [14-16]. Recently, the AIE property in CDs has added a new dimension to their optical properties. CDs with AIE characteristics were first reported by Gao et al. in 2013 [17], where adenosine-5-triphosphate induces the aggregation of the CDs. Since then, AIE in CDs has become a preferred choice for scientists due to its sensitivity to pH change, large Stokes shift, photostability and biocompatibility, which can be fruitful for various applications [18]. From various experiments, it was observed that aggregation in CDs leads to structural rigidity, which restricts the non-radiative relaxation pathways and hence improves the fluorescence intensity. The Aggregation-Induced Emission (AIE) phenomenon in Carbon Quantum Dots (CDs) has been found to improve their optical properties and has opened a new research area for the potential application of these materials. AIE in CDs has been utilised for various applications, similar to those of AIE macromolecules. However, no research so far has used AIE in CDs for the detection of nitroaromatic explosives. In view of this, we believe that AIE coupled with CDs may open a new pathway for the detection of nitroaromatic explosives.


Journals on Medical Research

For a Critical Reading of the COVID Age. Biopower, Nosopolitic and “War-Like” Logic Supporting the Crisis

Introduction

The critical sociological theory, the Marxian thought and even the motto of the English Royal Society, "nullius in verba", invite us to "not trust anyone's words"; that is to say, nothing is what it appears, and we must not believe everything that those who govern tell us. That means media representations must always be examined with skepticism and critical spirit, as they occult logics, dynamics and dispositifs functional to the system in terms of managing the conflicting implosive forces, crises and critical limits that could cause it to collapse [1]. In a nutshell, we now feel more than ever the absolute need for a social critique that demonstrates again how policy and science collaborate to literally incorporate the consumer values on which the system is founded. In this case, by riding the covid, we see how the current populist ultraliberal policy tries to implement new totalizing nosopolitics focused on wealth, health and body performance, fully functional to the self-regulating automated mechanisms of reproduction and resilience of an absurd system condemned to grow indefinitely [2]. We can also detect how an ontogenetically unfair profit-centered system makes use of war-like strategies to impose advanced forms of biopower, as well as exploiting the collateral side effects of all kinds of crises to derive commercial benefits and promote recovery. In short, I try to approach the covid from an unconventional sociopolitical perspective, limiting myself to delineating some relevant themes of a much broader problematic discourse, aimed at stimulating attention on the underlying systemic logics whose collateral damage risks vaporizing the liquid society and opening dystopian scenarios, including a new era of totalitarianism [3].

Covid, Nothing Else

Although the evidence confirms all its limits and paradoxes, the system tries to overcome its own crises with elastic strategies of domination of the masses assisted by technical progress. In this case, the covid crisis is being managed through policies consisting of a mix of anti-social measures and ultra-precautionary hygienic norms, including the virtualization and de-humanization of work, medical assistance and education, deeply affecting ordinary life, behaviors and relations [4]. But, in my opinion, the debate (indeed, the near-total absence of public contradictory debate) about the appropriateness and legal justification of the current nosopolitics seems somewhat tainted by a passive acceptance of the official narration. In fact, leaving out the various hypotheses about the genesis of a new beta Coronaviridae mutant spread from Wuhan, let us briefly review what we are led to believe about the drama of the covid. In the two-year period 2020-2021, the WHO registered 133,931 deaths out of 5,043,620 cases. We can roughly deduce that the average lethality rate of the covid is 2.7%, and then in Italy the mortality rate in the general population comes to 0.02%.

According to the Italian ISS, the average age of deceased patients is 82 years, with 67.7% of them suffering from at least three chronic debilitating diseases. The reported lethality rate of the covid under the age of 50 is 1.2%; the hospitalization rate under the age of 19 is negligible, and mortality close to zero [5]. Although the infection is usually inapparent and the effective contagiousness of infected asymptomatic people questionable, the silent infection was soon considered the real crux of the problem; consequently, quarantines isolate mostly healthy people and natural immunization is legally abolished. In parallel, unspecified amounts of mass screening tests without scrutiny, false diagnostic tests, "covid per caso" cases (hospitalized patients accidentally discovered to be covid-positive), and the manipulation of health performance parameters keep inflating the daily war bulletins and supporting war-like biopolitics [6].

Biopower and Nosopolitic Logic: Some Example from the Past

We can use the AIDS epidemic due to HIV, a viral agent identified in 1983, as a good model to bring out occulted biopower logics [7]. Although the disease was initially believed to involve only restricted risk groups, the public authorities nevertheless adopted massive nosopolitics aimed at protecting the healthcare system, while in parallel a terrorizing media campaign invaded public space and the syndemic side effects of the disease were finally unmasked. While the panic exploded, homosexuals were stigmatized and confined, and right-wing, homophobic and racist movements rode a political crisis (while, curiously, nowadays these groups protest against liberticidal measures), putting a strain on the founding values of democracy [8]. Subsequently, in 2003, there was a new excess of mortality and a healthcare system collapse, due this time to an exceptionally hot summer, while the scandal of transfusions with infected blood was conveniently covered up; similarly, in 1976, during the iatrogenic epidemic of Guillain-Barré syndrome, President Gerald Ford had himself photographed in public while undergoing anti-flu vaccination [9].

On the contrary, in 1918 American authorities decided to channel collective emotions toward the world war, so that the public crisis due, this time, to a new flu virus was minimized. A decade before the war, a draconian quarantine (a measure adopted in the 1300s in Italy to cope with the "black death", consisting of the 40-day forced confinement of ships, crews, passengers and goods in the harbors) was used to persecute the Chinese community of San Francisco [10]. In times of covid, biopower again attacks migrants and coloured people, again targeted by unprecedented police violence. The biological and health motivations that justified the Nazi-fascist racial hate towards Jews, homosexuals, Gypsies, Slavs and even communists are sadly well known [11]. In 1968, against the unconventional de-ideologized pacifism of the beat generation, the police in Italy were used to spray disinfectant, in order to prevent the spread of a moral infection, after clearing the fields occupied by the hippies.

But let us also remember the crucial role played by epidemics in the rise of capitalism, as the germs introduced by Columbus's landings allowed the conquistadores to quickly annihilate the Aztec (1521) and Inca (1526) empires.

Later, together with the African slaves intended to replace the indigenous people exterminated by the European germs, yellow fever and malaria also invaded the Americas [12]. Summarizing, in comparison with previous pandemics (AIDS, H1N1, SARS, MERS, Ebola, Spanish flu, Black Death), and based on what is reported by Lhetinen and Brunila, the covid seems to be managed "as a direct function of nationalism", while never before have such massive war-like authoritarian measures, "state sponsored and state-controlled mitigation", been registered [13].

Towards a Political Economy of the Bare Being

In simple terms, following the Foucauldian theoretical framework, we can consider biopower as a political strategy of social control practiced by means of a strict regulation of bodily functioning and the medicalization of all dimensions of human life. From this perspective, the strict control of Health and Welfare through demographic, epidemiologic, life expectancy, longevity, differential vulnerability and other vital statistics about the population, combined with wealth markers, plays a crucial role in the current ultraliberal policy. In the covid age, new nosopolitics powered by advancements in the fields of sensors and computational, data and web sciences allow the system to survive through the pervasive sanitization of bodies, institutions and organizations. Indeed, we can safely say that the medicalization of the system and the mcdonaldization of Health now form a single confused reality.

On the other side, although the State sows panic to make fearful people docile and obedient, paradoxically the masses willingly support a regime that makes them feel more protected, as in the liquid world the body is experienced as the last resource to be exploited to affirm one's own existence. In addition, a "management of affects through the political ontology of war" is perfectly compatible with the ongoing political economy of life strategies based on nosopolitics that promise the masses biosecurity in exchange for obedience, surveillance, homologation and national identification [14]. But after these two years of a "war" apparently still far from being won (despite armies of heroes, discriminating green immunity passports, suspension of basic rights, state-of-war unitary governments headed by wartime presidents, unelected in Italy, bombastic terrorist and patriotic propaganda, strategic overestimation of casualties and deaths, censorship of counterinformation, suspension of salary, impossibility to work, and monetary sanctions for those sparse fringes of deserters, traitors, rebels and heretics who refuse to be vaccinated), it seems to me that the "invisible" enemy has become ever more changeable and invincible.

Conclusion

Unsuccessful national and district quarantines, business shutdowns, closure of borders, and smartphone-assisted epidemic monitoring confirm that the "white flag principle" gives far more benefits than victory over the virus [15], even because a broader conceptualization of health, comprising a fluctuating condition between well-being and virtual disease to be tested perpetually, opens precious big data markets precisely in the healthcare sector [16]. Ultimately, even the benefits of a lost war serve to recover the economy, while letting society quickly shrink to swarms of anaesthetized video-addicted bodies, all differently similar [17]. In addition, from unpredictable interactions between nosopolitics and technical advances (including sensory assemblages, embedded chips, autonomous self-learning drones and robots, idrugs, artificial intelligence, wearable biometric devices, cryptology, and 5G-powered web applications), we can expect nothing more than a de-humanized automated condition within somewhat dystopian environmental scenarios [18,19]. But since every form of domination also involves specific modes of resistance, History entrusts to minorities who, armed with torches, bravely dig into the facts and "reject the horror of unlike and the prospect of a bare vitality, after being humiliated", the opportunity to promote radical changes [20].


Open Access Journals on Immunological Diseases

How to Optimize Glucose Dynamics Via Chrono-Eating?

Philosophy and Innovation

The anterior part of the hypothalamus hosts a bilateral structure known as the suprachiasmatic nucleus (SCN). This is the central pacemaker of the circadian system that regulates circadian rhythms, acting as the central clock alongside peripheral clocks in the liver. These clocks are involved in the regulation of the circadian rhythms of glucose metabolic dynamics [1,2]. Glucose tolerance decreases in the evening and overnight, simply and logically because fewer nutrients are needed nocturnally [3]. As such, eating large meals and overeating sugary and starchy meals during the evening and overnight should be avoided to reduce diabetes and obesity risks. Circadian rhythms of glucose dynamics and metabolism entail reduced insulin sensitivity and pancreatic β-cell function in the evening vs. the morning, suggesting that glucose dynamics possess internal biological rhythms [2]. Whole-body insulin resistance may be an initial step in type 2 diabetes (T2D) development. Nonetheless, hyperglycemia and associated diabetes depend on hepatic rather than muscle insulin resistance [4]. Furthermore, muscle insulin resistance and energy over-intake may lead to hepatic lipogenesis, further increasing hepatic insulin resistance and reducing insulin's effects on hepatic glucose dynamics [4]. As such, nocturnal glucose over-supply from extensive night eating increases insulin resistance and could ultimately lead to an increased risk of diabetes and obesity [5].
As stated above, glucose dynamics are strictly orchestrated by circadian physiology. Hence, shifting eating time from the morning to the evening may elevate blood glucose and cause related metabolic issues. According to the 'chronotype' concept, people are either morning or evening eaters, and evening eaters have the undesirable habit of eating more extensively overnight. Chronotype thus impacts and interferes with chrono-eating. Night-eaters and shift-workers have been reported to exhibit reduced glucose tolerance and are therefore at greater risk of diabetes, essentially owing to altered eating times and disturbed circadian rhythms of glucose dynamics [6]. Moreover, greater blood melatonin has been demonstrated in night eaters, also suggesting that glucose dynamics are impaired by disturbed circadian biology [2]. Melatonin apparently plays a role in the circadian regulation of glucose dynamics via the sleep-wake cycle. As such, the discovery of MTNR1B, the melatonin receptor 1b gene, has fueled interest in researching possible melatonin effects on circadian glucose dynamics.
Elevated melatonin levels following eating might harm glucose dynamics by decreasing glucose tolerance. Melatonin peaks overnight, which coincides with the circadian reduction in glucose tolerance [7]. Thus, the greater risk of diabetes in shift-working night eaters may be caused by a disturbance in the circadian rhythmicity of glucose dynamics. An accepted hypothesis for melatonin's action is its inhibitory effect on insulin secretion [7]. Chrono-eating entails that energy-dense foods should preferably be eaten in the morning, when physical and mental activity begins and increases [5]. Breakfast eating is highly encouraged, whereas night eating is strongly discouraged [8]. Supporting this, greater postprandial glucose levels have been demonstrated in breakfast skippers who instead consumed large later meals [9]. In addition to eating time, eating and exercise ought to be coordinated for optimal cell function and nutrient/waste metabolism, although more research is required to clarify this. Chrono-eating, as a growing science and practice, offers a simple, workable strategy to help optimize glucose dynamics and possibly prevent diabetes [10]. Consequently, chrono-eating is postulated to play an influential role in improving glucose dynamics and quality of life in today's stressful times.

Conclusion

Glucose dynamics are orchestrated by circadian physiological rhythms, and glucose tolerance decreases in the evening and overnight. As such, nocturnal eaters display hyperglycemia overnight and possibly during the day as well. The elevated blood glucose, along with augmented insulin resistance, may well lead to obesity and T2D. Chrono-eating is therefore a rising science and practice that deserves public awareness and attention as a means of improving glucose dynamics and preventing metabolic complications such as diabetes in today's stressful life.