Open Access Journals on Zoology

Fungal Skin Diseases and Related Factors in Outpatients of Three Tertiary Care Hospitals of Dhaka, an Urban City of Bangladesh: Cross-Sectional Study

Introduction

Globally, fungal skin diseases are very common in humans. As a densely populated developing country with poor hygiene and sanitation practices, Bangladesh is no exception with respect to fungal skin infections. The skin protects us from microbes, helps regulate body temperature, and permits the sensations of touch, heat, and cold. Because it interfaces with the environment, the skin plays an important immune role in protecting the body against pathogens. It is subject to a wide range of medical conditions and infections, ranging from simple manifestations to complicated ones such as skin cancer. Symptoms and severity of skin disorders vary greatly. They can be temporary or permanent and may be painless or painful. Some have situational causes, while others may be genetic. Some skin conditions are minor, and others can be life-threatening. Fungal, bacterial, parasitic, and viral skin infections are very common even in otherwise healthy people. Several types of parasitic, bacterial, and fungal infections cause negligible mortality, but most of these diseases have a chronic course and cause prolonged suffering [1].
The skin is the body’s initial defense against parasites, fungi, bacteria, viruses, and other microbes, yet skin and venereal diseases account for a large share of illness. About 50% of people in Bangladesh suffer from skin disorders in their lifetime. Skin infections are very frequent owing to environmental, natural, occupational, and individual variations in habitat, and their incidence increases when people are crowded together and facilities for washing the body and clothing are limited. Recurrence, excessive use of chemicals and cosmetics, environmental pollution, and delayed marriage are among the major factors leading to the initiation and transmission of these diseases.
About 80% of the population of Bangladesh lives in rural areas, where poverty, illiteracy, ignorance, large families, disease, and disasters are constant companions. With a growing population and deteriorating socio-economic conditions, this population pressure pushes all the modifiable socio-demographic conditions in favor of disease occurrence, recurrence, and complications. In addition, overcrowding, urbanization, industrialization, migration, excessive use of chemicals and cosmetics, environmental pollution, the greenhouse effect, low education, delayed marriage, and multiple sexual partners are also major factors driving the spread and transmission of these diseases.
Skin and venereal diseases are among the major public health problems in developing countries. Although they occur in all classes of society, people living in insanitary and poor housing conditions suffer more; poverty-stricken people with poor hygienic habits and unclean clothing are the usual victims. Symptoms of infection depend on the type of organism that has caused the infection, and both symptoms and appearance also depend on the part of the body infected. Many studies have shown that 30-40% of our population suffers from skin diseases, of which about 80% are scabies and pyogenic infections.
Children are the worst sufferers from these diseases (Khanum and Alam 2010). The relationship between skin and venereal diseases in diabetic patients of different age groups and sociodemographic characteristics is very complicated. Sociodemographic aspects are important to understand because different societies and social groups explain the causes of illness differently, shaping the type of treatment they believe in and to whom they turn if they fall ill (Khanum et al. 2007).
In human anatomy, the skin is the largest external organ, covering the whole body. It plays a very significant role in immunity by defending against external microbes and pathogens. Moreover, the elements of the skin help the body regulate its temperature and create the sensations of heat, cold, and touch. However, this important organ is exposed to a variety of infections and medical conditions, varying from simple acne to intricate forms of skin cancer. Worldwide, skin disease is among the most common of human diseases. It can affect individuals at any time during their lifetime [1], can strike at any age, and spans all societies and cultures. Over time, skin disease can lead to systemic disorders, and its damaging effects can cause physical disability and even death [2].
In 2010, the Global Burden of Disease [GBD] study reported that skin diseases ranked fourth as a leading cause of non-fatal disease burden, affecting both high- and low-income countries [3]. In 2013, the GBD study reported that skin diseases were responsible for 39 million years lived with disability [YLDs] and contributed 1.79% of the global burden of disease in disability-adjusted life years [DALYs] [4].

Fungal Disease: Ringworm (Dermatophytosis)

Ringworm, also known as dermatophytosis or Tinea, is a fungal infection of the skin. The name “ringworm” is a misnomer, since the infection is caused by a fungus, not a worm. Ringworm infection can affect both humans and animals. The infection initially presents with red patches on affected areas of the skin and later spreads to other parts of the body. The infection may affect the skin of the scalp, feet, groin, beard, or other areas. Ringworm can go by different names depending on the part of the body affected.
1. Tinea capitis [Ringworm of the scalp] is a fungal infection that affects the scalp.
2. Tinea corporis [Ringworm of the body] is a fungal infection that affects the skin of the body.
3. Tinea cruris [Jock itch] is a fungal infection that affects warm, moist areas such as the buttocks, groin, and inner thighs.
4. Tinea pedis [Athlete’s foot] is a fungal infection that affects the skin of the feet.
5. Tinea unguium [Onychomycosis] is a fungal infection that affects the fingernails or toenails.
6. Tinea faciei is a fungal infection that affects the face.
7. Tinea barbae is a fungal infection that affects the beard area in men.
8. Tinea manuum is a fungal infection that affects the hands.
9. Tinea versicolor is a fungal infection that affects the body in the form of discolored patches of skin.
Dermatophytosis tends to worsen during summer, with symptoms alleviating during winter. The disease can be transmitted between animals and humans [a zoonotic disease]. Three genera of fungi can cause this infection: Trichophyton, Microsporum, and Epidermophyton. These fungi may survive for extended periods as spores in soil, and humans and animals can contract ringworm after direct contact with such soil. The infection can also spread through contact with infected animals or humans. It is commonly spread among children and through sharing items that may not be clean. Fungi thrive in moist, warm areas such as locker rooms, tanning beds, swimming pools, and skin folds, and the infection can be spread by sharing sports goods, towels, and clothing.
Symptoms and severity of skin disorders vary greatly, and the consequences are serious for patients as well as for society. Among skin diseases, fungal, bacterial, parasitic, and viral infections are very common. The distribution of skin diseases varies widely from country to country, and even within a single country [1]. Although they account for a very small mortality rate, most skin diseases carry the possibility of prolonged suffering, raising public health concerns in developing countries.
Bangladesh is a densely populated country of 164.69 million people, 24% of whom live below the poverty line [5], and the majority of the population suffers from various infectious and contagious diseases. A study conducted by Khanum and Alam showed that 30-40% of our population suffers from skin diseases [6]. Approximately 40% of the people live in urban areas, with the largest concentration, 10.3 million, in Dhaka city [7]. Several papers have studied common skin and venereal diseases in Bangladesh [8-14], but our paper is specifically concerned with fungal skin diseases and their associated factors in three tertiary care hospitals of an urban city, Dhaka, Bangladesh.
According to the 2010 GBD, fungal skin infections were among the 10 most prevalent diseases globally [3]. According to the 2013 GBD, fungal skin diseases contributed 0.15% of the DALYs of the global burden of skin diseases [4]. In rural areas of Bangladesh, fungal skin infections are very common [15]. A study of common skin diseases revealed that, of 440 patients, 13% had fungal infections [11]. Other studies in Bangladesh reported prevalences ranging from 15.5% to 26.7% [12-14]. In India, Bangladesh’s neighboring country, fungal diseases were reported as the largest group of all skin diseases with 18.74% prevalence in one study [16] and the second largest with 17.19% prevalence in another [17]. In Pakistan, a study conducted in 2017 found a 34.80% prevalence of fungal skin infections among 95,983 patients at a tertiary care hospital in Karachi [18]. A community-based survey of skin diseases among South Asian Americans found an 11% prevalence of fungal infections, after acne and eczema [19].
Numerous factors can influence the prevalence of skin infections, including geographical and cultural factors [20-21], educational status, nutritional status, and socio-economic status, as well as seasons, overcrowding, unhygienic habits, and environment, all of which are significant in shaping the distribution of skin diseases in developing countries [1,22-24]. Socio-demographic aspects are important to understand because different societies and social clusters rationalize the causes of illness differently, shaping what types of treatment, and whom, they trust when seeking care [5].

Materials and Methods

This study was performed at the Dermatology Departments of the Bangladesh Institute of Research and Rehabilitation in Diabetes, Endocrine and Metabolic Disorders [BIRDEM], Dhaka Medical College and Hospital [DMCH], and Uttara Adhunik Medical College and Hospital [UAMCH]. The study was undertaken from 25th March 2018 to 10th February 2019. A total of 800 outpatients of all ages and sexes, with different occupations, were randomly selected irrespective of their skin problems during the data collection period at BIRDEM, DMCH, and UAMCH. The study was conducted in two steps: first, collecting samples and data through personal interviews, and second, laboratory confirmation of the diseases and their pathogens. A literature review of factors relating to skin diseases was carried out before a structured questionnaire was prepared for interviewing the patients about their demographic and socio-economic characteristics.

Statistical Analysis

The data were analyzed using the statistical software SPSS [version 20.0], and the results are presented as percentages. We compared our results with similar hospital attendance-based studies from other cities of the country and from neighboring countries.
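The descriptive statistics here are simple prevalence percentages (cases divided by the relevant denominator). As an illustrative sketch only, not part of the original SPSS workflow, the figures reported in the Results section can be reproduced in a few lines of Python using the counts stated in this paper:

```python
# Illustrative sketch: reproducing the simple prevalence percentages
# reported in the Results section. The original analysis used SPSS v20.0;
# this Python version is only a worked example of the arithmetic.

def prevalence_pct(cases, total):
    """Prevalence as a percentage, rounded to two decimal places."""
    return round(100.0 * cases / total, 2)

# Counts reported in the Results section of this study.
total_patients = 800
fungal_cases = 310
male_fungal, female_fungal = 183, 127
ringworm_cases = 253

print(prevalence_pct(fungal_cases, total_patients))   # 38.75 (fungal prevalence)
print(prevalence_pct(ringworm_cases, fungal_cases))   # 81.61 (ringworm among fungal)
print(prevalence_pct(male_fungal, fungal_cases))      # 59.03 (male share, ~59%)
```

The same helper applies to every table: only the numerator (cases in a category) and denominator (patients considered) change.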

Ethical Approval

Before enrollment, we informed every patient about the study’s aims and methods and assured them of their privacy and confidentiality at every stage of the study [during data and sample collection and laboratory diagnosis]. Patients were free to enter the study and to withdraw their consent at any time.

Results

The present cross-sectional study was designed to determine the prevalence of fungal skin diseases in tertiary care hospitals of an urban city. It also provides a descriptive profile of factors related to fungal skin diseases, including demographic characteristics, personal hygiene, and socio-economic status, among outpatients attending the Dermatology Departments of three major tertiary care hospitals in Dhaka city, Bangladesh.
A combination of skin infections was observed, including fungal, viral, bacterial, parasitic, and sexually transmitted diseases [STDs], but most patients had fungal skin infections. Among the 800 patients, 310 [38.75%] were infected with fungal infections; of these 310 patients, 183 [59%] were male and 127 [41%] were female. Of the 310 fungal-infected patients, most were infected with ringworm [81.61%], and the lowest prevalence was found for oral thrush [2.9%] (Table 1). Besides ringworm, patients were infected with Pityriasis versicolor and seborrhoeic dermatitis. Among the 253 ringworm patients, the highest prevalence was found for onychomycosis [21.94%] and the lowest for Tinea capitis [0.97%] (Figure 1).


Table 1: Prevalence of fungal skin infections among the patients.


Figure 1: Prevalence of ringworm causing agents among the patients.

Considering the gender distribution of each disease, males accounted for the highest share of oral thrush/candidiasis cases [66.67%] and the lowest share of seborrhoeic dermatitis cases [42.86%], whereas females accounted for the highest share of seborrhoeic dermatitis cases [57.14%] and 33.33% of oral thrush/candidiasis cases (Table 2). Moreover, among the ringworm-causing agents, males accounted for the highest share of Tinea pedis cases [67.65%] and the lowest share of Tinea faciei cases [20%], while females accounted for the highest share of Tinea faciei cases [80%] and the lowest share of Tinea pedis cases [32.35%] (Table 3).


Table 2: Prevalence of fungal skin diseases according to the gender of patients.


Table 3: Prevalence of ringworm causing agents according to gender of patients.

It was also observed that, of the 310 fungal-infected patients, the highest burden of fungal infections was found in the 31-45 age group [32.26%] and the lowest in the 0-15 age group [6.13%] (Table 4). A similar pattern held for the prevalence of specific ringworm-causing agents: the 31-45 age group had the highest prevalence [32.81%] and the 0-15 age group the lowest [4.74%] (Table 5). Finally, from the personal interviews of the 310 patients, we examined marital status, socio-economic status, educational status, monthly income, occupation, season, religion, source of water, residence location, regular bathing, regular type of clothing, sharing of personal items, history of recurrent infections, number of recurrent infections, and family overcrowding (Table 6).


Table 4: Prevalence of fungal infections in different age groups.


Table 5: Prevalence of ringworm causing agents in different age groups.


Table 6: Prevalence of fungal infections according to considered factors.

Discussion

In the present investigation, of the 800 patients, 310 had fungal infections, the highest prevalence [38.75%] among the skin problems observed. Among the fungal infections, ringworm had the highest prevalence [81.61%], followed by Pityriasis versicolor, seborrhoeic dermatitis, and oral thrush/candidiasis. Among the ringworm infections, onychomycosis [27.42%], Tinea corporis [21.94%], and Tinea cruris [16.45%] had the highest prevalence. Male patients had a higher prevalence [59%] than female patients [41%]. By age group, patients aged 31-45 had the highest prevalence [32.26%] and those aged 0-15 the lowest [6.13%]. The outcomes of this study are similar to the results of some studies and contradict others.
In 1993, a study by Hossain [25] found that fungal infection [20.19%] and seborrhoeic dermatitis [8.80%] were most common among skin diseases [25]. In 1995, Bahmadan et al. [22] reported that in Abha city, Saudi Arabia, Tinea capitis [9.6%] and Tinea pedis [1.9%] were the most common fungal pathogens [22], whereas we found that Tinea corporis [21.94%] and Tinea cruris [16.45%] had the highest prevalence. In 2011, a study by Rahman et al. found Tinea corporis [22.63%] to be the most frequent infection and males to be most commonly infected with fungal infections, which is consistent with the results of the present study [15].
In 2007, a study by Khanam et al. reported that, among fungal-infected patients, 42.7% were infected by ringworm, 45.36% by Pityriasis versicolor, and the lowest proportion [12%] by candidiasis. Khanum also reported that the prevalence of fungal infection was highest in the 40-49 age group [25.33%] and lower in the 20-29 age group [14.66%], and that prevalence in males [61.33%] was higher than in females [38.66%] [8]. In 2012, a study from the Dhamrai area near Dhaka by Nafiza et al. reported that, among patients with cutaneous skin diseases, fungal infections were the most common [22.9%] and males had a higher prevalence [63.4%] than females [36.6%] [12]. In 2017, Haque et al. reported that, among 504 patients with different types of skin disease surveyed in Rajshahi, an urban city of Bangladesh, males had the highest prevalence of fungal infections [26].
In the present study we explored not only demographic and socio-economic aspects but also seasonal aspects and the hygiene habits of the patients, to better understand the factors related to fungal skin diseases. In this study, among the fungal-infected patients, higher prevalence was observed in those who were married [71.93%], had secondary education [36.45%], earned 12,000-20,000 Tk monthly [38.06%], and had upper-middle-class status [38.06%]. Moreover, patients who were Muslims [86.13%], ran businesses [39.73%], lived in urban areas [69.35%], and used tap water as their water source [69.35%] also had a higher prevalence of fungal skin infections. Regarding personal hygiene, higher prevalence was found among patients who regularly wore cotton clothes [27.74%], bathed regularly [60%], shared personal items [63.87%], had recurrent infections [62.9%], and lived in overcrowded families [66.13%]. Additionally, fungal infections were more prevalent in the summer season [59.68%]. The high prevalence among Muslims reflects the fact that the study was conducted in a Muslim-majority country.
Several studies conducted in Bangladesh have found results different from ours. According to them, prevalence was higher in rural areas [15], among students [10], among patients of low socio-economic status [9], among illiterate patients [9,10], and in the rainy season [8]. According to Khanum et al., 52.16% of patients of low socio-economic status showed a high recurrence of skin disease, which contradicts our result [8]. From these observations it can be said that skin infections are very frequent in urban regions, even though the urban cities of the country have a better standard of living, hygiene and sanitation, better-quality healthcare facilities, education, and nutritious food than the rural parts, all of which should lessen fungal skin diseases. Thus, the present study has tried to give an approximate picture of fungal skin disease prevalence and its related factors for the whole country.

Conclusion

The present cross-sectional study has provided some unique results and findings that will add to the scientific literature and inform health policy, as it is the first of its kind: no other research has evaluated the prevalence of fungal skin diseases and their associated factors in an urban city of Bangladesh. Moreover, this work can be scaled up to other skin disease pathogens. Since there is no vaccine against these skin diseases, their transmission is very difficult to control; the way to control them is to improve socio-economic conditions, change personal hygiene behaviour, and take appropriate preventive measures.


Open Access Journals on Microbiology

Hyper Prevalence of Malnutrition in Nigerian Context

Introduction

Diet is the number one risk factor for disease in the world, carrying a greater risk of ill health than smoking or drinking alcohol Mills, et al. [1]. According to the World Health Organization (WHO), 462 million adults are underweight, while 1.9 billion adults are overweight or obese. Among children under 5 years of age, 155 million are stunted, 52 million are wasted, 17 million are severely wasted, and 41 million are overweight or obese [2]. The importance of nutrition cannot be overemphasized in any country of the world, be it developed, developing, or underdeveloped, because nutrition determines the social, economic, intellectual, and technological advancement of any nation. While the significance of nutrition for growth, development, and advancement is globally recognized, universal efforts at battling hunger and malnutrition have not succeeded on a global scale [3,4]. Hunger and malnutrition are ravaging the world, with an estimated 820 million people, 1 in 9, hungry or undernourished. A study conducted by [4] states that these figures have increased continually since 2015, especially in Africa, West Asia, and Latin America. Similarly, approximately 113 million people across 53 countries experience acute hunger as a result of conflict and food insecurity, climate shocks, and economic instability [5]. At the same time, more than one-third of the world’s adult population is overweight or obese, with growing trends over the past twenty years Ng, et al. [6].
The 2020 Global Nutrition Report presents the latest data and evidence on the state of global nutrition. Among children under 5 years of age, 149.0 million are stunted, 49.5 million are wasted, and 40.1 million are overweight, and there are 677.6 million obese adults. It further states that there is now increased global recognition that poor diet and the resultant malnutrition are among the greatest health and societal challenges of our time. In addition, malnutrition continues at unacceptably high levels on a universal scale despite the modest improvements made to combat it. [7] emphasises that countries affected by conflict or other forms of fragility are at a higher risk of malnutrition, and further notes that in 2016, 1.8 billion people (24% of the world’s population) were living in fragile or extremely fragile countries, a figure projected to grow to 2.3 billion people by 2030 and 3.3 billion by 2050. The International Food Policy Research Institute [8] notes that the prevalence of stunting, or restricted growth, among children under five years fell from 36.9% to 23.8% between 1990 and 2015. Nonetheless, the Food and Agriculture Organization of the United Nations [9] indicated in 2017 that the number of undernourished people increased from 777 million in 2015 to 815 million in 2016, and that in 2016 about 155 million children below the age of 5 were too short for their age. Furthermore, approximately 52 million did not weigh enough for their height, while about 41 million were overweight. Previous research such as Black, et al. [8,10] and [11] indicates that malnutrition is connected to nearly half of all deaths among children under the age of five.
On the other hand, more than 28 million adults and children in the United Kingdom (UK) are overweight or obese, which is catalysing a diet-related health problem with escalating rates of non-communicable diseases, including type 2 diabetes, cardiovascular disease, and certain forms of cancer [12]. The treatment of obesity and its consequences in England alone currently costs the NHS £16 billion every year, the majority of which is spent on type 2 diabetes [13]. This is more than the £13.6 billion per year spent on the fire and police services combined. The wider economic toll of obesity and related conditions is estimated to be equivalent to 3% of GDP Dobbs, et al. [14]. The most common form of malnutrition in developing countries is undernutrition, and Nigeria is presently one of the African countries listed among the 20 countries responsible for 80% of global malnutrition. Of the 233 million undernourished people in Africa, 220 million are from Sub-Saharan Africa. While South Sudan lacks globally comparable data, estimates show that its food and nutritional shortfalls are dreadful: for example, in January-March 2019, 5.2 million South Sudanese (49% of the total population) continued to face acute food insecurity Black, et al. [15]. Within this context, this paper seeks to evaluate the hyper prevalence of malnutrition in the Nigerian context.
The Basic Tools of Scientific Enquiry
1. What are the factors or causes of the hyper prevalence of malnutrition in Nigeria?
2. What are the mental and intellectual effects of the hyper prevalence of malnutrition on under-five children in Nigeria?
3. What are the impacts of the hyper prevalence of malnutrition on the future of the Nigerian economy?

Literature Review

A report by the Food and Agriculture Organization of the United Nations [16] indicates that more than 14% of the population in developing countries was undernourished between 2011 and 2013. Malnutrition includes both nutrient deficiencies and excesses and is defined by the World Food Programme (WFP) as “a state in which the physical function of an individual is diminished or weakened to the point where the person can no longer maintain normal or adequate bodily performance processes such as growth, pregnancy, lactation, physical work, and resistance to and recovering from disease” [17]. Additionally, [18] states that malnutrition frequently begins at conception, and child malnutrition is connected to poverty, low levels of education, and poor access to health services, including reproductive health and family planning. Furthermore, the World Health Organization [2] states that malnutrition occurs due to an imbalance in the body, whereby the nutrients required by the body and the amounts taken in do not balance. It stipulates that malnutrition takes several forms, falling into two broad categories: undernutrition and overnutrition. Undernutrition manifests as wasting or low weight for height (acute malnutrition), stunting or low height for age (chronic malnutrition), underweight or low weight for age, and mineral and vitamin deficiencies or excesses, while overnutrition includes overweight, obesity, and diet-related non-communicable diseases (NCDs) such as diabetes mellitus, heart disease, some forms of cancer, and stroke.
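The anthropometric terms above (stunting, wasting, underweight) are operationalized in the WHO growth standards as z-scores more than two standard deviations below the reference median. As an illustrative simplification, assuming the z-scores are already computed rather than looked up from the WHO reference tables, the classification logic can be sketched as:

```python
# Illustrative sketch of the WHO z-score cut-offs behind the terms used above.
# Real assessments derive z-scores from the WHO growth-standard reference
# tables; here the z-scores are assumed to be given.

def classify(haz, whz, waz):
    """Classify undernutrition from height-for-age (haz), weight-for-height
    (whz), and weight-for-age (waz) z-scores, using the < -2 SD cut-off."""
    findings = []
    if haz < -2:
        findings.append("stunted")       # chronic malnutrition
    if whz < -2:
        findings.append("wasted")        # acute malnutrition
    if waz < -2:
        findings.append("underweight")
    return findings or ["no undernutrition by these indices"]

print(classify(haz=-2.5, whz=-1.0, waz=-2.1))  # ['stunted', 'underweight']
```

A child can fall into more than one category at once, which is why survey reports quote stunting, wasting, and underweight prevalences separately rather than as mutually exclusive groups.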
In the 21st century, malnutrition in children has three main strands. The first is the continuing plague of undernutrition. Despite its reduction in many parts of the world, undernutrition is still depriving many children of the energy and nutrients they need to grow well and is connected to the deaths of children from 6 months to under 5 years of age each year [19]. The second strand is hidden hunger, the result of deficiencies in essential vitamins and minerals such as vitamins A and B, iron, and zinc. Often unseen and often ignored, hidden hunger robs children of their health and vitality and even their lives. The third strand is overweight, called obesity in its more severe form. Formerly regarded as a condition of the rich, overweight now afflicts more and more children, even in underdeveloped and developing countries, and is considered a threat because it stimulates a rise in diet-related non-communicable diseases (NCDs) later in life, such as heart disease, the leading cause of death worldwide [20]. The World Health Organization (WHO) reported that 462 million adults are underweight, while 1.9 billion adults are overweight or obese; among children under 5 years of age, 155 million are stunted, 52 million are wasted, 17 million are severely wasted, and 41 million are overweight or obese [2]. Malnutrition manifests in diverse ways, but the pathways to prevention are important and include breastfeeding for the first two years of life, diverse and nutritious foods during childhood, healthy environments, access to basic services such as water, hygiene, health, and sanitation, and proper maternal nutrition for pregnant and lactating women before, during, and after pregnancy [21].
The smallest or least advantaged are the most likely to suffer from malnutrition and its long-standing consequences. A research report by Hancock, et al. [22] states that the most deprived white children measured across England in 2012-2013 were on average more than a centimetre shorter by the age of 10 years than the least deprived children, and these children are unlikely to make up the growth losses of their early years. Obese children in England are more than twice as likely to live in the most deprived areas as in the most affluent areas, and this gap is increasing over time [23]. Poor children are also more likely than wealthy children to suffer from poor health as a result of food insecurity: over 60% of paediatricians surveyed throughout the UK in late 2016 said that food insecurity contributed to the ill health of the children they treat [24]. Currently, nearly one in three people in the world suffers from at least one form of malnutrition, whether obesity, undernutrition, or vitamin and mineral deficiencies. Because of the rise in obesity, high-income countries presently contribute the greatest number of malnourished people, but low- and middle-income countries are catching up fast. Hence, in Africa, the number of children who are overweight or obese nearly doubled from 5.4 million in 1990 to 10.6 million in 2014 (Global Panel on Agriculture and Food Systems for Nutrition, 2016). Despite this rise, other forms of malnutrition have not gone away, as deficiencies in vitamins and minerals continue to affect billions of people worldwide.

Perspectives on Malnutrition in Nigeria

Over the years, two main types of malnutrition have been identified in Nigerian children: (1) protein-energy malnutrition and (2) micronutrient malnutrition. Protein-energy malnutrition is common among preschool children and constitutes a major public health problem in the country. “Stunting” is usually defined as low height for age; it is a deficit of linear growth and a failure to reach genetic potential, reflecting the long-term and cumulative effects of inadequate dietary intake and poor health conditions [25]. Succinctly, underweight (low weight for age), stunting (low height for age), and wasting (low weight for height) are all manifestations of undernutrition. All of these expose the child to health risks and, in their severe forms, constitute a threat to the child’s survival [26]. In 1983–1984, the National Health and Nutrition Survey (HANS) conducted by the Federal Ministry of Health estimated the prevalence of wasting at around 20% (FGN 1983–1984). The 1986 Demographic and Health Survey (DHS) showed that children between the ages of 6–36 months in Ondo State (Southwestern Nigeria) had a wasting prevalence of 6.8%, underweight of 28.1%, and stunting of 32.4%.
In February 1990, an anthropometric survey of preschool children (2–5 years old) in seven states found underweight prevalence ranging from 15% in Akure (Ondo State) to 52% in Kaduna (Kaduna State), while stunting prevalence ranged from 14% in Iyero-Ekiti (Ondo State) to 46% in Kaduna. Similarly, the 1990 DHS conducted by the Federal Office of Statistics estimated the prevalence of wasting at 9%, underweight at 36%, and stunting at 43% among preschool children in Nigeria. These figures show a decline compared with the figures published in 1994 by UNICEF-Nigeria from a 1992 survey of women and children in 10 states, in which UNICEF reported the prevalence of wasting at 10.1%, underweight at 28.3%, and stunting at 52.3%. There was a further decrease in the prevalence of stunting in the 2003 NDHS, with 11% of children wasted, 24% underweight, and 42% stunted [27]. As of 2008, the prevalence of underweight had declined to 23% and stunting had fallen to 41%, but wasting had increased to 14% (NDHS, 2008).
Similar trends were reported by the 2001–2003 Nigerian Food Consumption and Nutrition Survey (NFCNS). The survey reported 9% wasting, 25% underweight, and 42% stunting, with significant variations across rural and urban areas, geopolitical zones, and agro-ecological zones Maziya-Dixon, et al. [28]. It also showed that the prevalence of stunting was lowest in the Southeast at 16%, reaching 18% in the South and 55% in the Northwest of Nigeria. Among the states, stunting was highest among children in Kebbi (61%). The 2003 NDHS report likewise indicates that rural children (43% stunted) were disadvantaged compared to urban children (29% stunted). Children in the Northwest geopolitical zone stood out as particularly underprivileged at 55%, compared to 43% in the Northeast, 31% in North Central, 25% in the Southwest, 21% in the South-South, and 20% in the Southeast. The Multiple Indicator Cluster Survey (MICS) reported a decrease in the prevalence of malnutrition in Nigeria, with 34% of children under five stunted, 31% underweight, and 16% wasted, while about 15% of children had low birth weight (less than 2,500 grams at birth) [29]. The 2013 NDHS confirms that the proportion of children who are stunted has been decreasing over the years: stunting fell to 37%, with a higher concentration among rural children (43%) than urban (26%). However, the degree of wasting has worsened, indicating a more recent nutritional deficiency among children in the country; the proportion of children underweight rose to 29% and the proportion wasted to 18% [30]. Taken together, these studies make clear that malnutrition among children under five has been a persistent problem in Nigeria, with only modest improvement over time.
Malnutrition contributed to 53% of deaths among children under five in Nigeria, and levels of wasting and stunting are still very high [31].

Empirical Review

Malnutrition is a global public health problem in both children and adults [2]. Annually, malnutrition claims the lives of 3 million children under age five and costs the global economy billions of dollars in lost productivity and health care costs. However, these losses are almost entirely preventable. A large body of scientific evidence [32-34] shows that improving nutrition during the critical 1,000-day period from a woman’s pregnancy to her child’s second birthday has the potential to save lives, help millions of children develop fully, and deliver greater economic prosperity. Furthermore, Shrimpton, et al. [35] stated that malnutrition is an important global problem that affects people regardless of geography, socioeconomic status, sex and gender, cutting across households, communities and countries. Although anyone can experience malnutrition, the most susceptible groups are children, adolescents, women, and people who are immunocompromised or facing the challenges of poverty.
Young malnourished children have compromised immune systems, leaving them vulnerable to infectious diseases, and are prone to delays in cognitive development, with damaging long-term effects on psychological and intellectual development; their mental and physical development is further compromised by stunting [10,36]. A malnutrition cycle exists in populations experiencing chronic undernutrition: the nutritional requirements of pregnant women are not met, so infants born to these mothers have low birth weight, are unable to reach their full growth potential, and may therefore be stunted and susceptible to infection, illness, and mortality early in life. The cycle is perpetuated when low-birth-weight females grow into malnourished children and adults, who are in turn more likely to give birth to low-birth-weight infants [37]. Malnutrition is thus not just a health issue; its burden is social, economic, developmental and medical, affecting individuals, their families and communities with serious and long-lasting consequences [2].
It is important that malnutrition is addressed in children, as its manifestations and symptoms begin to appear in the first 2 years of life [35]. Overlapping with the periods of mental development and growth in children, protein-energy malnutrition (PEM) is a particular problem between the ages of 6 months and 2 years. This period is therefore considered a window during which it is essential to prevent or manage acute and chronic malnutrition [38]. Furthermore, children under 5 years of age carry a disease burden of 35% Black, et al. [10]. In 2008, 8.8 million global deaths in children under 5 were due to underweight, of which 93% occurred in Africa and Asia. Walton, et al. [39] stated that approximately one in every seven children in Sub-Saharan Africa (SSA) dies before their fifth birthday due to malnutrition. Nigeria is the most populous nation in Africa, with a population of almost 186 million people in 2016 UNICEF [40]. With a high fertility rate of 5.38 children per woman, the population is growing at an annual rate of 2.6 percent, escalating and worsening overcrowded conditions. By 2050, Nigeria’s population is expected to reach an astounding 440 million, making it the third most populous country in the world after India and China (Population Reference Bureau, 2013). A report by the Nigeria Federal Ministry of Health [41] states that scarcity of resources and land in rural areas has given Nigeria one of the highest urban growth rates in the world, at 4.1 percent. Furthermore, out of 157 countries ranked on progress toward the Sustainable Development Goals (SDGs), Nigeria stands 145th Sachs, et al. [42].
Malnutrition in childhood and pregnancy has many adverse consequences for child survival and long-term wellbeing. It also has extensive consequences for human capital, economic productivity, and national development generally. These consequences should be a significant concern for policy makers in Nigeria, which has the highest number of children under 5 with chronic malnutrition (stunting, or low height for age) in Sub-Saharan Africa, at more than 11.7 million, according to the Demographic and Health Survey National Population Commission and ICF International [43]. According to the World Bank [44], Nigeria’s economy is the largest in Africa and is well positioned to play a leading role in the global economy. However, despite strong economic growth over the last decade, poverty has remained significantly high, with increasing inequality and regional disparities. It is estimated that 69 percent of Nigerians live below the relative poverty line (US$1.25 per day), compared to 27 percent in 1980.

Theoretical Framework

This study is anchored on two theories: the Theory of Reasoned Action (TRA) and the Theory of Planned Behaviour (TPB). The Theory of Reasoned Action was formulated by Martin Fishbein and Icek Ajzen towards the end of the 1960s; Ajzen proposed the Theory of Planned Behaviour in 1985 as an extension of the TRA. The two theories combine two sets of belief variables: ‘behavioural attitudes’ and ‘subjective norms’. Behavioural attitudes are defined as the multiplicative sum of the individual’s relevant likelihood and evaluation related to behavioural beliefs. Subjective norms, on the other hand, are referent beliefs about what behaviours others expect and the degree to which the individual wants to comply with others’ expectations. In summary, the two theories suggest that a person’s health behaviour is determined by their intention to perform that behaviour (behavioural intention), which in turn is predicted by the person’s attitude toward the behaviour and the subjective norms surrounding it.
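The “multiplicative sum” defining behavioural attitude can be read as an expectancy-value calculation: each behavioural belief contributes belief strength times outcome evaluation. The sketch below is illustrative only; the belief items and numeric ratings are hypothetical, not drawn from the source:

```python
# Illustrative sketch (not from the source): expectancy-value form of the
# behavioural-attitude construct. Each belief is a pair of
# (belief_strength, outcome_evaluation); attitude is the sum of products.

def behavioural_attitude(beliefs):
    """beliefs: iterable of (belief_strength, outcome_evaluation) pairs."""
    return sum(strength * evaluation for strength, evaluation in beliefs)

# Hypothetical ratings on small integer scales, e.g. for a feeding behaviour:
#   "it keeps my child healthy": likely (+2) and good (+3)
#   "it limits my working hours": somewhat likely (+1) but bad (-2)
attitude = behavioural_attitude([(2, 3), (1, -2)])
print(attitude)  # → 4
```

A positive total indicates a favourable attitude toward the behaviour; subjective norms would be scored by an analogous sum over referent beliefs.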
The Theory of Reasoned Action has been criticised for ignoring the social nature of human action Kippax, et al. [45]. Behavioural and normative beliefs are derived from individuals’ perceptions of the social world they inhabit, and are hence likely to reflect the ways in which economic or other external factors shape behavioural choices. There is also a compelling case that the model is inherently biased towards individualistic, rationalistic interpretations of human behaviour: its focus on subjective perception does not allow it to take meaningful account of social realities, and individuals’ beliefs about such issues are unlikely to reflect observable social facts accurately. The Theory of Planned Behaviour updated the Theory of Reasoned Action to include a component of perceived behavioural control, which captures one’s perceived ability to enact the target behaviour. Perceived behavioural control was added to extend the model’s applicability beyond purely volitional behaviours; prior to this addition, the model was relatively unsuccessful at predicting behaviours not mainly under volitional control. The Theory of Planned Behaviour therefore proposes that the primary determinants of behaviour are an individual’s behavioural intention and perceived behavioural control.
A constructive use of the TRA and TPB in research and public health intervention programmes can contribute valuably to understanding health inequalities and the roles that environmental factors play in determining health behaviours and outcomes. Despite the criticism, the general framework of the TRA and TPB has been widely used in the retrospective analysis of health behaviours and, to a lesser extent, in predictive investigations and the design of health interventions Hardeman, et al. [46]. These theories therefore provide an appropriate anchor for the present study, which concerns health behaviour.

Methodology

The study uses secondary data, including relevant texts, journals, newspapers, official publications, historical documents and the Internet. The research was strictly limited to available recorded information about malnutrition and its prevalence, effects and impacts on the Nigerian economy, as found in scholarly journals, books and on the internet. The study adopts content analysis as its method, whereby the existing literature is considered for the analysis.

Findings and Discussion

The findings are organised and discussed according to the stated research questions, as follows:

RQ1: What are the Factors or Causes of the Hyper Prevalence of Malnutrition in Nigeria?

The causes of malnutrition and food insecurity in Nigeria are multidimensional. They include very poor infant and young child breastfeeding and feeding practices, which contribute to high rates of illness and poor nutrition among children under 2 years; lack of access to healthcare, water, and sanitation; armed conflict, mainly in the north; irregular rainfall and climate change; high unemployment; and poverty Nigeria Federal Ministry of Health, Family Health Department [41]. While chronic and seasonal food insecurity occurs throughout the country and is worsened by volatile, rising food prices, the impact of conflict and other shocks has produced acute food insecurity in the North East zone FEWSNET [47]. An estimated 3.1 million people in Borno, Yobe, and Adamawa states received emergency food assistance and cash transfers in the first half of 2017, but the number needing assistance is likely far larger because much of the North East zone has been inaccessible to humanitarian agencies FEWSNET [47].
World Bank [44] stated that the current administration, led by President Muhammadu Buhari, identifies fighting corruption, increasing security, tackling unemployment, diversifying the economy, enhancing climate resilience, and boosting the living standards of Nigerians as its core policy priorities. At the same time, the country faces a major security threat in the northeast from the militant Islamic group Boko Haram, which has destroyed infrastructure and carried out assassinations and abductions. As of August 2017, conflict in northeastern Nigeria had displaced more than 1.7 million people within the country and forced nearly 205,000 to flee into neighboring Cameroon, Chad, and Niger, making it difficult to access food in the affected regions. Violence has also interrupted agricultural and income-generating activities, reducing household purchasing power and access to food. Moreover, populations in parts of northeastern Nigeria are beyond the reach of humanitarian assistance, and markets are in terrible condition USAID [48]. Diet-related non-communicable diseases are also on the increase in Nigeria due to globalization, urbanization, lifestyle transition, socio-cultural factors, and poor maternal, fetal, and infant nutrition Nigeria Federal Ministry of Health, Family Health Department [41].
Other factors include those related to women’s empowerment, such as mothers’ working status, control over resources and educational attainment. In rural areas, children of working mothers are significantly less likely to be undernourished than children in households where mothers do not work (Ajieroh, 2009). In Nigeria, children from the poorest households are almost 3 times more likely to be stunted and almost 4.3 times more likely to be severely stunted than children from the richest households. Similarly, according to NPC and ICF International (2014), the findings of a study of factors affecting Nigerian children’s nutritional status suggest that household economic status is significantly associated with nutritional status. This is because the very poor and the poor constitute 74% of the population and cannot afford a nutritious diet.
Furthermore, analyses of regional differences in child malnutrition reveal important spatial inequalities. The prevalence of underweight, stunting and wasting is generally higher in the northern than in the southern states. The highest proportions of malnourished children were found mainly in Bauchi, Jigawa, Kaduna, Katsina, Kebbi, Sokoto and Zamfara states, where the prevalence of stunting exceeds 50%. In other states, such as Gombe, Taraba, Yobe and Kano, it exceeds 40%. All the states in the North West (except Jigawa and Zamfara) show figures above the national average prevalence of acute malnutrition (wasting). In addition, the North-Eastern states of Bauchi, Borno and Yobe carry an excessively high burden of wasting, with Kano State showing more than twice the national average (39.7%). Severe acute malnutrition is highest in Kaduna (27.6%) and Kano (25.1%) and lowest in Bayelsa (1.3%).
Consequently, the UN Office for the Coordination of Humanitarian Affairs (2014) stated that Nigeria has the second highest acute malnutrition burden in the world, with an estimated 3.78 million children suffering from wasting.

RQ2: What are the Mental and Intellectual Effects of the Hyper Prevalence of Malnutrition on Under-Five Children in Nigeria?

The growth of the brain, including neurodevelopment, begins in the womb within one week of conception. During this period of rapid growth, protein and energy (from carbohydrate and fat sources) are extremely important, and a lack of these nutrients can have very damaging effects. Fuglestad, et al. [49] showed a higher occurrence of brain abnormalities at two years of age among children affected by foetal undernutrition. Studies of young children with protein-energy malnutrition have also indicated brain atrophy, a shrinking of brain cells due to a lack of nutrients Black, et al. [10]. In addition, inadequate caloric intake continues to affect children’s brain growth in the first months after birth, which is a time of rapid neurodevelopment, including the establishment of the parts of the brain fundamental for memory (the hippocampal-prefrontal connections) Fuglestad, et al. [49].
Iron deficiency also complicates a child’s growth period. Iron deficiency before two to three years of age may result in intense and possibly permanent changes to myelin (the fatty lipids and lipoproteins that surround the axon of a nerve) Fuglestad, et al. [48]. Iron also facilitates the production of neurotransmitters, the chemicals that pass messages between neurons, and it is involved in the function of the neuroreceptors that receive those messages Jukes, et al. [50]. According to Allen [51], emerging evidence suggests that maternal iron deficiency in pregnancy reduces foetal iron stores, perhaps into the first year of life, leading to a greater risk of impairment in later mental and physical development.
Furthermore, iron deficiency is a strong risk factor for both short- and long-term cognitive, motor and socio-emotional deterioration Prado & Dewey [52]. Longitudinal studies such as Grantham-McGregor, et al. [53] have indicated that children who are anaemic during infancy have poorer cognition, lower school achievement and are more likely to have behaviour problems in later childhood, an effect that could arise through direct biological processes or as a consequence of the impact of anaemia on children’s educational experiences. Iron deficiency is pervasive: virtually half of children under 5 in low- and middle-income countries (47%) are affected by anaemia, and half of these cases are due to iron deficiency World Health Organization [54]. According to the World Health Organization (WHO), 42% of pregnant women (56 million) suffer from anaemia Goonewardene, et al. [55].
Iodine deficiency is known to be the world’s single greatest cause of preventable mental retardation. In 2007, WHO estimated that nearly 2 billion people had deficient iodine intake, one third of them children of school age The Lancet [56]. Iodine is indispensable to the production of thyroid hormones, which are essential for the development of the central nervous system. Serious iodine deficiency before and during pregnancy can lead to underproduction of thyroid hormones in the mother and cretinism (a condition of severely stunted physical and mental growth caused by congenital deficiency of thyroid hormones) in the child Prado, et al. [51]. Cretinism is characterized by mental retardation, deaf-mutism, partial deafness, facial deformities and severely stunted growth. It can lead on average to a loss of 10–15 intelligence quotient (IQ) points Morgane, et al. [57]. In addition, Fuglestad, et al. [48] stated that even mild iodine deficiency can decrease motor skills.
Zinc plays an important role in brain development and is known to be vital for efficient communication between neurons in the hippocampus, where learning and memory processes occur Duke University Medical Center [58]. It is also fundamental to other biological processes that affect brain development, including DNA and RNA synthesis and the metabolism of protein, carbohydrates and fat Prado, et al. [51]. Hamadani, et al. [59] stated that although the results of studies on the impact of zinc supplementation on cognitive outcomes are inconsistent, there appears to be a relationship between zinc deficiency and children’s cognitive and motor development, including among low-birth-weight children. Folate is a prerequisite during initial foetal development to prevent neural tube defects and to ensure that the neural tube forms correctly to create the brain and spinal cord. Iron-folate supplementation is also significant for pregnant and breastfeeding mothers to prevent iron-deficiency anaemia Black [10]. Vitamin B12 and folate work together to produce red blood cells, and Black [10] further stated that deficiencies in both can affect brain development in infants. Like iron, vitamin B12 is also essential to the myelination process; neurological symptoms of vitamin B12 deficiency appear to affect the central nervous system and, in severe cases, cause brain atrophy.

RQ3: What are the Impacts of the Hyper Prevalence of Malnutrition on the Future of the Nigerian Economy?

According to Save the Children [60], the benefits of good nutrition do not stop with better educational results. By improving cognitive abilities, health, physical strength and stature, good nutrition in the early years can lead to greater wages in adulthood and hence promote the economic development of an entire country. Save the Children [60] presented evidence that stunted children earn as much as 20% less than their counterparts, and used this to estimate that today’s malnutrition could cost the global economy $125 billion when children born now reach working age. The interrelation between improved nutrition and economic growth is thus of great importance for human and economic development, and it is a two-way relationship: inclusive economic growth can contribute to reductions in the prevalence of malnutrition, while declines in malnutrition can have a transformative effect on the economic capacity of individuals and whole societies. Through its impact both on children’s cognitive development and on their physical health, malnutrition can have momentous effects on an individual’s future economic wellbeing. The World Bank (2006) suggests that malnutrition results in 10% lower lifetime earnings, while studies such as Save the Children [60], which modelled the impact of malnutrition in the first 2-5 years of life, place this figure at 20%.
The Lancet Series (2008) reviewed cohort studies from Brazil, Guatemala, India, the Philippines and South Africa that all followed children into adulthood, and established that stunting is associated with reduced earnings in later life. Similarly, Victoria, et al. [61] stated that the same review found that less severe stunting in Brazil and Guatemala was associated with higher adult incomes among both men and women. Models using evidence from across these longitudinal studies, combined with evidence on the relationship between education and earnings from 51 countries, have estimated that children who are stunted at age five earn 22% less than their non-stunted counterparts. In addition, data from the same study have been used to estimate that individuals who were not stunted in early childhood were more likely, by 28 percentage points, to work in higher-paying skilled labour or white-collar work, and earned as much as 66% more as adults Hoddinott, et al. [62].
Part of the impact of malnutrition on earnings may operate through children’s physical development. Studies such as Morgane, et al. [56] have confirmed the correlation between adult height and wages. For example, a large cross-sectional study in Brazil found that a 1% increase in adult height was associated with a 2.4% increase in earnings. Francis and Iyare (2006) and Islam et al. (2006) stated that there is a clear association between education levels and individuals’ subsequent earnings. Importantly, the latest evidence suggests that it is actual learning and the acquisition of skills that matter most, not just the number of years spent in school Hanushek, et al. [63]. This is another reason why early childhood development, boosted by good nutrition, is vital: children need to start school ready to learn, rather than struggling to understand what the teacher is trying to impart. Therefore, according to Currie, et al. [64], given the significance of cognitive and educational outcomes for wages, this is likely to be a key pathway linking nutrition to later economic wellbeing.
Indeed, nutrition’s relationship with cognitive and educational development may be the most important pathway for its impact on wages. Save the Children [59] reported that the economic impacts of malnutrition are larger for those in more skilled jobs than for those in manual jobs. Baird, et al. [65] showed that among those working for wages or operating small businesses as adults, those who had received a childhood nutrition intervention worked on average five extra hours per week and earned 20% more than those who had not; these impacts were much larger than the increases seen for farm workers. Nutrition is therefore significant not only for the economic outcomes of individuals but also for nations’ economic development. Malnutrition also burdens national economies through increased healthcare costs, as people who were malnourished as children are more likely to fall ill Currie, et al. [64-78].

Conclusion

Diet is the number one risk factor for disease in the world, carrying a greater risk of ill health than smoking or drinking alcohol. According to the World Health Organization, 462 million adults are underweight, while 1.9 billion adults are overweight or obese. Among children under 5 years of age, 155 million are stunted, 52 million are wasted, 17 million are severely wasted and 41 million are obese. Globally, an estimated 820 million people, about 1 in 9, are hungry or undernourished. The study found that Nigeria is currently one of the 20 countries responsible for 80% of global malnutrition. It also revealed that, over the years, two main types of malnutrition have been identified in Nigerian children: protein-energy malnutrition and micronutrient malnutrition. The study found that the causes of malnutrition and food insecurity in Nigeria are multidimensional and include very poor infant and young child breastfeeding and feeding practices, which contribute to high rates of illness and poor nutrition among children under 2 years. It further found that young children with protein-energy malnutrition suffer from brain atrophy, a shrinking of brain cells due to a lack of nutrients, and that stunted children earn as much as 20% less than their counterparts, such that today’s malnutrition could cost the global economy $125 billion. The study concludes that nutrition is significant not only for the economic outcomes of individuals but also for nations’ economic development, especially for a developing country like Nigeria.


Open Access Journals on Chemistry

Effect of Operating Parameters and Particle Size of Calcium Carbonate on the Physical Properties of Latex Paint

Introduction

Paint is a liquid that spreads over a substrate as a thin layer and is transformed into a solid adherent film [1]. Paint has two major functions: protection and decoration. The earliest known use of paints dates back more than 30,000 years to cave paintings in Spain [2]. These paints were simply mixtures of colored earth, soot, grease, and other natural substances. The ancient Greeks, Romans, and Egyptians used natural resins and raw materials to decorate and identify statues, tools, vessels, and buildings [2,3]; these natural ingredients include vegetable gums, starches, and amber. In China and India, shellac resins and beeswax were used over 2,000 years ago as decorative coatings that also served a protective function [4]. The earliest paint formulation dates back roughly 900 years to a German goldsmith and monk, Rodgerus von Helmershausen [2,3]. His formulation described the manufacture of paint by mixing linseed oil and amber, referred to as paint boiling, which was further refined and developed through the Industrial Revolution [2,3]. Synthetic polymer chemistry developed in the 1920s with Carothers and others [5]. Paint is used to protect and color the substrate; its components are solvent, pigment, filler, additives and binder. A coating is a product based on organic binders which provides a cohesive, non-absorbent, protective film [2]. Differences in the composition of the various coating systems are presented in (Tables 1 & 2).


Table 1: Typical composition of various coating systems.


Table 2: Differences between step-growth and chain-growth polymerization.

Common to all three coating systems are the resin and additives. Clear coats are optically inactive; therefore pigments and fillers are not present. Powder coatings are not in a liquid medium; therefore a solvent is not present. Paints are liquid, optically opaque coatings that form films when applied by brushing, rolling or spraying [2,3]. The technical definition of a binder is the non-volatile part of a paint excluding the pigments and filler, which includes the non-volatile additives [2]. The binder forms the film of the paint. It is a polymer that governs important properties of the paint, such as adhesion to the substrate, sheen, application properties, color acceptance, durability and flexibility. Binders may be synthetic or natural. Binders for water-based paints include polyvinyl acetates, polyvinyl acrylics, styrene acrylics, pure acrylics, etc.; oil-based paints use alkyd resins, polyurethane resins, melamine resins, etc. Natural or fatty oils were important early film-forming agents, able to convert a low-viscosity liquid into a solid [3]. Synthetic resins arrived in the 1920s with advances in polymer chemistry; their primary benefit is that products can be tailored with specific properties and nearly unlimited availability. The different resin systems mentioned above are all produced by either step-growth or chain-growth polymerization [6].

Chain-growth polymerization typically involves three steps: initiation, propagation, and termination. Step-growth polymerizations are reactions between functional or multifunctional monomers without an initiation or termination step. Characteristics common to pigments include strong optical properties, particle sizes smaller than 10 μm, insolubility in water and most organic solvents, and chemical inertness or stability [7]. A comparison between organic and inorganic pigments is presented in (Table 3) [2]. Colored inorganic pigments are typically variants of iron oxides [3]. Pigments provide the color contribution in paints and give opacity to the paint film; they may be natural or synthetic. Common pigments include titanium dioxide, phthalocyanine blue, phthalocyanine red, iron oxide, etc. Common filler materials include carbonates, silicon dioxide, silicic acids, silicates, and sulfates [2,3,6]. Fillers add toughness and lower the cost of the paint by increasing its density. Examples of natural fillers are ground calcium carbonate, magnesium silicate, etc.; examples of synthetic fillers are precipitated calcium carbonate, aluminum silicate, etc. These pigments and extenders are commercially available in solid or slurry form, a slurry being an aqueous medium containing dispersed pigments or extenders. In a paint application, the pigments and extenders are eventually dispersed in some sort of medium.
The dispersion process involves three steps: wetting, separation, and stabilization [6]. The refractive index describes the degree to which light bends as it passes through a material; it is a dimensionless value, typically referenced to light traveling in a vacuum, and larger values reflect a greater degree of bending. The refractive indexes of common pigments and film formers are presented in (Table 4). TiO2 is not used as a biocide, but it has some antimicrobial properties due to the photocatalytic reaction mentioned earlier [8,9]. Filler or extender particles such as calcium carbonate (CaCO3) primarily serve as a replacement for binder material, either because of their lower cost or to formulate above the critical pigment volume concentration. Pigment volume concentration (PVC) is the most widely accepted quantitative description of paint film composition [1]. PVC is expressed as the volume percentage of pigments and fillers relative to the volume of the dry film, written as a whole number; PVC values range from 0 to 100. Volume is used rather than weight because pigments scatter light based on volume. Solvent dissolves the polymer and disperses the pigments to form a homogenized mixture, and it adjusts the viscosity of the paint; it is the volatile part of the paint. Its functions also include flow control, stabilizing the liquid paint and improving application properties.
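
The PVC definition above reduces to a simple volume ratio. The sketch below is illustrative only, with hypothetical dry-film volume figures, assuming PVC is the combined pigment-plus-filler volume as a percentage of total dry-film (pigment + filler + binder) volume:

```python
def pigment_volume_concentration(pigment_vol: float, filler_vol: float, binder_vol: float) -> float:
    """PVC = volume of pigments and fillers as a percentage of total dry-film volume."""
    dry_film_vol = pigment_vol + filler_vol + binder_vol  # the volatile solvent has evaporated
    return 100.0 * (pigment_vol + filler_vol) / dry_film_vol

# Hypothetical volumes (e.g. mL of dry film per litre of paint)
print(round(pigment_volume_concentration(150.0, 250.0, 400.0), 1))  # 50.0
```

Formulating above or below the critical PVC is then a matter of where this ratio falls relative to the point at which binder just fills the voids between particles.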

Table 3: Comparison between inorganic and organic pigments.

Table 4: Refractive Index (R.I.) of Common Materials in Paint.

The main solvent for water-based paints is water; solvents for oil-based paints include white spirit, mineral turpentine oil, alcohols, ketones, etc. Additives are liquids that can have a dramatic effect on paint quality, and different additives serve different functions: some change the surface tension of the paint film, while others enhance the flow pattern or improve the appearance of the paint. Additives can affect the liquid paint or paint film by extending the wet edge, increasing the stability of the pigments used, providing anti-freeze protection, reducing foaming, reducing skinning, etc. Paint additives include gelling agents, hydroxyethyl cellulose, emulsifiers, various biocides, UV stabilizers, etc. Emulsion paints are water-based paints containing water, binder, additives and pigments. Emulsion latex paints cure by coalescence, a process in which the coalescing solvent softens the binder particles and draws them together, binding them into an irreversibly bound network structure. Alkyd enamels are paints that cure by oxidative crosslinking; they need drier additives such as cobalt naphthenate, calcium naphthenate and lead naphthenate to start the oxidation process for drying. Some paints are one- or two-package coatings that dry through a chemical reaction.

Materials and Methods

The Following Instruments and Analyzers Were Used to Analyze Various Properties of Paint Samples

1. Conventional agitator (laboratory mixer manufactured by BEVS Industrial Co. Ltd., China; Model: BEVS 2501/1)
2. Brookfield DV2T viscometer
3. Nano grinding machine (nano mill manufactured by Dongguan Longley Machinery Co. Ltd., China; Model: NT-1L)
4. Spectrophotometer, Datacolor 110
5. Grind gauge, Sheen UK, range 0-100 μm
6. Hiding power charts, Sheen UK, coated, 255 x 140 mm
7. Automatic film applicator (manufactured by BEVS Industrial Co. Ltd., China; Model: BEVS 1811/2)
8. Tri-Glossmaster, Sheen UK, angles 20-60-85°
9. Wet abrasion scrub tester
10. Stopwatch
11. Cryptometer, Sheen UK, with K007 plates
12. Pyknometer, Sheen UK
13. Malvern Mastersizer, Malvern Instruments Ltd., UK
14. Brookfield KU-1+ viscometer
15. High-speed agitator, 1400 rpm

The Following Chemicals Were Used in the Preparation of Paint Samples

1. Water
2. Dispersant, a solution of an ammonium salt of an acrylic polymer in water
3. Calcium carbonate, 400 mesh particle size
4. Hydroxyethyl cellulose thickener powder
5. Ammonium hydroxide solution (25% actives)
6. Latex binder, a terpolymer vinyl acrylic emulsion
7. Biocide A, a water-based combination of chloromethyl-/methylisothiazolone (CMI/MI) and O-formal
8. Biocide B, a combination of two isothiazolone derivatives that provides broad-spectrum micro-organism control in water-based coatings

Sample Preparation

Determination of Dispersant Demand

The first step in sample preparation was to determine the dispersant demand for the 400 mesh calcium carbonate sample used in this work. Complete coverage of the pigment surface is an indispensable prerequisite for ideal stabilization of the dispersed pigments. The dispersant demand is determined by exploiting the fact that the viscosity of a pigment slurry reaches a minimum once the pigment surface is completely covered with dispersant. The dispersant is added in portions to the stirred pigment slurry; after each addition and mixing, the viscosity is measured at low shear rates (e.g. with a Brookfield viscometer). Dispersant is added until a minimum or constant viscosity is obtained in the measurements [10].

Procedure for Dispersant Determination

440 g of water was taken in the 2500 mL agitated tank of the laboratory mixer (Figure 1). Mixing was started at 500 rpm and 1500 g of calcium carbonate of 400 mesh particle size was added to the mixing tank. The calcium carbonate slurry was dispersed for 5 minutes at 1000 rpm. Viscosity was measured at 25 °C using the Brookfield KU-1+ viscometer following ASTM D562, i.e. “Standard Test Method for Consistency of Paints Measuring Krebs Unit (KU) Viscosity Using a Stormer-Type Viscometer”. The effect of successive additions of 2 g of dispersant on the viscosity of the sample was observed under the same operating conditions, as shown in (Table 5). The procedure was continued until no significant change in viscosity was observed. (Figure 2) shows that the optimum dispersant demand was 22 g for 1500 g of CaCO3 of 400 mesh particle size, beyond which no significant change in viscosity was observed.
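
The stopping rule in this titration (add 2 g, re-measure, stop when viscosity plateaus) can be sketched as a simple plateau search. The viscosity readings below are hypothetical placeholders, not the measured values of Table 5; only the 22 g endpoint is taken from the text:

```python
def optimum_dispersant(doses_g, viscosities_ku, tolerance_ku=1.0):
    """Return the cumulative dispersant dose (g) beyond which a further addition
    no longer changes the viscosity by more than `tolerance_ku` Krebs Units."""
    for i in range(len(doses_g) - 1):
        if abs(viscosities_ku[i] - viscosities_ku[i + 1]) < tolerance_ku:
            return doses_g[i]
    return doses_g[-1]  # no plateau found within the titration range

# Hypothetical titration of a 1500 g CaCO3 slurry: cumulative dose (g) vs viscosity (KU)
doses = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
visc  = [140, 132, 124, 117, 111, 106, 102, 99, 97, 95.5, 94, 93.9]
print(optimum_dispersant(doses, visc))  # 22
```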

Table 5: Dispersant requirement.

Figure 1: BEVS laboratory mixer.

Figure 2: Viscosity versus Dispersant quantity.

Calcium Carbonate Slurries Preparation

The calcium carbonate slurry composition shown in (Table 6) was prepared using the nano mill (Figure 3), with the pneumatic pump pressure adjusted between 0.2 and 0.4 MPa. The speed of the nano shaft was set at 2500 rpm, and the flow rate of the calcium carbonate slurry leaving the nano mill was adjusted to around 3 g/sec. Dispersion was checked per ASTM D1210, i.e. “Standard Test Method for Fineness of Dispersion of Pigment-Vehicle Systems by Hegman-Type Gage”. The term ‘fineness of grind’ is defined as the reading obtained on a gauge under specified test conditions; the reading indicates the depth on the gauge at which discrete solid particles are readily discernible. The fineness of the nano-milled calcium carbonate slurry on the Hegman gauge was below 10 microns. The same composition (Table 6) was also prepared using the laboratory mixer, for which the fineness on the Hegman gauge was 50 microns.

Table 6: Calcium carbonate slurry composition.

Figure 3: Nano mill.

Determination of Various Properties of Paint Samples

Panels Applications

Films were drawn down on hiding power charts with the automatic film applicator following ASTM D823-95, i.e. “Producing Films of Uniform Thickness of Paint, Varnish, and Related Products on Test Panels”, as shown in (Figure 4). This standard is under the jurisdiction of ASTM Committee D01 on Paint and Related Coatings, Materials, and Applications and is the direct responsibility of Subcommittee D01.23 on Physical Properties of Applied Paint Films. From the drawn-down panels, differences in physical properties of the latex paints were observed in terms of viscosity, density, pH, wet hiding, gloss, smoothness, drying time, whiteness, scrub resistance and opacity. The results obtained are summarized in (Tables 7 & 8).

Table 7: Latex Paint composition.

Table 8: Specifications of paint samples manufactured through Nano Mill and Conventional Agitator.

Figure 4: Comparison of wet panel (left) and dried panel (right) for (a) nano mill and (b) conventional agitator.

Wet and Dry Opacity

Wet hiding was checked with the cryptometer. Cryptometers offer a quick method to determine the wet opacity, hiding power and coverage in square meters per liter of liquid coating materials. A small sample of the liquid coating (approximately 4 mL) was applied on the joint line of the black-and-white base plate. The top plate (pins facing downwards) was placed across the base plate joint line so that the sample formed a wedge of paint (maximum thickness nearest the pins), and the plate was slid back and forth until the sample completely hid both the black and white sections of the base plate. At the hiding position, a reading was taken from the engraved scale of the base plate and converted into covering power (square meters/liter). Top plates (number K007) are offered with each cryptometer to cover a range of film thicknesses.
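
The conversion from the hiding wet-film thickness to covering power follows from volume geometry: one litre spread at a uniform wet thickness of t micrometres covers 1000/t square metres. A minimal sketch of that conversion (the 100 µm thickness is a hypothetical example, not a measured value):

```python
def covering_power_m2_per_l(hiding_thickness_um: float) -> float:
    """Covering power in m²/L for a coating that just hides at the given wet-film
    thickness: 1 L spread 1 µm thick covers 1000 m², so coverage = 1000 / t."""
    return 1000.0 / hiding_thickness_um

print(round(covering_power_m2_per_l(100.0), 1))  # 10.0 m²/L at a 100 µm hiding thickness
```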

Gloss Measurements

Gloss was tested with the Tri-Glossmaster following ASTM D2457, i.e. “Standard Test Method for Specular Gloss of Plastic Films and Solid Plastics”.

Dry Hiding and Whiteness

Dry opacity and whiteness were checked with the Datacolor 110 spectrophotometer.

Drying Time

Drying time was measured with a stopwatch at ambient temperature.

Adhesion / Scrubs

Scrub resistance was checked with the wet abrasion scrub tester following ASTM D3450, i.e. “Standard Test Method for Washability Properties of Interior Architectural Coatings”.

Viscosity

Viscosity was tested with the Brookfield DV2T viscometer following ASTM D1084, i.e. “Standard Test Methods for Viscosity of Adhesives”.

Density

Densities of the samples were measured with the pyknometer following ASTM D1475, i.e. “Standard Test Method for Density of Liquid Coatings, Inks, and Related Products”.

Specific Surface Area and Particle Size

The specific surface area and particle size of the calcium carbonate slurries manufactured with the nano mill and with the conventional agitator were measured with the Malvern Mastersizer.

pH Value

pH values of the samples were measured with a pH meter following ASTM E70-07(2015), i.e. “Standard Test Method for pH of Aqueous Solutions with the Glass Electrode”.

Discussion and Results

The paint whose calcium carbonate slurry was processed in the nano mill showed markedly better results in terms of wet opacity, dry opacity, gloss, smoothness and drying time than the paint whose slurry was processed in the laboratory mixer, as shown in (Figure 4). All quality parameters improved significantly in the paint made with the nano-milled slurry.

Conclusion

It was observed that, with the reduction of the calcium carbonate particle size, the latex paint showed better hiding, better whiteness, higher gloss and stronger adhesion. Calcium carbonate slurry processed in the nano mill performed exceptionally well compared to that processed in the conventional agitator: the nano mill reduced the particle size of the calcium carbonate from 37.5 microns to 2.63 microns, while the conventional agitator reduced it from 37.5 microns to only 18.05 microns. Accordingly, the paint manufactured with the nano mill slurry showed better whiteness, wet hiding, dry hiding, adhesion and gloss than the paint manufactured with the conventional agitator slurry.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Public Health

The Seroprevalence of SARS-CoV-2 Antibodies in Romania – First Prevalence Survey

Introduction

Infection with the novel coronavirus has generated important socio-economic transformations through social distancing measures, with profound economic implications, as well as considerable concern due to evolving clinical complications and the lack of specific treatment. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and its associated disease, coronavirus disease 2019 (COVID-19), have spread globally, affecting in a year and a half over 170 million people in more than 180 countries or regions and leading to a global pandemic with a fatality rate of 2.1% [1]. The laboratory diagnosis of suspected COVID-19 clinical/contact cases is based on the detection of the SARS-CoV-2 viral genome by qRT-PCR assays. However, asymptomatic or mild COVID-19 infections remain undiagnosed, so the burden (incidence and spread) of SARS-CoV-2 infection can be underestimated, affecting the implementation and efficiency of infection prevention and control measures. Given this limitation, countries are seeking to assess the spread of the infection through prevalence studies conducted on study groups that are representative of the general population [2,3].

Surveys conducted in the first half of 2020 in different countries or geographical regions, on populations of different sizes, revealed seroprevalence rates ranging from <0.1% to more than 20%, and showed that seroprevalence can increase over time during longitudinal follow-up. In Europe, the seroprevalence reported by different countries was, in decreasing order: Italy (11.0%) [4], Switzerland (weekly seroprevalence rates of 4.8% to 10.8% over five weeks) [5], France (between 3.8% and 10% in different regions) [2], Spain (4.6%) [6], Denmark (1.9%) [7] and Greece (0.42%) [8]. In the USA, great variation in seroprevalence was reported across geographical regions (1.0-31.5%) [9,10], while for Brazil the rate was 3.8% [11]. In South America, Chile reported a seroprevalence of 13.4-16% [12]. In Africa, Kenya reported a crude seroprevalence of 5.6%, and a study done in Alzintan City, Libya, reported a seroprevalence of 2.74% [13,14].

In Asia, the highest rates were reported for Pakistan (15.6-37.7%) [15] and Guilan province, Iran (22%) [16]; in China, serological studies reported positivity rates ranging from 0.6% in Chengdu, Sichuan to 3.8% in Wuhan, Hubei [17], while the lowest rates were recorded in Malaysia (0.4-0.6%) [18] and South Korea (0.07%) [19]. Japan reported a seroprevalence of 3.3% in Kobe [20] and cumulative case detection ratios of 2.6-8.7% in three prefecture-level seroprevalence surveys (Tokyo, Osaka and Miyagi) [21]. All studies reported a higher seroprevalence rate in males, although the differences are not statistically significant [22]. Considering the large variation of seroprevalence among populations, filling the gap with data from different geographical regions is needed to better evaluate the burden of the COVID-19 pandemic. This study reports for the first time the results of a seroprevalence survey performed in the Romanian population, conducted to estimate the degree of spread of SARS-CoV-2 infection and to substantiate the measures responding to the COVID-19 pandemic to be adopted by the Romanian health care system in the coming period.

Material and Methods

In this study, people who happened to present at selected laboratories were invited to participate in the seroprevalence survey. The participating laboratories were selected from each of the 42 counties of Romania.

Study Design and Participants

A cross-sectional study was performed to assess the prevalence of SARS-CoV-2 antibody seropositivity. The study used a non-probability sampling method known as convenience sampling. The sampling strategy had two steps: the selection of laboratories and the selection of persons. The inclusion criteria for the laboratories were the following: public or private facilities with high addressability (over 40,000 samples per year) serving ambulatory (non-hospitalized) patients. Based on these criteria, each of the 42 County Public Health Directorates selected between 3 and 5 laboratories to participate in the study (except the Bucharest Public Health Directorate, which selected 9 laboratories). Inclusion and exclusion criteria for the enrolment of study subjects were also defined: people of all ages who happened to present at the selected laboratories for check-ups were invited to participate, provided they showed no signs or symptoms of respiratory infection and had not requested to be tested for COVID-19. The participants were selected in a sampling step, and only individuals who gave their informed consent to participate were enrolled.

If a person qualifying in the sampling step did not agree to participate in the study, the next person was asked whether they were willing to be enrolled. Data collection took place between July and October 2020. The participants had to sign an informed consent form to be included in the study (for children, the consent was signed by the parent/legal representative). The participants also had to provide demographic information, including age, gender, city of residence and personal pathologic history. The seroprevalence analysis used residual serum obtained from these individuals. The size of the study sample was calculated using the EpiInfo 7 program to obtain regional and decadal age-group representation. The regional sample for a specific age-group was allocated proportionally to the counties in the region, considering their total population for the corresponding age-group. The resident population of Romania as of July 1, 2018, by decadal age groups, was used, with an expected frequency of SARS-CoV-2 infection in Romania of 50% in each age group, a 95% confidence level, an accepted error of 5%, and 5% anticipated losses for each age group in the region.
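
The parameters quoted above (expected frequency 50%, 95% confidence, 5% error, 5% losses) correspond to the standard sample-size formula for a proportion, n = Z²·p·(1−p)/e², inflated for anticipated losses. A sketch of that calculation, independent of EpiInfo (which the study actually used):

```python
import math

def sample_size(p=0.5, z=1.96, error=0.05, losses=0.05):
    """Minimum sample size for estimating a proportion p with margin of error `error`
    at the confidence level implied by z, inflated for anticipated losses."""
    n = (z ** 2) * p * (1 - p) / (error ** 2)  # 384.16 for the defaults
    return math.ceil(n * (1 + losses))

print(sample_size())  # 404 per stratum before proportional allocation to counties
```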

Procedures

All serum samples of the enrolled participants were analyzed in the National Institute of Public Health laboratory, using a chemiluminescence (CLIA)-based assay to detect anti-SARS-CoV-2 antibodies of the IgG type. The samples were kept at temperatures between -12 and -20 °C. The residual serum samples were transported in refrigerated vehicles and, exceptionally, in isothermal bags with ice packs. The quality criteria for the serum samples were the following: blood collected in biochemistry vacuum tubes, without anticoagulant, with or without separating gel; samples with a serum volume of 0.5-1 mL for the 0-14 years age group and 1-2 mL for people over 14 years. Residual serum from people suspected of COVID-19 and from those presenting jaundice, haemolysis or superinfection (with flakes or veil) was not considered.

Ethics Statement

The study protocol was reviewed and accepted by the Scientific Council of the National Institute of Public Health – Research Ethics Committee. The seroprevalence study was performed in full compliance with the principles of ethics and confidentiality of personal data. Written informed consent was obtained from all individuals eligible for enrolment, and all professionals involved in the collection, retrieval and storage of data signed a confidentiality agreement.

Results

Of all the individuals who presented at the selected laboratories across the 8 regions of the country, 19,738 agreed to participate in this study and 19,597 provided a serum sample for which a CLIA result for anti-SARS-CoV-2 IgG-specific antibodies was available. Males represented 36.2% of the total study population; this could probably be attributed to the generally higher health-related concern of females, considering that the selection was conjunctural (people presenting for various blood tests). The sample population had a mean age of 46.61±21.08 years and a median age of 48 years. The proportion of each decadal age-group is shown in Figure 1. As can be seen, the young age-groups were seriously under-represented, while the age-groups 50-59 y, 60-69 y and 70-79 y were slightly over-represented (the last in particular).

Figure 1: Proportion of the decadal age-groups in total population – sample versus country population.

Seroprevalence at National Level

Overall, we found 1213 IgG-positive samples in the study population, resulting in a seroprevalence rate of 6.19% (95% CI: 5.85-6.53). The seroprevalence rate by age-group at the national level is shown in Table 1. The level of protection was similar in children and young adults (slightly higher in children, but without statistical significance). Middle-aged adults, especially the age-group 40-49 years, showed a significantly higher level of protection. The population aged 60+ years seemed to be less protected than both adults and children: a statistically lower level of seroprevalence was found in each elderly age-group compared to the middle-aged adult population, while the slight difference compared to children and young adults did not reach statistical significance. We also found differences within the elderly groups: seroprevalence seemed to be lower over the age of 70 years compared to the 60-69 age-group, but, again, this difference did not reach statistical significance.
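
The national estimate above can be reproduced with a normal-approximation (Wald) confidence interval for a proportion; this is a sketch under the assumption that this is the interval type behind the reported figures:

```python
import math

def seroprevalence_ci(positives: int, total: int, z: float = 1.96):
    """Point estimate and Wald 95% confidence interval for a proportion, in percent."""
    p = positives / total
    half_width = z * math.sqrt(p * (1 - p) / total)  # standard error times z
    return (round(100 * p, 2),
            round(100 * (p - half_width), 2),
            round(100 * (p + half_width), 2))

print(seroprevalence_ci(1213, 19597))  # (6.19, 5.85, 6.53), matching the reported values
```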

Table 1: The seroprevalence rate by age-group.

Seroprevalence by Regions

Romania is divided into eight regions: North-East (NE), South-East (SE), South (S), South-West (SW), West (W), North-West (NW), Center (C) and Bucharest-Ilfov (BI), the last including the capital city of Bucharest. Comparing the regions with the national rate, we found significantly higher prevalence in NE, S and SW, and significantly lower prevalence in NW, C and BI (Table 2).

Table 2: The seroprevalence rate by regions.

Seroprevalence by Age-Groups – Regional Versus National Level

The seroprevalence by age-group in the regions is shown in Table 3. Although the seroprevalence for each age group varied somewhat among regions, significant differences from the national level were found only in limited cases. We found significantly lower seroprevalence rates compared to the national level in the NW region (age-groups 10-19 y and 30-59 y) and the Center region (age-groups 30-39 y and 40-49 y). The only case of a significantly higher level of protection was the age-group 40-49 y in the NE region.

Table 3: Seroprevalence by age-groups in the regions.

Seroprevalence in the Capital Region (BI)

The enrolment rate in the Bucharest-Ilfov region was by far the poorest (23% of planned). Table 4 provides details about the number and age of participants in Bucharest. Out of 845 participants, 30 tested positive for SARS-CoV-2-specific IgG antibodies, giving a seroprevalence of 3.55% (2.30-4.80). A very limited number of cases was enrolled in the extreme age-groups (children and elderly population), and no positive case was identified in the age-groups 0-9 y and 70-79 y. The proportion of males was 33.3%, slightly lower than the national proportion (36.2%), but without statistical significance (p=0.081, Chi-square test). Of the total positive cases, 18 were females and 12 were males. The enrolled and positive cases are shown in Table 4.

Table 4: Enrolled and positive cases by age-group in the Bucharest-Ilfov region.

Discussion

Given that the vast majority of infection cases remain asymptomatic, countries are seeking to assess the spread of the infection through seroprevalence studies representative of the general population. The aim of this study was to estimate the degree of spread of SARS-CoV-2 infection in the Romanian population. For this purpose, we assessed, using a chemiluminescence immunoassay, anti-SARS-CoV-2 IgG antibodies, as they last longer than IgM and therefore play a crucial role in assessing the real prevalence of the virus [23]. SARS-CoV-2 invades human cells by binding its spike protein to the membrane protein receptor of the cell. The genome of this virus encodes four key proteins: spike (S), nucleocapsid (N), envelope (E) and membrane (M) [24-27]. As the spike protein is involved in the first step of the infectious process, namely the interaction with specific receptors followed by virus internalization into the infected cells, many assays detect antibodies specific to the S protein of SARS-CoV-2. Chemiluminescence immunoassay is an indirect method for detecting anti-SARS-CoV-2 antibodies [28].

It can detect either IgM or IgG in serum [29]. Different countries have tested the performance of CLIA, all indicating good specificity, good sensitivity and convenient sampling [29-31]. Other studies have used this method in specific populations to report seroprevalence: a private healthcare group in Fukushima Prefecture, Japan [32]; elite football players in Germany [33]; and multicenter, primary care, and emergency care facilities in North Carolina [34]. The findings of this seroprevalence study suggest that the prevalence of IgG antibodies against the spike protein of SARS-CoV-2 is over 6% in Romania. However, according to official data from the surveillance system, the cumulative notification rate for confirmed COVID-19 cases had reached only 1.27% by the end of October 2020, when our study was completed. Our results support the published data regarding the low proportion of COVID-19 cases that require health care, based on the severity of their symptoms. The overall seroprevalence in Romania was lower than that recorded in Sweden, but higher than those reported in Germany and Spain [2]. However, it should be noted that these studies differed in the number of participants, the time frame and the methods used to evaluate the presence of antibodies.

The more modest seroprevalence rate among the elderly could be a point of consideration in the next planning phase for controlling the pandemic. We also found interesting and significant geographical variations among regions, which could be an argument in favour of adopting public health interventions tailored to the epidemiological situation in each region, even with particularization for the smallest territorial units. Our study has a number of limitations. Although convenience sampling is a common strategy used by many researchers, it can produce biased results because it may over- or under-represent parts of the population [35]. The response rate to the study invitation was lower in the extreme age-groups. This is expected: parents can be reluctant or hesitant to agree to the enrolment of their children in surveys, and children are less likely to undergo blood tests than adults, so their enrolment was more difficult. As for the elderly, due to the epidemiological situation, they might have avoided or postponed their usual blood tests. Women were represented in a higher proportion than men in this study, suggesting that women could be more interested in participating in surveys, or more active in general in investigating their health status.

Conclusion

Our study suggests that the real number of individuals infected with SARS-CoV-2 in Romania exceeds the number of reported PCR-confirmed cases by around five times. Data on seroprevalence are therefore very important for understanding the magnitude and distribution of the pandemic at the country level. Repeating the study after the vaccination campaign could provide strong indications about further needs for public health interventions.
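
The "around five times" figure follows directly from the two rates quoted in the Discussion (a 6.19% seroprevalence versus a 1.27% cumulative notification rate at the end of October 2020):

```python
seroprevalence_pct = 6.19  # IgG seroprevalence found in this survey
notified_pct = 1.27        # official cumulative notification rate, end of October 2020

# Ratio of estimated infections to officially reported cases
ratio = seroprevalence_pct / notified_pct
print(round(ratio, 1))  # 4.9, i.e. roughly five infections per reported case
```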

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Public Health

The Seroprevalence of SARS-CoV-2 Antibodies in Romania – First Prevalence Survey

Introduction

The infection with the new Coronavirus generated important socio-economic transformations, through social distancing measures, with profound economic implications, but also a lot of concern, due to evolutionary and clinical complications and lack of specific treatment. The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) associated disease – 2019 (COVID-19) has spread globally, affecting in one year and half over 170 million people from more than 180 countries or regions, leading to a global pandemic with a fatality rate of 2.1% [1]. The laboratory diagnosis of suspected COVID-19 clinical / contact cases is based on the detection of SARS-CoV-2 viral genome by qRT-PCR assays. However, asymptomatic or mild COVID-19 infections remain undiagnosed, therefore the burden (incidence and spread) of SARS-CoV-2 infection can be underestimated, affecting the implementation and efficiency of infection control and prevention measures. Given this limitation, countries are seeking to assess the spread of the infection in the population through prevalence studies conducted on study groups which are representative for the general population [2,3].

The surveys conducted in the first half of the year 2020 in different countries or geographical regions on populations of different sizes revealed different seroprevalence rates, ranging from <0.1% to more than 20% and that it can increase over time during longitudinal follow-up. In Europe, the seroprevalence reported by different countries was in decreasing order Italy (11.0%) [4], Switzerland (weekly seroprevalence rate of 4.8% to 10.8% during five weeks) [5], France (between 3.8 and 10% in different regions) (2), Spain (4.6%) [6], Denmark (1.9%) [7], Greece (0.42%) [8]. In USA, a great variation of seroprevalence was reported for different geographical regions (1.0% – 31.5%) [9,10], while for Brazil the rate was 3.8% [11]. In South America, Chile reported a seroprevalence of 13,4 – 16% [12]. In Africa, Kenya reported a crude seroprevalence of 5,6% and a study done in Alzintan City of Libya presented a seroprevalence of 2,74% [13,14].

In Asia, the highest rates were reported for Pakistan (15.6- 37.7%) [15], Guilan province, Iran (22%) [16], in China different serological studies reported positivity rates ranging from 0.6% in Chengdu, Sichuan to 3.8% in Wuhan, Hubei [17], while the lowest rates were recorded in Malaysia (0.4 – 0.6%) [18] and South Korea (0.07%) [19]. Japan reported 3.3% seroprevalence in Kobe [20] and a cumulative case detection ratios (2.6 – 8.7%) at 3 prefecture-level seroprevalence (Tokyo, Osaka and Miyagi) [21]. All studies reported a higher seroprevalence rate in males, although the differences are not statistically significant [22]. Considering the large variation of seroprevalence among different populations, filling the gap with data from different geographical regions is needed in order to better evaluate the burden of COVID-19 pandemic. This study reports for the first time the results of a seroprevalence survey performed in the Romanian population, to estimate the degree of spread of SARSCoV- 2 infection and to substantiate the measures to respond to the COVID pandemic that will be adopted at the level of the Romanian health care system for the next period.

Material and Methods

In this study, people that presented themselves conjuncturally at selected laboratories have been invited to participate in the seroprevalence survey. The participating laboratories were selected from each of the 42 counties of Romania.

Study Design and Participants

A cross-sectional study was performed to assess the prevalence of SARS-CoV-2 antibody seropositivity. The study used a non-probability sampling method known as convenience sampling. The sampling strategy had two steps: the selection of laboratories and the selection of persons. The inclusion criteria for the laboratories were the following: public or private facilities with high addressability (over 40,000 samples per year) serving ambulatory (non-hospitalized) patients. Based on these criteria, each of the 42 County Public Health Directorates selected between 3 and 5 laboratories to participate in the study (except the Bucharest Public Health Directorate, which selected 9 laboratories). Inclusion and exclusion criteria for the enrolment of study subjects were also defined. People of all ages who happened to present at the selected laboratories for check-ups were invited to participate; those showing signs or symptoms of respiratory infection, or who had requested COVID-19 testing, were excluded. Participants were selected based on a sampling step, and only individuals who gave their informed consent were enrolled.

If a person qualified in the sampling step did not agree to participate, the next person was asked. Data collection took place between July and October 2020. Participants had to sign an informed consent form to be included in the study (for children, consent was signed by the parent/legal representative) and to provide demographic information, including age, gender, city of residence and personal pathological history. The seroprevalence analysis used residual serum obtained from these individuals. The size of the study sample was calculated with the EpiInfo 7 program to obtain regional representation by decadal age group. The regional sample for a given age group was allocated proportionally among the counties in the region, according to their total population for that age group. The resident population of Romania as of July 1, 2018, by decadal age group, was used, with an expected frequency of SARS-CoV-2 infection in Romania of 50% in each age group, a 95% confidence level, an accepted error of 5%, and 5% accepted losses for each age group in the region.
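The calculation described above follows the standard formula for estimating a proportion, as implemented in tools such as EpiInfo: n = z²·p·(1−p)/e², inflated for expected losses. A minimal sketch using the study's stated parameters (expected frequency 50%, 95% confidence, 5% error, 5% losses); the function name is illustrative:

```python
import math

def sample_size(p=0.5, z=1.96, error=0.05, loss=0.05):
    """Sample size for estimating a proportion: n = z^2 * p * (1-p) / e^2,
    inflated to compensate for the expected loss rate."""
    n = (z ** 2) * p * (1 - p) / (error ** 2)
    return math.ceil(n / (1 - loss))

# With the study's parameters, each age group in a region needs
# roughly 405 participants (385 before the 5% loss inflation).
print(sample_size())            # -> 405
print(sample_size(loss=0.0))    # -> 385
```

Using p = 50% maximizes p·(1−p) and thus gives the most conservative (largest) sample size when the true prevalence is unknown.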

Procedures

All serum samples of the enrolled participants were analyzed at the National Institute of Public Health laboratory, using a chemiluminescence immunoassay (CLIA) to detect anti-SARS-CoV-2 antibodies of the IgG type. The samples were kept at temperatures between minus 12 and minus 20 degrees Celsius. Residual serum samples were transported in refrigerated transport and, exceptionally, in isothermal bags with ice packs. The quality criteria for the serum samples were the following: blood collected in biochemistry vacuum tubes, without anticoagulant, with or without separating gel; a serum volume of 0.5-1 ml for the age group 0-14 years and 1-2 ml for people over 14 years. Residual serum from people suspected of COVID-19 and from those presenting jaundice, haemolysis or superinfection (with flakes or veil) was not considered.

Ethics Statement

The study protocol was reviewed and accepted by the Scientific Council of the National Institute of Public Health – Research Ethics Committee. The seroprevalence study was performed in full compliance with the principles of ethics and confidentiality of personal data. Written informed consent was obtained from all individuals eligible for enrolment, and all professionals involved in the collection, retrieval and storage of data signed a confidentiality agreement.

Results

Of all the individuals who presented at the selected laboratories across the 8 regions of the country, 19,738 agreed to participate in this study and 19,597 provided a serum sample for which a CLIA result for anti-SARS-CoV-2 IgG-specific antibodies was available. Males represented 36.2% of the total study population; this could probably be attributed to the generally higher health-related concern of females, considering that selection was conjunctural (people presenting for various blood tests). The sample population had a mean age of 46.61±21.08 years and a median age of 48 years. The proportion of each decadal age group is shown in Figure 1. The young age groups were seriously under-represented, while the age groups 50-59, 60-69 and 70-79 years were slightly over-represented (the last in particular).


Figure 1: Proportion of the decadal age-groups in total population – sample versus country population.

Seroprevalence at National Level

Overall, we found 1,213 IgG-positive samples in the study population, resulting in a seroprevalence rate of 6.19% (95% CI: 5.85–6.53). The seroprevalence rate by age group at national level is shown in Table 1. The level of protection was similar in children and young adults (slightly higher in children, but without statistical significance). Middle-aged adults, especially the 40-49 years age group, showed a significantly higher level of protection. The population aged 60+ years seemed less protected than both adults and children: each elderly age group showed a statistically lower seroprevalence than the middle-aged adult population, and only a slight, non-significant difference compared to children and young adults. We also found differences within the elderly groups: seroprevalence seemed lower over the age of 70 years than in the 60-69 age group, but, again, this difference did not reach statistical significance.
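The reported rate and interval can be reproduced with a normal-approximation (Wald) confidence interval for a proportion. A minimal sketch; the exact interval method the authors used is not stated, but the Wald interval matches the figures quoted above:

```python
import math

def wald_ci(positives, n, z=1.96):
    """Point estimate and Wald (normal-approximation) 95% CI for a
    proportion, returned in percent, rounded to two decimals."""
    p = positives / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (round(p * 100, 2),
            round((p - half) * 100, 2),
            round((p + half) * 100, 2))

# National figures from the study: 1,213 positives out of 19,597 sera.
print(wald_ci(1213, 19597))  # -> (6.19, 5.85, 6.53)
```

Applied to the Bucharest-Ilfov figures reported later (30 positives of 845), the same function reproduces the 3.55% (2.30–4.80) interval.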


Table 1: The seroprevalence rate by age-group.

Seroprevalence by Regions

Romania is divided into eight regions: North-East (NE), South-East (SE), South (S), South-West (SW), West (W), North-West (NW), Center (C) and Bucharest-Ilfov (BI), the last including the capital city of Bucharest. Comparing the regions with the national rate, we found significantly higher prevalence in NE, S and SW, and significantly lower prevalence in NW, C and BI (Table 2).


Table 2: The seroprevalence rate by regions.

Seroprevalence by Age-Groups – Regional Versus National Level

The seroprevalence by age group in the regions is shown in Table 3. Although the seroprevalence for each age group varied somewhat among regions, significant differences from the national level were found in only a few cases. We found significantly lower seroprevalence rates than the national level in the NW region (age groups 10-19 and 30-59 years) and the Center region (age groups 30-39 and 40-49 years). The only significantly higher level of protection was in the 40-49 years age group in the NE region.


Table 3: Seroprevalence by age-groups in the regions.

Seroprevalence in the Capital Region (BI)

The enrolment rate in the Bucharest-Ilfov region was by far the poorest (23% of planned). Table 4 provides details on the number and age of participants in Bucharest. Of 845 participants, 30 tested positive for SARS-CoV-2-specific IgG antibodies, a seroprevalence of 3.55% (95% CI: 2.30–4.80). Very few participants were enrolled in the extreme age groups (children and the elderly), and no positive case was identified in the age groups 0-9 and 70-79 years. The proportion of males was 33.3%, slightly lower than the national proportion (36.2%), but without statistical significance (p=0.081, Chi-square test). Of the positive cases, 18 were female and 12 were male. The enrolled and positive cases are shown in Table 4.


Table 4: Enrolled and positive cases by age group in the Bucharest-Ilfov region.

Discussion

Given that the vast majority of infection cases remain asymptomatic, countries are seeking to assess the spread of the infection through seroprevalence studies representative of the general population. The aim of this study was to estimate the spread of SARS-CoV-2 infection in the Romanian population. For this purpose, we assessed anti-SARS-CoV-2 IgG antibodies using a chemiluminescence immunoassay, as IgG antibodies last longer than IgM and therefore play a crucial role in assessing the real prevalence of the virus [23]. SARS-CoV-2 invades human cells by binding its spike protein to a membrane protein receptor of the cell. The genome of this virus encodes four key proteins: spike (S), nucleocapsid (N), envelope (E) and membrane (M) [24-27]. As the spike protein is involved in the first step of the infectious process, the interaction with specific receptors followed by virus internalization into the infected cells, many assays detect antibodies specific to the S protein of SARS-CoV-2. Chemiluminescence immunoassay is an indirect method for detecting anti-SARS-CoV-2 antibodies [28].

It can detect either IgM or IgG in serum [29]. Several countries have tested the performance of CLIA, all indicating good specificity and sensitivity and convenient sampling [29-31]. Other studies used this method on specific populations to report seroprevalence: a private healthcare group in Fukushima Prefecture, Japan [32]; elite football players in Germany [33]; and multicenter, primary care and emergency care facilities in North Carolina [34]. The findings of this seroprevalence study suggest that the prevalence of IgG antibodies against the spike protein of SARS-CoV-2 is over 6% in Romania. However, according to official surveillance data, the cumulative notification rate for confirmed COVID-19 cases had reached only 1.27% by the end of October 2020, when our study was completed. Our results support the published data showing that only a minority of COVID-19 cases, depending on the severity of their symptoms, generally require health care. The overall seroprevalence in Romania was lower than that recorded in Sweden, but higher than that reported in Germany and Spain [2]. It should be noted, however, that these studies differed in the number of participants, time frame and the methods used to evaluate the presence of antibodies.
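As a quick arithmetic check of the under-ascertainment implied by these two figures, the ratio of the measured seroprevalence to the cumulative notification rate is roughly five:

```python
# Overall seroprevalence (6.19%) versus the cumulative notification
# rate of PCR-confirmed cases (1.27%), both as quoted in the text.
seroprevalence_pct = 6.19
notified_pct = 1.27

ratio = seroprevalence_pct / notified_pct
print(round(ratio, 1))  # -> 4.9
```

This is the basis for the statement in the Conclusion that the real number of infections exceeds reported cases by around five times.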

The more modest seroprevalence rate among the elderly could be a consideration in the next planning phase for controlling the pandemic. We also found interesting and significant geographical variations among regions, which could argue for public health interventions tailored to the epidemiological situation of each region, even down to the smallest territorial units. Our study has a number of limitations. Although convenience sampling is a common strategy, it can yield biased results because it may over- or under-represent parts of the population [35]. The response rate to the study invitation was lower in the extreme age groups. This is unsurprising: parents may be reluctant or hesitant to enrol their children in surveys, and children undergo blood tests less often than adults, making their enrolment more difficult. As for the elderly, given the epidemiological situation, they might have avoided or postponed their usual blood tests. Women were represented in a higher proportion than men, suggesting that women may be more interested in participating in surveys or, in general, more active in investigating their health status.

Conclusion

Our study suggests that the real number of individuals infected with SARS-CoV-2 in Romania exceeds the number of PCR-confirmed reported cases by around five times. Seroprevalence data are therefore very important for understanding the magnitude and distribution of the pandemic at country level. Repeating the study after the vaccination campaign could provide strong indications about further needs for public health interventions.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Radiology

Opportunistic Diagnosis of Osteoporotic Vertebral Fractures on Imaging Studies performed for Alternative Clinical Indications

Introduction

In an era of increasing life expectancy, osteoporosis has become a major global health concern [1,2]. Osteoporosis is a skeletal disorder characterised by compromised bone strength, which predisposes to increased fracture risk [2]. At least one third of all post-menopausal women, and one fifth of men older than 50, will suffer an osteoporotic fracture in their lifetime [3-5]. The National Osteoporosis Foundation (NOF) estimates that approximately 54 million Americans suffer from osteoporosis, resulting in 2 million fractures annually [6]. Population-based studies have demonstrated an increasing prevalence of osteoporotic fractures resulting in hospitalisation, increased morbidity and mortality, and a growing burden on healthcare systems [7-9]. Vertebral fractures (VFs) account for up to 50% of osteoporotic fractures, making them the most common fracture subtype [10]. The incidence of vertebral fractures increases with age [10,11]. Up to 26% of Scandinavian women are diagnosed with at least one VF in their lifetime [11]. VFs are a major cause of pain and reduced mobility, and many patients who have sustained a VF suffer psychologically from fear of isolation and loss of independence [12,13]. Additionally, sustaining a VF is an independent risk factor for mortality [14]. Studies show that patients with previous VFs are five times more likely to sustain an additional VF and twice as likely to suffer a hip fracture, with resulting morbidity and mortality [15,16]. Encouragingly, evidence has shown that early intervention with pharmacological agents such as bisphosphonates results in a relative risk reduction of up to 0.6 for vertebral fractures and up to 0.8 for non-vertebral fractures [17]. Therefore, it is vital that VFs are correctly diagnosed so that patients are investigated and treated appropriately.
However, there is a discrepancy between best recommended management and real-life clinical practice, with studies concluding that many patients diagnosed with an osteoporotic fracture are never appropriately investigated or treated for osteoporosis [18-20].

Many imaging studies performed for alternative clinical indications fortuitously include the spine. Radiologists do not always systematically review the spinal vertebrae when they are not the specific clinical area of concern [21,22]. This can lead to a missed opportunity to detect vertebral fractures and diagnose osteoporosis [21,22]. VFs are evident on various imaging modalities performed for alternative clinical indications but are frequently not reported by radiologists [23,24]. Terminology such as ‘wedging’, ‘endplate compression’ and ‘endplate concavity’ in radiology reports can be confusing and may not be clearly understood by the ordering physician as indicating a vertebral fracture or underlying osteoporosis. Non-diagnosis or inappropriate reporting of VFs in this way is a missed opportunity to diagnose osteoporosis, provide appropriate treatment and reduce patients’ risk of further osteoporotic fractures [18]. In this paper, we discuss the radiological assessment of VFs and describe how fractures can be diagnosed on the most commonly used imaging modalities, including plain film, MRI, CT and bone scans (Figures 1A & 1B).


Figure 1A: Lateral lumbar spine radiograph of an 80-year-old female patient. The radiograph demonstrates several insufficiency compression fractures: a severe anterior wedge fracture at T12, mild compression fractures of the L1 and L4 superior endplates, and a moderate compression fracture at L2.


Figure 1B: Lateral thoracic spine radiograph demonstrates a moderate compression fracture at T7 with secondary kyphosis.

Assessment of Fractures

Genant et al. devised the semi-quantitative (SQ) method for describing vertebral fractures [25]. This method has high inter- and intra-observer agreement, even amongst inexperienced reviewers [25]. It is widely reproducible and is often used in research settings and clinical trials. The SQ method is a relatively straightforward way to grade fractures and avoids otherwise confusing language that may be misinterpreted. First described on lateral radiographs, the SQ method employs visual inspection to grade vertebral fractures. Grade 0 is normal, without loss of vertebral body height. Grade 0.5 denotes borderline vertebral fractures. Grade 1 fractures show mild deformity, with approximately 20% to 25% loss of height and 10% to 20% reduction in area. Grade 2 fractures are moderately deformed, with 25% to 40% loss of height and 20% to 40% loss of area. Grade 3 vertebral fractures have lost 40% or more of their height and area. The SQ method is not without limitations. It may inadvertently overdiagnose VFs in patients with congenital or acquired vertebral anomalies [26], and employed alone it would fail to diagnose minor endplate fractures that do not result in loss of vertebral body height. In response, Jiang et al. devised the algorithm-based qualitative (ABQ) approach, which focuses on vertebral endplate deformities [27]. Using this method, an experienced radiologist must assess various aspects of endplate abnormality before diagnosing a fracture. Jiang et al. showed that, using such a stringent criteria-based algorithm, the ABQ method is likely to diagnose only one third of the fractures that would be diagnosed by the SQ method alone. Similarly, Black et al. showed that the SQ method diagnosed three times the number of mild vertebral fractures compared to other quantitative methods [28].
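As a rough illustration, the height-loss thresholds above can be encoded as a simple lookup. This is a simplified sketch only: the original SQ method is a visual assessment that also considers area loss and a borderline grade 0.5, so the exact boundary handling here is an assumption.

```python
def genant_grade(height_loss_pct):
    """Approximate Genant SQ grade from percentage loss of vertebral
    body height, following the thresholds described in the text.
    (Simplification: ignores area criteria and the borderline grade 0.5.)"""
    if height_loss_pct >= 40:
        return 3  # severe deformity
    if height_loss_pct >= 25:
        return 2  # moderate deformity
    if height_loss_pct >= 20:
        return 1  # mild deformity
    return 0      # normal vertebral body height

print([genant_grade(x) for x in (10, 22, 30, 45)])  # -> [0, 1, 2, 3]
```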

Recognition of Fractures

Imaging Modalities:

A. Plain Films: For clinically suspected VFs, plain films including antero-posterior (AP) and lateral projections are usually the first-line investigation. The lateral film is particularly useful (Figures 1A & 1B). The radiologist should carefully examine the vertebral body outline, especially the superior and inferior endplates, to ensure VFs are not missed. The pedicles are examined for symmetry on the AP film. Subjectively identifying reduced bone density heightens the index of suspicion for VFs, as these patients are at much greater risk. Dynamic radiographs of the vertebrae can increase the likelihood of correct diagnosis on plain radiography: the radiologist compares supine images with lateral sitting radiographs to evaluate changes in vertebral body height. The sensitivity and specificity of dynamic radiographs for diagnosing acute VFs are 66% and 96%, respectively [29]. While moderate and severe VFs are rarely misdiagnosed, several conditions can be mistaken for mild VFs, leading to overdiagnosis. These include developmental short vertebral height, physiological wedging, Scheuermann’s disease, degenerative scoliosis, Schmorl’s nodes and Cupid’s bow deformity (a smooth developmental curvature of the inferior endplate of a lumbar vertebra) [30]. Possible reasons for underdiagnosis of VFs by non-musculoskeletal radiologists include focusing on other acute imaging findings, lack of specialist knowledge about osteoporosis and osteoporotic VFs, or simply ignoring osteoporotic VFs completely [31].
Vertebrae are included on many plain films when there is no clinical suspicion of VF. Examples include abdominal radiographs in patients with abdominal pain or chest radiographs in patients with cardio-respiratory symptoms. Less commonly, the vertebrae are incidentally imaged during barium investigations and interventional, cardiac and fluoroscopic procedures. Even if the study was not performed to rule out a VF, each imaged vertebra should be carefully evaluated to exclude an occult VF. Despite the obvious opportunity to diagnose VFs in this way, there is a paucity of published literature in the area. The most studied radiographic technique for incidentally diagnosing VFs is the chest radiograph. In a large study of over 10,000 post-menopausal women who underwent a lateral chest x-ray, 41% of radiologists who identified a VF failed to document it in the report summary, and only 36% of patients were put on treatment for osteoporosis on discharge [32]. In a smaller retrospective review of chest x-rays of post-menopausal women, Gerlach showed that 14.1% had a moderate or severe VF visible on the chest radiograph [21]. Unfortunately, less than one quarter of visible VFs were referenced in the radiologists’ summaries and only one seventh of these patients received a discharge diagnosis of VF. As a result, only 18% of patients were discharged with appropriate medical therapy for underlying osteoporosis. The lateral chest radiograph in elderly patients is an opportunity to incidentally diagnose VFs by assessing the vertebral bodies and clearly reporting findings in the final summary [33]. Despite their importance in the initial investigation of suspected VF, many patients with VFs will have no morphological change on plain films. It is important not to dismiss patient symptoms based on normal radiographs, since many patients with normal plain films may have acute changes detectable only on MRI [34]. Loss of vertebral height may not be evident at the time of acute symptoms but can become evident on a subsequent follow-up radiograph.
B. MRI: MRI is a time-intensive imaging modality with relative contraindications such as claustrophobia, the presence of a non-conditional pacemaker and the first trimester of pregnancy. MRI has a sensitivity of 100% in detecting spinal trauma and is an excellent method to diagnose and assess VFs [34]. MRI has a sensitivity and specificity of up to 82% and 98%, respectively, for distinguishing osteoporotic VFs from other types of fracture [35] (Figures 2A-2C). In addition to identifying a VF, MRI may also diagnose other, less common causes of back pain such as infection or malignancy, and allows assessment of the spinal ligaments, spinal cord, surrounding CSF and meninges. The Short Tau Inversion Recovery (STIR) sequence is particularly sensitive to acute fractures, as it nullifies marrow fat signal over a large body area such as the entire vertebral column, increasing the visibility of acute pathology such as fracture. STIR sequences in combination with T1-weighted sequences help differentiate benign osteoporotic VFs from those caused by malignancy [36]. The presence of marrow oedema, recognised as high signal on fluid-sensitive STIR or T2-weighted fat-saturated sequences, indicates a recent fracture; marrow oedema is absent in a chronic vertebral fracture. Benign vertebral fractures typically appear as linear low T1 signal, whereas malignancy or infection causes diffuse, non-linear replacement of the normal marrow of the vertebra. For every MRI study performed, initial localizer sequences are used by radiographers to plan image acquisition. These localizers are obtained from thick slices and are not suitable for diagnostic detail, but they do represent an opportunity to diagnose an unsuspected VF. Strong inter-observer agreement has been reported in detecting VFs of the thoracic and lumbar spine on localizer images [37]. In another study, musculoskeletal radiologists examined 856 localizers from patients undergoing breast MRI: 8.9% of patients had a VF visible on the MRI localizer, but none were documented in the final report [38]. MRI localizers are a quick and reliable method of diagnosing unsuspected vertebral fractures and may obviate the need for further imaging or ionising radiation.


Figure 2: MRI Lumbar Spine with T1, T2 & STIR sequences of an acute mild compression fracture at T10 in a 67-year-old female patient.

C. Computed Tomography (CT): CT uses high doses of ionising radiation to acquire images. CT imaging is available 24/7 in most tertiary hospitals and offers almost instant image acquisition. CT has excellent sensitivity and specificity for identifying VFs (100% and 97%, respectively) [39]. CT of the spine may be requested when a VF is clinically suspected but the radiograph is normal. Of note, a non-displaced vertebral fracture on a background of osteopenia may not be evident on CT [39]. In patients with a known VF, CT can provide additional information such as the stability of the fracture and the protrusion of bone fragments into the spinal canal. CT can also aid clinical decisions such as patient suitability for surgical intervention or vertebroplasty. The majority of CTs are performed for clinical indications not specifically related to identification of VFs, including cardiac CT, CTPA and CT thorax to evaluate thoracic pathology, and CT KUB, CT abdomen/pelvis, CT colonography and CT peripheral angiograms/venograms performed to identify intra-abdominal pathology. Vertebral morphology is well visualised on these CT studies, particularly on sagittal reformats. Modern CT scanners can display the vertebrae in the imaged region in excellent bony detail in coronal, sagittal and axial reformats without the requirement for further imaging or radiation exposure to the patient. Of these, the sagittal reconstructions are particularly important for diagnosing VFs (Figure 3) [40]. Despite this ability, CT is often not effectively exploited to diagnose occult VFs. A New Zealand study retrospectively reviewed sagittal reconstructions of CT abdomen or thorax in patients over 65 years: 22 of 175 patients had a VF visible on sagittal reconstruction, and 77% of these had a previously undiagnosed VF.
The authors concluded that reviewing sagittal reformats of CT of the abdomen and pelvis improved the diagnosis of VFs, which are frequently not reported, thereby missing an opportunity to diagnose osteoporosis, treat with appropriate medical therapy and reduce the risk of future osteoporotic fractures and their associated morbidity and mortality [41]. Similar to localizers in MRI, CT scout views are obtained prior to final image acquisition. These use low levels of radiation to acquire 2-dimensional images used to plan the final CT acquisition. Lateral CT scout views may show fractures not visible on axial CT images. One study of 300 CT scans covering the thoracic and lumbar spine demonstrated a sensitivity and specificity for diagnosing VFs on scout views of 98.7% and 99.7%, respectively. The authors concluded that scout views should be used to evaluate for VFs on CTs performed for other clinical indications [42].


Figure 3: Sagittal reformatted CT of the Lumbar Spine in an 83-year-old female demonstrating severe compression fracture at L1, moderate compression fracture of T11 and mild compression fracture of L2.

Skeletal Scintigraphy (Bone Scans)

Tc-99m is a radioisotope that can be bound to methylene diphosphonate (MDP) and injected intravenously. The radioisotope travels through the patient’s bloodstream and binds to remodelling bone. Three hours after injection, the patient is placed on a gamma camera, which identifies bony hotspots where Tc-99m has accumulated. 80% of VFs are visible as hotspots, usually linear in morphology, at 24 hours following injury, and almost all return to normal within two years [43]. The major limitation of bone scans is their poor specificity. The most common indication for bone scans is identifying osseous metastatic disease in patients with a known primary malignancy, but they are also used to identify occult fractures or osteomyelitis. Because hotspots are non-specific, they can also be caused by degenerative change. For this reason, bone scans are often reported in conjunction with other available imaging such as MRI, CT or plain films (Figures 4 & 5).


Figure 4: Bone scan for completion of staging in a 67-year-old female with non-small cell lung cancer. There are non-specific foci of increased radioisotope uptake in the mid-thoracic spine. Comparison was then made with the previous staging CT thorax (Figure 5).


Figure 5: Review of the staging CT thorax confirmed that the areas of uptake on the bone scan in Figure 4 correspond to moderate sclerotic wedge compression fractures at T6 and T7 secondary to metastatic disease.

Discussion

Osteoporosis is a growing public health concern and predisposes patients to VFs. Prompt diagnosis and early intervention with appropriate medical treatment are imperative. The literature shows that incidental VFs on imaging studies performed for alternative clinical indications are underdiagnosed, missing an opportunity to diagnose the vertebral fracture, diagnose osteoporosis if not previously recognised, and treat the patient appropriately. Untreated and undiagnosed VFs can significantly impact a patient’s quality of life and life expectancy. Patients with osteoporotic fractures can endure intolerable pain, loss of independence and psychological suffering due to fear of isolation. Many require polypharmacy for pain control, and all are at high risk of future osteoporotic fractures. The mid-thoracic region and thoraco-lumbar junction are the most frequently affected areas, and fractures there may result in spinal kyphotic deformity. Kyphosis predisposes to loss of balance, muscle wasting, further degenerative change at adjacent intervertebral joints, restrictive lung disease, inability to work and loss of earnings [44]. Fortuitously, many imaging studies including plain radiography, CT, MRI and bone scans include the thoracic and lumbar spine in the imaged area. This provides an opportunity to diagnose unsuspected abnormalities of the spine when these studies are performed for alternative clinical indications. Many radiologists, however, do not systematically review the vertebrae in these studies and miss the opportunity to identify abnormalities such as vertebral fractures and osteoporosis. When vertebral morphological abnormalities are identified, equivocal language such as ‘loss of height’ or ‘wedging’ can be misleading: such terminology is ambiguous for referring physicians, who may not appreciate that these findings represent vertebral fractures and imply underlying osteoporosis. There is no agreed gold standard for diagnosing VFs on imaging.
As a result, many VFs are both under and over-diagnosed. One strategy is the semi-quantitative method for grading fractures. Even amongst inexperienced observers, the SQ method demonstrates high levels of agreement [21]. Alternatively, the ABQ method forces the radiologist to answer a number of questions before diagnosing a VF and is arguably more accurate [27]. Whichever method is employed, it remains imperative the reading radiologist clearly states the existence of a VF in the report summary to improve the proportion of patients discharged on appropriate medical therapy. A number of imaging techniques performed for various clinical indications may show VFs in the area imaged. There is under reporting of VFs which are clearly visible on lateral chest radiographs, MRI localizers and CT scout views. Unless sagittal reformats of CT studies are routinely performed often VFs are not visible on standard axial images even to experienced musculoskeletal radiologists. The term ‘inattentional blindness’ refers to an inability to notice unexpected events when immersed in an alternative task. In one experiment, 83% of expert radiologists failed to recognise a gorilla drawn onto a stack of CT images when they were focusing on finding pulmonary nodules [45]. Another phenomenon coined “satisfaction of search” refers to a relative difficulty in identifying further pathological findings following identification of another significant abnormality [46]. These factors are relevant to radiologists when searching for clinically significant pathology not related to the spine on x-ray, MRI, or CT and thus VFs can easily be overlooked. Dedicated education programmes delivered to radiologists and internal medical physicians may help to improve the diagnosis and management of VFs. In one study, recognition of VFs amongst internists almost doubled from 22% to 43% following provision of basic lectures, posters and flyers. 
The same study demonstrated a significant increase in patients discharged on osteoporosis treatment from 11% to 40% [47]. In another study, there was a marked improvement in the ability of a radiology resident to correctly identify VFs after undergoing specific teaching [48].
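The SQ method referenced above grades fractures by approximate vertebral height reduction (Genant grades 0–3). As a rough illustrative sketch only — an actual SQ reading is a visual assessment by the radiologist, not a single measurement — the standard thresholds can be expressed as:

```python
def genant_sq_grade(height_loss_pct: float) -> int:
    """Approximate Genant semi-quantitative (SQ) grade from percent
    vertebral height loss. Standard SQ thresholds: <20% normal (grade 0),
    20-25% mild (grade 1), 25-40% moderate (grade 2), >40% severe (grade 3).
    Illustrative only; real grading is performed visually."""
    if height_loss_pct < 20:
        return 0  # normal
    if height_loss_pct < 25:
        return 1  # mild fracture
    if height_loss_pct < 40:
        return 2  # moderate fracture
    return 3      # severe fracture
```

For example, a vertebra with roughly 30% anterior height reduction would correspond to a grade 2 (moderate) fracture under this scheme.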

Conclusion

In conclusion, VFs are a major health concern in an era of aging populations. Many factors contribute to the underdiagnosis and undertreatment of VFs. When a fracture is identified by a radiologist, ambiguous terminology should be avoided and the SQ method employed. The spine is included in many imaging studies performed for alternative clinical indications; this is a fortuitous opportunity to assess the vertebrae and diagnose fractures when present. Irrespective of the clinical indication or imaging modality, a high index of suspicion for VFs should always be maintained. Basic education programmes for radiologists and internists would serve to improve the diagnosis of VFs and the treatment of osteoporosis.


Open Access Journals on Surgery

New Approach to the Treatment of CoV-2 Infection by Means of Immune-modulators and Non-Steroid Anti- Inflammatory Drugs

Historical Background of the “COVID-19” Pandemic

The first known case of what became “severe acute respiratory syndrome” (SARS) occurred on November 16, 2002, in Foshan, a city about 20 km from Guangzhou in China’s Guangdong province. From November 2002, an unknown infectious agent caused outbreaks of an atypical pneumonia that spread throughout Guangdong province in southern China. The disease usually started with high fever and mild respiratory symptoms but rapidly progressed to pneumonia; within a few days new cases emerged in mainland China, so that by February 2003 more than 300 cases had been reported, about one-third involving health care workers [1]. Persons who became infected and subsequently traveled spread the outbreak to Hong Kong [2] and from there to Vietnam, Canada, and several other countries [3]. By the end of February 2003, the disease had spread to neighboring regions and countries; it was severe, could be transmitted from person to person, and appeared to cause significant outbreaks among health care workers [3,4]. On March 13, 2003, WHO issued a global alert on the disease, which it termed “severe acute respiratory syndrome” (SARS) [5], and a remarkable global effort led to the identification of the SARS coronavirus (SARS-CoV) in early April of the same year [4,6]. Outbreaks occurred in Southeast Asia, North America and Europe and led to the first pandemic of the 21st century. By July 2003, after a total of 8,096 reported cases, including 774 deaths in 27 countries [7], no further infections were being detected and the SARS pandemic was declared to be over. Five additional cases of SARS, resulting from zoonosis, occurred between December 2003 and January 2004 [8], but no further human cases have been detected since then. Infection control measures, rather than medical interventions, thus put an end to the first SARS-CoV pandemic of the 21st century. However, the possibility of transmission by a variety of routes was noted.
It was later shown that certain viruses similar to SARS-CoV found in bats could infect human cells without prior adaptation [9,10], indicating that SARS could re-emerge [11]. Indeed, 10 years after the first occurrence of SARS-CoV, a man in Saudi Arabia died of an acute respiratory syndrome, and a coronavirus was isolated from his serum; because of its place of origin, this syndrome was called “Middle East respiratory syndrome” (MERS). In April 2012, several cases of severe respiratory illness had already occurred in a hospital in Jordan [12]; these cases were retrospectively diagnosed and considered to be human-to-human transmitted. Furthermore, in the United Kingdom, 3 cases of MERS were reported in September 2012 [13].

In May 2015, a single person returning from the Middle East initiated a nosocomial MERS outbreak in South Korea that affected 16 hospitals and 186 patients [14]. By April 26, 2016, 1,728 MERS cases, including 624 deaths, had been confirmed in 27 countries [15,16]. The accompanying figure (Figure 1), reproduced from a 2016 review (de Wit E, van Doremalen N, Falzarano D, et al. SARS and MERS: recent insights into emerging coronaviruses. Nat Rev Microbiol (2016) 14: 523-34. doi: 10.1038/nrmicro.2016.81), shows the different routes of coronavirus transmission. Bats could have been the main reservoir of the coronavirus 30 years before it passed to humans, through “cross-species transmission” between bats and camels; these animals, in continuous contact with humans, could have produced the direct zoonosis that gave rise to MERS-CoV. Moreover, the detection of the virus in palm civets (a Chinese species) and in a raccoon dog (Japanese raccoon), as well as the detection of antibodies to the virus in the Chinese ferret-badger (also known as the small-toothed ferret-badger) observed at a live animal market in Shenzhen, China [17], alerted researchers to the possible transmission of the virus to humans. However, these animals were only incidental hosts, as there was no evidence of SARS-CoV-like virus circulation in palm civets, either in the wild or in breeding facilities [18]. Thus, the search for the MERS-CoV reservoir initially focused on bats, but a serological study in dromedaries from Oman and the Canary Islands showed a high prevalence of MERS-CoV neutralizing antibodies in these animals [19]. In addition, MERS-CoV RNA was detected in swabs collected from dromedaries on a farm in Qatar that was associated with two human cases of MERS, infectious virus was isolated from dromedaries in Saudi Arabia and Qatar [20-23], and serological tests also detected circulation of a MERS-CoV-like virus in dromedaries in the Middle East, East Africa, and North Africa.
Dromedaries in Saudi Arabia harbor several viral genetic lineages [24], including those that have caused outbreaks in humans. Taken together, these data pointed to the role of dromedaries as a reservoir of MERS-CoV. The ubiquity of infected dromedaries near humans and the resulting zoonosis may explain why MERS-CoV continues to cause human infections, whereas SARS-CoV, without the continued presence of an infected intermediate host and with relatively infrequent human-bat interactions, had not caused further human infections.
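The case-fatality figures quoted above can be checked directly from the reported totals (774 deaths among 8,096 SARS cases; 624 deaths among 1,728 MERS cases to April 2016). A minimal sketch of the arithmetic:

```python
def case_fatality_rate(deaths: int, cases: int) -> float:
    """Crude case-fatality rate as a percentage of reported cases.
    Note this is a naive ratio over confirmed cases, not an
    age-adjusted or infection-fatality estimate."""
    return 100 * deaths / cases

# Figures as reported in the text:
sars_cfr = case_fatality_rate(774, 8096)   # SARS, 2002-2003
mers_cfr = case_fatality_rate(624, 1728)   # MERS, to April 2016
print(f"SARS CFR ~{sars_cfr:.1f}%, MERS CFR ~{mers_cfr:.1f}%")
```

This reproduces the commonly cited contrast: a crude case-fatality rate of roughly 9.6% for SARS versus roughly 36% for MERS.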


Figure 1.

Person-to-person transmission of SARS-CoV and MERS-CoV occurred primarily through nosocomial spread. Between 43.5% and 100% of MERS cases in individual outbreaks were hospital-related, and very similar observations were made for some of the SARS clusters [25,26]. Transmission among family members occurred in only 13-21% of MERS cases and 22-39% of SARS cases. Patient-to-patient transmission was the most common route of MERS-CoV infection (62-79% of cases), whereas for SARS-CoV, infection of health care workers by infected patients was very common (33-42%) [25]. The predominance of nosocomial transmission is probably due to the fact that substantial virus shedding occurs only after symptom onset [27,28], when most patients are already seeking medical care [29]. An analysis of hospital surfaces after the treatment of MERS patients showed the ubiquitous presence of viral RNA in the environment for several days after patients stopped testing positive [30]. In addition, many SARS and MERS patients were infected through “superspreaders” [25-27,31-33]. By 2016, various publications had already identified the key features of these viruses: the predominance of nosocomial transmission, and pathogenesis driven by a combination of viral replication in the lower respiratory tract and an aberrant host immune response. Several potential treatments for SARS and MERS had also been suggested in animal models and “in vitro”, including small-molecule protease inhibitors, neutralizing antibodies and inhibitors of the host immune response.

Current Pandemic COVID-19

In December 2019, a new coronavirus (“nCoV”) emerged in Wuhan, Hubei province, China. Attention focused on the Huanan market, where in addition to fish, live animals were also traded. However, analysis of the first 41 hospitalized patients showed that the Wuhan seafood market might not be the main source of the new virus’s spread [34]. Nevertheless, an epidemic of severe pneumonia of unknown cause soon appeared [35], and genomic sequencing of viral isolates from five pneumonia patients hospitalized from December 18 to 29, 2019, indicated the presence of a previously unknown β-CoV strain [36]. This new coronavirus subsequently spread from the original outbreak site in China and was designated “SARS-CoV-2” by the World Health Organization (WHO) on January 12, 2020, with the disease named “COVID-19” on February 11, 2020 [37]; the virus was confirmed to have 75-80% similarity to the coronavirus that caused severe acute respiratory syndrome (SARS-CoV) [38]. From February 2020 to April 2020, COVID-19 affected 188 countries worldwide [38]. Up to July 14, 2020, the cumulative number of confirmed cases was 13.1 million, and at least 572,426 people had died from SARS-CoV-2 infection [39]; the incidence of deaths ranged from less than 1% to 3.7% among different countries [40]. These figures compare with a death rate from influenza of less than 0.1% [35].

After the first pandemic period, the incidence of COVID-19 declined during the summer months and then rose again significantly from September/October 2020 to the time of writing (31 January 2021). The increase in incidence appears statistically as “waves”: three to four waves with peaks, plateaus and valleys in different countries. Most European Union countries, including Spain, have experienced high incidence, but the highest numbers of infections to date have been observed in Great Britain, the USA, Brazil and India. As of January 30, 2021, the cumulative number of cases worldwide since the pandemic began at the end of 2019 was about 102,000,000, with about 2,210,000 deaths.

The Acute Inflammatory Process

From the clinical point of view, the disease caused by CoV-2 presents 3 fundamental stages. In the first stage the patient shows signs and symptoms similar to infection by other respiratory viruses and/or bacteria (e.g., influenza); symptoms are mild and the patient may even be asymptomatic. In the second stage the patient feels worse and the signs and symptoms are more evident (fever, tiredness, general malaise, anosmia, hypoacusis, etc.); this stage is decisive, as the patient may improve over the following days until cured, or worsen to the third stage, in which admission to the ICU, intubation and assisted ventilation may become necessary; this moment is crucial, since the feared “cytokine storm” may occur. From the immunological point of view, infections by bacteria and/or viruses, accidental or provoked trauma (e.g., surgical interventions), allograft rejection and the development of neoplasms have a common feature: inflammation. Inflammation is the result of multiple interactions of the systems involved in the homeostasis of the organism, mainly the immune system, whose first objective is the localization of the process and the elimination of the aggressor agent. When the infection is aggravated by a huge excess of antigen (due to the unstoppable, rapid replication of the virus), the inflammation reaches its climax and becomes a systemic process affecting the whole organism, called “systemic inflammatory response syndrome” (SIRS). In COVID-19, since the respiratory system is the main system affected, this takes the form of severe acute respiratory syndrome (SARS): the response of the immune system overflows, the “cytokine storm” appears, and this can lead to multi-organ failure (MOF) and the death of the patient.
In fact, from a biological point of view, tissue injury and its sequelae are involved in most medical problems, and the response of living tissues to aggression is the basis and foundation of the immune response [41-45].

In addition to the cytokine storm, COVID-19 viral particles can also directly induce multiple organ dysfunction. Viral particles from COVID-19 infection have been identified in bronchial and alveolar type 2 epithelial cells, and in fecal and urine samples [46,47]. Therefore, it is suggested that multiple organ dysfunction in patients with severe COVID-19 may also be caused by a direct attack by the virus. Many authors think that the synergistic action of both mechanisms contributes to the multi-organ failure of patients with severe COVID-19; however, we and some other authors believe that in fatal COVID-19 cases, severe dysfunction of the immune response is responsible to a greater degree than the direct damaging effect of the virus itself [42,47] (Figure 2). When macrophages or any other “antigen-presenting cell” (APC) are stimulated, the pro-inflammatory cytokines par excellence are released: IL-1, IL-6, IL-8, IL-15, IL-17, IL-18, TNFs, IFN-γ and PAF (platelet-activating factor). These cytokines play a relevant role in the inflammatory process and, in turn, can give rise to the so-called “cytokine storm”, the consequence of which is “systemic inflammatory response syndrome” (SIRS) and finally multi-organ failure (MOF), leading to death. On the other hand, as Niels Jerne (1974) said: “any stimulus capable of producing an immune response provokes a reaction comparable to the transmission of the ripples that can be observed in a pond when a stone is thrown, so that in the immune system the variation at the site of the stimulus receptor is transmitted everywhere”.
In SARS this allegory reaches a dramatic expression and encompasses not only the network of signals that cross and intersect within the immune system itself (complement system, circulating immune complexes, ADCC, NK cells, and the adaptive immune response: CTLs and cytokines), but also those between the different systems: coagulation, fibrinolysis, kinins, arachidonic acid, leukotrienes and thromboxanes (Figure 2).

biomedres-openaccess-journal-bjstr

Figure 2: Navarro-Zorraquino M. Immunologic response in shock and multiorgan failure. In: Navarro-Zorraquino M, editor. Immunological aspects of surgery. Zaragoza: Prensas Universitarias de Zaragoza; 1997. p. 261-300.

For this reason, loss of control of the servomechanisms that maintain homeostasis in any of the mentioned systems can cause an unstoppable release of mediators, leading irremediably to tissue damage [42]. From the pathophysiological point of view, inflammation is the result of multiple interactions between the various systems of the organism, whose first objective is the localization of the process and the elimination of the aggressor agent; this is followed by a repair process. The main physico-chemical events that occur during inflammation are: increased blood supply to the site of the attack; increased capillary permeability, which allows larger molecules than usual, such as antibodies, fractions of the complement system and other enzyme systems, to pass through the vascular endothelium; and the activation of leukocytes, initially neutrophils and macrophages, then lymphocytes. The development of the inflammatory reaction is controlled by cytokines (the intercellular messengers of the immunocompetent cells), the products of the plasma enzyme systems (the coagulation, fibrinolytic, kinin and complement systems), vasoactive mediators released from mast cells, basophils and platelets, and endothelial adhesion molecules. Since CoV-2 exhibits tropism for the lung, the immune response against the coronavirus begins with direct infection of the bronchial and bronchiolar epithelium. First, antigen-independent innate immunity provides the first line of leukocyte defense against microorganisms. The innate immune response involves several cell types, including neutrophils, eosinophils, basophils, monocytes, macrophages, lung epithelial cells, mast cells, and NK cells. After initial CoV-2 infection, dendritic cells (DCs) residing in the lungs become activated and act as “antigen-presenting cells” (APCs).


Figure 3.

In the lung, DCs reside within and beneath the airway epithelium, alveolar septa, pulmonary capillaries and airway spaces. Activated APCs ingest and process antigens and migrate to the lymph nodes, where they present the antigen, in the form of an MHC/peptide complex, to naive circulating T helper cells (Th0), inducing the immune response. Following activation of the Th0 receptor by the MHC/peptide complex, Th0 cells are activated, proliferate and differentiate into CD4+ (T helper) and CD8+ (cytotoxic T) lymphocytes. Subsequently, Th lymphocytes further differentiate into Th1 and Th2 cells, which release different cytokine profiles: Th1 cells drive cell-mediated immunity and release pro-inflammatory cytokines such as IFN-γ, IL-1β and IL-12 and pro-inflammatory factors such as TNFs, IFNs, PAF, GM-CSF and M-CSF; Th2 cells activate the production of antibody-producing B cells and release anti-inflammatory cytokines such as TGF-β, IL-4, IL-5, IL-9, IL-10 and IL-13 [42,47]. In the immune response of healthy adults with CoV-2 infection there is a balance between Th1 and Th2 lymphocyte activity. The inflammatory reaction initiated by the immune system, through the Th1 activation pathway and with the participation of Th17 cells and various cytokines, is regulated by the immune response itself through a “regulatory servo-mechanism” involving mainly Th2 cells (considered the main pathway of the anti-inflammatory response), via sub-populations called “regulatory cells”: Treg cells (CD4+25+FOXp3 and CD8+25+FOXp3) and Th17 cells (Figure 3). Th17 cells regulate the response by increasing the release of pro-inflammatory cytokines, while Treg cells steer the response towards the release of anti-inflammatory cytokines (Figure 3). We wish to emphasize here that the regulatory pathway responds to the needs of the immune response against the corresponding antigen at a given time, either by increasing the inflammatory activity of the Th1 pathway, mainly by means of Th17 cells and IL-17A, or by increasing the anti-inflammatory activity of the Th2 pathway, mainly by means of transforming growth factor β (TGF-β). Since these 2 cytokines will be key in the design of our research project, we will return to them later.

Systems of the Human Organism Affected by the “Cytokine Storm”

It is important to remember here the influence and consequences that the immune response has on the most important systems of the human organism, especially when it overflows into the “cytokine storm”. As Figure 2 shows, this response is related to the release of histamine; activation of the coagulation, fibrinolysis and kinin systems; release of arachidonic acid metabolites; the neuroendocrine response; release of free radicals; and release of prostacyclins and prostaglandins [42]. When the complement system is activated, the different fractions are released (activation by the classical pathway begins with the C1 fraction, and by the alternative pathway with the C3 fraction), but the most important for their pathophysiological actions are the C3a and C5a fractions (the anaphylatoxins), which increase capillary permeability and produce smooth muscle contraction both in the bronchial tree and in the gastrointestinal tract; the C3a fraction is capable of producing tachycardia, impairing cardiac function and inducing coronary vasoconstriction. The C3a and C5a fractions stimulate basophils and mast cells to release histamine, whose main action is to increase vascular permeability and smooth muscle contraction. When aggression to the organism occurs, activation of the enzymatic cascades of the complement, kinin, coagulation and fibrinolysis systems occurs rapidly, as does cellular activation of PMN leukocytes, macrophages, endothelial cells and platelets.
Tissue damage produced by viruses (as with CoV-2) induces platelet aggregation and adhesion to subendothelial collagen when the vascular endothelium is damaged, thus initiating activation of the so-called intrinsic pathway through the activation of factor XII (Hageman factor), which gives rise to FXIIa, an active protease [42]; this is a key factor that directly links the coagulation system to the so-called “kinin system” (Figure 2). FXIIa activates pre-kallikrein to kallikrein, which in turn acts on kininogen, a high-molecular-weight substance that, together with factor XII and pre-kallikrein, binds directly to subendothelial collagen, as platelets do through the mediation of von Willebrand factor (Figure 2).

At the same time that activation of the coagulation system by the intrinsic pathway occurs, activation of the so-called “extrinsic pathway” can also occur, by means of tissue thromboplastin released by damaged cells; tissue thromboplastin activates the extrinsic pathway in collaboration with factor VIIa (FVIIa), causing factor X (FX) also to become an active protease, FXa. The result of activation of the coagulation system by both pathways is the conversion of prothrombin to thrombin, which increases platelet aggregation and induces the release of arachidonic acid metabolites, especially thromboxane A2 (TxA2) (Figure 2). This activation of the coagulation system would be implicated in the immune response to CoV-2 and in the production of clots in patients with COVID-19, especially in the most severe stage of the disease, as well as in the finding of clots at necropsy in deceased patients. The hypothalamic-pituitary-adrenal axis responds to stimuli represented by the release of mediators and by the aggressor agent itself in a given situation. There are currently numerous studies attempting to relate different hormones, whose synthesis and release are regulated by the neuroendocrine system, to the immune response in various situations. We will refer here only to what seems most relevant to the inflammatory response and, in particular, to the pathophysiology of the cytokine response [42].

Cortisol is the most important glucocorticoid secreted by the adrenal cortex in response to ACTH and corticotropin-releasing hormone (CRH). Cortisol plays a very important role in many aspects of the inflammatory response and shock: it potentiates the effect of catecholamines, increases protein catabolism in muscle, and acts (together with epinephrine and norepinephrine) on vascular smooth muscle, lipolysis and gluconeogenesis. But here we wish to emphasize that cortisol inhibits the release of kinins and is closely connected with the release of other mediators and with the coagulation, fibrinolysis and complement systems in the inflammatory response. In addition, cortisol considerably reduces the number of lymphocytes, especially T lymphocytes, in patients with sepsis. In this regard, it is very notable that the majority of patients affected by COVID-19 show lymphopenia. Nitric oxide (NO) is synthesized in the body from L-arginine by an enzyme, nitric oxide synthase. There are two types of this enzyme: one is a constituent of the cytoplasm and is Ca++- and calmodulin-dependent for NO release; the other is also a cytoplasmic component but is Ca++-independent, requires tetrahydrobiopterin and other cofactors for its activation, and is inhibited by glucocorticoids. Following the studies of Furchgott and Zawadzki and others [47-50], there is no doubt that NO is a very important neurotransmitter.
The enzyme nitric oxide synthase is found in brain neurons but is not present in glia; in the pituitary it is found mainly in neurons located in the posterior lobe (the neurons that synthesize and release vasopressin and oxytocin); it is also found in the adrenal medulla, in neurons that stimulate the cells that release adrenaline (epinephrine); and in the intestine it is found in the mesenteric plexuses, regulating peristaltic movements. In addition, nitric oxide synthase is present in numerous tissues, but especially in the endothelial cells of blood vessels, where it seems to play an important role in vasomotor phenomena, with NO also acting as a “messenger” molecule closely connected to the immune system. In all these tissues, NO release by nitric oxide synthase appears to be Ca++- and calmodulin-dependent (as described above), constituting the “physiological NO production pathway”.

The point of view that most interests us here is the relationship of NO with the immune response, not only because it is able to stimulate macrophages, endothelial cells and dendritic cells against bacteria, viruses and rickettsiae, but also because it is actively involved in the inflammatory process; its excess production may contribute to a high degree to the pathophysiology of SARS and multi-organ failure. Macrophages produce detectable levels of NO about 6 hours after activation by IFN-γ, reaching the maximum level at 24 h. However, there is a “servo-control mechanism” by which NO can regulate its own synthesis, inhibiting IFN-γ production by Th1 cells and also the activity of nitric oxide synthase. In addition, some cytokines, including IL-4, IL-10 and TGF-β, also have an inhibitory effect (apparently dose-dependent) on NO production. In this respect, antagonists of cytokine and NO production could be a therapeutic measure in the treatment of COVID-19, as suggested by some in vitro studies.

Risk Factors Associated with COVID-19 Infection

Diseases associated with severe COVID-19, mainly severe heart disease, chronic kidney disease, chronic obstructive pulmonary disease (COPD), cancer (patients undergoing active treatment), immunosuppression due to solid organ transplantation, obesity and type 2 diabetes mellitus, together with advanced patient age, can result in “immune dysregulation”: failure of the system’s regulatory pathway and of the anti-inflammatory pathway, with an exaggerated shift towards the inflammatory pathway, which can develop into the huge release of cytokines and inflammatory factors called the “cytokine storm”.

Advanced age is perhaps the most important factor in our century, since populations now include people living to very advanced ages (even > 100 years), especially in developed countries. Overall, published work on patient age shows that the COVID-19 pandemic is causing a large increase in mortality in the elderly population relative to the mortality rate observed in patients under 70 years of age. The mortality rate is dramatically alarming in patients older than 80 years: about 30%, compared with the total population of COVID-19-infected patients [44]. Some currently published statistical data show that the probability of death from COVID-19, compared with infected patients aged 18-29 years, can be summarized as follows: 30-39 years (2 times higher), 40-49 years (3 times higher), 50-64 years (4 times higher), 65-74 years (5 times higher), 75-84 years (8 times higher), and ≥ 85 years (13 times higher) [51].
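The age-stratified relative risks quoted above (all relative to the 18-29 reference group, per [51]) can be collected into a simple lookup table. The bracket boundaries below follow the text; the upper cap on the oldest bracket is an arbitrary illustrative choice:

```python
# Relative risk of death from COVID-19 versus the 18-29 reference
# group, as quoted in the text [51]. Brackets are (low, high) ages
# inclusive; the 120 cap on the oldest bracket is illustrative.
RELATIVE_RISK = {
    (18, 29): 1,
    (30, 39): 2,
    (40, 49): 3,
    (50, 64): 4,
    (65, 74): 5,
    (75, 84): 8,
    (85, 120): 13,
}

def risk_multiplier(age: int) -> int:
    """Return the tabulated relative risk of death for a given age."""
    for (lo, hi), rr in RELATIVE_RISK.items():
        if lo <= age <= hi:
            return rr
    raise ValueError(f"age {age} is outside the tabulated brackets")
```

For example, `risk_multiplier(80)` returns 8, i.e. an 80-year-old patient's tabulated probability of death is eight times that of the 18-29 reference group.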

Important Characteristics of Aging

Chronic inflammation in aging, described as “inflammatory aging”, may occur in elderly patients and may also be associated with other inflammation-related disorders: diabetes mellitus, obesity, osteoarthritis, etc. Consequently, the increased generation of pro-inflammatory markers in inflammatory aging may have an impact on the severe inflammatory process that occurs in patients with COVID-19 and increase the risk of mortality. Several factors, including altered ACE2 receptor expression, excess production of reactive oxygen species (ROS), senescent adipocyte activity, altered autophagy and mitophagy, “immunosenescence”, and severe vitamin D (VD) deficiency, may be associated with inflammatory aging and contribute to the cytokine storm in elderly patients suffering from COVID-19 [52,53].

Alteration of ACE2 Receptor Expression

SARS-CoV-2 uses the same receptor, “angiotensin-converting enzyme 2” (ACE2), as SARS-CoV (the coronavirus associated with the SARS outbreak in 2003). The renin-angiotensin system (RAS) is an important regulator of several physiological processes, including cardiovascular function and blood volume, natriuresis, diabetes, chronic kidney disease and liver fibrosis. The study by Xudong and colleagues in 2006 observed in the rat lung that ACE2 expression is significantly reduced with aging; these authors suggested that ACE2 expression, which is higher in young adults than in older age groups, may contribute to the age distribution of SARS episodes. On the other hand, Chen and colleagues, in 2020, found markedly higher expression of ACE2 in Asian women than in men; they also found an age-dependent decrease in ACE2 expression and a highly significant decrease in type II diabetic patients, and established a negative correlation between ACE2 expression and death from COVID-19 [54].

Excess Production of Reactive Oxygen Species (ROS)

The effects of reactive oxygen species (ROS) on cellular metabolism have been well documented in a wide variety of species. These include not only roles in programmed cell death and necrosis, but also positive effects, such as induction of defense genes and mobilization of ion transport systems. ROS are also frequently implicated in “redox signaling” or “oxidative signaling” functions. In particular, platelets involved in wound repair and blood homeostasis release reactive oxygen species to recruit more platelets to sites of injury; they also provide a link to adaptive immunity through white blood cell recruitment. Reactive oxygen species are involved in cellular activity in a variety of inflammatory responses, including cardiovascular disease. They may also be involved in cochlear damage induced by elevated sound levels, in the ototoxicity of drugs such as cisplatin, and in congenital deafness in animals and humans. Redox signaling is also involved in mediating apoptosis (programmed cell death) and in ischemic injury, specific examples being strokes and heart attacks. Garrido et al. [55] identified that immune cells from prematurely aging mice had lower antioxidant defenses and higher levels of ROS and pro-inflammatory cytokines, suggesting that excessive ROS production during aging may activate the inflammatory response and subsequently increase the release of pro-inflammatory cytokines, including TNF-α, IL-1β, IL-2 and IL-6, and adhesion molecules. Therefore, excessive ROS production and inflammation are closely related, as both are involved in the pathogenesis of chronic inflammation and “inflammatory aging” in older adults.

Autophagy and Age

Autophagy is a conserved catabolic turnover pathway in eukaryotic cells by which cellular material is delivered to lysosomes for degradation. The autophagy process is related to the maintenance of cellular homeostasis, and its dysregulation could lead to the development of several pathophysiological diseases related to aging [56]. It has been shown that autophagy decreases during aging, leading to the accumulation of damaged macromolecules and organelles. Decreased autophagy during aging may also lead to dysfunctions in mitochondria and consequently to increased ROS production [57], since mitochondria are the main source of ROS. Likewise, mitophagy, the autophagic degradation of mitochondria, decreases during aging. This decrease in mitophagy, together with the decrease in antioxidant capacity during aging [58], may increase ROS levels in the human organism and contribute to the increased secretion of pro-inflammatory cytokines during aging [59-62].

Senescent Adipocytes and Age

Some studies on aging highlight the importance of adipose tissue inflammation in aged animals, marked by elevated release of IL-6, IL-8, IL-1β, and TNF-α [63-65]. Adipose tissue is a dynamic structure that plays an important role in modulating metabolism and inflammation. It is very likely that adipose tissue dysfunction (e.g., obesity during aging) is associated with chronic inflammation in elderly subjects [66]. The mortality rate of obese elderly patients with COVID-19 is approximately 14%. Covarrubias, et al. [67] found that during aging senescent cells accumulate significantly in visceral adipose tissue and that “inflammatory cytokines” are found in the supernatant of senescent cells. Alicka et al., in 2020, found that “stem cells” derived from adipose tissue of old horses (older than 5 years) exhibited increased expression of pro-inflammatory genes and miRNAs (such as IL-8, IL-1β, TNF-α, miR-203b-5p and miR-16-5p) and markers of apoptosis (such as p21, p53, caspase-3, caspase-9) [68]. Therefore, it is possible that elevated release of pro-inflammatory cytokines by senescent adipocytes carries an elevated risk of the “cytokine storm” in obese elderly patients with COVID-19.

Age and Immunosenescence

“Immunological senescence” is characterized by alterations in both humoral and cell-mediated immune responses. Dysregulation of the response severely impacts the pro-inflammatory/anti-inflammatory balance when the organism is attacked by an infectious agent. It is known that NK cells and macrophages link the innate and cell-mediated immune systems. Some authors have described an increase in the number of circulating NK cells during aging [69]. One of the important cytokines for the cytotoxic activity of NK cells is IL-2, which increases the killing properties and proliferation of NK cells. In a young healthy individual, IL-2 can induce IFN-γ secretion by NK cells, but this effect is diminished in the elderly [70]. On the other hand, it has been observed that T cell numbers do not decrease during aging, but the T cell pool shows significant age-related alterations, including impaired responses to T cell stimulation by mitogens, an inverted CD4+/CD8+ T cell ratio, a reduced proportion of Th0 cells, and an increased proportion of “memory cells,” in animals and humans [71-73]. In addition, aging is associated with overproduction of pro-inflammatory cytokines by T cells, leading to immune pathology [74]. The proportion of Th17 cells increases during aging, resulting in an “inflammatory aging” state in adults [75]. Th17 cells have a “pro-inflammatory” phenotype and are in balance with “anti-inflammatory” Treg cells; both are derived from a common precursor, the Th0 cell [76]. During aging, the generation of several macrophage-induced factors, including fibroblast growth factor, vascular endothelial growth factor, epithelial growth factor and transforming growth factor (TGF-β), is reduced. TGF-β is one of the most important “cytokines” released by “anti-inflammatory regulatory cells”. Therefore, it is thought that the fragile and mildly overactive immune system of older adults cannot turn off the pro-inflammatory response in COVID-19 infection.
The clinical findings in severe patients with COVID-19 infection are consistent with the literature mentioned above. In 2019, Schouten et al. identified that the increase in “pro-inflammatory cytokines” during aging also correlated with SARS severity and could explain, at least in part, the difference in COVID-19 severity between young adult patients and elderly patients [77].

Age and Vitamin D Deficiency

Older adults are at risk for vitamin D deficiency due to several factors, including decreased pre-vitamin D production, poor skin integrity, decreased dietary intake of vitamin D, increased adiposity, obesity, decreased kidney function, as well as less time outdoors [78]. Vitamin D deficiency has been linked to various inflammatory diseases related to aging, such as rheumatoid arthritis, asthma, inflammatory bowel disease, multiple sclerosis, cardiovascular disease, hypertension, diabetes mellitus, and cancer [79].

Vitamin D, together with the vitamin D receptor (VDR), has an important anti-inflammatory function, acting as an “immunomodulator” by decreasing the release of “pro-inflammatory cytokines” by Th1 cells and increasing the release of “anti-inflammatory cytokines” by Th2 cells. Furthermore, vitamin D deficiency in elderly subjects is associated with a pro-inflammatory phenotype of immune cells, which probably increases the risk of “inflammatory aging” in older adults [80], and this chronic inflammatory condition could contribute to the “cytokine storm” in elderly patients with COVID-19. However, patients with renal failure or granulomatous disease are at high risk for side effects and should be excluded from treatment with vitamin D supplementation. Upcoming vitamin D supplementation trials will provide more clarity on the in vivo effects and the opportunities and possible limitations of vitamin D as an immuno-regulatory agent. In this regard, recent work by Murai, et al. [81] shows that high-dose vitamin D3 produced no significant benefit among hospitalized patients with COVID-19, nor did it significantly reduce the length of hospital stay. These findings do not support the use of high-dose vitamin D3 for the treatment of moderate to severe COVID-19.

Influence of Sex

The higher COVID-19 case fatality rate and greater disease severity in men compared to women are likely due to a combination of behavioral/lifestyle risk factors, prevalence of comorbidities, aging, and underlying biological sex differences. However, the underlying biological sex differences and their effects on COVID-19 outcomes have received less attention. The recent review by Haitao Tu, Vermunt JV, et al. of the Mayo Clinic (October 2020) [82] summarizes the available literature regarding proposed molecular and cellular markers in COVID-19 infection, their associations with health outcomes, and any reported modifications by sex.

Biological sex differences characterized by such biomarkers exist within healthy populations and also differ with age- and sex-specific conditions, such as pregnancy and menopause. In the context of COVID-19, descriptive biomarker levels are often reported by sex, but data regarding the effect of patient sex on the relationship between biomarkers and COVID-19 disease severity/outcome are scarce. Such biomarkers may offer plausible explanations for the worse COVID-19 outcomes observed in men. Larger studies with sex-specific reporting and robust analyses are needed to elucidate how sex modifies the cellular and molecular pathways associated with SARS-CoV-2. This would improve biomarker interpretation and clinical management of patients with COVID-19 by facilitating a personalized medical approach to risk stratification, prevention, and treatment. Several comorbidities that occur disproportionately in men likely contribute to worse COVID-19 outcomes; it is also thought that ACE inhibitors or angiotensin receptor blockers may exert adverse effects on COVID-19. Experimental and epidemiological evidence is conflicting as to whether the use of ACE inhibitors and angiotensin receptor blockers upregulates ACE2 expression and affects susceptibility to infection and/or disease severity. Ongoing randomized clinical trials could inform whether this differs by sex and guide recommendations on the use of such therapy in patients with COVID-19.

Immunologically

It appears that women have a stronger immune response overall; however, men are more likely to develop the “cytokine storm” associated with poor outcomes in COVID-19. Further research on immunomodulation by sex hormones, age and X-linked gene expression could help explain the poorer survival of men and identify sex-specific risk factors for SARS-CoV-2 infection and the course, outcome and prognosis of COVID-19.

Current Treatment of COVID-19

Despite advances in the care of deteriorating COVID-19 patients, no approved drug has shown considerable beneficial effects in the medical treatment of COVID-19. Hydroxychloroquine was the first drug of choice for the treatment of the disease, but it is now being abandoned because of its ineffectiveness and because in some cases it aggravated the condition of the treated patient. At present, umifenovir, remdesivir and favipiravir are thought to be the most promising antiviral agents for improving the health of infected patients. Dexamethasone is considered the first steroid drug known to save the lives of critically ill patients, as a randomized clinical trial in the UK showed that it reduces the death rate in patients with COVID-19. However, despite its increased use worldwide, it is not a truly effective treatment given the current high mortality rate in severe cases.

Based on the evidence, the US Food and Drug Administration (FDA) approved some drugs that had already been used in the treatment of SARS-CoV and MERS-CoV. The primary treatment chosen for COVID-19, lopinavir, is an antiretroviral (ARV) drug used for the treatment of HIV-1 and has been used for COVID-19 in combination with ritonavir (a potent anti-HIV drug). Currently, 64 clinical trials are underway with lopinavir-ritonavir along with other drugs, and most of them are in an early stage of progress. The latest evidence for the management of COVID-19 will be uncovered shortly. No single drug has proved clearly superior or inferior; however, the use of a single drug may not be effective enough to control this deadly virus, and, considering pharmacokinetics (PK) and drug metabolism, the use of a combination of antivirals with different mechanisms of action may be more effective [83].

Antiviral Agents Used to Date

Remdesivir

Remdesivir (GS-5734) was developed by Gilead Sciences (Foster City, CA, USA). It is an adenosine triphosphate analog and has been used to treat coronavirus and Ebola virus infections. Remdesivir stops viral replication by inhibiting an essential replication enzyme, the RNA-dependent RNA polymerase. Currently, more than 24 clinical trials are underway in patients with COVID-19 [84].

Favipiravir

Favipiravir directly inhibits viral transcription by inhibiting the RNA polymerase. Currently, 18 clinical trials in various stages of development are underway for the treatment of COVID-19. A Phase 3 clinical trial has recently been initiated in India, and full study results are expected to be published soon. Clearance for the clinical-phase evaluation of the safety and efficacy of favipiravir in tablet form has been granted to Appili Therapeutics to monitor COVID-19 in long-term care facilities [85].

Lopinavir/ritonavir

Lopinavir (Kaletra) is a potent anti-HIV drug used to treat HIV infection in combination with ritonavir. Ritonavir inhibits the metabolism of lopinavir, improving its PK (half-life) and activity. The Infectious Diseases Society of America (IDSA) recommended ritonavir-boosted combination therapy for HCV patients as first-line therapy. Lopinavir/ritonavir has shown anti-SARS-CoV-2 activity “in vitro” by inhibiting the viral protease in Vero E6 cells [86]. In addition, studies in SARS patients revealed that lopinavir-ritonavir plays an important role in improving clinical outcomes, and in combination with IFN it improved clinical outcomes in some MERS patients [87]. In India, the EMR division has recommended a dosing schedule of this drug combination for the clinical management of COVID-19.

Ribavirin

Ribavirin is a broad-spectrum antiviral drug developed by Bausch Health Companies (Bridgewater Township, New Jersey, USA). It is a guanosine analog used to treat several viral diseases. In combination with lopinavir-ritonavir, it showed a lower risk of death in ARDS (acute respiratory distress syndrome). In recent “in vitro” studies, ribavirin showed high efficacy against COVID-19; however, in other studies ribavirin showed unexpected adverse effects, which were very detrimental to some patients with SARS [88-89].

Umifenovir

Umifenovir, also known as Arbidol®, is a broad-spectrum antiviral agent developed by the Russian Institute of Chemical and Pharmaceutical Research. Lopinavir-ritonavir and umifenovir were previously used to treat acute SARS-CoV in clinical practice; however, their efficacy remains debated. The clinical safety and efficacy of umifenovir monotherapy were analyzed in patients with COVID-19 and compared with lopinavir-ritonavir therapy. Umifenovir was found to be better than lopinavir-ritonavir for the treatment of COVID-19 [90]. Approval has been obtained to proceed with a Phase III clinical trial of umifenovir. This randomized, double-blind, placebo-controlled trial will test the efficacy, safety and tolerability of umifenovir, and results are expected to be reported soon [83].

Nitazoxanide

Nitazoxanide inhibits viral infection by potentiating host-specific mechanisms. Although the “in vitro” activity of nitazoxanide against SARS-CoV-2 suggests that it is effective, more clinical data are needed to estimate its efficacy and safety against COVID-19 [91]. Currently, many clinical trials of nitazoxanide are underway with various doses to treat patients with COVID-19. Although the results are not yet encouraging or available, the FDA has given approval to Azidus Brazil for nitazoxanide to continue with a Phase II clinical trial.

Ivermectin

Ivermectin, an FDA-approved antiparasitic agent as effective as Albendazole®, has shown activity against many viruses. Recently, an in vitro study has shown that ivermectin inhibits COVID-19 replication. Its antiviral activity may play a key role, making it a potential candidate to treat COVID-19. Finally, the FDA issued a statement on the administration of ivermectin in patients with COVID-19 [92].

Interferons

Interferon (IFN) is a broad-spectrum antiviral agent that inhibits viral replication by interacting with the toll-like receptor (TLR). Type III IFNs (IFN-λs) were identified in 2003 and were independently used to elicit antiviral resistance in cells. One member of this family (IFN-λ) [93] was shown to be effective in 2013. IFNs of this type have been used to treat patients critically ill with chronic hepatitis C virus and have also been effective in treating people infected with hepatitis B virus, so they are believed to have the ability to protect patients during outbreaks of other viruses. IFN-λ has also been shown to be more efficacious than IFN-α-based therapies, leading to less increase in inflammation and tissue damage, and potentially restricting viral spread from the nasal epithelium to the upper respiratory tract. Moreover, IFN-α and IFN-β exhibited activity against SARS-CoV “in vitro”. IFN-β also showed potential to decrease MERS-CoV replication. For the most part, type I IFN produced a rapid decrease in viral load in patients with mild or moderate COVID-19. In severe COVID-19 infection, IFN showed an antiviral response, but with elevated pulmonary cytokine levels, a weakened T-cell response and acute clinical relapse [94].

Dexamethasone

The main synthetic glucocorticoids (dexamethasone, triamcinolone and prednisone) are used as immunosuppressants, but their therapeutic indications also include their anti-inflammatory action, and because of their anti-lymphocyte cytostatic properties they are used in oncology and in the treatment of allergic diseases. The immunological effects of these drugs are multiple and differ between experimental animals (rodents) and man. In man, within a few hours of administration, there is an increase in neutrophils and a decrease in all other white blood cells in peripheral blood, the decrease being more pronounced for B and T lymphocytes. Although a single dose of glucocorticoids has little effect on B lymphocytes, treatment for several days (3 to 10) may result in a decrease in IgG, IgA and IgM. The FDA approved dexamethasone as a broad-spectrum immunosuppressant in 1958. It is 30 times more potent and longer lasting than cortisone and reduces the ability of B cells to synthesize antibodies [95]. However, a clinical trial showed that dexamethasone saved the lives of severely ill COVID-19-infected patients in the United Kingdom [96]. The UK government declared that dexamethasone was allowed as an immediate treatment option for hospitalized patients who were critically ill and on ventilators. The WHO added dexamethasone to the list of life-saving drugs that are readily available at low cost. In the U.S., guidance was issued recommending dexamethasone as a treatment option for patients infected with COVID-19. However, broader clinical evidence does not support the use of corticosteroids in COVID-19 infection [96]. Dexamethasone may regulate, to some extent, the damaging effects of cytokines by limiting their release, but it has not been shown to inhibit the “cytokine storm” once the antigen overwhelms the regulatory capacity of the immune response.
In addition, dexamethasone may prevent macrophages and NK cells from eliminating nosocomial pathogens associated with coronavirus infection.

Tetracyclines

Tetracycline can be used as a possible treatment option for patients with COVID-19 because of its known activity in decreasing the levels of inflammatory cytokines such as IL-1β and IL-6 [97]. Both IL-1β and IL-6 levels increase significantly in patients during COVID-19 infection. Tetracycline has also been shown to decrease circulating inflammatory factors through activation of protein kinase C and induction of programmed cell death [98].

Tocilizumab

Tocilizumab (called Actemra) is a recombinant monoclonal antibody developed by Roche Pharmaceuticals (Basel, Switzerland). Tocilizumab is primarily used to treat rheumatoid arthritis. It was designed as an IL-6 receptor blocker to inhibit the binding of IL-6 to its receptor, thus alleviating the “cytokine release syndrome”.

IL-6 is significantly increased in patients with COVID-19 infection. This is why tocilizumab is used as a therapeutic option for the treatment of patients with COVID-19 [99]. In COVID-19-infected patients, T lymphocytes and macrophages produce IL-6 and contribute to the “cytokine storm” and severe inflammatory responses in the lungs and other tissues. Tocilizumab binds the IL-6 receptor and renders it unable to bind IL-6, decreasing the inflammatory response and ultimately down-regulating the IL-6 signal transduction pathway [100]. Consequently, it may be an effective therapeutic drug for the treatment of patients with severe COVID-19 infection [101]. The FDA has given Genentech approval to proceed with a Phase III clinical trial of intravenous tocilizumab to evaluate its safety and efficacy in adult patients infected with COVID-19.

Itolizumab

Itolizumab (called Alzumab) is a recombinant IgG1 (immunoglobulin G1) monoclonal antibody against CD6 (cluster of differentiation 6). It was developed for the treatment of psoriatic patients [102]. It has been shown to reduce IL-6 in critically ill patients. Itolizumab regulates downstream activation pathways and reduces inflammatory cytokines such as IFN-γ, TNF-α and IL-6 [103]. Based on this mode of action, it could be used as a treatment option for COVID-19 infection [103].

Teicoplanin

Teicoplanin (called Targocid) was developed by Sanofi Pharmaceuticals (Paris, France). It is a glycopeptide antibiotic with antiviral activity that can inhibit viral replication and transcription; it is also active against MERS and SARS [104]. Mechanistic investigations revealed that teicoplanin specifically inhibits the activity of host-cell cathepsin L and cathepsin B; these proteases cleave the viral glycoprotein, a step required after receptor binding for the release of the viral genome into the host cell cytoplasm [105-106]. Since COVID-19 entry is also dependent on cathepsin L, some studies suggested that teicoplanin could be used as a therapeutic option to treat COVID-19. According to Ceccarelli, et al. [107], teicoplanin may have a therapeutic effect in COVID-19-infected subjects. An in vivo study using teicoplanin in subjects affected by COVID-19 has already been performed for the first time, and the results seem quite acceptable compared with a previous report from the same geographical area. Teicoplanin is now thought to be a promising option for the treatment of COVID-19, although more safety data in humans are still required.

Meplazumab

Meplazumab is a humanized monoclonal antibody against CD147, a host receptor reported to interact with the SARS-CoV-2 spike protein. In in vitro studies, it has been shown to effectively inhibit virus replication in Vero E6 cells [108]. Based on this evidence, a study has been conducted to determine the clinical outcomes of meplazumab in patients infected with COVID-19. Meplazumab was previously reported to exhibit activity against Churg-Strauss syndrome (characterized by eosinophilic vasculitis, pulmonary infiltration, sinusitis, neuropathy and asthma) [109].

The Phase I clinical trial (NCT0436369586) of meplazumab injection in healthy volunteers is currently being completed to establish the safety, efficacy, tolerability, pharmacokinetic characteristics and dosing regimen for a Phase II clinical trial. In the U.S., an open-label Phase I and Phase II clinical trial is underway to determine the safety and efficacy of meplazumab injection in patients infected with COVID-19 (NCT04275245). Meplazumab could be used as a therapeutic option to treat patients with COVID-19.

Eculizumab

Eculizumab (Soliris, Alexion Pharma International, Zürich, Switzerland) is a human monoclonal antibody that binds the C5 protein of the complement system with high affinity and selectivity. It prevents the cleavage of C5 into C5a and C5b and thereby inhibits the formation of the cell-lysing membrane attack complex (MAC) C5b-9. Interestingly, blockade of C5 has an indirect “immunoprotective” action by preserving the early components of the complement system [110]. Consequently, eculizumab could function as an emergency therapy to treat patients with COVID-19 associated with SARS. Some studies have supported the use of eculizumab as a treatment for severe COVID-19. In addition, more clinical trials are approved, some already completed, studying the efficacy of eculizumab in combination with ruxolitinib in patients with severe COVID-19 [111].

AMY101

AMY101 is a highly selective inhibitor of the C3 fraction of the complement system, developed by Amyndas Pharmaceuticals. AMY101 has successfully completed clinical Phase I with acceptable safety and tolerability and is now in a Phase II clinical trial (NCT04395456). AMY101 could be a unique therapeutic option to overcome the complement-mediated inflammatory response in patients with COVID-19 [112-113].

ARDS-003

Cannabidiol (CBD) is also a potential treatment for patients with severe COVID-19. It was designed as an injectable formulation to treat severe cases of coronavirus with “acute respiratory distress syndrome”. It may have the advantage of affecting several pro-inflammatory signaling pathways, enhancing the drug's effectiveness in rapidly dampening cytokine release and preventing acute ARDS outcomes [114]. The cannabinoid drug named “ARDS-003” has been approved for a Phase I clinical trial, which is still being conducted by Tetra Bio-Pharma. Initially, the FDA indicated that the results of the non-clinical studies were adequate to begin the study in COVID-19-infected patients.

LCB1

LCB1 has been shown to neutralize SARS-CoV-2. It is a computer-engineered mini-protein synthesized by researchers at the University of Washington School of Medicine. It binds tightly to the SARS-CoV-2 spike protein and prevents the virus from infecting cells. “LCB1” was shown to protect “Vero E6” cells from SARS-CoV-2 infection. These synthetic antiviral candidates were designed to stop infection by interfering with the mechanism the coronavirus uses to penetrate and enter cells. LCB1 is currently being evaluated in rodents [115]. These “hyperstable mini-binders” provide a starting point for novel COVID-19 therapeutics.

Convalescent plasma

Convalescent plasma therapy has potential to cure COVID-19 [145]. Clinical data are very limited to date but suggest that it is safe, clinically effective, and reduces mortality. However, there is an urgent need for “multicenter clinical trial” studies to establish its efficacy in patients with COVID-19. The U.S. FDA has issued an “emergency use” authorization for “convalescent plasma”, currently under investigation, for the treatment of patients with COVID-19. In addition, polyclonal antibodies from convalescent individuals and immunoglobulin concentrates (human and bovine) may also be of interest in the treatment of COVID-19; a Spanish company is currently working on this.

Vaccine development

Coronaviruses are a family of single-stranded RNA viruses that infect many animal species, including bats and humans. Prior to 2003, only twelve animal or human coronaviruses had been identified. In the last eighteen years, three new and deadly strains have spread to humans. In 2003, the severe acute respiratory syndrome coronavirus (SARS-CoV) caused an official total of 8,096 cases and 774 deaths, with people with pre-existing conditions suffering the highest mortality. The overall effect of COVID-19 vaccine development has been a massive invigoration of the field of pandemic vaccine development. The current vaccines are realizing the theoretical promise of antigen-sequence-only platforms, such as mRNA and vector-based platforms, and have massively accelerated their development toward rapid Phase 3 evaluation of vaccination against COVID-19 in a timeframe never before seen for vaccines. However, it is important to note that, despite their rapid manufacturing timeline, these platforms encode an antigen that was developed over many years of basic research on coronavirus biology and protein engineering. Large-scale investment and unprecedented mobilization of the research community have generated insights into the design, manufacture, formulation, and deployment of candidate vaccines that may pay dividends in the future when society must cope with the next inevitable infectious disease outbreak [116-117,83].

Herbal Medicines

In China, during the COVID-19 outbreak, some traditional medicines were used, such as Astragali Radix (Huangqi), Saposhnikoviae Radix (Fangfeng), Glycyrrhizae Radix et Rhizoma (Gancao) and Atractylodis Macrocephalae Rhizoma (Baizhu) [118]. Some cannabinoid products were also used [119]. As a treatment option to control the inflammatory response, medicinal plants with proven antiviral and related beneficial effects could be considered an alternative approach to protect high-risk populations from COVID-19. However, there are no randomized studies establishing the true efficacy and side effects of these plant-derived natural products. Currently, other researchers, and ourselves, are focusing attention on the acute and systemic inflammatory process that leads to the activation of “damage-associated molecular patterns” (DAMPs), as well as on the study of substances capable of preventing or decreasing cell damage by “suppressing/inhibiting DAMPs” (SAMPs), leading to the resolution of the “inflammatory disease”.

Authors such as Land WG (Laboratory of Excellence Transplantex, University of Strasbourg, Strasbourg, France, and German Academy for Transplantation Medicine) think that current or future therapeutics will include the inhibition of “DAMPs” in hyper-inflammatory processes, e.g., the “systemic inflammatory response syndrome” (SIRS) currently observed in COVID-19, as well as the application of “SAMPs” in chronic inflammatory diseases and in “hyper-resolving” processes (e.g., the compensatory anti-inflammatory response syndrome). We fully agree with this author that a controlled production of “DAMPs” and “SAMPs” is necessary to achieve complete homeostatic restoration and repair of tissue injury, and also with the need to identify and define “a priori” a context-dependent “homeostatic DAMPs/SAMPs ratio” in each case and a “homeostatic window” of DAMP and SAMP concentrations, to ensure a safe treatment modality in patients [120]. In this respect, our research group has recently published the work “Implant of mesenchymal cells decreases acute cellular rejection in small bowel transplantation” [121], in which inhibition of acute cell-mediated rejection is observed in an experimental model of allogeneic small intestine transplantation through the implantation of mesenchymal cells and the activation of the “immune-regulatory response”, with an increase in the percentage of Treg cells, a significant increase in TGF-β1 and a decrease in IL-17.
This finding will serve as the basis for the project “Treatment of coronavirus-19 infection using non-steroidal immunomodulators”, in which the DAMPs will be the pro-inflammatory cytokines and the SAMPs will be the “anti-IL-17” and “TGF-β1” molecules.

TGF- β1

As we have described above, the “pathway of regulation of the immune response” exerts its role by responding to the needs of the immune response, at a given moment, against the corresponding antigen: increasing the inflammatory activity of the Th1 pathway, mainly by means of Th17 cells and IL-17A, or increasing the anti-inflammatory activity of the Th2 pathway, mainly by means of “transforming growth factor β” (TGF-β). In the scheme shown on page 7 of this paper, we can see that this factor already acts from the “immune-regulatory pathway” and is part of the “anti-inflammatory pathway”, tipping the balance towards this second pathway. In our previously cited work on the inhibition of acute cell-mediated rejection in intestinal transplantation, TGF-β1 is shown to be the most important factor, relative to the other cytokines studied, acting as an inhibitor of the inflammatory immune response in rejection. In 2000, the Canadian researchers Prud'homme GJ and Piccirillo CA already pointed out that the importance of TGF-β in “immuno-regulation” and tolerance had been recognized once again [122].

Like us, the authors propose that there are regulatory T-cell (Treg) populations, some called T-helper type 3 (Th3), that exert their action mainly by secreting this cytokine. Furthermore, these authors emphasize the following concepts: 1) TGF-β1 has multiple suppressive actions on T cells, B cells, macrophages and other cells, and increased TGF-β1 production correlates with protection and/or recovery from autoimmune diseases; 2) TGF-β1 and CTLA-4 are molecules that work together to terminate immune responses; 3) Th0, Th1 and Th2 clones can secrete TGF-β1 following CTLA-4 cross-linking; 4) TGF-β1 may play a role in the switch from effector T cells to memory T cells; 5) TGF-β1 acts with some other inhibitory molecules to maintain a state of tolerance, which is most evident in immunologically privileged sites but may also be important in other organs; 6) TGF-β1 is produced by many cell types, is always present in plasma (in its latent form) and permeates all organs, binding to matrix components and creating a reservoir of this immunosuppressive molecule; and 7) TGF-β1 has beneficial effects in several autoimmune diseases and can be effectively administered by a somatic gene therapy approach, resulting in depressed inflammatory cytokine production and increased production of endogenous regulatory cytokines.

In March 2021, Aydemir MN, et al. [123] published an interesting paper. The authors note that, despite the information obtained on the structure of the SARS-CoV-2 viral genome, many aspects of virus-host interactions during infection remain unknown; their purpose in this study was to identify the microRNAs (miRNAs) encoded by SARS-CoV-2 and their cellular targets. To this end, they employed a computational method to predict SARS-CoV-2-encoded miRNAs along with their putative targets in humans. The predicted miRNA targets were grouped into clusters according to their biological processes, molecular functions and cellular compartments. Aydemir MN, et al. note that the TGF-β1 pathway has important functions in many cellular processes and, being a simple pathway, is often manipulated by viruses. The authors report that proteins playing a crucial role in almost all steps of this pathway are targeted by SARS-CoV-2 miRNAs, and that the SARS-CoV nucleocapsid protein (N) inhibits the formation of the SMAD complex (the family of genes that transduces this pathway), blocking TGF-β1-induced apoptosis of CoV-2-infected cells and, conversely, promoting tissue fibrosis in SARS-CoV-infected host cells [124]. Finally, these authors performed an integrative pathway network analysis with target genes and identified 40 SARS-CoV-2 miRNAs and their regulated targets. The analysis shows that the targeted genes, including NFKB1, NFKBIE, JAK1-2, STAT3-4, STAT5B, STAT6, SOCS1-6, IL2, IL8, IL10, IL17, TGFBR1-2, SMAD2-4, HDAC1-6, JARID1A-C and JARID2, play an important role in the NF-κB, JAK/STAT and TGF-β signaling pathways as well as in epigenetic regulatory pathways in cells; the authors believe their results may help to understand the virus-host interaction and the role of viral miRNAs during SARS-CoV-2 infection.
Since no drug or fully effective treatment is currently available for COVID-19, these results may also help in developing new treatment strategies.

Monoclonal Antibodies Against IL-17

COVID-19 is caused by SARS-CoV-2, a beta-coronavirus closely related to MERS-CoV and SARS-CoV, the causative agents of Middle East respiratory syndrome (MERS) and severe acute respiratory syndrome (SARS), respectively. COVID-19 appears to follow a similar pattern, with 81% of fatal cases diagnosed with SARS (2). In view of this, a recent publication in The Lancet [125] suggests that all patients with COVID-19 should be evaluated for hyper-inflammation in order to identify those who would benefit from targeted immunosuppression or immunomodulation to prevent acute lung injury ("ALI") [126]. IL-17 (formally IL-17A) is the best-known member of a family of multifunctional cytokines. Its predominant role seems to depend on where the cytokine is expressed (gut, lung or skin) and what the trigger is; these two factors appear to determine whether the predominant effect of its expression is protective or whether it leads to a detrimental hyper-inflammatory state. For MERS-CoV, SARS-CoV and SARS-CoV-2, disease severity has been shown to correlate positively with levels of IL-17 and other T helper 17 (Th17) cell-related pro-inflammatory cytokines, such as IL-1, IL-6, IL-15, TNF and IFN-γ (see page 7 of this paper). Increased IL-17 levels in mice with LPS-induced ALI correlated with increased lung injury scores, increased protein-rich inflammatory lung infiltration and decreased overall survival. Furthermore, the addition of exogenous IL-17 further exacerbated LPS-induced production of TNF, IL-1β, IL-6 and CXCL2, revealing the role of IL-17 as a key modulator of the inflammatory pathway.
In the same study, mice genetically deficient in IL-17, or mice that received anti-IL-17 antibodies, demonstrated improved survival, less pulmonary infiltration, and improved lung pathology scores after LPS exposure [127]. Taken together, analyses of patients with coronavirus-induced lung disease suggest that IL-17 may serve as a biomarker ("DAMP") of disease severity and a potential therapeutic target to mitigate SARS-CoV-2 damage, particularly in the lung. Of note, COVID-19 mortality is also associated with myocarditis in the context of SARS.

Zhao Y, et al. [128], in a very recent paper (February 2021), propose a model for understanding the mechanisms underlying lung pathology by investigating the role of the lung-specific immune response. The authors obtained immune cells from bronchoalveolar lavage fluid and from blood drawn from COVID-19 patients with severe disease and from patients with bacterial pneumonia not associated with viral infection. By tracing T-cell clones across tissues, they identified clonally expanded Th17-like memory T cells resident in lung tissue, which they term "Trm17 cells", that remain in the lungs even after viral clearance. Analysis of the lung suggests that Trm17 cells may interact with lung macrophages and cytotoxic CD8+ T cells, an interaction associated with disease severity and lung damage. Moreover, elevated serum IL-17A and GM-CSF protein levels in patients with COVID-19 are associated with a more severe clinical course. Zhao Y, et al. suggest that lung Trm17 cells are a potential orchestrator of hyper-inflammation in severe COVID-19. Trm17 cells become activated or reactivated as part of the ongoing cytokine storm, during which they may begin to produce pro-inflammatory cytokines such as GM-CSF. This could lead to increased activation of macrophages and cytotoxic CD8+ cells, which other authors have linked to disease severity and which ultimately mediate lethal lung damage [42,45].

As related by Zhao Y, et al., to date two small pilot studies have indicated that targeting GM-CSF signaling in patients with severe COVID-19 lung disease, using the monoclonal antibodies mavrilimumab (directed against the GM-CSF receptor) or lenzilumab, may be a strategy to improve clinical outcomes [3,4], although larger controlled clinical trials would be needed to determine the efficacy and biological impact of such approaches. This network of tissue-resident cells may persist in the lungs even after the initiating event, e.g., a viral infection, has been eliminated, contributing to chronic lung pathology. There are three commercially available options: secukinumab (a human monoclonal antibody against IL-17), ixekizumab (a humanized monoclonal antibody against IL-17) and brodalumab (a human monoclonal antibody against the IL-17 receptor). Secukinumab and ixekizumab are approved for psoriasis, psoriatic arthritis and ankylosing spondylitis; brodalumab is approved for the treatment of psoriasis alone. All three drugs carry warnings about an increased risk of infections. Compared with placebo, clinical trials showed a moderate increase in upper respiratory tract infections (URIs) for patients treated with secukinumab and a similar number of URIs for patients treated with ixekizumab, while treatment with brodalumab resulted in a lower rate of URIs. The risk of serious infections is unchanged or low in the short term. Therefore, the use of these drugs in the acute setting of COVID-19 should not lead to an increased risk of secondary infections.

“NSAIDs” (Non-Steroidal Anti-Inflammatory Drugs)

Nonsteroidal anti-inflammatory drugs (NSAIDs) are a group of often chemically unrelated compounds that have potent anti-inflammatory, analgesic and antipyretic activity and are among the most widely used drugs worldwide. It is generally thought that one of their main mechanisms of action is the inhibition of cyclooxygenase (COX), the enzyme responsible for the biosynthesis of prostaglandins (PGs) and thromboxane. NSAIDs are also associated with an increased risk of gastrointestinal, renal and cardiovascular adverse effects.

The review paper by Bacchi S et al. [129] describes the clinical pharmacology of NSAIDs, their classification, molecular mechanisms of action and adverse effects, including their possible contribution to neuro-inflammation and carcinogenesis, as well as some recent developments aimed at designing effective anti-inflammatory agents with improved safety and tolerability profiles. In the late 1980s, it was discovered that COX has two isoforms, each produced by a different gene. The COX-1 gene is located on chromosome 9 and functions as a constitutively expressed "housekeeping" gene that regulates numerous cellular functions, including the complex series of processes responsible for protecting the gastrointestinal mucosa from ulceration. The COX-2 gene, located on chromosome 1, is an immediate-early gene that is rapidly upregulated in response to a variety of inflammatory cytokines and cellular injury.

The COX-1 enzyme produces the prostaglandins responsible for gastrointestinal cytoprotection and platelet function, while the COX-2 enzyme produces those responsible for pain perception and inflammation. Thus, the COX-1 and COX-2 enzymes can produce both beneficial and adverse effects through the inhibition of prostanoids, which derive from arachidonic acid (AA): AA is converted to prostaglandin G2 (PGG2) and H2 (PGH2) by cyclooxygenase (COX) activity, and PGH2 is subsequently metabolized by terminal synthases into biologically active prostanoids. COX-2 expression is greatly restricted under basal conditions but is greatly increased at inflammatory sites in response to cytokines such as interferon-γ, TNF-α and IL-1, as well as hormones, growth factors and hypoxia. The pharmacological effects of NSAIDs are due to blockade of COX and the consequent reduction of PG synthesis, leading to a decrease in inflammation, pain and fever. The anti-inflammatory action of NSAIDs is due to the decrease of vasodilator PGs (PGE2, PGI2), which indirectly reduces edema.

Within the group of non-steroidal anti-inflammatory molecules, the pyrazolones stand out, and among them metamizole (dipyrone). In 2014, the research group of Jasiecka A et al. [130] published a paper on the pharmacological characteristics of metamizole: a popular, non-opioid analgesic drug commonly used in human and veterinary medicine. In some cases, this agent is still incorrectly classified as a non-steroidal anti-inflammatory drug. Metamizole is a pro-drug that spontaneously breaks down after oral administration into structurally related pyrazolone compounds. In addition to its analgesic effect, the drug is an antipyretic and spasmolytic agent. The mechanism responsible for the analgesic effect is complex and most likely based on inhibition of a central cyclooxygenase-3 and activation of the opioidergic and cannabinoid systems. Metamizole can block both PG-dependent and PG-independent LPS-induced fever pathways, suggesting that this drug has a distinctly different antipyretic action profile from the other NSAIDs. The mechanism responsible for the spasmolytic effect of metamizole is associated with inhibition of intracellular Ca2+ release as a result of reduced inositol phosphate synthesis [130]. Metamizole is predominantly applied in the therapy of pain of different etiologies, of spastic conditions, especially those affecting the digestive tract, and of fever refractory to other treatments. Co-administration of morphine and metamizole produces super-additive anti-nociceptive effects [131]. On the other hand, metamizole is a relatively safe pharmaceutical preparation, although it is not completely free of undesirable effects. Among these side effects, the most serious and most controversial is the myelotoxic effect; however, it seems that in the past the risk of metamizole-induced agranulocytosis was exaggerated [132]. Today it is considered that the side effects of metamizole appear only during long periods of treatment of chronic inflammatory diseases.
Our research team has studied the effects of "magnesium metamizole", marketed under the name Nolotil® and currently produced by Boehringer Laboratories (Germany).

The active ingredient of this drug is the "methylated oxyquinazine" molecule, whose chemical structure is shown in Figure 4. The complete structure of this synthetic drug corresponds to a phenyl-dimethyl pyrazolone derivative whose "R" root is magnesium methylene sulfonate; its complete formula is dimethyl oxyquinazine methylene methylamine magnesium sulfonate. Since 1975, we have used this drug in the surgical clinic as an analgesic during the first days of the immediate postoperative period, owing to its high analgesic power and also to its anti-adhesive and anti-platelet-aggregation properties. These findings were obtained in in vivo and in vitro studies carried out by our research team in the 1970s [132-134], and our results have now been ratified by Pfrepper C et al. in 2019 [135]. It is very important to emphasize here that increased platelet adhesiveness and aggregation leads to initial thrombus formation and eventually to thrombosis; by its anti-aggregating and anti-adhesive action on platelets, methylated oxyquinazine may contribute to preventing vascular thrombosis. When CoV-2 antigens stimulate macrophages or any other antigen-presenting cell, the pro-inflammatory cytokines par excellence are released: IL-1, IL-6, IL-8, IL-15, IL-17, IL-18, TNFs, IFN-γ and PAF (platelet-activating factor, itself a source of PF4). Through the release of PAF, the immune response acts on the coagulation and fibrinolytic systems, giving rise to signals that cross and intersect between the immune response and the different systems: coagulation, fibrinolysis, kinins, arachidonic acid, leukotrienes, thromboxanes, etc. Thus, we believe that methylated oxyquinazine could contribute to inhibiting or palliating the overwhelmed immune response in the most severe stage of CoV-2 disease [42] (see Figure 2, page 5 of this paper).
It has recently been reported that vaccines using adenovirus vectors can produce thrombosis through activation of PAF and PF4, resulting in increased platelet aggregation and adhesiveness accompanied by thrombocytopenia [136-143]. In this respect, methylated oxyquinazine (metamizole) could also serve as prophylaxis against thrombus formation when these vaccines are given (Figure 4).


Figure 4: The complete structure of this synthetic drug corresponds to a phenyl-dimethyl pyrazolone derivative whose “R” root is magnesium methylene sulfonate. Its complete formula is: dimethyl oxyquinazine methylene methylamine magnesium sulfonate [134-136].

Our research project is aimed at avoiding the "cytokine storm" by means of molecules that do not inhibit the response to the virus but can prevent the excessive inflammatory response by activating the "immuno-regulatory pathway" of the immune system. The treatments applied to date fail when patients reach the most serious stage of the disease, especially the elderly or "immuno-compromised" people (those suffering from severe heart disease, chronic kidney disease, chronic obstructive pulmonary disease, cancer under active treatment, immunosuppression after solid-organ transplantation, obesity or type 2 diabetes mellitus, or elderly people who also suffer from any of these diseases), and their body is not capable of avoiding the "cytokine storm" and the multi-organ failure that leads inexorably to death.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Medical Science

How COVID-19 Pandemic Indirectly Affected Orthopedic Patients: A Case Report of a Rescue Treatment For a Proximal Humerus Nonunion

Introduction

Humeral fractures account for 5% to 8% of all fractures, and proximal humerus fractures are the seventh most frequent fractures in adults [1,2]. Nonunion is a complication that occurs in 15% of all humeral fractures [3], and its incidence increases in the case of proximal humerus fracture [4]. Risk factors are advanced age, osteoporosis, obesity, smoking, alcoholism, and infection. Comminution and impaction of fractures and loss of fixation also contribute to the development of nonunion [5,6]. This condition results in pain and loss of shoulder function. The management of proximal humerus nonunion is challenging, and the results are often disappointing. Treatment of this kind of complication includes open reduction and internal fixation with bone grafting, but this treatment is often unsuccessful, resulting in poor clinical outcomes and requiring further surgery. Other options are fixation with tension wires or with an intramedullary nail [7].

These are optimal options for bone of good quality, as in young patients, with no signs of gleno-humeral arthritis. Shoulder arthroplasty is a reasonable option in the case of proximal humerus nonunion associated with rotator cuff damage and osteoporosis [8]. This case report describes a proximal humeral nonunion in a 69-year-old woman who was first treated with external fixation before the advent of the COVID-19 pandemic. After removal of the external fixator (EF), she was lost to follow-up because of the closure of our department during the Italian lockdown. She came back after seven months with pain and functional limitation, and X-rays showed a nonunion of the proximal humerus. In the end she underwent reverse shoulder arthroplasty, recovering with a good result.

Case Report

A 69-year-old woman came to the ER of Poliambulanza of Brescia in October 2019. X-rays were obtained. The fracture involved the proximal humerus of the right, dominant, upper arm and was classified 11C3.1 according to the AO classification. At first, the fracture was treated with external fixation using a Galaxy EF within the first twenty-four hours. The EF was removed after one month because of loss of reduction with displacement of the fracture. Physio-kinesitherapy was indicated, but she was unable to undergo the treatment. She was lost to follow-up for several months due to the COVID-19 pandemic and the resulting social limitations. After 7 months, X-rays showed a dislocated nonunion of the proximal humerus with necrosis of the head. She complained of shoulder pain, with passive elevation of 40° and scapular dyskinesia. In March 2021 she was listed for a reverse shoulder arthroplasty. During surgery, a Synovasure test and white blood cell count were performed: both tested negative. A cemented trauma stem "Equinoxe" by Exactech, number 8 mm, was implanted with a standard baseplate fixed with three screws of 26, 18 and 18 mm. The external rotators were reinserted, and range of motion (ROM) was good at three-month follow-up.

Discussion

The COVID-19-related restrictions resulted in changes in patients' healthcare and follow-up. During the pandemic, injured patients experienced difficulties in receiving medical assistance, due to the lack of healthcare personnel and the fear of contagion. Lombardy was the most affected region of Italy, and orthopedic surgeons were involved in the emergency like other specialists [9,10]. Our level-2 trauma center in Brescia (Lombardy) went on admitting injured patients unceasingly, since several domestic accidents happened during that time, although reports from outside Italy noted a 65% reduction in trauma services provided for shoulder and elbow injuries during the period when residents were asked to stay at home [11]. One of the effects of the pandemic was the loss of follow-up of outpatients [12]. It is hypothesized that the main causes of this issue were isolation and the fear of contagion in the hospital environment [10] (Figure 1).


Figure 1: First radiograph showing comminuted and dislocated fracture.


Figure 2: First treatment with EF.


Figure 3: Partial loss of reduction after the removal of the EF.


Figure 4: Nonunion and dislocation at 7 months after trauma.


Figure 5: Final treatment.


Figure 6: Range of motion at 3 months after arthroplasty.

Radiographic checks showed a partial loss of reduction, and physio-kinesitherapy was indicated, but she was unable to undergo the treatment because of the new social restrictions. Further radiographs showed a gradual loss of reduction. We therefore started contemplating a definitive treatment, either ORIF with bone graft or shoulder arthroplasty, but at that moment the patient was lost to follow-up. Given the poor bone quality after removal of the EF, we would probably have implanted a hemiarthroplasty or a reverse prosthesis with press-fit primary humeral stem fixation, considered an optimal choice because of the possibly easier revision, decreased operative time, healing time, and resolution of symptoms [16]. After 7 months, the patient came back to our department, suffering from pain and severe functional limitation, compounded by abnormal (preternatural) mobility of the joint. Radiographs showed an evident dislocated nonunion with resorption of the tuberosities and metaphysis.

Therefore, our choice was to implant a reverse shoulder prosthesis with a cemented trauma stem "Equinoxe" by Exactech, number 8 mm. This choice involved several compromises, such as technical difficulties due to the severe bone loss and higher risks of dislocation, infection, nerve injury and thromboembolism due to the use of cement, compared with an arthroplasty with a press-fit stem [17-20]. At three-month follow-up, the patient showed no pain and sufficient function of the joint. Since the exact extent of loss to follow-up cannot be assessed, there is a chance that cases of nonunion in long-standing fractures like this one could increase in the near future. Our experience shows that cemented stem fixation can be an important treatment choice for these patients. Other strategies, such as telemedicine, should be considered and eventually implemented to prevent this kind of consequence of the pandemic [21-23].


Open Access Journals on Science and Technology

Environmental Impacts and Control of Duststorms

Mini Review

Duststorms and sandstorms are natural meteorological phenomena and severe weather conditions that frequently occur in arid and semi-arid regions, mainly during the summer season when these regions are subject to strong winds. They are driven by several factors: the availability and nature of source sediments, vegetation cover density, prevailing climatic conditions, the textural characteristics of the surface deposits, and environmental, geomorphological and relief variations. The Arabian Peninsula and the Sahara Desert are among the major terrestrial sources of moving sand worldwide, while minor sources in Iran, Pakistan and India deposit dust in the Arabian Sea, and sources in China deposit dust in the Pacific. According to [1], recent surface deposits are the major source of duststorms in Kuwait, potentially originating from:
1) Dry sabkhas muddy sediments in the lower Mesopotamian flood plain;
2) Old sandstone, limestone and dolostone sediments exposed in the western desert of Iraq;
3) Dibdibba Formation Paraconglomeratic sediments exposed in southern Iraq and northern Kuwait; and
4) Particles picked up locally by the wind from playas, sabkhas, and fine-grained mobile sand.
Duststorm winds have various local names. In the Sahara Desert they are called Simoom; in some African and Arab countries such as Egypt, Libya, Sudan, Morocco and Tunisia they are named Khamasine, Ghibli, Haboob, Sahel and Chili, respectively; while in Asian areas such as India and the Arabian Gulf region they are named Loo and Shamal or Toze, respectively [2]. A duststorm is distinguished from a sandstorm on the basis of particle size. Duststorms are made up of a multitude of very fine particles, while sandstorms have larger particle sizes ranging from 0.08 mm to 1 mm [3]. The fine "dust" particles may be lifted as high as 3 km or more, while the "sand" particles are confined to the lowest 3.5 m and rarely rise more than 15 m above the ground. The term sandstorm is most often used in the context of desert sandstorms, especially in the Sahara Desert, or in places where sand is a more prevalent soil type than dirt or rock, when, in addition to fine particles obscuring visibility, a considerable amount of larger sand particles is blown closer to the surface.
The term duststorm is more likely to be used when finer particles are blown long distances, especially when the duststorm affects urban areas. A sandstorm can transport and carry large volumes of sand unexpectedly. Duststorms can carry large amounts of dust, with the leading edge being composed of a wall of thick dust as much as 1.6 km (0.99 mi) high. In desert areas, dust and sand storms are most commonly caused either by thunderstorm outflows or by strong pressure gradients, which cause an increase in wind velocity over a wide area. Drought and wind contribute to the emergence of dust storms, as do poor farming (e.g., dryland farming techniques) and grazing practices that expose the dust and sand to the wind, in addition to environmental factors including wind speed, atmospheric stability, source-region surface characteristics, surface heating, soil moisture, soil type and surface vegetation.
Moreover, in Kuwait human activities in the desert contribute to the occurrence of duststorms, including extensive motor vehicle movement, extensive urban development, environmentally uncontrolled quarrying activities, and overgrazing by cattle throughout the year. In Kuwait, duststorms are more frequent during the spring and summer due to:
1) The dry fresh (15-24 m/s) northwesterly winds blowing from Iraq and local lands [4];
2) The deserts surrounding Kuwait: Iraqi desert from N-NW and Saudi Arabian from W-S;
3) The loose sediments covering most of the surface area [5].
There are three types of dust phenomena in Kuwait [6,7]:
1) Duststorms: wind speed ≈18 knots (≈33.3 km/h), horizontal visibility <1 km (if <200 m, it is called a severe duststorm);
2) Rising dust: moderate wind speed, horizontal visibility ≥1 km; and
3) Suspended dust: horizontal visibility <1 km, or in the range of 1-5 km with moderate wind speed (6-14 m/s).
Dust and sandstorms may have impacts of different kinds: physical, environmental, economic, social, on human health, etc.
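As a rough illustration, the three Kuwaiti dust categories described above (and the severe-duststorm visibility threshold) can be encoded as a small classifier. The numeric cutoffs are one reading of the prose, not an official meteorological standard; in particular, the 18-knot duststorm threshold is converted to about 9.3 m/s, and the fallback behavior for calm, clear conditions is an assumption.

```python
def classify_dust_event(wind_speed_ms: float, visibility_km: float) -> str:
    """Illustrative classifier for the dust phenomena described in the text.

    wind_speed_ms  -- wind speed in m/s
    visibility_km  -- horizontal visibility in km
    """
    DUSTSTORM_WIND_MS = 9.3  # ~18 knots (~33.3 km/h), the cited duststorm threshold
    # Duststorm: strong wind and visibility below 1 km (severe below 200 m).
    if wind_speed_ms >= DUSTSTORM_WIND_MS and visibility_km < 1.0:
        return "severe duststorm" if visibility_km < 0.2 else "duststorm"
    # Rising dust: moderate wind (6-14 m/s) with visibility of 1 km or more.
    if 6.0 <= wind_speed_ms <= 14.0 and visibility_km >= 1.0:
        return "rising dust"
    # Suspended dust: reduced visibility without duststorm-strength wind.
    if visibility_km < 5.0:
        return "suspended dust"
    return "clear"
```

For instance, a 10 m/s wind with 500 m visibility would be labeled a duststorm, while the same visibility under light wind would be labeled suspended dust.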
Physically and environmentally, a sandstorm can possess such power that it can move whole sand dunes. Duststorms and suspended dust can reduce visibility to less than 200 m; dust may block roads, damage materials and equipment, affect transportation and severely pollute the air. Dust particles can reflect and absorb solar radiation, causing a radiative effect: as tropospheric aerosols, they are a significant component of the Earth's climatic system, changing climate directly through radiative scattering and absorption [8], and indirectly through radiative effects mediated by cloud microphysics [9] and by affecting the processes of atmospheric chemistry. Dust can remarkably affect soil characteristics, ocean productivity and air chemistry by influencing the nutrient dynamics and biogeochemical cycling of ecosystems. Economically, duststorms lead to soil loss, which in turn removes the organic matter and nutrient-rich particles, reducing soil fertility; by abrasion, they damage young crop plants and reduce crop productivity.
Moreover, duststorms reduce visibility, affecting aircraft and road transportation, with consequent financial losses and loss of human lives. Duststorms reduce the amount of sunlight that reaches the surface, causing critical complications for plant photosynthesis and productivity and reducing livestock forage. Increased clouds of dust and sandstorms can affect ecosystem stability by increasing the heat-blanket effect. Socially, the reduction of livestock forage, ecosystem biodiversity, water availability and farmland yields leads to the loss of land resources, which in turn spreads poverty and hunger and eventually results in migration in search of food and relief, increasing the number of environmental refugees, which puts pressure on neighboring areas and leads to enormous social problems. In relation to public health, duststorms have adverse short-term impacts, including an immediate increase in symptoms and worsening of lung function in individuals with asthma, and increased mortality and morbidity; long-transported duststorm particles adversely affect the circulatory system. Prolonged and unprotected exposure of the respiratory system during a dust storm can also cause silicosis, which, if left untreated, can lead to asphyxiation; silicosis is an incurable condition that may also lead to lung cancer. It was found by [10] that the concentrations of all pollutants (including particulate matter, PM10) in the ambient air of residential areas of Kuwait depend on the meteorological conditions (PM10 and NOx). It was indicated by [11] that in Kuwait duststorms and fossil fuel combustion strongly contribute to air pollutants (especially PM), which play a significant role in determining the symptoms of rheumatoid arthritis (RA) and worsening them overall.
It was stated by [12] that chronic, long-term exposure to calcite and quartz particles (the major constituents of dust fallout in Kuwait) may produce alkalosis and hypercalcemia and can have potentially serious respiratory effects. There is also the danger of keratoconjunctivitis sicca ("dry eye"), which, in severe cases without immediate and proper treatment, can lead to blindness. There are short-term approaches to dust and sand storm control (e.g., forecasting and early warning) and long-term ones (e.g., source-area rehabilitation). In general, dust and sand storms can be controlled by applying different kinds of dust suppressants or windbreaks. Such suppressants may include physical covers, e.g., vegetation, aggregate, mulches or paving; and chemical compounds, e.g., water (fresh, sea water or even reclaimed water, especially on construction sites and unpaved roads), calcium and magnesium chloride, and petroleum-based chemicals, which can stabilize the soil by absorbing moisture from the atmosphere.
This changes the physical properties of the soil surface: by applying the suppressant, the soil particles are coated and aggregated together, becoming too heavy to be airborne and hence not susceptible to wind erosion. The movement and encroachment of sand by wind can be controlled by creating tree windbreaks, by reducing ground-level wind velocity through inserting straw bundles into the sand in a checkerboard pattern, or by using creeping plants. Sand or dust encroachment can also be controlled by rehabilitating and improving the land surface, reducing barren land through reforestation and planting of degraded land, and improving the environmental capacity of the soil by introducing water-saving and water-management techniques for the efficient use of water and the application of farm animal manure.


Open Access Journals on Medical Science

Keratoacanthoma and Well Differentiated Squamous Cell Carcinoma Have a Distinct Prognosis Running Head: Prognosis of Keratoacanthoma

Introduction

Keratoacanthoma (KA) is a rapidly growing skin tumor thought to originate from the hair follicle [1]. The exact classification of the tumor is still a matter of debate. Due to its ability to regress spontaneously, some consider KA a benign lesion [2]. However, KA can also display perineural as well as venous invasion [3,4], and cases of metastatic KA have also been reported [5,6], suggesting that classification as a subtype of well differentiated SCC is more appropriate. On a molecular level, the etiology of KA and squamous cell carcinoma (SCC) seems to differ, since deletion of polarity proteins in mouse models can rescue SCC formation but promote KA formation [7]. Expression of tumor suppressors and promoters also differs in KA compared with SCC, with gain of 11q and subsequent amplification of the cyclin D1 locus being the most frequent aberration in KA [8]. Mutated p53 is more frequently found in SCC than in KA, and the cell cycle inhibitor p16 is expressed in KA but downregulated in SCC [9]. Additionally, the tumor microenvironment of KA differs from that of SCC, providing a possible explanation for the ability of KA to regress [10].

The therapy of KA usually consists of complete excision, since it is impossible to predict whether the tumor will regress or will progress to invade the underlying tissue and even metastasize [5,6]. Because current guidelines on SCC do not take the KA histological subtype into account as a prognostic factor, the question of whether KA has a distinct prognosis compared to SCC remains open. This question is particularly relevant since KA often has an increased tumor thickness due to rapid growth and may therefore unnecessarily fall into the category of high-risk SCC, burdening the health care system with frequent follow-up of these patients. In this study, we compared the prognosis of KA with that of well differentiated SCC without KA histology (wSCC) and found that KA histology favorably impacts metastasis-free survival but not local relapse-free survival.

Patients and Methods

A retrospective analysis of medical records of the Department of Dermatology, University Hospital of Cologne identified 403 KA and 905 wSCC. Tumors with an incomplete excision were excluded. The histopathologic criteria for KA were a crateriform tumor surrounding a central keratin plug and showing epithelial lipping [1,11]. Solar elastosis was defined as fibrillary basophilic material in the upper dermis [12]. Immunosuppression was defined as use of chemotherapy, malignancies other than non-melanoma skin cancer, or use of immunosuppressive drugs. Diabetes was not considered immunosuppressive. Statistical evaluations were performed with the statistical software package IBM SPSS, version 20.0. Graphs were made using GraphPad Prism, version 5.0. Student's t-test was used for continuous variables, while the χ² test was used for categorical variables. Survival rates were calculated by the Kaplan–Meier method and compared using log-rank tests. A p-value of <0.05 was considered significant.
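The survival analyses above were performed in SPSS and GraphPad Prism. As an illustration only, the two survival methods named in the text, the Kaplan–Meier estimator and the two-group log-rank test, can be sketched in plain Python. The follow-up times used below are hypothetical and are not taken from the study data.

```python
import math

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (time, survival probability) points."""
    pairs = sorted(zip(times, events))
    at_risk = len(pairs)
    surv, curve = 1.0, [(0, 1.0)]
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        j = i
        d = 0                      # events at time t
        while j < len(pairs) and pairs[j][0] == t:
            d += pairs[j][1]
            j += 1
        if d:
            surv *= (at_risk - d) / at_risk
            curve.append((t, surv))
        at_risk -= (j - i)         # events and censorings both leave the risk set
        i = j
    return curve

def log_rank_p(times1, events1, times2, events2):
    """Two-group log-rank test; returns the p-value (chi-square, 1 df)."""
    all_event_times = sorted({t for t, e in zip(times1 + times2,
                                                events1 + events2) if e})
    o1 = e1 = var = 0.0
    for t in all_event_times:
        n1 = sum(1 for tt in times1 if tt >= t)   # at risk, group 1
        n2 = sum(1 for tt in times2 if tt >= t)   # at risk, group 2
        d1 = sum(1 for tt, e in zip(times1, events1) if tt == t and e)
        d2 = sum(1 for tt, e in zip(times2, events2) if tt == t and e)
        n, d = n1 + n2, d1 + d2
        if n < 2:
            continue
        o1 += d1                       # observed events in group 1
        e1 += d * n1 / n               # expected events in group 1
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    if var == 0:
        return 1.0
    chi2 = (o1 - e1) ** 2 / var
    return math.erfc(math.sqrt(chi2 / 2))  # chi-square survival fn, df = 1
```

The design choice worth noting is that censored observations leave the risk set without reducing the survival estimate, which is exactly what distinguishes Kaplan–Meier from a naive event fraction.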

Results

A total of 403 KA and 905 wSCC were retrospectively analyzed. The median follow-up was 23 months. Eight patients with KA (2%) developed metastasis after a median of 7 months (range 1-49), and 35 patients with wSCC (3.9%) developed metastasis after a median of 8 months (range 1-37). Except for 2 patients with wSCC, who developed lung metastasis, all patients developed metastasis of the skin and/or lymph nodes. Metastasis-free survival was significantly decreased in the wSCC group compared to the KA group (p=0.042) (Figure 1). Nine KA (2.2%) recurred locally following complete excision after a median of 32 months, while 24 wSCC (2.7%) recurred locally after a median of 8 months. Local recurrence-free survival was not significantly different between the KA group and the wSCC group (p=0.426) (Figure 1).

Figure 1:

a) Metastasis-free survival in keratoacanthoma compared to well differentiated squamous cell carcinoma (SCC) (p=0.042).
b) Local recurrence-free survival in keratoacanthoma compared to well differentiated SCC (p=0.436).

Patients with KA were more likely to be female (40.2% versus 27%, p<0.0001) and to have thicker lesions than patients with wSCC (tumor thickness 3.2 vs 2.7, p=0.012) (Table 1). Age was also significantly lower in KA compared to wSCC (p<0.0001). Furthermore, the patterns of tumor localization differed between KA and wSCC, with KA occurring more frequently on the lower extremity and to a lesser extent in the head and neck region. Since the difference in tumor location could be explained by sun exposure, we analyzed the presence of solar elastosis in the periphery of the tumors and found that solar elastosis occurred more frequently in wSCC than in KA (82% vs 60.7%, p<0.0001). In contrast, the presence of immunosuppression and the tumor diameter were not significantly different between the KA and wSCC groups (Table 1).
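As a rough check of the reported sex difference, the χ² test named in the methods can be sketched for a 2x2 table in plain Python. The female counts below are reconstructed approximately from the reported percentages (40.2% of 403 KA and 27% of 905 wSCC), not taken from the original records.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], 1 df.
    Returns (chi-square statistic, p-value)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, df = 1
    return chi2, p

# Approximate counts: ~162 of 403 KA female, ~244 of 905 wSCC female.
chi2, p = chi_square_2x2(162, 403 - 162, 244, 905 - 244)
```

With these reconstructed counts the statistic comes out well past the conventional significance threshold, consistent with the p<0.0001 reported in the text.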


Table 1: Clinical data keratoacanthoma compared to well differentiated SCC.

Note: SCC- squamous cell carcinoma; STDEV-standard deviation.

Discussion

Currently, no specific guideline for the follow-up of patients with KA exists, and it is unclear whether the guidelines regarding the follow-up of SCC [13] can also be applied to KA. To our knowledge, this is the first study comparing the prognosis of KA with that of wSCC. We found that metastasis-free survival was significantly increased in KA compared to wSCC. 2% of KA developed metastasis in the skin and regional lymph nodes, indicating that, as in SCC, ultrasound of the regional lymph nodes is also important in patients with KA. Since local recurrence-free survival did not differ between KA and wSCC, we propose using the same frequency of physical examinations of the skin in KA as in SCC [13]. In line with the hypothesis that SCC and KA have different etiologies, our study showed that clinical and pathologic characteristics also differed significantly between the wSCC and KA groups. As expected from a rapidly growing tumor, tumor thickness was significantly increased in KA compared to wSCC. Interestingly, patients with KA were more likely to be female and were significantly younger than patients with wSCC, further underlining the different etiologies of the two tumors.
In accordance with a previous study [14], we could show that the predominant location of KA is the lower extremity, while SCC were mostly localized in the head and neck region. Moreover, the presence of solar elastosis was significantly decreased in the KA group, suggesting that sun exposure plays a lesser role in KA. Consistent with the hypothesis that UV plays a less important role in the etiology of KA, the mutation burden of KA is lower than that of cutaneous SCC [15]. Further supporting a different etiology of KA and SCC, the rate of HPV DNA detection was higher in KA than in wSCC, suggesting a possible viral contribution to the pathogenesis of KA [16]. This would imply that immunosuppression plays a more important role in the etiology of KA than in SCC; however, the incidence of immunosuppression did not differ between the KA and wSCC groups in our study, suggesting that other etiologic factors, such as the tissue microenvironment, could play a role.

Conclusion

We could show that KA histology favorably impacts metastasis-free survival but does not influence local relapse-free survival.