Open Access Journals On Medical Biochemistry

Anthrax Toxins and their receptors

Introduction

Under unfavorable growth conditions, B. anthracis undertakes the developmental process of sporulation. B. anthracis spores are its infectious form: contact with spores under favorable conditions can lead to inhalational, cutaneous and gastrointestinal infections [1]. For example, when spores enter the lungs, they are phagocytosed by macrophages and dendritic cells. However, some of them are able to spread throughout the body despite the initial immune response. The spores that survive then transform into vegetative bacilli, aided by the formation of a poly-γ-D-glutamic acid capsule and the secretion of anthrax toxin proteins [2]. The anthrax toxin is composed of two binary combinations of three soluble proteins: the 83 kDa protective antigen (PA83), the 90 kDa lethal factor (LF), and the 89 kDa edema factor (EF). PA forms complexes on the surface of host cells. It binds to one of two known anthrax toxin receptors, tumor endothelial marker-8 (TEM-8) or capillary morphogenesis protein-2 (CMG-2) [3]. Receptor-bound PA is proteolytically activated by a cell surface protease to generate a 63 kDa form (PA63), which oligomerizes into ring-shaped heptameric and octameric pore precursors [4].

These pre-channel oligomers are capable of binding up to three and four LF and/or EF molecules, respectively. The complexes are endocytosed and delivered to an acidic endosomal compartment. The PA oligomer, transformed into a translocase channel, allows the transmembrane proton gradient to drive lethal factor and edema factor translocation into the cytosol, where they carry out their enzymatic functions (Figure 1) [5]. Bacillus anthracis is a Gram-positive, spore-forming, rod-shaped bacterium recognized by the presence of the pXO1 and pXO2 virulence plasmids, which confer its unique ability to produce the anthrax toxin [6]. The plasmids pXO1 and pXO2 are essential for the virulence of B. anthracis. The pXO2 plasmid carries the genes encoding the poly-γ-D-glutamic acid capsule, and the pXO1 plasmid carries the genes encoding the toxin components: PA, EF, LF and the virulence regulator anthrax toxin activator (AtxA) [7]. AtxA regulates the genes encoding the anthrax toxins and capsule synthesis [7]. AtxA comprises two helix-turn-helix (HTH) domains, responsible for DNA binding, two phosphotransferase system (PTS) regulation domains (PRDs), and an EIIB-like domain [7]. Owing to the PRDs, AtxA activity is regulated by phosphorylation strictly dependent on the presence of carbon dioxide [8,9]. These findings have important implications for research on the central role of anthrax toxins as major virulence factors in the initial stage of anthrax infection.


Figure 1: Schematic mechanism of B. anthracis virulence mediated by anthrax toxin and its progression through the endocytic pathway. B. anthracis produces the three subunits of anthrax toxin: protective antigen (PA), lethal factor (LF) and edema factor (EF), encoded on plasmid pXO1, and the poly-γ-D-glutamic acid polymer capsule (CAP), encoded on plasmid pXO2. AP-1 – activator protein 1; Cbl – E3 ubiquitin-protein ligase; CMG2 – capillary morphogenesis gene 2; E3 – ligase; EF – edema factor, calmodulin-dependent adenylate cyclase; Fyn – tyrosine-protein kinase; LF – lethal factor, zinc metalloprotease; Src – Src-like kinase; TEM8 – tumor endothelial marker 8; VNTR – variable number tandem repeat.

Anthrax Toxin

Anthrax toxin comprises three individually nontoxic proteins that combine on the eukaryotic host cell surface to form toxic complexes. This tripartite AB-type toxin consists of two catalytic A moieties, lethal factor (LF) and edema factor (EF), and a single receptor-binding B moiety, the protective antigen (PA). Lethal factor is a zinc-dependent metalloprotease that, together with protective antigen, forms lethal toxin. It is a main virulence factor and the major cause of death in organisms infected with Bacillus anthracis [10]. Lethal factor specifically cleaves the N-terminal end of mitogen-activated protein kinase kinases (MAPKKs) (Pellizzari et al. 1999). Because the N-terminal domain of MAPKKs is essential for their interaction with mitogen-activated protein kinases (MAPKs), cleavage of this domain impairs MAPK activation [11]. Lethal factor cleavage of MAPKKs thus inhibits three major signaling pathways: ERK1/2 (extracellular signal-regulated kinase), JNK/SAPK (c-Jun N-terminal kinase) and the p38 kinases [12].

These pathways are involved in diverse cellular processes, including growth, apoptosis, innate and adaptive immune responses and responses to various forms of cellular stress. According to the lethal factor crystal structure, the enzyme comprises four domains [13]. Domain I is responsible for protective antigen binding. The catalytic site and two zinc-binding motifs lie in the C-terminal region of domain IV [13]. Several studies have shown that histidine residues play an important role in the catalytic activity of lethal factor: His-35, His-42 and His-229 are important for binding to protective antigen, whereas His-686, His-690 and especially His-669 are essential for catalytic activity [14,15]. Edema factor (EF) is composed of two functional domains, domain I (EFN, residues 1-291) and domain II (residues 292-798) [16]. The first domain (the 30 kDa N-terminal PA-binding domain) interacts with protective antigen, whereas the second (the 43 kDa AC domain and the 17 kDa helical domain) harbors the adenylyl cyclase activity [17].

A series of biochemical studies has revealed that EF also has two conserved aspartate residues, which coordinate the two magnesium ions required for adenylyl cyclase activity [17]. EF is a calmodulin-dependent adenylyl cyclase that increases the intracellular cAMP concentration of infected cells. cAMP is a second messenger with multiple downstream effectors, including protein kinase A (PKA) and the exchange protein activated by cAMP (EPAC). High levels of cAMP generated by ET drive PKA-induced transcriptional changes, including modulation of the cAMP-responsive element binding (CREB) protein [18]. A study on monocyte-derived cells suggests that CREB and glycogen synthase kinase 3 (GSK-3) are important for ET-induced expression of anthrax toxin receptor 2 [19]. In addition, cAMP as a second messenger contributes to the regulation of leukocyte chemotaxis and endothelial barrier integrity [20,21]. The results reported by Nguyen et al. [22] demonstrated that edema toxin (ET) impedes IL-8-driven movement of neutrophils across an endothelium independently of cAMP/PKA activity.

The stability and even the formation of the EF-calmodulin complex depend on the level of calcium bound to calmodulin (CaM) [23]. The binding of calmodulin to EF is sequential: first the N-terminal CaM region is anchored to the helical domain, and then the C-terminal CaM region inserts between the catalytic core and helical domains of EF [24]. This leads to a conformational change in the C-terminal region that stabilizes the catalytic loop of EF for enzymatic activity. According to crystallographic studies, residues Leu 667, Ser 668, Arg 671, Arg 672 and Val 694 are implicated in the binding of calmodulin to EF [16,25,26]. Makiya et al. [25] identified these amino acid residues as the binding epitope of the EF-neutralizing mAb EF13D, which neutralizes EF in vitro in the subnanomolar range. Other laboratories have reported small molecules that inhibit EF by different mechanisms, but only in the micromolar range [27,28]. Nanomolar affinities are often required for efficient competition, which explains why antibody concentration plays a role in toxin neutralization.

A variety of other types of EF inhibitors have been proposed; in particular, various purine and pyrimidine nucleotides with a unique preference for the base cytosine have been studied [29]. Edema factor and lethal factor form toxic complexes with protective antigen: edema toxin (ET), which induces tissue swelling, and lethal toxin (LT), which can alter cell function and may cause death [4]. The blood-brain barrier is responsible for maintaining homeostasis of the neural microenvironment; it is a regulatory interface between the peripheral circulation and the central nervous system (CNS) [30]. The endothelial barrier protects the brain from microorganisms and toxins circulating in the blood. Unfortunately, pathogenic microorganisms have evolved neuroinvasive mechanisms to penetrate host cell barriers. In vitro and in vivo studies from several laboratories suggest a principal role for ET in modulating brain endothelial integrity by disrupting intercellular contacts, and a role for LT in promoting penetration of the blood-brain barrier and the development of meningitis [31-34].

Edema toxin has been shown to alter host defenses, including reduced activation of antigen-presenting cells, increased release of cytokines from dendritic cells, and impaired chemotaxis and differentiation of T lymphocytes [35]. ET has also been shown to play an important role in the pathogenesis of anthrax-associated shock [36]. Infection with Bacillus anthracis can be cutaneous, gastrointestinal or pulmonary (inhalational). Frequently affected organs include secondary lymph nodes, lung, spleen, kidney, liver, intestinal serosa, heart, and the brain proper [37]. Destruction of organ function is due to the secretion of LT and ET. Some laboratories have reported that lethal toxin can disrupt endothelial barrier function [37]. The mechanisms causing endothelial dysfunction include stimulation of endothelial apoptosis, alteration of actin fibers and cadherins, and mast cell activation [36,37]. Another laboratory found that endothelial permeability is under a tight control system in which hypoxia activates signaling through the Rho kinase-myosin light chain phosphatase pathway, leading to increased permeability [38].

However, hypoxia can also activate p38 MAP kinase signaling, leading to heat shock protein 27 (hsp27) phosphorylation, which decreases endothelial permeability [39]. The majority of studies indicate that anthrax lethal toxin induces apoptosis of macrophages in an activated caspase-dependent manner [40,41]. The cytotoxicity of lethal toxin is related to activation of the transcription factor NF-κB and TNF-α (tumor necrosis factor-alpha) production in bovine macrophages [42]. It was shown that in bovine macrophages lethal toxin efficiently induces inhibitor κB (IκB) degradation and enhances the nuclear translocation of NF-κB; neither protective antigen nor lethal factor alone had any impact on NF-κB activation. Lethal toxin induces apoptosis and necrosis in bone marrow-derived macrophages and in activated human peripheral blood monocytes [41,43].

Interestingly, human alveolar macrophages demonstrate significant resistance to all the effects of lethal toxin, including inhibition of cytokine induction, lethal toxin-mediated MEK cleavage and lethal toxin-mediated apoptosis [44]. Lethal toxin, through its effect on the p38 pathway, disrupts glucocorticoid receptor signaling [45]. An in vitro study suggested that lethal toxin may depress murine cardiomyocyte function via an NADPH oxidase-mediated superoxide production mechanism [46]. A series of histological and microbiological studies on the effect of LT on intestinal tissues confirms LT-induced intestinal pathology, marked by villous blunting, mucosal erosions and ulceration [47-49]. Protective antigen is an 83 kDa pore-forming protein that binds to the anthrax receptors on the surface of target cells and orchestrates entry of lethal toxin and edema toxin into the cytosol [50]. The native form of PA consists of four domains with distinct functions: domain 1 is the site of proteolytic activation by furin; domain 2 forms a transmembrane pore to translocate edema factor and lethal factor into the cell and contributes significantly to the receptor interaction; domain 3 mediates self-association of the nicked form of PA83; domain 4 is primarily involved in binding to the anthrax toxin receptor [50,51].

Upon binding to receptors, PA molecules undergo furin cleavage into a 20 kDa fragment and a 63 kDa subunit that remains bound to the cell surface. Furin is a critical housekeeping enzyme involved in protoxin activation [52]. Furin is essential for introducing the anthrax toxin into macrophages in highly pathogenic strains. The 63 kDa PA molecule creates a membrane channel that allows entry of the LF toxin into the cytoplasm of the host cell [53]. It is extremely interesting that the cleavage site of the PA protein of anthrax toxin is homologous to the S1 site of the SARS-CoV-2 virus, which is also processed by furin and the transmembrane protease serine 2 (TMPRSS2) [53]. Furthermore, both the anthrax toxin and the SARS-CoV-2 virus infect macrophages and respiratory epithelial cells. In both infections, furin, the infected host's protease, is the initiating enzyme: it activates both the anthrax toxin and the SARS-CoV-2 virus protein [54]. The characteristic protein sequences processed by furin are common among influenza and measles viruses, flaviviruses and the botulinum toxin [54]. They are the so-called initiation sequences, whose presence among bacterial or viral strains increases pathogenicity and virulence. As mentioned earlier, furin is a key host enzyme involved in the activation of protoxins and is therefore an interesting target in the search for inhibitors. It is highly probable that the tropism and pathogenicity of bacterial strains increase as a result of the action of furin [54,55].

Anthrax Toxin Receptors

Two different cell surface receptors mediate anthrax toxin entry into cells: ANTXR1, tumor endothelium marker 8 (TEM-8), and ANTXR2, capillary morphogenesis protein 2 (CMG-2) [1,3]. TEM-8 and CMG-2 are type I membrane proteins containing a von Willebrand factor A domain, originally identified in a blood serum protein as a platelet adhesion factor. ANTXR1 was first discovered as a tumor endothelium marker, present at very low levels in healthy tissues and significantly increased in tumor tissues. ANTXR1 shares many similarities with integrins [56]. Its structural domains are similar to the β1 integrin domains and interact with type I and type VI collagen, aiding cell migration and extracellular matrix reorganization [57]. The cytoplasmic part of the ANTXR1 receptor, directly anchored to the actin cytoskeleton of the cell, influences cell signal transmission in a manner similar to integrins [58,59].

Cheng et al. were the first to investigate the mechanotransduction pathway initiated by mechanical stimulation of the ANTXR1 receptor and its subsequent conversion into a biological signal in bone marrow stromal cells (BMSCs) [60]. ANTXR1-initiated mechanotransduction involving the proteins LRP6 and LRP5 (low-density lipoprotein receptor-related proteins) partially activates β-catenin to transfer the mechanical signal to the cell nucleus and regulate chondrogenesis [60]. Moreover, further experiments confirmed the interaction of ANTXR1 with actin and fascin actin-bundling protein 1 (FSCN1), which may also suggest participation of anthrax receptors in reorganization of the cell cytoskeleton [60]. Both CMG2 and TEM8 have long cytoplasmic domains of 148 and 222 amino acid residues, respectively, like many other signaling receptors, and their physiological roles are related to cell migration and extracellular matrix remodeling [61,62].

The receptors can be post-translationally modified by glycosylation, palmitoylation or ubiquitination [63,64]. Glycosylation affects protein folding in the ER, trafficking, and function. The TEM8 receptor has three putative glycosylation sites that are necessary for its movement out of the ER to the cell membrane [65]. It was verified that TEM8 lacking glycosylation did not bind the anthrax toxin in HeLa cells [65]. In contrast, the CMG2 receptor in the same cells could, in the absence of glycosylation, leave the ER and reach the cell membrane, where it was able to bind ligand. Both receptors can be ubiquitinated by a host ubiquitin ligase, leading to clathrin-dependent endocytosis of the toxin complex [63]. This process is necessary for the intracellular activity of the anthrax toxin. S-palmitoylation involves attachment of a 16-carbon fatty acid to a specific cysteine through a thioester bond. Within a single protein, palmitoylation and ubiquitination can be linked: the TEM8 receptor is ubiquitinated if it has not previously been palmitoylated, which leads to its destabilization and premature degradation [66].

The cytoplasmic domains of ANTXR1 and ANTXR2 are important in regulating the half-life of the receptors at the plasma membrane [67]. Palmitoylation of cysteine residues increases the half-life of these proteins by preventing their premature clearance from the cell surface [63]. In the cytoplasmic domain, both receptors contain tyrosine residues that are phosphorylated upon binding of protective antigen, which is required for efficient toxin uptake [64]. There are three isoforms of ANTXR1: ANTXR1-sv1, the longest isoform, has 564 amino acids, and the medium isoform ANTXR1-sv2 has 386 amino acids [68]; ANTXR1-sv3, the short isoform, lacks the transmembrane domain, so it cannot bind PA and probably acts as a secreted protein [69]. Studies of the isoforms have demonstrated that the extracellular and transmembrane domains of these receptors are essential for PA binding, oligomer formation and translocation of anthrax toxin into the cytosol [69].

Toxin Entry into Cells

Toxin entry into host cells begins when protective antigen (PA83) binds to either of two cell surface receptors, ANTXR1 or ANTXR2. PA83 is then proteolytically activated by a furin-like protease to create the active 63 kDa form (PA63). Receptor-bound PA63 can oligomerize into heptameric or octameric rings, forming a pre-pore that can bind up to three molecules of edema factor and/or lethal factor. The toxin-receptor complex is then internalized, preferentially via clathrin-mediated endocytosis (Figure 1). This endocytosis appears to depend on proteins such as clathrin, dynamin, the heterotetrameric adaptor AP-1 and actin [70]. ANTXR1 and ANTXR2 can both interact with lipoprotein receptor-related protein 5 (LRP5) and lipoprotein receptor-related protein 6 (LRP6) [71], which are presumably required for anthrax toxin endocytosis. The large hetero-oligomeric complex is then transported to early endosomes, where it is incorporated into intraluminal vesicles [69].

The acidic pH of the early endosomes induces structural changes in the PA pre-pore, leading to pore formation as well as to partial unfolding of edema factor and lethal factor [72]. Recent work found that both EF and LF undergo major conformational changes upon binding to PA [69]. They are then translocated through the PA pore across the endosomal membrane. It is understood that, before being released into the cytosol, the anthrax toxin enzymatic subunits must be transported to late endosomes in a microtubule-dependent manner, which is essential to protect them from lysosomal proteases. However, many details of this sophisticated delivery system remain to be elucidated.

Conclusion

Bacillus anthracis has two virulence factors: a poly-γ-D-glutamic acid capsule and binary toxins. The capsule of B. anthracis contributes to pathogenesis by blocking phagocytosis, while the lethal toxin (LT) and edema toxin (ET) play a significant role in the pathogenesis of the disease. Understanding of the mechanisms by which these toxins modulate host defense has improved tremendously, and the discovery of the anthrax toxin receptors is highly relevant to anthrax pathogenesis. Anthrax toxin receptors can regulate ligand binding through conformational changes. A better understanding of anthrax pathogenesis may allow the design of effective inhibitors, and future studies of anthrax toxins and their receptors promise to yield more information on toxin entry into cells and on therapeutic applications.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Environmental Science

The New and Effective Methods for Removing Sulfur Compounds from Liquid Fuels: Challenges Ahead - Advantages and Disadvantages

Introduction

Combustion of liquid fuels containing organosulfur compounds such as sulfides, disulfides, thiophenes and their derivatives emits the harmful gases SOx and NOx. Hydrodesulfurization (HDS) is the main method used for desulfurization, but it is inefficient at removing organosulfur compounds [1]. Alternative techniques such as adsorptive desulfurization (ADS) and oxidative desulfurization (ODS) have therefore recently been considered [2]. The main challenge of the ADS method is the selection of adsorbents with high adsorption capacity and selectivity [3]. Vafaee et al. [4] synthesized AFe2O4-SiO2 (A: Ni, Co & Mg) nanosorbents by an auto-combustion sol-gel method and used them in the ADS process. Vafaee et al. [5] also used a NiFe2O4-polyethylene glycol catalyst for an ultrasound-assisted oxidative desulfurization (UAOD) process, optimized by central composite design (CCD) under response surface methodology (RSM). Consequently, the ferrites in the adsorbent and phase transfer catalyst could be easily separated and recycled via a magnetic field after desulfurization.

Conclusion

In this study, the efficiency of the ADS and UAOD methods with AFe2O4-SiO2 (A: Ni, Co & Mg) nanoadsorbents and NiFe2O4-PEG phase transfer nanocatalysts was reviewed. In the UAOD process, increasing the temperature and the amount of oxidant had the greatest effect on increasing the percentage of DBT conversion. In addition, one of the main challenges of the ADS and UAOD methods is the use of adsorbents and phase transfer catalysts with easy separation and recovery capabilities. Using the magnetic field response of the ferrites in the adsorbent and phase transfer catalyst structure, these materials were easily separated and recycled after desulfurization.


Open Access Journals on Old Age Psychiatry

Vitamin D Supplementation for the Treatment of Depression in Females in a Private Practice Clinic

Introduction

During the last decade there has been strong interest in the effect that vitamin D plasma levels can have on depression [1]. There are also studies suggesting the use of vitamin D supplementation, either by mouth or through light therapy, as an add-on therapy for depression [2]. Influenced by this evidence, during the winter season of 2019 we used vitamin D3 supplementation mainly as an add-on therapy for the treatment of patients suffering from depression in our private practice. In order to better assess the results of this intervention, and to communicate our experience to other practitioners, we conducted a small case series study of all our depressed patients who received vitamin D supplementation during a certain time frame.

Material and Methods

Subjects

During autumn 2018 and winter 2018-2019, every patient treated for depression who presented with residual depressive symptoms was assessed for 25(OH) vitamin D blood levels. Subjects with vitamin D blood levels below 30 ng/ml were considered for this study. In total, ten patients were assessed. Eight of them were included in the study; the other two were excluded because they were suffering from psychosis rather than depression. All patients were Caucasian women with a mean age of 54.29 (S.D. 16.75) (Table 1).


Table 1: Numerical parameters of the study. Initial and final vitamin D levels were compared with a paired t-test and showed a statistically significant difference (p = 0.014).

Method

All subjects were prescribed oral vitamin D3 supplementation at doses ranging between 2000 and 5000 IU per day. One patient was treated with vitamin D3 as monotherapy, while in the rest it was used as an add-on therapy. 25(OH) vitamin D levels were assessed again within a two-month time frame. Patients with major changes in their treatment, such as addition of another antidepressant, were to be excluded from this study, but nothing of the kind occurred in our sample during the study period. Qualitative analysis of the psychiatric records kept in our private practice was used to assess the symptoms most likely correlated with low vitamin D levels. Qualitative analysis was also used at the follow-up assessment to detect the symptoms that might have responded to vitamin D supplementation. The final assessment took place three months after the first assessment for each individual. A paired t-test, performed in SPSS for Windows version 15.0, was used to compare changes in 25(OH) vitamin D blood levels.
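The paired comparison described above can be sketched in Python with SciPy; the vitamin D values below are hypothetical placeholders for illustration, not the study's actual data:

```python
from scipy import stats

# Hypothetical baseline and follow-up 25(OH) vitamin D levels (ng/ml)
# for eight patients -- placeholders, not this study's measurements.
baseline = [4.5, 8.0, 10.5, 12.0, 14.0, 16.5, 20.0, 26.5]
followup = [22.0, 30.5, 35.0, 38.0, 40.0, 41.5, 44.0, 50.0]

# Paired t-test: each patient serves as her own control, so the test
# operates on the within-patient differences.
t_stat, p_value = stats.ttest_rel(followup, baseline)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A paired test is appropriate here because the same individuals are measured before and after supplementation; an independent-samples test would ignore that pairing and waste power.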

Results

All assessed patients had some degree of vitamin D deficiency: the mean 25(OH) vitamin D level was 12.04 ng/ml (S.D. 7.995), with levels ranging from 4.5 ng/ml to 26.5 ng/ml, and seven of the eight had levels below 20 ng/ml. The mean dose of vitamin D supplementation was 4142.88 units per day (S.D. 1399.75), with most patients taking 4000 units per day. The final assessment of 25(OH) vitamin D levels took place within a two-month period (mean 52.67 days, S.D. 30.651), while it was still winter. There was a significant increase of 25(OH) vitamin D blood levels to 25.65 ng/ml (S.D. 11.559), p = 0.014 (Table 1). Qualitative analysis showed that the main complaint all patients had in common was psychomotor retardation; less common but still notable was morning depression. These symptoms, and especially psychomotor retardation, tended to improve to various degrees two months or more after the beginning of vitamin D supplementation (Figure 1).


Figure 1:

Discussion

This is a prospective case series, and its results are encouraging. There is a strong possibility that depressed female patients with a certain residual symptom profile may also suffer from vitamin D deficiency; more specifically, symptoms such as psychomotor retardation or morning depression seem to be most correlated with vitamin D deficiency [3]. The study was conducted entirely in wintertime, because we wanted to reduce confounders such as sun exposure, which is much more likely during summer and can change blood vitamin D levels regardless of supplementation [4]. It is important to understand, though, that in this case psychiatrists should change their treatment culture. While in most of their care they tend to treat almost entirely without biological markers, vitamin D supplementation for the treatment of depression requires a different approach: in our study we first checked for vitamin D deficiency, and only then prescribed vitamin D supplementation [5]. Vitamin D3 was prescribed since it seems to be a better alternative than other vitamin D supplements such as vitamin D2 [6].
A significant improvement in depressive symptoms, correlated at least in time with the increase in vitamin D blood levels, was observed. This is in accordance with patients' satisfaction, as they do not consider vitamin D another 'psychiatric drug'. Caution should be paid, though, to regular follow-up of vitamin D levels, since levels well above normal can also be toxic [7]; if vitamin D levels rise above the limits, supplementation should be stopped. Furthermore, during the summer period vitamin D levels rise, since the body synthesizes more of it owing to increased sunlight exposure, so levels must be checked more thoroughly during summertime. This brings us to another point: the aim of supplementation is to increase vitamin D levels in the body, and if we can achieve that by means other than prescribing a supplement, such as sun exposure, it is also good practice to try. Knowledge of the effect that vitamin D can have on mood, and of the related benefits of sun exposure, might motivate a significant proportion of depressed patients to increase their outdoor activities [8].
This study has significant limitations. Firstly, it is a small study with very few patients included, which is why we present it as a case series rather than a cohort or other more powerful type of study. The fact that all patients were Caucasian women makes the sample homogeneous, which somewhat strengthens the internal consistency of the findings while limiting their generalizability. Another limitation is that no standardized assessment of the patients' initial status or treatment progress with questionnaires took place, which makes the findings more difficult to interpret. In our view, since the aim of our approach was to treat residual symptoms, it would have been difficult to detect these symptoms through formal questionnaires that assess overall depression; furthermore, the observation and consensus of two specialized psychiatrists has its value in assessing depression and lends some additional credibility to the results. Also, the qualitative analysis we used is an acceptable outcome measure [9].

Conclusion

Although the results are quite preliminary, there is a strong indication that vitamin D supplementation is effective in treating certain depressive symptoms. Of course, much further study is needed before any firm conclusions can be drawn.


Open Access Journals On Department of General Practice

The Diagnostic Value of Galactomannan Testing in Bronchoalveolar Lavage Fluid on the Diagnosis of Pulmonary Aspergillosis in Patients with Chronic Respiratory Diseases

Introduction

According to the World Health Organization definition, chronic respiratory diseases (CRD) are a group of diseases that affect the airways and other structures of the lungs; the most common include COPD, bronchial asthma and bronchiectasis [1]. In-depth studies in recent years have found that pulmonary aspergillosis can also occur in patients with CRD [2,3]. As delayed treatment of pulmonary aspergillosis leads to a high mortality rate, early recognition of CRD with pulmonary aspergillosis is extremely important. Galactomannan (GM) is a thermally stable polysaccharide in the cell wall of Aspergillus hyphae, which is released into the blood from the tip of the mycelium during Aspergillus growth [2]. GM can be detected in the blood in the early stages of infection. Nevertheless, various factors have been found in clinical practice to cause false positives and false negatives in galactomannan testing. Bronchoalveolar lavage fluid (BALF) can be used to detect pathogens in lung lesions in the early stage of Aspergillus infection.
Although BALF has been recommended for GM testing by domestic and foreign guidelines, there is no unified standard for the BALF GM cut-off value [2,3]. In this study, BALF was collected by bronchoscopy from 100 patients with suspected pulmonary Aspergillus infection. BALF GM and serum GM tests were compared to assess the diagnostic value of galactomannan testing in bronchoalveolar lavage fluid for the diagnosis of pulmonary aspergillosis in patients with chronic respiratory diseases.

Patients and Methods

Patient Selection

Between June 2019 and December 2019, 100 patients with suspected pulmonary Aspergillus infection from three hospitals (50 from the Guangzhou Thoracic Hospital, 45 from the Guangdong Province People's Hospital, and 5 from the First Affiliated Hospital of Sun Yat-Sen University) were enrolled in this retrospective analysis. All suffered from chronic respiratory diseases, including COPD, bronchial asthma and bronchiectasis. Data collected for all patients included age, sex, smoking history, past medical and medication history, length of stay, laboratory tests, chest imaging, pathogen examination, lung pathology and bronchoscopy results. Serum and BALF GM tests were performed during hospitalization. Factors that might cause false positives in the GM test, such as piperacillin/tazobactam use, were excluded. Patients with hematological malignancies, hematopoietic stem cell transplantation, solid organ transplantation, HIV infection, or incomplete clinical data were excluded from the study.

Statistical Analysis

a. The Kolmogorov–Smirnov test (K-S test) in SPSS 25 was used to determine whether the target variables were normally distributed. Measurement data conforming to a normal distribution were expressed as mean ± standard deviation (x̄ ± s); measurement data not conforming to a normal distribution were expressed as median (P25–P75). Count data were expressed as percentages or constituent ratios. The independent-sample t-test was used to compare BALF GM values between the case group and the control group, the paired-sample t-test was used to compare BALF GM and serum GM values within the case group, and the non-parametric rank-sum (Mann–Whitney U) test was used for samples not conforming to a normal distribution.
b. SPSS 25 software was used to draw ROC curves of the diagnostic efficacy of BALF and serum GM test in the case group and the control group, and the optimal cut-off value of BALF and serum GM test for pulmonary aspergillosis was calculated.
c. Baseline features, clinical features and imaging data of the subjects were analyzed with the independent-sample t-test or chi-square test for normally distributed data, and the non-parametric rank-sum test for non-normally distributed data. Differences were considered statistically significant when p<0.05.
d. According to several guidelines, the cut-off value of GM lies between 0.5 and 1.5; cut-off values of 0.5, 0.8, 0.9, 1.0, 1.2 and 1.5 have been reported in guidelines and meta-analyses. Sensitivity (Sen), specificity (Spe), positive predictive value (PPV), and negative predictive value (NPV) of BALF GM were calculated.
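The four metrics in item (d) follow directly from a 2×2 confusion table once BALF GM values are dichotomized at a chosen cut-off. The sketch below (Python, purely illustrative — it is not the authors' SPSS workflow, and all data values are hypothetical) shows the calculation:

```python
# Illustrative calculation of Sen, Spe, PPV and NPV at a GM cut-off.
# Not the study's SPSS workflow; example values are hypothetical.
def diagnostic_metrics(gm_values, has_aspergillosis, cutoff):
    """gm_values: GM optical-density indices;
    has_aspergillosis: parallel True/False reference diagnoses."""
    tp = fp = tn = fn = 0
    for gm, disease in zip(gm_values, has_aspergillosis):
        positive = gm >= cutoff          # dichotomize at the cut-off
        if positive and disease:
            tp += 1                      # true positive
        elif positive:
            fp += 1                      # false positive
        elif disease:
            fn += 1                      # false negative
        else:
            tn += 1                      # true negative
    return {
        "Sen": tp / (tp + fn),           # sensitivity
        "Spe": tn / (tn + fp),           # specificity
        "PPV": tp / (tp + fp),           # positive predictive value
        "NPV": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical example, not study data:
metrics = diagnostic_metrics(
    [2.1, 0.3, 0.6, 1.2, 0.2, 1.1],
    [True, False, True, False, False, True],
    cutoff=1.0,
)
```

Raising the cut-off converts borderline positives into negatives, which is why sensitivity falls and specificity rises as the threshold increases, exactly as reported in the Results.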

Results

Patient characteristics and data. Four patients with incomplete data or loss to follow-up were excluded, and a total of 96 patients were included in this study. According to the IDSA (2016) diagnostic standards [2], 43 patients were diagnosed by pathological data (proven diagnosis), and 3 cases were diagnosed by radiology, etiology, and other clinical examinations (probable diagnosis); all 46 were included in the case group. The control group included 6 cases of possible pulmonary aspergillosis and 44 cases of non-pulmonary aspergillosis. Clinical data of the patients were collected (Table 1). The most common clinical symptoms in the case group were cough (41 cases, 89.1%), hemoptysis (30 cases, 65.2%) and expectoration (27 cases, 58.7%), whereas in the control group they were cough (36 cases, 72.0%), expectoration (27 cases, 54.0%) and fever (16 cases, 32.0%). Hemoptysis and cough differed statistically between the two groups.
The imaging findings in the two groups included nodular shadow, patchy shadow, consolidation shadow, air crescent sign, cavity and aspergillus balls. Nodular shadow (27 cases, 58.7%) and cavity (22 cases, 47.8%) were dominant in the case group, while patchy shadow (14 cases, 28.0%) and nodular shadow (13 cases, 26.0%) were dominant in the control group. Nodular shadow, cavity and aspergillus ball differed statistically between the two groups. Microbiological examination results: in the case group there were 21 cases (45.7%) of positive Aspergillus BALF culture and 3 cases (6.5%) of positive Aspergillus BALF smear microscopy. The serum GM value was 0.18 (0.12-0.34) in the case group and 0.12 (0.07-0.21) in the control group, showing no statistical difference. The BALF GM value was 1.93 (0.61-5.78) in the case group and 0.51 (0.25-0.82) in the control group (Z = -4.709); the BALF GM value in the case group was higher than that in the control group, P<0.05 (Table 2).


Table 1: Baseline characteristics of patients in the case group and control group (%).

Note: *P value < 0.05, the difference was statistically significant


Table 2: Comparison of microbiological examination results between the case group and the control group.

Note: *P value < 0.05, the difference was statistically significant

Diagnostic efficacy of the BALF GM test. When the GM cut-off value was 0.5, 0.8, 0.9, 1.0, 1.2 or 1.5, the sensitivity of the BALF GM test decreased and the specificity increased as the cut-off value rose. When the diagnostic threshold of the serum GM test was 0.5 or 1.0, the sensitivity decreased with the higher threshold, but the specificity did not change. The BALF GM test had higher sensitivity but lower specificity than the serum GM test (Table 3). The area under the ROC curve (AUROC) of BALF-GM was 0.779 (95% CI: 0.684-0.874), standard error 0.0487, Z = 5.727, P = 0.001, Youden index 0.4939; at a threshold >0.96, the sensitivity and specificity were 67.4% and 82.0%, respectively (Figure 1, Table 4). The AUROC of serum-GM was 0.638 (95% CI: 0.439-0.807), standard error 0.121, Z = 1.147, P = 0.255, Youden index 0.3116; at a threshold >0.18, Sen was 47.8% and Spe was 83.3%. When the serum-GM threshold was ≥0.18, the AUROC was highest, with sensitivity and specificity of 45.5% and 83.3%, respectively (Figure 2, Table 4).
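The Youden index reported above is J = sensitivity + specificity − 1, and the optimal cut-off is the candidate threshold that maximizes J along the ROC curve. A minimal Python sketch of this selection (illustrative only — the values below are hypothetical, not the study's data):

```python
# Pick the cut-off maximizing the Youden index J = Sen + Spe - 1.
# Illustrative sketch; input values are hypothetical, not study data.
def best_youden_cutoff(gm_values, has_disease):
    best_j, best_cut = -1.0, None
    for cut in sorted(set(gm_values)):   # candidate thresholds
        tp = sum(1 for g, d in zip(gm_values, has_disease) if g >= cut and d)
        fn = sum(1 for g, d in zip(gm_values, has_disease) if g < cut and d)
        tn = sum(1 for g, d in zip(gm_values, has_disease) if g < cut and not d)
        fp = sum(1 for g, d in zip(gm_values, has_disease) if g >= cut and not d)
        j = tp / (tp + fn) + tn / (tn + fp) - 1  # Youden index at this cut
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical example: two controls, three cases
best_cut, best_j = best_youden_cutoff(
    [0.2, 0.5, 0.9, 1.2, 2.0],
    [False, False, True, True, True],
)
```

In the study this maximization yielded the BALF GM threshold of 0.96 with a Youden index of 0.4939.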


Figure 1: ROC curves of BALF-GM test in two groups.


Figure 2: ROC curves of serum-GM test in two groups.


Table 3: Diagnostic value of different cut-off values for the BALF GM test.


Table 4: Comparison of ROC curve analysis parameters between BALF-GM test and serum GM test.

Discussion

Structural lung disease is a major cause of pulmonary aspergillosis and includes bronchiectasis, PTB, bronchial asthma, COPD, etc. Long-term chronic diseases destroy the normal anatomical and physiological structure of the lungs and the mucosal barrier of respiratory epithelial cells, and increase the ability of Aspergillus to adhere to the airway epithelium. In addition, lodging and degeneration of airway epithelial cilia and impaired clearance of respiratory secretions increase the chance of Aspergillus infection [4,5]. This study also confirmed that chronic respiratory diseases were more frequent in the case group than in the control group (bronchiectasis 58.7% vs. 12.0%, P=0.001; PTB 82.6% vs. 20.0%, P=0.001; COPD 43.5% vs. 22.0%, P=0.025), consistent with results reported in the literature [6]. The early clinical manifestations of pulmonary aspergillosis are not specific, the typical chest CT findings are often related to the time of disease onset and the severity of lesion development, and imaging findings cannot provide a definite etiological diagnosis.
Traditional methods such as smear microscopy and fungal culture have a long cycle and a low positive rate and are susceptible to environmental contamination. Therefore, a variety of auxiliary examination methods are used to achieve early diagnosis. Galactomannan (GM) is a specific polysaccharide of the Aspergillus cell wall. At present, GM can be detected clinically in blood, BALF, pleural effusion, cerebrospinal fluid and lung tissue, and it is one of the common antigens for the diagnosis of aspergillosis. A large number of existing studies have shown that the Sen, Spe, PPV and diagnostic coincidence rates of BALF GM were higher than those of serum GM. The results of this study showed that the cut-off values of the BALF GM test were all higher than those of serum GM, consistent with previous studies. A uniform diagnostic threshold for BALF GM remains disputed domestically and internationally. The IDSA 2016 guidelines again recommended the BALF GM and serum GM tests as laboratory tests for pulmonary aspergillosis; however, they did not specify a BALF GM value, while the diagnostic threshold of the serum GM test was ≥0.5 [2]. In 2019, EORTC/MSGERC scholars updated the definition of IFD, indicating clearly for the first time the clinical diagnostic thresholds of GM for pulmonary aspergillosis: a single serum GM ≥1.0, a single BALF GM ≥1.0, or a single serum GM ≥0.7 plus a single BALF GM ≥0.8 [7,8].
In this study, ROC curve analysis showed that the AUROC of the BALF GM test was 0.779 (95% CI: 0.684-0.874). At a BALF GM cut-off >0.96, Sen was 67.4%, Spe was 78.0%, PPV was 73.8%, NPV was 72.2%, and PLR was 3.06. When the serum GM cut-off was greater than 0.18, the AUROC was highest: Sen was 45.5%, Spe was 83.3%, and P=0.255. The purpose of this study was to assess the value of BALF GM in the early diagnosis of pulmonary aspergillosis in non-neutropenic patients with underlying pulmonary diseases. Our results showed that the sensitivity of the serum GM test was lower than that of the BALF GM test regardless of whether GM ≥0.5, ≥0.8, or ≥1.0 was set as the diagnostic threshold of BALF GM. When the GM threshold was ≥0.5, Sen, Spe and PPV of BALF GM were 80.43%, 48.0% and 58.73%, respectively. When the BALF-GM threshold was increased to ≥1.0, the PPV increased significantly. Compared with previous studies [9,10], the BALF GM values of patients with chronic respiratory diseases differed from those of patients with traditionally studied conditions such as neutropenia, hematological malignancies, solid organ transplantation, hematopoietic stem cell transplantation, and immunosuppressant use.
At present, some scholars have proposed that different optimal diagnostic thresholds should be set for patients with different underlying diseases and different immune states, such as neutropenia versus non-neutropenia [10], organ transplantation (including hematopoietic stem cell transplantation) versus non-transplantation [11,12], and hematological malignancies [13,14]. Similarly, interpretation of BALF GM test results should be based on a full assessment of the patient's underlying diseases and immune status to determine the optimal BALF GM diagnostic threshold for each population, so as to improve the diagnostic efficacy of BALF GM for pulmonary aspergillosis in different populations. Research and clinical practice worldwide have found that many factors cause false positives and false negatives in GM tests, which often confound clinical work and can even lead to misdiagnosis, missed diagnosis and excessive antifungal treatment.
In this study, it was found that the BALF GM value was elevated in some patients without Aspergillus infection in the control group, while the BALF GM value was low in a small number of patients with Aspergillus infection in the case group. These false negatives, in addition to sample dilution during BALF collection, might also be related to the use of antifungal drugs. A recent review suggested that false negatives in GM tests were associated with the use of antifungal agents and mucolytic agents [15]. Use of beta-lactam antibiotics (especially piperacillin/tazobactam, amoxicillin/clavulanate potassium, etc.), intravenous parenteral nutrition, blood products containing gluconate, severe gastrointestinal mucosal inflammation, and multiple myeloma can lead to GM false positives [15]. Clinical cases have also reported that contamination of sterile containers could lead to GM false positives [16]. Based on previous studies and the results of this study, the early diagnosis of pulmonary aspergillosis requires combining imaging examination, histopathology, smear microscopy, fungal culture, Aspergillus antigen detection, Aspergillus antibody detection, and molecular biological examination.

Conclusion

In this study, the BALF GM test was more valuable than the serum GM test for the diagnosis of pulmonary aspergillosis. The optimal cut-off, sensitivity and specificity of the BALF GM test were 0.96, 67.4% and 82.0%, respectively (P=0.01). The optimal threshold of BALF GM may vary with the host's underlying diseases and even with different species of Aspergillus, and the BALF GM values of pulmonary aspergillosis under different immune states require more clinical data. At the same time, when serum GM and BALF GM are used in clinical practice, it is necessary to fully understand and identify false positives and false negatives of GM, and to diagnose pulmonary aspergillosis by integrating patient factors, clinical manifestations, imaging examination and pathogenic microbial examination.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Natural Sciences

Vaccination Against COVID-19, A Healthy Alternative

Commentary

Since the first reported cases of COVID-19, the world has been grappling with this disease and its consequences. Confirmed cases are increasing, the numbers of daily deaths are chilling, and countries suffer the paralysis of social life and the national economy. Meanwhile, the scientific community has been racing against time to find effective vaccines in response to the pandemic. Vaccines have the function of preparing the immune system to detect and fight specific viruses and bacteria, so that if the body is exposed to pathogenic germs, it will be ready to destroy them immediately and thus prevent disease. The vaccine is sought to provide protection against the most severe forms of the disease and against mortality. In June 2021, the World Health Organization (WHO) reported the existence of 185 vaccine candidates in the preclinical development stage and another 102 in the clinical trial phase [1]. Currently, the WHO has authorized the use of six vaccines, while others are still being studied for subsequent approval. National Regulatory Agencies have authorized COVID-19 vaccines in specific countries [2].
The first mass vaccination program started in early December 2020. The great spread of the novel coronavirus increases the demand for vaccines, but limited production will lead to the use of every formulation that proves suitable. The effectiveness of a vaccine is measured by the percentage reduction in the frequency of infections among vaccinated people compared to the frequency among those who were not vaccinated, assuming that the vaccine is the cause of this reduction. Effectiveness represents the health benefits provided by a vaccination program in the population when the vaccines are administered in the real or usual conditions of daily care practice or program development [3]. Although all currently approved vaccine platforms have been shown to stimulate both the humoral and cellular responses, there is a great unresolved question: how long does the immunity conferred by vaccines last? Nevertheless, the vaccination option is a healthy choice for the general population.
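The effectiveness definition above — the percentage reduction in infection frequency among the vaccinated relative to the unvaccinated — can be written as VE = (1 − ARv/ARu) × 100, where ARv and ARu are the attack rates in the vaccinated and unvaccinated groups. A minimal sketch with hypothetical figures (not data from any study cited here):

```python
# Vaccine effectiveness as the percentage reduction in attack rate
# among vaccinated vs. unvaccinated people. Figures are hypothetical.
def vaccine_effectiveness(cases_vax, n_vax, cases_unvax, n_unvax):
    attack_rate_vax = cases_vax / n_vax        # ARv
    attack_rate_unvax = cases_unvax / n_unvax  # ARu
    return (1 - attack_rate_vax / attack_rate_unvax) * 100

# e.g. 10 cases among 1000 vaccinated vs 50 among 1000 unvaccinated:
ve = vaccine_effectiveness(10, 1000, 50, 1000)  # 80.0 (% effectiveness)
```

Note that this calculation assumes the two groups are otherwise comparable, i.e. that the vaccine is the cause of the observed reduction, as the text states.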
Vaccination has been intensifying in risk groups such as those over 60 years of age and patients with comorbidities such as heart disease and diabetes, among others. Pregnant women have been prioritized to achieve ideal immunization, as this group is one of the most vulnerable to the disease, owing to the physiological changes of pregnancy and the attenuation of the pregnant woman's immune response. According to data provided by the Pan American Health Organization (PAHO), more than 270,000 pregnant women have fallen ill with COVID-19 in the Americas and more than 2,600 have died from that cause since the start of the pandemic; it is also important to take breastfeeding women into account [4]. The pediatric population is another risk group toward which vaccine interventions are currently directed. Health personnel, being especially vulnerable to COVID-19, have been among the first to receive the existing vaccines, as this occupational group is at higher risk for severe COVID-19.
Until today, vaccination is the best strategy to control the spread of the virus, but it should not be forgotten that changes in personal behavior and attitude will be increasingly necessary. The biosafety and self-care measures the population must maintain remain a priority for returning to the social normality so longed for. Vaccinating the largest number of people as quickly and as globally as possible, along with non-pharmacological interventions, could ensure that the virus is suppressed rather than spread [5]. The news reflects new outbreaks and rising case numbers in many countries even as vaccination campaigns progress, and the population is concerned about this increase, which coincides with the introduction of vaccines. This largely depends on the variant circulating in the region: in the presence of the more transmissible Delta variant, contagion increases despite a certain level of population vaccination.
Vaccination leads to collective protection: even if there are patients in the group, the disease will not spread, because most individuals cannot transmit it, whether due to the immunization conferred by the vaccine or because they are convalescent; this is the so-called herd immunity. The aim of the vaccine is to reduce deaths, serious and critical cases, intensive care unit occupancy and hospitalizations, and then the incidence rate [6]. Cuba began its intensive vaccination program on May 12, 2021, and as of September of the same year, more than 14 million doses of nationally produced vaccines had been administered. The strategy for the development, introduction and extension of Cuban vaccines has been applied in a staggered manner, ranging from clinical trials, studies in risk groups and health intervention to mass vaccination, the stage in which the country is currently immersed. Cuba already has three quality, safe and effective immunogens: Abdala, approved in July 2021, and Soberana 02 together with Soberana Plus, authorized in August of the same year [7].
One of the worst health situations in the history of mankind is being experienced. The loss of human life is quantified in the thousands, in addition to the situations of social deprivation and health that are worsening in the world population. Faced with this situation, it is necessary to consider the research progress, the production of science based on evidence, to respond in the best way to this pandemic caused by COVID-19. Vaccination, without a doubt, is an alternative that raises the hope of success in the face of this disease.


Open Access Journals on Medicine

Compromised Health and Constrained Human Life in COVID-19 Pandemic, and Concurrent Healthcare Transformation

The SARS-CoV-2 Infection and COVID-19 Pandemic

The current ongoing COVID-19 pandemic, caused by SARS-CoV-2, is associated with high morbidity and mortality in several countries across the globe. Prompt and effective detection of the disease is crucial to identify those infected, to monitor the infection from an epidemiological perspective, and to take measures for its containment. On the other hand, the early diagnosis and efficient treatment of COVID-19, including newer therapeutic modalities such as monoclonal antibodies against SARS-CoV-2, may contribute to individual clinical improvement and limit the morbidity and mortality in society at large. The likely course of the COVID-19 pandemic is not certain; the pandemic, being considered a major health hazard, may continue in the foreseeable future or may become endemic with a low or moderate level of transmission. The COVID-19 vaccines bear hope of bringing the pandemic under control, paving a way for its endemicity [1]. In this respect, the WHO in a recent communiqué indicated that COVID-19 in various countries, including India, may be entering some stage of endemicity with a low or moderate level of transmission [2].
The effects and fallouts of COVID-19 pandemic are striking as it has impacted the social, economic, political, and healthcare aspects of human life. The pandemic is being considered a major health hazard that may continue to afflict human life in the foreseeable future. The transformation of life, thus, at the individual level as well as at the community and collective levels, seems inevitable. Another aspect of the COVID-19 pandemic is the unprecedented levels of misinformation, rumours, and conspiracy theories related to COVID-19 relayed and reproduced by lay and social media, dubbed ‘infodemic’ by the WHO, which are counterproductive to the fight against the pandemic in the short and long term. There are concerns about low to middle income countries (LMICs) related to the COVID-19 preparedness, knowledge sharing, intellectual property rights, and environmental health together with the serious constraints regarding readiness of health care systems to respond to the pandemic. In fact, the spread of COVID-19 presents an extraordinary ethical dilemma for resource constrained nations with poorly developed health and research systems.
In the current crisis, sharing of scientific knowledge and technology has an important role to play. In addition, emergency preparedness is a shared responsibility of all countries with a moral obligation to support each other [3]. The ongoing pandemic has led to a situation in which the scale of emergency is similar to WWII, requiring decisiveness and commitment. In LMICs, the greatest challenge is to design strategy for early response to COVID-19 outbreaks. South Asia holds a quarter of the world’s population with currently COVID-19 affected countries including Afghanistan, Pakistan, India, Nepal, Bangladesh, and Sri Lanka which may have severe constraints in management of the pandemic. In fact, the current low number of reported cases from these areas is likely to be due to less testing with limited resources in these countries. The resource allocation should be rational, transparent, and based on scientific evidence as the current COVID-19 crisis presents challenges that are beyond and above the earlier outbreaks. Efforts for developing and supplying medical devices, diagnostic tools, vaccines, therapeutics, and other medical technologies for COVID-19 pandemic should be tackled judiciously.

Restricted Human Life and Compromised Health

The SARS-CoV-2 infection control measures recommended to prevent exposure and reduce transmission include personal preventive measures at the individual level, such as mask-wearing, diligent hand washing (particularly after touching surfaces in public), respiratory hygiene (covering the cough or sneeze), avoiding touching the face (in particular eyes, nose, and mouth), cleaning and disinfecting objects and surfaces, and ensuring adequate ventilation of indoor spaces. Apart from decreasing exposure to the infection, mask-wearing has also been hypothesized to reduce the viral load when exposed, and hence to reduce the risk of severe illness [4]. Other public health measures beyond personal prevention, focused on source control and containment of infection, include social/physical distancing, stay-at-home orders, closure of schools, venues, and nonessential businesses, bans on public gatherings, and travel restrictions with exit and/or entry screening.
The preventive measures are supplemented with aggressive case identification and isolation and contact tracing and quarantine. In the containment areas, the residents are encouraged to stay alert for symptoms and practice appropriate measures to reduce further transmission. The widespread testing and quarantine strategies are imposed to quickly identify secondary infections in an exposed individual and reduce the risk of exposure to others. There are strategies involving self-quarantine at home, with maintenance of at least six feet (two meters) distance from others at all times. There are variations about preventive and quarantine measures for vaccinated and unvaccinated individuals, and those with a recent history of SARS-CoV-2 infection. All these measures restrict human interactions and social and economic activities. The COVID-19 pandemic has thus imposed multiple restrictions on human life, with added risks to unprecedented morbidity and mortality, compromising the global human health, in general [5].
The COVID-19 pandemic has profoundly changed human life, caused tremendous human suffering, and challenged the basic foundations of socioeconomic well-being, beyond the immediate impacts on health. The short- and long-term impacts are likely to be severe for disadvantaged groups such as older people, children, and women in LMICs. The COVID-19 outbreak poses significant challenges for the elderly, who are at high risk for serious complications that can significantly deteriorate their functioning, health status, and social connections. The closure of schools and home confinement during health pandemics has enduring effects on child and adolescent psychological well-being. In today's increasingly urban world, cities may be better equipped than rural areas to respond to the COVID-19 crisis, as the latter vastly lack health care facilities. The pandemic will, thus, have a negative impact on various dimensions of human life, with the potential for deeper effects: GDP and average household income falling by over 10%, unemployment rising by 5 percentage points, and life expectancy dropping by half a year.

The Evolving Healthcare Options and Innovations

The COVID-19 pandemic has been a reality check for the healthcare provisions available in different countries, including preventive and therapeutic care, outpatient consultation, and inpatient and intensive care. Whereas in China the totalitarian regime was able to deal with the pandemic with an iron hand, fully bifurcate COVID-19 healthcare from non-COVID-19 care, and ably carry out preventive measures and a vaccination program, in other countries the situation has been different. Public health surveillance programs and available infrastructures were shown to be not consistently optimal. Additionally, existing healthcare facilities were unable to cope with the sudden surge and manage the intense pressure on their workload, especially in acute care settings. Even with contingency plans well laid out, healthcare systems were incapable of coping with the abrupt surge in demand and needed to be transformed. The COVID-19 pandemic has thus acted as a transformation catalyst, accelerating the implementation and adoption of changes in healthcare. The emerging prototypes of healthcare delivery appear to put more emphasis on preventive measures, remote care, and utilization of innovative digital technologies.
The Hospital-at-Home (HaH) concept was already making inroads into the conventional hospital-based healthcare approach for a large number of diseases, with the hospice service being a surrogate example. In fact, it is being dubbed the next frontier in healthcare delivery, and our experience with the pandemic has greatly accelerated HaH programs. The emerging HaH programs have the advantages of lower costs and readmission rates, while maintaining quality and safety levels and offering a better patient experience. Building on the HaH concepts, conditions can be identified and progressively managed with home-based primary and secondary care (Figure 1).


Figure 1: The conventional hospital-based healthcare, the HaH and the water-shed area for intermediate health conditions.

Similar to the scenario in various sectors, health services and healthcare have been profoundly impacted by the COVID-19 pandemic. The pandemic has brought home the realization that a significant proportion of healthcare activities can be delivered remotely, just as effectively, through a technologically empowered approach. As related to healthcare, certain salient aspects are likely to emerge in the post-COVID-19 era.
1. A large number of patients are shifting to remote care. Telehealth services have already been used in emergencies and during crises in the past. With the possibility of quality transfer of data, audio and video communications during the COVID-19 pandemic, their utilization has widely accelerated. The pandemic has become a catalyst for swift implementation of online consultation and therapy, replacing clinician/patient face-to-face outpatient consultations.
2. In the hospital setting, the remote care is now being widely used for screening prior to the visit and triage assessment, for the indoor and ICU monitoring and supervising of patients in hospital by off-site experts. This trend is likely to persist to large extent in the post-COVID-19 period, as it provides higher convenience both for clinicians as well as patients.
3. In the mental healthcare, too, the remote consultation is proving helpful. It is likely that once mental healthcare institutions have developed the capabilities of serving their patients through digital technologies, a blended approach in future would emerge, where e-mental-health solutions cover an increasingly greater part of routine services.
4. The remote care system in form of HaH is likely to serve further as an adjunct for the gradual adoption of newer and advanced technologies, such as, the use of drones as delivery vehicles for critical supplies, robotics, the widespread 3D-printing of healthcare-related items, and smartphone-enabled monitoring of patients’ adherence to treatments.

The Healthcare Transformation – Evolution of HaH

As regards public health, with the availability of mobile-enabled technologies, there is improved operation of surveillance systems and data analysis. Mobile-enabled technologies can be deployed en masse to monitor quarantined individuals and trace exposed individuals with temporal and geographical correlates. The new tools are likely to move further into the public health domain and support interconnected and hypercomplex global situations in real time. On the other hand, healthcare in general needs to be people-centred and integrated. Patient-centred services include diagnosis and treatment and other supportive aspects of healthcare, whereas integrated healthcare involves adequate provision and efficient delivery of safe and quality health services. The people-oriented approach, in turn, implies planning healthcare services by assessing the needs and expectations of the community and applying them in a methodical and efficient way. The integration of modern technologies, including telemedicine, into healthcare services will improve the quality of healthcare.
The COVID-19 pandemic has led to realization about the limitations of existing healthcare systems and their capacity to respond to healthcare emergencies including infectious disease epidemics. It has underlined the inadequate health literacy among general population to grasp the healthcare recommendations and their outcomes [6]. It has also served as a reminder for proactive planning and preparedness. In addition, it has highlighted the necessity for technologically oriented solutions for healthcare provision and the need for significant healthcare transformation. On the other hand, it has opened the pathways to evolution and expansion of the concept of HaH incorporating communication technology-based approach as a major step to deliver healthcare at home or closer to home with all necessary steps to safeguard the safety and privacy of the participants.
In fact, HaH can be modelled along the lines of hospice care as a multidisciplinary team approach, generally home-based and sometimes providing services through freestanding facilities, in nursing homes, or within hospitals, for handling potentially treatable conditions such as pneumonia, heart failure, and the like, with brief hospital stays if necessary (Figure 2).


Figure 2: The development of home-based healthcare and potential spectrum of HaH.

HaH describes a delivery paradigm in which the entirety of hospital-based inpatient care is substituted with an intensive at-home treatment approach enabled by digital technologies, multidisciplinary teams, and ancillary services [7]. The potential spectrum of HaH can incorporate hospice care; but compared with the latter, apart from providing healthcare services for the terminally ill and elderly in the form of hospice care, HaH can also be useful for all patients who need intense medical care and treatment but can be managed at their homes, with the help of technological monitoring and remote supervision by healthcare professionals and with possible access to a nearby medical facility or hospital. HaH can make it possible for people to receive a variety of medical services in their homes and can satisfactorily address various health conditions, as it incorporates therapeutic and nursing care and medical assistance. In fact, HaH is envisaged as an attractive alternative model for accommodating increased demand for inpatient health care, and as we prepare for the post-COVID-19 pandemic era, its evolving salient features promise to maximize the benefits of transformed health care [8].

The Management and Delivery of Healthcare at Home

During the COVID-19 pandemic, there was a decline in emergency department visits and hospital admission rates in various countries [9]. It seems that, in addition to driving a shift to virtual healthcare, COVID-19 also influenced emergency department visits and hospital admissions unrelated to COVID-19 itself. Studies from both Spain and Italy have shown a reduction in admissions and procedures related to conditions such as myocardial infarction and acute coronary syndrome [10,11]. A recent study from Thailand demonstrated a significant reduction in daily emergency department visits during a national lockdown for COVID-19 [12]. Similarly, a study from Melbourne, Australia, documented a significant reduction in ED visits during times of COVID-19 restrictions [13]. According to a survey by the Canadian Home Care Association, there has been a decline of around 72% in emergency department visits, in turn reducing hospital admission rates [14]. These reductions in outpatient services and admissions underline the need to develop an alternative modality of healthcare for patients still requiring inpatient management of their acute and chronic medical conditions.
The integration of modern technologies such as the electronic health record (EHR) and telemedicine into healthcare services will save time and resources and provide better healthcare to users. Five major technologies are likely to reform home-based healthcare: biosensors, GPS, remote monitoring tools, electronic data capture and analysis, and telehealth. The e-prescriptions generated are easily transmitted and compatible with the EHR.
In general, HaH offers the following benefits:
1. With the primary focus of HaH, people receive medical support at home rather than spending time in a medical facility. It allows them to stay comfortably in their own residence rather than in a hospital, at lower cost and with various psychological advantages.
2. Activities of daily living are not disrupted and are supported at home in the usual ways, maintaining a good quality of life in a familiar and reassuring environment.
3. In clinical trials of home care for patients with chronic health issues such as diabetes and respiratory disease, fewer complications and better health outcomes have been shown. Personalized and skilled care improves the overall response to treatment.
4. With technological monitoring equipment, patients are seen and followed in real time. Together with AI and automation, HaH aims to streamline processes such as appointment scheduling, data collection, EHR maintenance, e-prescriptions, and the scheduling and provision of other health-related services as and when needed, to improve overall patient care at home.
The COVID-19 pandemic has amplified interest in HaH in the United States, European countries, and elsewhere as an alternative care model for both COVID-19 and non-COVID-19 patients, who can be managed remotely aided by current regulatory flexibilities [15]. In fact, HaH is envisaged as an attractive model for accommodating the unprecedented demand for inpatient capacity created by COVID-19. As we prepare for health care in the post-pandemic era, there are salient issues to be solved to maximize the benefits of HaH:
1. HaH models must encompass the provision of healthcare of intensity analogous to hospital inpatient standards and have a specified geographic catchment area, with properly defined correlates.
2. As HaH is intended to recreate acute hospital care at home and to enable health systems to provide intensive care at home for patients with various acute and chronic conditions, this may lead to a remarkable expansion of HaH.
3. There is a unique opportunity to extend and expand HaH in current times, which can become a new vehicle for integrating non-medical services into healthcare as the patients may require further support due to complexity of their illness.
4. With the advances in digital technologies and their increased utilization by patients and healthcare providers, the home environment is being transformed into a preferred healthcare delivery site.
5. As health awareness and the rising cost of healthcare services may increase the demand for HaH, managing and delivering HaH with technological backup should be affordable and should provide a quality service.
6. Further, a regulatory and policy implementation roadmap is required for the provision of HaH, accompanied by monitoring tools such as public reporting, patient registries, and the maintenance of a reliable database.

Conclusion – The Healthcare Solutions for the Future

With the COVID-19 pandemic having an impact on almost every aspect of human life, lessons have been learned about the provision of healthcare. Telemedicine and virtual online consultations have been helpful in dealing with the sudden surge in demand for healthcare, for outpatient consultations as well as emergency visits, and for inpatient and ICU care. During COVID-19, and now in the post-COVID-19 phase, the alterations in the provision of healthcare and its transformation have been enormous. Conventional healthcare, encompassing outpatient consultations and hospital-based care, is being increasingly replaced by tele- and video-consultations, remote technologically assisted inpatient care, and HaH. While hospital-based care cannot be fully dispensed with, a large proportion of it is increasingly being assigned to HaH. Technologically assisted remote healthcare, outpatient as well as inpatient, with its availability and acceptability and its associated challenges and benefits, is the new reality of current times.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us

Open Access Journals on Neonatology

Circulating HMGB-1, Histone H3, and Syndecan-1 in a Newborn with Neonatal Cerebral Infarction

Introduction

Neonatal cerebral infarction is a relatively rare central nervous system disorder that occurs in about 1 in 5000 neonates. The most common cause is ischemic cerebral injury resulting from neonatal asphyxia, but it can also be idiopathic. Half of the infarctions develop by day 1, and few occur after the 3rd day of life. Most occur in the middle cerebral artery region, most often on the left side [1-3]. Convulsions are the most common initial symptom, but there are many non-specific symptoms, such as decreased feeding ability and apneic attacks. Treatment is mainly systemic management and symptomatic treatment of convulsions. Currently, there are only a few reported cases of the use of thrombomodulin, tissue plasminogen activator, and Edaravone® (a free radical scavenger), treatments that have been useful in adults [4], probably because of the many side effects in neonates. Neonatal cerebral infarction is currently classified into six categories from the viewpoint of pathogenesis, and most idiopathic cerebral infarctions belong to the category of ischemic cerebral infarction [1,2]. Cerebral ischemia and subsequent inflammation are the pathological conditions of cerebral infarction in both newborns and adults, and the indication for treatment depends on how many hours have passed since the onset of the infarction. In newborns, infarct lesions may appear shortly after birth or may have already occurred before birth, and may be diagnosed by the detection of postnatal symptoms in the infant. It is therefore much more difficult to determine the onset of neonatal cerebral infarction than it is in adults.
We have confirmed and reported in a multicenter cohort study that the effectiveness of brain hypothermia in neonatal asphyxia can be judged by changes in the serum high mobility group box-1 (HMGB-1) concentration [5]. Furthermore, we observed the serial changes of HMGB-1 in the blood of infants who had already developed fetal asphyxia and suffered severe sequelae even though brain hypothermia was started within 6 hours after birth. We found that a long time had passed since the onset of the ischemic lesions and reported that postnatal brain hypothermia may be ineffective for such hypoxic-ischemic encephalopathy with in utero onset [6]. Here we describe a case of cerebral infarction in one of two twins, in which we had the opportunity to simultaneously measure three biomarkers: HMGB-1; histone H3, a nuclear protein similar to HMGB-1; and syndecan-1, which is present on the surface of vascular endothelial cells and is thought to be released into the blood during angiopathy. Based on the results, we report a case in which the onset of cerebral infarction was suspected to have occurred before birth.

Case Presentation

The mother was 35 years old, with one pregnancy and zero prior deliveries. She had preeclampsia and was indicated for an emergency caesarean section due to exacerbation of her hypertension at 36 weeks and 3 days of pregnancy. A female with a birth weight of 2420 g was born as the second baby of a diamniotic dichorionic twin pregnancy, with an Apgar score of 8-9 and an umbilical arterial pH of 7.273. At 2 hours and 54 minutes after birth, she was admitted to the NICU due to an apneic attack. The infant was given intravenous phenobarbital for pedaling-like and muffled mouth movements on day 1 after birth, but it was ineffective, and treatment was changed to continuous administration of midazolam on day 2. Head echo showed no obvious lesions, but computed tomography and magnetic resonance imaging showed extensive cerebral infarction in the left middle cerebral artery region (Figure 1). The patient was diagnosed as having idiopathic cerebral infarction because a coagulation system test, amino acid fractionation, and an ophthalmologic examination were all negative. Midazolam was used from 2 to 8 days of age. No particular abnormalities, such as in oral feeding or muscle tone, were observed. An electroencephalogram was performed on day 22; a decrease in activity on the left side was observed, but no obvious seizures, so the patient was discharged from the hospital on day 31 after birth.


Figure 1: Magnetic resonance imaging on the 2nd day of age. Above: T2-weighted image; below: diffusion-weighted image. T2-weighted images revealed extensive cerebral infarction in the left middle cerebral artery region.

Material and Methods

Measurement of HMGB-1 Levels

The HMGB-1 measurements were performed, using a commercially available ELISA kit (Shino-Test Corporation, Sagamihara, Japan), by technicians with no knowledge of the personal data of the patient providing the samples. The detection sensitivity of this assay system was 0.2 ng/mL [7].

Measurement of HistoneH3 Levels

Because there is no commercially available ELISA kit, histone H3 was measured by an ELISA prepared in our laboratory; the method previously reported by our laboratory was adopted here as well [8]. Polystyrene microtiter plates (Nunc, Roskilde, Denmark) were coated with 100 μL/well of 1 mg/L anti-histone H3 peptide polyclonal antibody (Shino-Test Corporation) in phosphate-buffered saline (PBS) and incubated overnight at 2–8°C. After three washes with PBS containing 0.05% Tween-20 (washing buffer), the remaining binding sites were blocked by incubation with 400 μL/well of PBS containing 1% bovine serum albumin (BSA) for 2 h. The plates were washed again and incubated with 100 μL/well of diluted calibrator and serum samples (1:10 dilution in 0.2 mol/L Tris pH 8.5, 0.15 mol/L NaCl, and 1% BSA) for 24 h at room temperature. After washing, the plates were incubated with 100 μL/well of anti-histone H3 peroxidase-conjugated peptide polyclonal antibody (Shino-Test Corporation) for 2 hours at room temperature. The plates were washed again, and the chromogenic substrate 3,3′,5,5′-tetramethylbenzidine (TMBZ; Dojindo Laboratories, Kumamoto, Japan) was added to each well. The reaction was terminated with 0.35 mol/L Na2SO4, and the absorbance at 450 nm was measured with a microplate reader (Model 680; Bio-Rad, Hercules, CA, USA). A standard curve was obtained with purified calf thymus histone H3 (Roche, Stockholm, Sweden). The amino acid sequence of histone H3, including the antibody recognition site, is highly conserved across humans, calves, mice, and rats. This ELISA specifically detects histone H3 and does not react with other histone family proteins, including histones H2A, H2B, and H4, even when they are loaded at a 10,000-fold excess. The detection sensitivity of this assay system was 2.0 ng/mL.

Measurement of Syndecan-1 Levels

The following ELISA method was developed in our laboratory. Polystyrene microtiter plates (Nunc, Roskilde, Denmark) were coated with 100 μL of anti-syndecan-1 monoclonal antibody (R&D Systems) in PBS, and the plates were sealed with a thin adhesive-coated plastic sheet and incubated overnight at 37°C. The unbound antibodies were removed by washing the plates 3 times with PBS containing 0.05% Tween-20, and the remaining binding sites in the wells were blocked by incubating the plates for 2 h with 400 μL/well of PBS containing 1% BSA. After washing, 100 μL of each dilution of the standard and samples in 0.2 mol/L Tris pH 7.4 and 0.15 mol/L NaCl containing 1% BSA was added to the wells. The samples and recombinant syndecan-1 standard were diluted 1:10. The microtiter plates were incubated for 20–24 hours at room temperature. After washing, 100 μL per well of anti-human syndecan-1 peroxidase-conjugated polyclonal antibody (R&D Systems) was added, and the plates were incubated at room temperature for 2 h. After washing, TMBZ (Dojindo Laboratories, Kumamoto, Japan) was added to each well. The enzyme reaction was allowed to proceed for 30 min at room temperature. The chromogenic substrate reaction was stopped by the addition of stop solution (0.35 mol/L Na2SO4), and the absorbance was read at 450 nm.
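Quantification in ELISAs of this kind rests on interpolating each sample's absorbance against the calibrator standard curve and then correcting for the 1:10 serum dilution described above. The sketch below illustrates only that calculation; the calibrator concentrations and absorbance readings are hypothetical and are not taken from this study.

```python
# Hypothetical calibrator series for an ELISA standard curve:
# (concentration in ng/mL, mean absorbance at 450 nm).
STANDARDS = [(2.0, 0.05), (5.0, 0.12), (10.0, 0.24),
             (25.0, 0.55), (50.0, 1.02), (100.0, 1.80)]

def conc_from_absorbance(a450, dilution_factor=10):
    """Piecewise-linear interpolation on the standard curve,
    then correction for the 1:10 sample dilution."""
    pts = sorted(STANDARDS, key=lambda p: p[1])  # sort by absorbance
    # Readings outside the calibrated range are clamped to the curve ends.
    if a450 <= pts[0][1]:
        return pts[0][0] * dilution_factor
    if a450 >= pts[-1][1]:
        return pts[-1][0] * dilution_factor
    for (c0, a0), (c1, a1) in zip(pts, pts[1:]):
        if a0 <= a450 <= a1:
            frac = (a450 - a0) / (a1 - a0)
            return (c0 + frac * (c1 - c0)) * dilution_factor

# A well reading of 0.55 sits exactly on the 25 ng/mL calibrator,
# so the estimated undiluted serum concentration is 250 ng/mL.
print(conc_from_absorbance(0.55))  # 250.0
```

In practice, kit software often fits a four-parameter logistic curve rather than a piecewise-linear one; linear interpolation is used here only to keep the example minimal.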

Ethical Approval

This study was approved by the ethics committee of the Japanese Red Cross Musashino Hospital (#28060). The parents of the twins were informed of the study design, and their written informed consent was obtained.

Results

HMGB-1 and histone H3, both of which are nuclear proteins, showed a fairly strong positive correlation, with a correlation coefficient of r = 0.965 (Figure 2). Syndecan-1 was low in both twins at each measurement, and no significant correlation was observed between syndecan-1 and either HMGB-1 or histone H3, as previously reported [14]. HMGB-1 and histone H3 showed no significant variation in their levels in the specimens obtained before and after the onset of the first apneic attack (Figure 3).
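The correlation reported above is the standard Pearson coefficient computed over paired serum measurements. As a minimal illustration of that computation (with made-up values, since the study's raw data are not reproduced here):

```python
import math

# Hypothetical paired serum measurements (ng/mL), for illustration only;
# these are NOT the values measured in this case.
hmgb1   = [2.1, 2.8, 3.5, 4.0, 5.2, 6.1]
histone = [4.0, 5.5, 7.1, 7.9, 10.4, 12.0]

def pearson_r(x, y):
    """Pearson correlation coefficient: covariance of x and y
    divided by the product of their standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hmgb1, histone)
print(round(r, 3))  # a value near 1 indicates a strong positive correlation
```

With only a handful of samples, as in this case, the p-value of the correlation should also be checked before interpreting r.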


Figure 2: Correlations between serum HMGB-1 and histone H3 levels. A significant correlation was observed between HMGB-1 and histone H3 (r = 0.965, p < 0.0001). However, no significant correlations were found between syndecan-1 and HMGB-1 (p = 0.431) or histone H3 (p = 0.373).


Figure 3: Serial changes of serum concentrations of HMGB-1, histone H3, and syndecan-1 in the twins in this case. 🔴: this case twin; 〇: this case co-twin.

Discussion

Treatment for ischemic lesions of the brain is more effective the sooner it can be initiated after ischemia onset. Within 6 hours after the onset of ischemic lesions, HMGB-1 released from injured cells disrupts the blood-brain barrier (BBB) [9]. Then, within 24 hours after lesion onset, peroxiredoxin released from the cells acts on macrophages migrating from the injured part of the BBB to release inflammatory cytokines such as IL-1β and TNFα. From 24 hours after onset, IL-17 and IFNγ are released from T cells, exacerbating damage to the BBB and brain cells [4,10]. The indication for brain hypothermia to treat neonatal hypoxic-ischemic encephalopathy is that "the treatment can be started within 6 hours after birth", and that for Edaravone®, a free radical scavenger, as a treatment for cerebral infarction in adults is that "the treatment should start within 72 hours of onset". These indications make sense in terms of protecting against the inflammatory response that follows transient ischemia. In adults, the time of onset is often clear, but in newborns, onset does not always occur at birth. In our previous case, severe ischemic encephalopathy had already occurred before birth, and the encephalopathy became severe even though brain hypothermia was performed within 6 hours after birth [6]. The serum cytokine profile and HMGB-1 level in that case were measured over time to develop a theoretical diagnosis and report. We consider that asphyxia is a condition of systemic ischemia-reperfusion, whereas cerebral infarction is a condition of local ischemia-reperfusion.
Both HMGB-1 and histone H3 are nuclear proteins. It was expected that the blood concentrations of these two substances would correlate, as both would be released into the blood when cells were injured. Although the number of samples obtained was small, the values showed a very strong positive correlation (r = 0.965). In contrast, the serum HMGB-1 levels in this twin remained within the reference values we have already reported for all cord blood and early postnatal specimens [7]. Histone H3, which showed a strong correlation, was also at about the same level. As this baby was born by caesarean section before labor, it was inferred that the so-called "physiological ischemia-reperfusion stress" associated with birth would be minimal [11]. In addition, the changes in the HMGB-1 level suggested that the probability of developing cerebral infarction between immediately after birth and the onset of the apnea was extremely low. Considering that the apneic attack occurred early after birth, it is highly possible that the onset of cerebral infarction in this baby occurred prenatally and that most of the injury process was already complete by the time of birth. This suggests why the HMGB-1 and histone H3 levels in the cord blood at birth and as measured after birth did not increase [12,13].
Syndecan-1 is a representative proteoglycan expressed on the surface of vascular endothelial cells. It has been reported that syndecan-1 is released into the bloodstream due to vascular endothelial cell damage in the early stage of sepsis [14]. Furthermore, it has been confirmed to increase in the ischemia-reperfusion injury model used in the animal experiments of Gayosso et al. [15]. Syndecan-1 did not significantly correlate with HMGB-1 or histone H3. This result is consistent with the report in adults by Ikeda et al. To the best of our knowledge, the report by Ikeda et al. [14], which focused on sepsis rather than cerebral infarction, is the only report of the simultaneous measurement of the three biomarkers HMGB-1, histone H3, and syndecan-1. The present case may be the second such report and, in particular, the first in newborns. When released extracellularly, HMGB-1 not only directly damages the BBB but also acts on monocytes to increase the expression of tissue factor and promote fibrin production by thrombin, thereby promoting thrombus formation [4]. Histone H3, a structural component of NETs (neutrophil extracellular traps) released from neutrophils, is also involved in thrombus formation when released extracellularly [8]. Furthermore, syndecan-1 is exfoliated and released into the bloodstream when vascular endothelial cells are damaged, and a thrombus forms on the surface of the vascular endothelial cells [14]. If it can be confirmed that the levels of these three substances, which are released into the bloodstream in common by cells damaged by ischemia-reperfusion injury, are elevated in the acute phase, drugs such as thrombomodulin and Edaravone® may prove effective for neonatal cerebral infarction. The future accumulation of additional cases, with simultaneous measurement of these three biomarkers, should be useful for determining the onset of cerebral infarction.


Open Access Journals on Department of Pediatrics

The Dilemma of Choosing a Vaccine Against SARS-CoV-2 in Children

Introduction

Vaccines for children are the basis of the prevention of serious infectious diseases, which is why, for the last 6-7 decades, health workers have tried to keep the coverage of the population with vaccines above 90%, at least in developed countries. The rapid spread of the COVID-19 pandemic has posed a dilemma for us: should we vaccinate children against SARS-CoV-2 infection, and with which vaccine (prepared by known technology or new technology), or should children be exposed to natural infection and stay unvaccinated? Pediatricians are daily exposed to pressure from certain pharmaceutical companies to vaccinate children older than 12 years with a certain vaccine, despite published and positive research on a vaccine that is applicable to children older than 3 years. Is this simply a contest between pharmaceutical companies, a contest between "new" and "old" vaccine technology, or a fair contest of scientific facts? Does the scientific and professional public worldwide agree that children should be vaccinated against SARS-CoV-2 infection, is there a safe and protective vaccine, and children of what age should be included in the vaccination? Does vaccination of children against SARS-CoV-2 have a scientific justification 18 months after the beginning of the COVID-19 pandemic and after the arrival of new strains of SARS-CoV-2, against which previous vaccines are only partly effective because they do not protect against infection but do protect against a severe clinical picture? Here we consider the achievements so far on the vaccine against SARS-CoV-2 infection in children.

Children are often asymptomatic with COVID-19; that is, children are significant carriers of SARS-CoV-2 in the community. Children suffer mainly mild to moderate clinical pictures of COVID-19. According to the American Academy of Pediatrics, so far an extremely small number of children have suffered a severe clinical picture of COVID-19 (2.4% of total patients) or died of COVID-19 (0.08% of total patients), and these are children with comorbidities (obesity, diabetes, progressive neurological diseases) [1]. However, children often show long COVID-19 or post-COVID-19 conditions, and these are predominantly children who were carriers of SARS-CoV-2 or suffered a mild clinical picture of COVID-19. Long COVID-19 or post-COVID-19 in children is mainly presented as a severe clinical picture in the form of the multisystem inflammatory syndrome in children (MIS-C) or MIS-C-like illness, which includes myocardial dysfunction, shock, and severe respiratory failure, and whose treatment is carried out in the intensive care unit.

Certainly, the prevention of COVID-19 is more effective than the treatment of a child with COVID-19 or long COVID-19, which is in itself a recommendation for vaccination of children against SARS-CoV-2. Indeed, there is an indication that children must be vaccinated against SARS-CoV-2 infection. We have been waiting for the results of research in adults for 18 months, and accordingly it is necessary to check the effectiveness of COVID-19 vaccines in the child population, of course with the implementation of the ethical principles of clinical research. A new circumstance is the poor efficacy of previous vaccines, in adults, against new strains of SARS-CoV-2 (delta, mu) and the fact that, in the September wave of the COVID-19 pandemic, a worrying number of children became ill (25.7% of the total number of patients) compared with previous waves [1].

Two basic groups of vaccines against SARS-CoV-2 infection are distinguished according to the technology of vaccine preparation. A total of 13 different vaccines are used worldwide. One group of vaccines was made by the established technology of vaccine production, using whole, purified, inactivated SARS-CoV-2 (manufacturers: Sinopharm, Sinovac Biotech, Bharat Biotech) [2,3]. The second group of vaccines was made with new vaccine production technologies, using:
1. mRNA encoding the spike protein (manufacturers: Pfizer-BioNTech, Moderna), or
2. A recombinant adenovirus as a vector carrying the spike protein gene (AstraZeneca, Serum Institute of India, Janssen/Johnson & Johnson, Gamaleya National Center of Epidemiology and Microbiology, CanSino Biologics), or
3. A recombinant spike protein with a new adjuvant (manufacturer: Novavax), or a DNA plasmid [4,5].

The first three vaccine platforms mentioned have passed phase 3 trials, and their effectiveness in the prevention of SARS-CoV-2 infection in adults has been confirmed, while research in the pediatric population is in the initial stages. The fourth platform, the DNA vaccine, began to be used in September 2021, in India, in adults and children older than 12 years [5], so we do not yet have data on its real effectiveness. We evaluate each vaccine according to its efficacy, immunogenicity, and safety. Table 1 shows the basic characteristics of the individual vaccines recommended for children. The efficacy of the inactivated vaccines against SARS-CoV-2 ranges from 50 to 83.5% [6]. The efficacy of the vaccines containing mRNA encoding the SARS-CoV-2 spike protein ranges from 94.1 to 95% [6]. The efficacy of the so-called "vector" vaccines against SARS-CoV-2 ranges from 65.7 to 91.6% [6]. The efficacy of the vaccine containing a recombinant spike protein with a new adjuvant is 89.7% [6]. The efficacy of the DNA vaccine, estimated in the laboratory, is 67%, but it is aimed at suppressing the delta strain of the SARS-CoV-2 virus [5,6].
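Efficacy figures of the kind quoted above come from trials that compare attack rates between vaccinated and placebo arms: VE = 1 − (attack rate in vaccinated) / (attack rate in unvaccinated). A minimal sketch of that calculation, using hypothetical trial counts rather than data from any of the trials cited here:

```python
def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Vaccine efficacy from trial attack rates:
    VE = 1 - AR_vaccinated / AR_placebo."""
    ar_vax = cases_vax / n_vax
    ar_placebo = cases_placebo / n_placebo
    return 1.0 - ar_vax / ar_placebo

# Hypothetical example: 8 cases among 20,000 vaccinated participants
# vs. 160 cases among 20,000 placebo recipients
# gives VE = 1 - (8/20000)/(160/20000) = 1 - 8/160 = 0.95.
ve = vaccine_efficacy(8, 20_000, 160, 20_000)
print(f"{ve:.0%}")  # 95%
```

Published efficacy estimates additionally report confidence intervals, which this toy calculation omits.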

To achieve high efficacy and immunogenicity, a vaccine must engage the known, efficient mechanism of immunization: phagocytosis by antigen-presenting (dendritic) cells that activate T-lymphocytes, which in turn activate B-lymphocytes, thus achieving both cellular and humoral immune responses. The second dose of the vaccine enhances and prolongs immunity against SARS-CoV-2 in terms of an increase in the titer of IgG antibodies to the SARS-CoV-2 spike protein (S1-RBD) with neutralizing capacity, to a lesser extent to the SARS-CoV-2 N-protein, as well as an increase in IFN-gamma secretion after recognition of SARS-CoV-2 antigen, primarily by CD4 lymphocytes and to a lesser extent by CD8 lymphocytes. Since the time of Pasteur, we have known the effectiveness of the inactivated ("dead") vaccine, which contains the entire infectious agent, and the conjugate vaccine, which contains parts of the virus instead of the whole live virus. For example, pertussis vaccination coverage is 86% after 3 doses of vaccine (primarily the whole-cell vaccine), which has provided a low rate of pertussis in children [7]. It is this efficacy of the pertussis vaccine that can be compared with the efficacy of the inactivated SARS-CoV-2 vaccine [2,3]. Vaccines that use mRNA or DNA provide human cells with the genetic information for an important part of SARS-CoV-2, against which the immune response is elicited. Vector vaccines transmit genetic information for part of SARS-CoV-2, through another virus, to human cells, which produce a viral protein and elicit an immune response. Protein subunit vaccines contain proteins from the virus so that the human immune system learns to attack them.

Immunogenicity in previous studies was estimated as the percentage of seroconversion, i.e., the increase in the titer of neutralizing antibodies to SARS-CoV-2 28 days after vaccine administration. It is still not specified which antibody titer prevents infection, why there is a quantitative but not qualitative increase in anti-SARS-CoV-2 antibodies after vaccination, which CD4/CD8 lymphocyte ratio protects against SARS-CoV-2 infection, or how long post-vaccination immunity lasts. The seroconversion achieved by the inactivated SARS-CoV-2 vaccine in two independent studies in children was approximately the same: 96.8-100% according to the vaccine dose (1.5 and 3.0 mcg, respectively), compared with 100% at 56 days after the first dose, regardless of whether the dose was 4 mcg or 8 mcg and regardless of age group (3-5 years, 6-12 years, 13-17 years) [2,3]. In school-age children, seroconversion with the inactivated vaccine against SARS-CoV-2 is achieved 28 days after the first dose [2]. The mRNA vaccine produced by Pfizer is 100% effective and contributes to a robust response by producing antibodies to SARS-CoV-2 in children aged 12-15 years by 7 days after the second dose of the vaccine [4,8]. The vaccine safety data, in terms of the number of adverse local and systemic reactions in children, are shown in Table 1 [6,9,10]. Data on the efficacy of other vaccines in adults are discussed in the English COVID vaccination program [6]. To achieve herd immunity, it is crucial to achieve vaccination coverage of approximately 80% of the population, which has been achieved by several countries in the world (Portugal, Spain, and Denmark). Until collective immunity is achieved, it is necessary to implement epidemiological protection measures against SARS-CoV-2 infection.
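A coverage target of the kind mentioned above can be related to the classic herd-immunity threshold, 1 − 1/R0, scaled up when the vaccine is less than 100% effective. The sketch below assumes a simple homogeneous-mixing model and uses illustrative values for R0 and vaccine efficacy; it is not derived from this article's data.

```python
def required_coverage(r0, ve):
    """Vaccination coverage needed for herd immunity under homogeneous
    mixing: the threshold (1 - 1/R0) divided by vaccine efficacy."""
    threshold = 1.0 - 1.0 / r0
    coverage = threshold / ve
    return min(coverage, 1.0)  # coverage cannot exceed 100%

# Illustrative values: for R0 = 3 and a 90%-effective vaccine,
# threshold = 2/3, so coverage = (2/3) / 0.9, roughly 74%.
print(round(required_coverage(3, 0.9), 2))  # 0.74
```

Higher R0 (as with the delta strain) or lower efficacy pushes the required coverage up, which is one way to motivate the ~80% targets discussed in the text.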


Table 1: Vaccines against SARS-CoV-2 applicable in children.


Open Access Journals on environmental Science

Agronomic and Ecosystem Services Potentialities of Green Manure Utilization

Introduction

The global impact of climate change, caused by the excessive use of natural resources and the overgrowth of the world population, harms biodiversity and disturbs ecosystem functions. Land degradation, one of many consequences of the poor management of natural capital (soil, water, and biodiversity such as vegetal and animal organisms) (Holden, Shiferaw, & Pender, 2005) [1], is a worldwide problem caused by anthropogenic manipulation of land, which alters the chemical, physical, and biological properties of soil (Lal, 2001). The Global Assessment of Human-Induced Soil Degradation project (GLASOD) in 1988 estimated that 1,964 million (nearly 2 billion) hectares were degraded worldwide and that more than 22% of combined agricultural land, pasture, and woodland had been destroyed by human-induced soil degradation (Chen H. J. et al. 2014). This problem of land degradation affects climate change, biodiversity, food security, and the quality of water and air. The land resource is characterized by a complex structure of two interlocking systems, natural resource ecosystems and human society, and the interaction between them determines how natural resources are managed (Temengsgen G et al. 2014). For example, Lingling Hou (2012) [2] estimated that 50% of the land in China was degraded and that more than 466 million hectares had been affected, a situation that caused environmental and ecological damage [2].
Moreover, the overuse of agrochemical products [3], notably chemical fertilizers and pesticides, is a fundamental cause of soil degradation worldwide. The agricultural industry ranks first in the consumption of pesticides, with 1.84 Mt in 2014, and non-biological pesticides account for 91.78% in China [4]. Many researchers have stated that green manure is useful for limiting these negative effects of the use of artificial products in agriculture. The world community should undertake to maintain soil fertility and biodiversity in order to ensure the equilibrium between food supplies, population growth, and a safe environment [5]. To reach this objective, many governments have undertaken different measures for the conservation and sustainable use of biological diversity [6]. Operationally, many farmers must use green manure to limit the excessive application of chemical products in agriculture. The objective of this review is to answer one central question: how can green manure practices improve farming production and limit environmental damage?
Environmental scientists have sought to support the States' commitments under the Convention on Biological Diversity, universally acknowledged in 1992, by showing that the use of green manure could be one solution to land degradation, poor agricultural product quality, and environmental damage. This review paper therefore explains in detail the various agricultural benefits and ecosystem services provided by green manure. By definition, green manure is produced by ploughing the leaves and roots of green plants into the soil at maturity; after a while, they become compost or green fertilizer [7]. For some authors, green manure is the plant material incorporated into the field [7,8], while for others it is the plants used to produce compost or green fertilizer [5]. The central research question was addressed by reviewing numerous publications on the benefits of using green manure. Methodologically, this review summarizes and discusses the findings of the principal publications listed below (Table 1).


Table 1.

Use of Green Manure

According to Costanza, R., et al. (1997) [12], natural ecosystem services are defined as the benefits that nature provides to humans through the transformation of natural resources into a flow of essential goods and services [12]. In this sense, green manure is used to fertilize degraded agricultural land and preserve biological diversity in the soil. To optimize the ecological and economic service value of green manure, farmers use diverse planting models such as rotation fallowing, inter-cropping, and double cropping. These technologies improve ecosystem services, support agricultural production performance, and protect the environment.
The use of green manure helps to maintain ecosystems and gives ecosystem-service value to human interventions aimed at promoting sustainable development. When turned into the soil, green manuring increases soil CO2 concentrations and provides biological nitrogen [13,14], as well as soil shade, soil organic matter (SOM), and soil organic carbon (SOC) [14,15]; it also delivers economic benefits [15] and ecosystem services [7], mitigates pollution, and beautifies landscapes.

Green Manure Species

Many kinds of green manure and cover crops are used by farmers worldwide as green fertilizer in the fields, for different reasons. For example, ryegrass (Lolium perenne), winter oilseed rape (Brassica napus ssp. oleifera var. biennis), and winter rye (Secale cereale L.) have been used in mixtures to control biomass and weeds and to increase crop yields in rotations [15]. In South Africa, green manure legumes were evaluated as having the potential to increase crop yields on smallholder farms (Jude, J.O.O., 2011) [16]. Green manure can also be used in tropical regions to increase the yield and agronomic performance of common beans [17]. Green manure plants thus play a significant role in farming-system management through their functions, such as financial services, ecosystem/ecological services, and cultural landscape services [18] (Rovanovskaya, A.A., 2008).

Economic Services Functions

The practice of green manuring reduces weed biomass and density and maximizes crop yields, and green manure crops can improve soil health when turned into the ground after maturity [7,8]. Green manure practices bring economic benefits [9] by reducing farmers' input costs and increasing yields. It is not easy to capture all of these cost differences during market transactions [19] without suitable scientific methods. Indirect effects of green manure practices on the cost of managing disease damage have also been observed (McGuire, A., 2016). When leaves of different green manures were incorporated into bhendi (okra; Abelmoschus esculentus) cropping systems, the crop growth parameters performed well, and good yield quality as well as a high net profit and benefit-cost ratio were observed [20]. Evaluating the economic services of green manure is very important for understanding its contribution to ecosystems. Several approaches can be used to evaluate ecosystem-service values, namely Life Cycle Assessment (LCA), the Contingent Valuation Method (CVM), Willingness-To-Pay (WTP) and Willingness-To-Accept (WTA), bio-economic modeling (Wang, T., et al., 2018), and economic benefit analysis [21]. These assessment techniques are currently applied in environmental impact assessment (EIA) [20]. Through its nitrogen-fixing capacity, green manure provides low-cost nitrogen as a bio-fertilizer, and such plants are good fits for wetland rice cultivation [22].
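To make the valuation methods listed above more concrete, the core of a CVM-style estimate can be sketched in a few lines: survey respondents state their Willingness-To-Pay, and the sample mean is scaled to the affected population. This is a simplified, hypothetical sketch; the function name and all numbers are illustrative and are not drawn from the studies cited.

```python
def cvm_aggregate_value(wtp_responses, n_households):
    """Aggregate an ecosystem-service value from a Contingent Valuation
    survey: mean stated Willingness-To-Pay scaled to the population.
    (Real CVM studies also correct for protest bids and sampling bias.)"""
    mean_wtp = sum(wtp_responses) / len(wtp_responses)
    return mean_wtp * n_households

# Hypothetical survey: stated WTP ($/household/year) for green-manure services
sample_wtp = [12.0, 8.5, 15.0, 0.0, 10.5]   # mean = 9.2 $/household/year
total = cvm_aggregate_value(sample_wtp, n_households=50_000)
print(round(total))  # total annual value for 50,000 households
```

The same mean-times-population logic underlies the WTA standards discussed later in this review; only the direction of the payment differs.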

Ecological Services

Green manure crops have the potential to maintain ecosystems [9]. Ecosystems provide a variety of essential services relating to water, air, health, livelihoods, and well-being (Balvanera, P., Quijas, S., Karp, D.S., et al., 2016). In France, near Toulouse and Orleans, crucifer species grown as green manure (catch crops) released a large amount of mineral nitrogen (N) for later commercial crops, and legumes cropped as green manure plants markedly decreased nitrate leaching. The same authors indicated that mixing crucifers and legumes in farming systems was an operational solution for obtaining multiple ecosystem services, with catch crops capturing nitrate and legumes providing nitrogen as green manure [23]. Azolla, a tiny aquatic fern, has been used as green manure in flooded rice planting [24]. Green manure crops can thus help address multiple environmental problems, such as pest and disease control [25] and carbon sequestration [18,26]. They contribute to water filtration, climate regulation [26], and beautification of the living environment. Green manure also improves air purification and the quality of agricultural products [27,28], because conventional farming systems are sources of greenhouse gas (GHG) emissions [26].

Plant Health Improvement

Weeds are unwanted plants that host pathogens (Rodgers, V.L., Stinson, K.A., and Finzi, A.C., 2008) and play a principal role in various ecosystems. Many weeds cause direct and indirect damage in farming ecosystems, such as loss of fertile agricultural land, biodiversity, grazing areas, and livestock production, choking of navigation and irrigation canals, and reduced water availability in rivers. Green manuring, by adding carbon to the soil (Blumenthal, D.M., Jordan, N.R., and Russelle, M.P., 2003), is one of the sustainable farming practices that can successfully deliver weed control with environmental, social, and economic benefits (Gnanavel, I. and Natarajan, S.K., 2014). Green manure/cover crops destroy weeds, which could otherwise act as sources of pathogen inoculum for crops; they also return accumulated nitrogen to the soil, reduce nitrogen leaching, prevent erosion, and improve soil structure [23]. In the Pacific Northwest of the United States, green manures (cover crops such as mustard, Sudan grass, lupine, and marigold used as biological controls, and canola, crambe, meadowfoam, and milkweed seed meals used as organic amendments) reduced the impact of the nematode Meloidogyne chitwoodi on potatoes by 50-80%, provided nematode control comparable to fumigation, and improved soil physical characteristics, especially water infiltration [29]. Green manure crops and cover crops thus play a significant role in controlling the diseases and nematodes that harm cropping systems.

Carbon Sequestration

Farming systems can be a source of carbon dioxide (CO2), and when emissions surpass plant carbon fixation by photosynthesis, CO2 contributes to climate variability. In 2002, Reicosky estimated that soil tillage led to carbon losses of between 30% and 50%. However, when farmers incorporate green manure crops into the soil, CO2 is captured through the humification of soil organic matter (SOM) fractions after mineralization, and the soil organic carbon (SOC) content increases [11]. Green manure plots have displayed significantly greater SOC than reference crops [30]. Green manure crops and cover crops therefore have the capacity to sequester carbon and improve smallholder farmers' resilience with minimal trade-offs [31]. Allowing a fallow period between two cropping seasons can also increase SOC, and SOC accumulation is an effective means of compensating for anthropogenic GHG emissions [11,32]. In this context, Yang (2014) found that green manuring is a management strategy for mitigating soil degradation and increasing nutrient levels (nitrogen, carbon, and other micro-elements) [33]. The same study indicated that, over the long term, green manure legumes significantly increased total carbon (C) and nitrogen (N) and the formation of stable water-stable soil aggregates measuring 2 to 5 mm. Nitrogen in particular is the nutrient chiefly supplied by green manuring, since turning legume crops into the soil increases the nitrogen associated with organic matter in nearly all soils [5].

Quality and Quantity of Yield Improvement

Reduced-cost technology (RCT) consists of no-tillage and reduced nitrogen fertilizer doses combined with green manures before cropping. Green manure plants have high potential for restoring soil fertility and enhancing terrestrial crop production [34], provide better profits than comparable plants (Whitmore, A.P., et al., 2000) [35], and contribute significantly to crop nutritional demand, thereby supporting agro-ecological and sustainable production [36]. Grazing green manure, especially oat (Avena sativa) and pea/oat (Pisum sativum/A. sativa) mixtures, improves the available nitrogen in integrated crop-livestock systems [37]. This production system captures ecological interactions among different land-use systems, making agricultural ecosystems more proficient at cycling nutrients, preserving natural resources and the environment, improving soil quality, and enhancing biodiversity (Franzluebbers, A.J., 2007) [26,38]. In a two-year field experiment in central Italy, vetch (Vicia villosa) green manure affected the quality of organic maize (Zea mays) more than fallow land combined with organic fertilizers (phosphorus supplement) [39]. Crop rotation (Bullock, D.G., 1992) is one of the planting modes for green manure that brings high crop yields.

Mitigation of Soil Water Losses, Air Pollution, and Environmental Degradation

Green manuring is one way of conserving soil moisture, and adopting in situ moisture-conservation techniques increases moisture availability. In Lithuania, growing green manure crops after cereal harvest reduced rainfall infiltration during the autumn season by an average of 19.4-21.7% in 2003 and 7.0-8.3% in 2004; clover, for example, produced more biomass (0.407 g/m2) with more nitrogen (7.35 g/m2), and incorporating the clover into the soil increased the soil nitrogen concentration.

Improvement of Soil and Biodiversity Health

Soil organic matter (SOM) plays a central role in farm function, particularly in organic farming [40] (Morton, A.C., 2008). Green manure, when ploughed into the soil, improves the soil's physical and chemical properties and promotes plant growth (Hrishan Chandra, 2005) [41,42]. Through their potential for biological nitrogen fixation, green manure legumes provide nutrients to crops in cropping systems [43]. Soil organic matter is home to millions of microorganisms; it is broken down by bacteria, which make nitrogen available to plants (Pieters, A.J., 1927) [44]. The retention of plant nutrients (carbon, nitrogen, zinc, etc.) from organic inputs depends on the microbial community under the prevailing environmental conditions [45,46] and on bio-chars [47]. Through green manuring, nitrogenous compounds and carbon are transformed by soil microorganisms into elements that crops can absorb [5]. Within plants, roots absorb higher nutrient concentrations than shoots [48].

Why is it Necessary to Plant Green Manure in Agricultural Land?

Green manure practices safeguard biodiversity and provide ecosystem services to agricultural systems by transforming nitrogenous compounds and carbon into elements absorbed by the next crops (Thomas Oladeji Fabunmi, et al., 2012; Pieters, A.J., 1927). A study conducted by Zandvakili showed that roots had higher nutrient concentrations than shoots [36,48]. Another study found that using native species of the Caatinga Biome could significantly meet the nutritional demand of market-garden crops, thus providing a form of agro-ecological and sustainable production [36]. Consequently, the organic-matter fertility of soil is managed mainly by planting green manure. For example, Chinese milk vetch (Astragalus sinicus L.) planted and combined with chemical nitrogen fertilizer reduced the application of chemical fertilizer (Ma Yanqin and Huang Guiqin, 2019), increased rice yield by 28.7% in southern China (Qin, W., et al., 2012) [49], and decreased the seasonal methane (CH4) flux in the mono-rice cultivation system [49]. As the application rate of milk vetch increases, the yield and production of rice also increase (Chang, H.L., et al., 2010). The combined use of vegetable green manure and phosphorus-enriched compost can therefore be considered a reliable option for managing N and P fertility in the short and long term and for meeting plant needs [39]. Green manure and cover crops are well recognized in many agricultural systems, and their application in smallholdings provides multiple benefits: nitrogen fixation, soil organic-matter content, weed control, disease and pest management, and soil-erosion control. Green manuring is a significant low-cost, value-added technological option that integrates nature conservation with agricultural productivity (Pratt, O.J., 2016).

Ecological Compensation is Needed to Support Planting Green Manure

Although green manuring is a good practice for the sustainable development of agriculture, the increased cost of producing green manure reduces farmers' willingness to plant it. One method of encouraging farmers to adopt green manure is ecological compensation [9]. Eco-compensation is a trade-off-like approach in which compensatory rules assign values to the benefits of ecosystem services or to the damage from losses to the natural environment, with the compensation corresponding to the goods or services lost or gained by the environment. Compensatory law can be a mechanism to ensure the flow of ecosystem services (restoration of resources, recreation, or nature conservation) and to maintain the flow of natural capital on which the economy depends. Eco-compensation policy aims to encourage people to participate in sustainable agriculture (He, K., et al., 2016).
A study conducted in Spain showed that eco-compensation practices in environmental impact assessment (EIA) remain rare: only 407 of the 1,302 records of decision (RoDs) reviewed (31%) mentioned eco-compensation, and only 117 of the 1,302 RoDs (9%) described compensation measures (Villarroya, A. and Puig, J., 2013). The Willingness-To-Accept (WTA) eco-compensation standard for farmers fallowing winter wheat in Hengshui, Hebei province, was 0.00095 $/hm2 [50]. Many factors significantly and positively affected farmers' willingness to reduce pesticide use, namely farmers' environmental concern, awareness of pesticide residues, interest in agro-product quality, and input controls. The same study noted that regulations, countermeasures, and enhanced farmer self-control were essential for guiding farmers toward environmentally friendly measures in agricultural production (Zhang, L., et al., 2018). Conversely, initiating a Pigouvian tax, a tax paid by economic actors when their activities generate negative externalities, requires eliciting actors' Willingness-To-Pay (WTP) for the negative externalities of agriculture. A study using a dynamic equilibrium model to assess the welfare effects of subsidies estimated that the impact of a Pigouvian tax on the intensity of financial support was negative (Yang, L., et al., 2018). Eco-compensation based on financial support could add value to human well-being, maintain the dynamic effects of ecosystems, and support nature conservation, because a Pigouvian tax alone cannot correct the largest externalities in the long run (Dennis, W.C. and Glenn, C.L., 1980; Kohn, R.E., 1986) [51-55].

Conclusion

The use of green manure in agricultural fields brings various benefits in terms of economics and environmental safeguarding, and green manure practices should be adopted as a new sustainable-development approach in agriculture. This study reviewed various papers on the benefits of green manure. Green manure technologies were found to offer farmers various advantages, namely economic benefits, carbon sequestration, nitrogen fixation, improved SOC content, and biodiversity safeguarding. Given the high ecosystem-service values involved and the modest economic profits available to farmers, an ecological compensation system could be widely adopted as a new sustainable-development approach in farming systems.


Open Access Journals on Biochemistry and Molecular Biology

A New Method for in vivo Targeted Gene Transfer into Oligodendrocytes using Adenoviral and HIV Vectors

Introduction

Viral vectors such as retrovirus, adenovirus (Ad), lentivirus, and adeno-associated virus (AAV) are widely used in clinical gene therapy protocols as vehicles for the delivery of genes into mammalian cells [1-5]. One advantage of these vectors is their ability to transduce a wide range of cell types, but this lack of specificity is also a distinct disadvantage, especially for in vivo gene transfer. This is because transduction of both target and non-target cells results in massive wastage of the vector, and stable gene transfer into non-target cells by retroviral vectors may increase the likelihood of insertional mutagenesis in the transduced cells [6,7]. An efficient technique for targeted gene transfer would thus be highly desirable.

Several strategies for targeting cells for gene transfer have been reported. One approach is to achieve transcriptional targeting using a tissue-specific promoter [8,9]. With this strategy, however, non-target cells are also transduced, even though the promoter is silent, resulting in massive wastage of the vectors. Another strategy is receptor-mediated targeting. For example, many investigators have achieved targeted retroviral gene transfer through modification of the vector particles using single-chain antibody fragments [10,11] and ligand molecules [12,13], and by using pseudotyped viruses [14,15]. The low efficiency of gene transfer is a serious disadvantage of this approach, however. To overcome these problems, we developed a novel strategy for cell targeting based on tissue-specific expression of an ecotropic retroviral receptor gene using Ad and ecotropic retroviral vectors [16]. With this approach, we achieved efficient targeted retroviral transduction through Ad-mediated, tissue-specific expression of a retrovirus receptor. Unfortunately, non-dividing cells could not be transduced using this method, because cell division is required for retrovirus-mediated gene transfer [17].

Human immunodeficiency virus (HIV)-based retroviral vectors have several interesting features that make them potentially useful for targeted gene transfer. Because the CD4 antigen is a major receptor for HIV entry, HIV vectors transduce only human CD4-expressing cells [18]. In addition, the HIV vector itself provides receptor-mediated targeting based on the natural tropism of the virus, and it can transduce non-dividing cells [19]. To make full use of these features of the HIV vector, we developed a new method that expands the host range of the HIV vector through a two-step gene transfer protocol [20]. Using this protocol, we were able to stably transduce such non-dividing cells as neurons and terminally differentiated macrophages [21], suggesting that a combination of Ad and HIV vectors is potentially useful for transduction of a variety of non-dividing cell types. To further develop this strategy for targeted gene transfer into non-dividing cells, in the present study we constructed an Ad vector expressing the CD4 gene under the control of the oligodendrocyte (OL)-specific myelin basic protein (MBP) promoter [22,23]. We then tested whether OL-specific gene transfer could be achieved using Ad and HIV vectors, and whether this new method could be used for targeted gene transfer into non-dividing cells in vivo.

Materials and Methods

Cells

Cos, 3T3, HEK293, and CD4H (CD4+ HeLa) cells [20] were grown in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% heat-inactivated fetal bovine serum (FBS), 100 units/ml penicillin, and 100 μg/ml streptomycin (GIBCO-BRL, Gaithersburg, MD) at 37 °C in 5% carbon dioxide. The CG4 OL cell line was maintained and differentiated in growth medium or differentiation medium as described previously [24]. Primary cultures of mixed rat brain cells were prepared as previously described [25], with some modification. Briefly, embryonic brains were minced and dissociated by pipetting after treatment with 0.25% trypsin. After low-speed centrifugation (500 rpm, 3 min) to remove the debris and filtration through a 70-μm filter, the cells were grown in MEM/F12 medium (GIBCO-BRL, Gaithersburg, MD) supplemented with 10% FBS to obtain a mixed glial culture.

Production of Ad and HIV vectors

A replication-defective Ad vector was generated using the Saito method [20]. An Ad vector containing the CD4 gene under the control of the MBP promoter (kindly provided by Dr. Ikenaka, National Institute for Physiological Sciences, Okazaki, Japan) was constructed by inserting the expression unit containing the MBP promoter (1.3-kb HindIII fragment) [26], the coding sequence of CD4, and the rabbit ß-globin polyadenylation signal into pAdex1w [27]. Another Ad vector containing the CXCR4 gene [28,29] was constructed by inserting the coding sequence of CXCR4 (kindly provided by Dr. Matsushima, University of Tokyo, Tokyo, Japan) into pAdex1wCA [27]. These constructs and EcoT22I-digested Ad DNA-terminal protein complex were introduced together into HEK293 cells. The recombinant Ad vectors, Ad/MBPCD4 and Ad/CAGCXCR4, were isolated, purified, and concentrated by cesium chloride gradient centrifugation. The titers of Ad/MBPCD4 and Ad/CAGCXCR4 were 3×10^10 and 1×10^10 PFU/ml, respectively. An HIV vector carrying the enhanced green fluorescent protein (EGFP) gene (HXGFP) was generated by transient transfection of Cos cells with packaging (pCGPE) and vector (pHXGFP) plasmids as described previously [30]. Two days after transfection, the virus-containing supernatant was concentrated by ultrafiltration using a CENTRIPREP 50 (Millipore Corporation, Bedford, MA) [30]. The biological titer of the concentrated HIV vector was approximately 10^8 transducing units (TU)/ml when CD4H cells were used as the target cells.

Targeted Gene Transfer into OLs

To achieve OL-specific gene transfer, we applied the two-step gene transfer method using Ad and HIV vectors as described previously [20]. In brief, rat brain primary mixed cultures were incubated with Ad/MBPCD4 and Ad/CAGCXCR4 for 60 min at an MOI of 10. After two days of culture in complete medium, the cells were incubated with HXGFP for 48 h at an MOI of 100. For in vivo targeted gene transfer, 6- to 8-week-old Fischer 344 female rats (Japan Clea Co., Ltd., Tokyo, Japan) were anesthetized with ketamine (100 mg/kg) and nembutal (50 mg/kg) and placed into a stereotaxic frame. Thereafter, the skull was exposed, a hole was drilled over the injection site (0.7 mm anterior to bregma, 2.0 mm lateral, 4.0 mm vertical) [31], and 1 μl of Ad/MBPCD4 and Ad/CAGCXCR4 was infused over a period of 10 min using a Hamilton syringe with a 26-gauge needle. Three days later, 2 μl of HXGFP were infused into the same site. As a control, Ad/CAGLacZ (kindly provided by Dr. Saito, University of Tokyo, Tokyo, Japan) plus Ad/CAGCXCR4 and HXGFP, or HXGFP alone, were injected. All experiments involving animals were conducted according to the institutional guidelines of the Nippon Medical School.
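The relationship between vector titer, cell number, and multiplicity of infection (MOI) used in this protocol can be sanity-checked with a short calculation. This is a hypothetical sketch: the cell count below is an illustrative assumption, not a value reported in the study; only the titers (3×10^10 PFU/ml for Ad/MBPCD4, ~10^8 TU/ml for the HIV vector) and MOIs (10 and 100) come from the text.

```python
def inoculum_volume_ul(cell_count, target_moi, titer_per_ml):
    """Volume of vector stock (in microliters) needed to infect
    `cell_count` cells at `target_moi` infectious units (PFU or TU) per cell."""
    particles_needed = cell_count * target_moi
    return particles_needed / titer_per_ml * 1000.0  # convert ml -> µl

# Illustrative example: a culture well containing 1e6 cells
ad_vol = inoculum_volume_ul(1e6, target_moi=10, titer_per_ml=3e10)
hiv_vol = inoculum_volume_ul(1e6, target_moi=100, titer_per_ml=1e8)
print(f"Ad stock:  {ad_vol:.2f} ul")   # ~0.33 µl
print(f"HIV stock: {hiv_vol:.0f} ul")  # 1000 µl (1 ml)
```

The arithmetic illustrates why the high-titer Ad stocks require only sub-microliter volumes, while the lower-titer HIV vector must first be concentrated by ultrafiltration to keep inoculum volumes practical.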

Flow Cytometric Analysis

Expression of CD4 or CXCR4 was analyzed by flow cytometry (FACS Calibur, Becton Dickinson, Franklin Lakes, NJ) after staining with fluorescein isothiocyanate (FITC)-conjugated anti-human CD4 or phycoerythrin (PE)-conjugated anti-human CXCR4 (BD Pharmingen, San Diego, CA).

Immunocytochemistry and Immunohistochemistry

To identify OLs, we used anti-carbonic anhydrase II (CAII) and anti-MBP antibodies (Dako, Hamburg, Germany) as described previously [32,33]. The transduced mixed rat brain cells were fixed in 4% paraformaldehyde for 15 min at room temperature. After washing three times with PBS, the cells were incubated with anti-CAII antibody and normal rabbit serum (Dako, Hamburg, Germany) for 2 h at room temperature. The cells were then washed three times in PBS containing 0.03% Triton X and exposed to Texas Red-conjugated secondary antibody with normal rat serum (Dako, Hamburg, Germany). The stained cells were examined under an IX70 inverted fluorescence microscope (Olympus, Tokyo, Japan) or analyzed by FACS Calibur. To analyze in vivo transduction of rat brain, 5 days or 3 months after injection of HXGFP the rats were anesthetized and perfused with 4% paraformaldehyde. The brains were then removed, transferred to PBS containing 20% sucrose, and stored overnight at 4 °C. The next day, 40-μm-thick tissue sections were cut using a cryostat, after which the sections were blocked for 1.5 h in PBS containing 10% normal rabbit serum with 0.03% Triton X and incubated with anti-CAII or anti-MBP antibody overnight at 4 °C. The sections were then washed three times in PBS containing 0.03% Triton X and incubated with rhodamine (TRITC)-conjugated secondary antibody (Dako, Hamburg, Germany). After immunostaining, the tissue sections were mounted on slides, visualized, and photographed using a confocal laser-scanning microscope (Leica TCS SP, Heidelberg, Germany) as described previously [32].

Results


Figure 1: OL-specific CD4 expression by Ad/MBPCD4.
A. FACS analysis of Ad/MBPCD4 transduced cells.
• The upper panel shows a FACS analysis of Ad/MBPCD4-transduced 3T3 (a) and CG4 (b) cells
• As a control, the lower panel shows Ad/CAGCXCR4-transduced 3T3 (c) and CG4 (d) cells.
B. Immunocytochemical analysis of mixed primary rat brain cells transduced with Ad/MBPCD4.
• (a) Bright-field image.
• (b and c) Fluorescence microscopic images of cells double immuno stained with anti-CD4 (FITC) (b) and anti-CAII (Texas-Red) (c)
• (d) Merged image combining b with c.

Oligodendrocytes, which are myelin-forming cells, are an important target for gene therapy aimed at treating demyelinating diseases such as multiple sclerosis and metachromatic leukodystrophy. Because OLs are postmitotic, they cannot be transduced using Moloney murine leukemia virus-based retroviral vectors. On the other hand, both Ad and HIV vectors are able to transduce non-dividing cells. Furthermore, Ad vectors with the MBP promoter have proven useful for OL-specific gene expression both in vitro and in vivo [33]. To assess OL-specific expression, we first examined CD4 and CXCR4 expression in 3T3 and CG4 cells transduced with Ad/MBPCD4 or Ad/CAGCXCR4. Expression of CXCR4 was detected in both cell types transduced with Ad/CAGCXCR4, while expression of CD4 was detected only in Ad/MBPCD4-transduced CG4 cells (Figure 1A). We then confirmed the OL-specific expression of CD4 using primary mixed rat brain cell cultures. Among the different cell types transduced with Ad/MBPCD4, only CAII-positive cells expressed CD4 (Figure 1B), indicating that Ad/MBPCD4 selectively mediated expression in OLs. We next evaluated the utility of our two-step gene transfer system using mixed rat brain cells first incubated with Ad/MBPCD4; because non-human cells do not express CXCR4, a co-receptor for HIV, these primary cells were also incubated with Ad/CAGCXCR4. Two days later, the cells were incubated with HXGFP and, after an additional 2 days, they were stained with anti-CAII or anti-MBP antibody. We found that all EGFP-positive cells were also CAII-positive (Figure 2A). In addition, FACS analysis showed that only MBP-positive cells could be transduced with HXGFP (Figure 2B), indicating that OLs first transduced with Ad vectors were then selectively transduced with the HIV vector. Thus, transient selective expression of CD4 using Ad/MBPCD4 is apparently sufficient to render OLs susceptible to HIV-mediated gene transfer.
To then determine whether this new method could be used in vivo to target gene transfer into OLs, we injected the Ad and HIV vectors into the brains of adult rats. Five days after injection of the HIV vector, some of the rats were fixed and their brains were examined using confocal laser-scanning microscopy.


Figure 2: OL-specific transduction using a two-step gene transfer method.
A. Immunocytochemical analysis of primary rat brain cells transduced using a two-step gene transfer method.
• (a) Bright-field image.
• (b and c) Fluorescence microscopic images of GFP (b) and CAII (Texas-Red) (c)
• (d) Merged image combining (b) with (c).
B. FACS analysis of primary rat brain cells transduced by two-step gene transfer method.
• FACS analysis of primary rat brain cells transduced with Ad/CAGLacZ (a) or Ad/MBPCD4 (b) plus Ad/CAGCXCR4 and HXGFP. After immunostaining with anti-MBP, the cells were analyzed by flow cytometry.

EGFP-positive cells were not detected in rats injected with the control vector Ad/CAGLacZ plus Ad/CAGCXCR4 and HXGFP, or with HXGFP alone (data not shown). On the other hand, we were able to detect EGFP-positive cells in rats injected with Ad/MBPCD4 plus Ad/CAGCXCR4 and HXGFP (Figure 3A). Moreover, immunohistochemical staining using anti-CAII and anti-MBP (not shown) antibodies revealed that all of the EGFP-positive cells were also CAII-positive (Figures 3B & 3C) and MBP-positive, indicating that only OLs were transduced using this two-step gene transfer method. In some cases, the rats were not sacrificed and analyzed until 3 months after vector injection. Notably, the results obtained 3 months after vector injection were nearly the same as those obtained 5 days after injection (Figures 3D-3F). This strongly suggests that we were able to integrate the transgene into the host genome and obtain sustained transgene expression.


Figure 3: Immunohistochemical analysis of rat brain stained with anti-CAII.
• 5 days (A, B and C) or 3 months (D, E and F) after transduction using the two-step gene transfer method.
• Confocal microscopic images of GFP-positive cells (A and D), CAII-positive cells (B and E), and merged images (C and F) of the transduced brain tissue sections.

Discussion

Cell targeting is particularly important for in vivo gene transfer into brain, as stable genetic modification of some neurons or neuronal networks could cause serious psychological changes. Therefore, if the targeted cells are glia, undesirable gene transfer into neurons must be avoided. Our findings show that by using a two-step gene transfer system with Ad and HIV vectors we could selectively transduce OLs both in vitro and in vivo. Moreover, these findings imply that with the appropriate combination of vectors and promoters, one could also selectively transfer genes into neuronal cells.

The transduction efficiency for mixed primary rat brain cells was only 4% to 5% (Figure 2B). One likely reason for this low efficiency is that the HIV vector cannot transduce non-human cells, whose endogenous CXCR4 does not support vector entry; we therefore had to use two Ad vectors, supplying both CD4 and human CXCR4, to target rat OLs. In contrast, only one Ad vector, Ad/MBPCD4, would be needed for HIV vector-mediated gene transfer in human brain, which we would expect to increase transduction efficiency. Another possible reason for the low efficiency is the toxicity of Ad vectors [34]; with Ad vectors it is difficult to achieve highly efficient transgene expression without toxicity [35]. To overcome this problem, we used gutless Ad vectors, which retain no viral genes and have proven to be highly efficient with little toxicity or immunogenicity [36]. In addition, group D Ad reportedly infects primary central nervous system cells more efficiently than group C [37]. Thus, using recombinant gutless Ad vectors generated from type 17 (group D) Ad may increase the efficiency of transduction into central nervous system cells with the two-step gene transfer method.
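The transduction efficiency quoted above is the fraction of analyzed cells scoring EGFP-positive in the flow-cytometry gate. As a minimal sketch of that calculation, the snippet below uses invented event counts (not the actual data behind Figure 2B) to show how a 4% to 5% figure would be derived:

```python
# Hypothetical illustration: deriving percent transduction efficiency from
# flow-cytometry event counts. The numbers are invented for the example
# and do not come from the experiments described in the text.

def transduction_efficiency(egfp_positive: int, total_events: int) -> float:
    """Percentage of gated cells that are EGFP-positive."""
    if total_events <= 0:
        raise ValueError("total_events must be positive")
    return 100.0 * egfp_positive / total_events

# e.g. 450 EGFP-positive events out of 10,000 gated cells
print(transduction_efficiency(450, 10_000))  # 4.5
```

The same ratio is what a cytometry package reports as "percent positive" for the EGFP gate; only the gating strategy (here assumed to be a single fluorescence threshold) differs between instruments.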

In summary, we have developed a new method of targeted gene transfer into OLs using Ad vectors carrying a tissue-specific promoter together with an HIV vector. This method can be used with non-dividing cells both in vitro and in vivo. Furthermore, by choosing the appropriate promoters, it may be useful for in vivo targeted gene transfer into any type of non-dividing cell.
