Journals on Medical Microbiology

Availability, Price, Tradition, Religion, Income, Social, Developmental and Economic Influences on Meat Consumption

Introduction

Up to a certain income level, the amount of meat eaten varies with income. In the relatively affluent Western world, where the proportion of disposable income spent on food has fallen steadily over the past generation, there is now little if any difference between the amounts of meat eaten by different income groups; this contrasts with the situation in Third World countries [1-6]. Meat consumption is very high in meat-producing areas such as Uruguay, Argentina, Australia and New Zealand, at around three hundred grams per head per day, compared with an average of ten grams in India, Indonesia and Sri Lanka. The contrast between total meat supplies in developed and developing countries is equally stark: allowing for exports, imports and stock changes, per capita production in the former is five times that in the developing countries, and the relative sizes of production from the different types of animals also differ. These figures frame the role of meat in the diets of undeveloped and developing countries. Meat is held in high esteem in most communities: it has prestige value, it is often regarded as the central food around which meals are planned, various types of meat are made the basis of festive and celebratory occasions, and from the popular as well as the scientific point of view it is regarded as a food of high nutritive value [7-12]. While meat is not essential in the diet, as witnessed by the large number of vegetarians who have a nutritionally adequate diet, the inclusion of animal products makes it easier to ensure a good diet.

There is a marked difference at the present time in attitudes towards meat between people in developing and industrialized communities. In the former, where meat is in short supply, its amount can be taken as a measure of the nutritional quality of the diet. Where a typical diet is heavily dependent on one type of cereal or root crop, meat, even in small amounts, complements the staple food: it provides a relatively rich source of well-absorbed iron and also improves the absorption of iron from other foods, its amino acid composition complements that of many plant foods, and it is a concentrated source of the B vitamins, including vitamin B12, which is absent from plant foods. Hence the pressure to increase the availability of meat products [13-18].

The Social Effect on Meat Consumption

In industrialized countries, where food of all kinds is plentiful and cheap, there is concern, whether or not misplaced, about the potentially harmful effects of a high intake of saturated fat from animal foods; emphasis on the continuous development of regulations dealing with hygiene in abattoirs and during subsequent handling; concern about hormones administered to cattle; and concern about what is perceived as the excessive addition of water to some processed products. These are concerns that can scarcely be afforded in developing countries when balanced against food supplies. With increasing mechanization in industrialized communities, the steady fall in human energy expenditure, and consequently in per capita food consumption, poses a potential problem in achieving an adequate intake of nutrients even where food is abundant. Given the variety of food available, a diet of 8 MJ (2,000 kcal) or more per day is likely to supply enough of all the nutrients, but when intake is 6.5 to 7 MJ (1,600-1,800 kcal) per day the consumer needs to make an informed choice of food to ensure an adequate intake of nutrients. In Western European countries, where the daily average energy intake of women is about 6.5 MJ and that of men 8 MJ, there are reports of biochemical signs of deficiencies of several B vitamins and of iron, though it is not clear whether these are accompanied by functional defects [19-25]. In industrialized countries there have been slow but continuous changes over the years in the relative amounts of the different types of meat consumed, depending partly on price and influenced by fashion, advertising, etc. During recent years the health aspects, or more correctly the perceived health aspects, have become a factor. Concerns about public health in industrialized countries, where coronary heart disease and other diseases of affluence are common, have led to recommendations to the public to modify their diet, popularized as dietary guidelines. These particularly recommend a reduction in fat consumption, especially of saturated fatty acids, and consequently, even if incorrectly, of red meat. This has led in some sections of the population to a relative increase in the consumption of poultry and fish at the expense of red meat. In addition there is concern, whether or not misplaced, about the presence in meat of pesticides and of residues of the hormones and growth promoters used to increase yields, and concern about human diseases thought to be transmitted by beef, together with an increase, for many reasons, in vegetarianism [26-31].

Meat as a Source of Protein for Human Protein Requirements

Human protein requirements have been thoroughly investigated over the years and are currently estimated to be fifty-five grams per day for the adult man and forty-five grams for the woman, with higher requirements in various disease states and conditions of stress. These amounts refer to protein of what is termed good quality that is highly digestible; otherwise the amount ingested must be increased proportionately to compensate for lower quality and lower digestibility [32-37].
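That proportional compensation can be made explicit. The following is a minimal sketch, assuming the requirement figures above and illustrative quality/digestibility values for a cereal-based diet; it is not taken from any official FAO/WHO table:

```python
# Sketch: scale a reference protein requirement when the dietary protein
# is of lower quality and digestibility than the reference protein.
# Reference values follow the text; the 0.7 / 0.85 figures are illustrative.

def adjusted_protein_requirement(reference_g_per_day: float,
                                 quality: float,
                                 digestibility: float) -> float:
    """Grams of dietary protein needed to match a requirement stated
    for fully digestible, good-quality protein."""
    return reference_g_per_day / (quality * digestibility)

# Adult man (55 g/day reference) on a cereal-based diet
# (quality ~0.7, digestibility ~0.85):
print(round(adjusted_protein_requirement(55, 0.7, 0.85)))  # ~92 g/day
print(round(adjusted_protein_requirement(45, 0.7, 0.85)))  # woman: ~76 g/day
```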

Protein Quality

The quality of a protein is a measure of its ability to satisfy human requirements for amino acids. All proteins, both dietary and tissue proteins, consist of two groups of amino acids: those that must be ingested ready-made, i.e. are essential in the diet, and those that can be synthesized in the body in adequate amounts from the essential amino acids. Eight of the twenty food amino acids are essential for adults and ten for children. The quality of a dietary protein can be measured in various ways, but basically it is the ratio of the available amino acids in the food or diet compared with the needs. In earlier literature this was expressed on a percentage scale, but with the adoption of the S.I. system of nomenclature it is expressed as a ratio. Thus a ratio of 1.0 means that the amino acids available from the dietary proteins are in the exact proportions needed to satisfy human needs; a ratio of 0.5 means that the amount of one of the essential amino acids present is only half of that required. If one essential amino acid were completely absent, the protein quality would be zero. There is a popular impression, originating at one time from nutrition textbooks, that the quality of proteins from animal sources is greatly superior to that from plant sources. This is true only to the extent that many animal sources have a Net Protein Utilization (NPU) around 0.75, while that of many, but not all, plant foods is 0.5-0.6.
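The ratio described above can be computed directly: for each essential amino acid, divide the amount supplied per gram of protein by the amount required, and take the smallest ratio (the limiting amino acid). A minimal sketch follows; the requirement pattern and food compositions are illustrative round numbers, not values from any official scoring pattern:

```python
# Sketch: protein quality as the lowest supplied/required ratio across the
# essential amino acids (the "limiting amino acid"), capped at 1.0.
# All mg-per-g-protein figures below are illustrative.

REQUIREMENT_MG_PER_G = {"lysine": 58, "threonine": 34, "methionine+cysteine": 25}

WHEAT = {"lysine": 25, "threonine": 28, "methionine+cysteine": 35}
BEANS = {"lysine": 72, "threonine": 40, "methionine+cysteine": 20}

def amino_acid_score(food_mg_per_g: dict) -> float:
    ratios = [food_mg_per_g[aa] / req for aa, req in REQUIREMENT_MG_PER_G.items()]
    return min(1.0, min(ratios))

def blend(a: dict, b: dict, frac_a: float) -> dict:
    # Assumes the two foods contribute protein in the given proportions.
    return {aa: frac_a * a[aa] + (1 - frac_a) * b[aa] for aa in a}

print(round(amino_acid_score(WHEAT), 2))                 # ~0.43, lysine-limited
print(round(amino_acid_score(BEANS), 2))                 # ~0.80, sulphur amino acids limit
print(round(amino_acid_score(blend(WHEAT, BEANS, 0.5)), 2))  # ~0.84, the blend scores higher
```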

However, after infancy people consume a wide variety of proteins from different foods, and a shortfall of any essential amino acid in one food is usually made good, at least in part, by a relative surplus from another food; this is termed complementation (illustrated by the wheat/bean blend in the sketch above). As a result, the protein quality of whole diets even in developing countries rarely falls below an NPU of 0.7, a value that can be compared with the average of 0.8 in industrialized countries. The value of meat in this respect is that it is a relatively concentrated source of protein of high quality and high digestibility, about 0.95 compared with 0.8-0.9 for many plant foods, and it supplies a relative surplus of one essential amino acid, lysine, which is in relatively short supply in most cereals [38-44].

The Effect of Cooking on Protein Quality

Apart from the inherent quality of the various proteins, a reduction in quality takes place if the amino acids are damaged when the food is cooked. At temperatures below 100°C, when the proteins are coagulated, there is no change in the nutritional quality of the meat. The first changes take place when the food is heated to around 100°C in the presence of moisture and reducing sugars, present naturally or added to the food. There is a chemical reaction between part of one essential amino acid, lysine, and a sugar to form a bond that cannot be broken during digestion, so that part of the lysine is rendered unavailable. When proteins are analyzed to determine their amino acid composition, the procedure involves a preliminary hydrolysis with strong acid which does break the lysine-sugar bond, so chemical analysis does not reveal this type of damage and special methods are needed. At higher temperatures, or with more prolonged heating, the lysine in the food protein can react with other chemical groupings within the protein itself and more becomes unavailable; in addition, the sulphur amino acids are rendered partly unavailable. The lysine-sugar reaction results in a brown-colored compound which produces an attractive flavour in the food and is the main cause of the color of bread crust and roast meat. While such severe heating reduces the amount of available lysine in these foods, the loss is nutritionally insignificant since it affects only a very small fraction of the total amount present. At the temperatures needed to cook meat there is little loss of available lysine or sulphur amino acids, but there can be some loss if the meat is heated together with reducing substances, as may happen when meat is canned with the addition of a starch-containing gravy or other ingredients. Overall, the damage to protein caused by cooking is of little practical significance, and it can be argued that if there is meat in the diet the quantity of protein is likely to compensate for any shortfall in quality. The nutritional quality of the proteins of meat rich in connective tissue is low, since collagen and elastin are poor in the sulphur amino acids: there is only 0.8 g of each per 100 g of total protein, compared with values of 2.6 and 1.3 respectively in "good" meat. Meat rich in connective tissue is tough to eat, and such meat is often used for canning, since the relatively high temperature involved in the sterilization process partly hydrolyses the collagen, making the product more palatable. However, it still results in a product with an NPU as low as 0.5, compared with a value of 0.75-0.8 for good quality meat [45-51].

The Adequacy of Dietary Protein

The protein requirement of an individual is defined as the lowest level of protein intake that will balance the loss of nitrogen from the body in persons maintaining energy balance at modest levels of physical activity; the requirement must also allow for desirable rates of protein deposition during growth and pregnancy. When energy intake is inadequate, some of the dietary protein is diverted from tissue synthesis to supply energy for general physical activity; this occurs at times of food shortage and also in disease states in which food is incompletely absorbed and utilized. A diet adequate in energy is almost always adequate in protein, both in quantity and in quality. For example, an adult needs an amount of protein equivalent to 7-8% of total energy intake, and since most cereals contain 8-12% protein, even a diet composed entirely of cereal would, if enough were available and could be consumed to satisfy energy needs, satisfy protein needs at the same time. Growing children and pregnant and nursing mothers have higher protein requirements, as do people suffering from infections, intestinal parasites and conditions in which protein catabolism is enhanced. During the stress that accompanies fevers, broken bones, burns and other traumas there is considerable loss of protein from the tissues, which has to be restored during convalescence, so high intakes of protein are needed at this time together with an adequate intake of energy.
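That arithmetic can be checked directly. A minimal sketch, using the 7-8% protein-energy target and the 8-12% cereal protein content quoted above, with an assumed cereal energy density of roughly 350 kcal per 100 g:

```python
# Sketch: does an all-cereal diet that satisfies energy needs also satisfy
# protein needs? The 7-8% energy target and 8-12% protein content follow
# the text; the 350 kcal/100 g energy density is an assumed typical value.

PROTEIN_KCAL_PER_G = 4.0

def protein_energy_percent(protein_g: float, energy_kcal: float) -> float:
    return 100 * protein_g * PROTEIN_KCAL_PER_G / energy_kcal

energy_need_kcal = 2400
grain_g = energy_need_kcal / 350 * 100   # ~686 g of grain meets the energy need
protein_g = grain_g * 0.10               # ~69 g protein comes along with it

print(round(protein_g))                                        # ~69 g
print(round(protein_energy_percent(protein_g, energy_need_kcal), 1))  # ~11.4%
# 11.4% of energy from protein exceeds the 7-8% target, as the text argues.
```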

The digestibility of the proteins of various diets varies considerably. For example, the digestibility of typical Western and Chinese diets is 0.95, while that of the Indian rice diet and the Brazilian mixed diet is 0.8. Digestibility is high in diets that include meat and low when maize and beans predominate. An increase in the amount of protein eaten beyond the requirement figures compensates for any shortfall in digestibility and protein quality [52-58].

Meat as a Source of Vitamins and Minerals

Meat and meat products are important sources of all the B-complex vitamins, including thiamin, riboflavin, niacin, biotin, vitamins B6 and B12, pantothenic acid and folacin. The last two are especially abundant in the liver which, together with certain other organs, is rich in vitamin A and supplies appreciable amounts of vitamins D, E and K. Meat is an excellent source of some minerals, such as iron, copper, zinc and manganese, and plays an important role in the prevention of zinc deficiency and particularly of iron deficiency, which is widespread [59-66].

Meat Iron: The amount of iron absorbed from the diet depends on a variety of factors, including its chemical form, the simultaneous presence of other food ingredients that can enhance or inhibit absorption, and various physiological factors in the individual, including his or her iron status. Overall, in setting Recommended Daily Intakes of nutrients, the proportion of iron absorbed from a mixed diet is usually taken as ten percent. Half of the iron in meat is present as haemoglobin. This is well absorbed, at about fifteen to thirty-five percent, a figure that can be contrasted with other forms of iron, such as that from plant foods, at one to ten percent. Not only is the iron of meat well absorbed, but it enhances the absorption of iron from other sources; for example, the addition of meat to a legume/cereal diet can double the amount of iron absorbed and so contribute significantly to the prevention of anaemia, which is so widespread in developing countries. Zinc is present in all tissues of the body and is a component of more than fifty enzymes. Meat is the richest dietary source of zinc and supplies one third to one half of the total zinc intake of meat-eaters. Dietary deficiency is uncommon but has been found in adolescent boys in the Middle East eating a poor diet based largely on unleavened bread. These points frame the public health concerns associated with the consumption of meat [67-75].
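The absorption figures quoted above lend themselves to a back-of-envelope estimate. A minimal sketch, assuming midpoint absorption rates from the ranges in the text and applying the doubling effect of meat to the non-haem fraction; the meal composition is illustrative:

```python
# Sketch: estimate absorbed iron from a meal containing haem iron (meat)
# and non-haem iron (plant foods). Rates are midpoints of the ranges in
# the text (haem 15-35%, non-haem 1-10%); treating the presence of meat
# as doubling non-haem absorption is the effect the text describes.

def absorbed_iron_mg(haem_mg: float, nonhaem_mg: float, meat_present: bool) -> float:
    haem_rate = 0.25            # midpoint of 15-35%
    nonhaem_rate = 0.05         # midpoint of 1-10%
    if meat_present:
        nonhaem_rate *= 2       # meat roughly doubles non-haem absorption
    return haem_mg * haem_rate + nonhaem_mg * nonhaem_rate

# Legume/cereal meal with 8 mg non-haem iron, without and with 2 mg haem iron:
print(round(absorbed_iron_mg(0.0, 8.0, meat_present=False), 2))  # 0.40 mg
print(round(absorbed_iron_mg(2.0, 8.0, meat_present=True), 2))   # 1.30 mg
```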

Coronary or Ischaemic Heart Disease

A major cause of death in some parts of the industrialised world is coronary heart disease (CHD), and saturated fatty acids have been implicated as an important dietary risk factor. Since about a quarter of the saturated fatty acids in the diet is supplied by meat fat, the consumption of meat itself has come under fire. The first stage of the disease is a narrowing of the coronary arteries by deposition of a complex fatty mixture on the walls, a process termed atherosclerosis. The fatal stage is the formation of a blood clot that blocks the narrowed artery: thrombosis. Even if the thrombosis is not fatal, the reduced blood flow to the heart muscle deprives it of oxygen and can lead to extensive damage: myocardial infarction. Despite many years of intensive investigation the real cause of CHD is not known, but a large number of what are termed risk factors have been identified, including a family history of CHD, smoking, lack of exercise, various types of stress and certain disease states, together with a number of dietary factors. The saturated fatty acids myristic and palmitic have been established as the most important of the dietary risk factors in coronary heart disease. There are three types of lipoproteins in the blood: the low density lipoproteins (LDL), in which 46% of the molecule is cholesterol; the high density lipoproteins (HDL), which include twenty percent as cholesterol; and the very low density lipoproteins (VLDL), which have eight percent cholesterol.

High levels of total blood cholesterol are associated with the incidence of CHD, and high intakes of saturated fatty acids elevate blood cholesterol levels: hence the association between dietary saturated fatty acids and CHD. It is the LDL that appears to be the main problem, while HDL appear to be protective. This lipid hypothesis of the causation of CHD has led to the adoption in many countries of dietary guidelines which, among other objectives, are intended to reduce the intake of saturated fatty acids relative to unsaturated fatty acids and so reduce blood levels of LDL [76-82].

Types of Fatty Acids

Saturated Fatty Acids (SFA): Two of the saturated fatty acids, myristic and palmitic acids, appear to be the principal dietary factors that increase blood cholesterol, and they do so by increasing LDL. The other main SFA in the diet, stearic acid, does not have the same effect, apparently because it is converted to oleic acid, which is monounsaturated; fatty acids of shorter chain length appear to have no effect. In explaining the terms saturated and unsaturated fatty acids to the consumer, SFA have been equated with animal fats, so meat fat is perceived as saturated; in fact this is only relative. For example, pork lard is 40% SFA and beef tallow 43-50% SFA, depending on the part of the body from which it is derived. These figures can be compared with 20-25% SFA in vegetable oils, which are perceived as unsaturated. In lamb fat the proportion of SFA is about 40% or less, and in four of the six samples of meat listed there is a higher proportion of monounsaturates than of SFA. This perception of meat fat as saturated has led to the belief that meat, particularly red meat, should be avoided. In fact it has been shown that a reduction of total fat intake, while still including in the diet 180 g of lean meat containing 8.5% fat, can result in a reduction in blood cholesterol levels. The relation between diet and coronary heart disease is not only a subject of considerable misunderstanding in the minds of consumers but also a subject of some controversy among medical scientists [83-90].

Monounsaturated Fatty Acids (MUFA): The fatty acid of main interest is oleic acid, plentiful in olive, rapeseed and high-oleic safflower oils. The relatively high intake of olive oil, and consequently the proportionately low intake of SFA, are believed to be important dietary factors in the low incidence of CHD in the Mediterranean countries compared with northern Europe. It is not clear whether oleic acid confers direct protection or simply replaces SFA in the diet [91-98].

Polyunsaturated Fatty Acids (PUFA): These are fatty acids with between 2 and 6 double bonds and long carbon chains of 18 to 22 carbon atoms. Linoleic acid (18 carbon atoms, 2 double bonds) and linolenic acid are plentiful in many vegetable oils. The very long chain fatty acids, eicosapentaenoic and docosahexaenoic, are plentiful in fish oils, and smaller amounts are present in some meat fats. These very long chain PUFA appear to offer direct protection against heart disease, particularly against thrombosis, but it is not clear whether the other PUFA in the diet, from vegetable oils, offer protection or simply displace SFA. Consequently it is often recommended that vegetable oils rich in PUFA should not simply be added to a diet but should be used to replace other fats when fat is needed in formulating food products. Linoleic and linolenic acids are essential in the diet (they were at one time termed vitamin F), and the very long chain fatty acids are formed from them in the body. It is possible that the rate of their formation may not be adequate under all circumstances, so there may be benefit in consuming some of these very long chain PUFA ready-made in the diet [99-106].

Trans Fatty Acids: Unsaturated fatty acids exist in nature in two structural forms, termed cis and trans. It is the cis forms that are used in the production of fatty products such as special margarines. The trans forms are produced when oils are hydrogenated to make the hard fats for some margarines, and small amounts are found in the fats of ruminants, where they are formed by bacterial hydrogenation in the rumen. Trans fatty acids have been shown to have an adverse effect on both LDL and HDL and so are considered potentially harmful. When calculating the ratio of PUFA to SFA in diets, the trans fatty acids are often included with the SFA [107-111].

Cholesterol: Cholesterol is a fatty compound involved in the transport of fat in the blood stream and is also part of the structure of the cell membranes of the body's tissues. It is not a dietary essential, since adequate amounts are synthesized in the body from other dietary ingredients. Confusion has arisen between the terms blood cholesterol and dietary cholesterol. For most individuals, dietary cholesterol has little or no effect on blood cholesterol levels, because reduced synthesis in the body compensates for increased dietary intake. However, some individuals are sensitive to dietary cholesterol, and most authorities advise a general reduction in cholesterol intake for everyone. Meat supplies about one third of the dietary cholesterol in many Western diets, with the remainder coming from eggs and dairy products. Since all these foods are valuable sources of nutrients, there could be some nutritional risk in restricting their intake. In addition to playing an important role in CHD, dietary saturated fats have been implicated in hypertension, stroke, diabetes and certain forms of cancer, so all dietary guidelines include recommendations to reduce total fat intake and especially that of saturated fats. Total fat should be reduced to 20-30% of total energy intake, with not more than 10% from saturates, 10-15% from MUFA and PUFA at 3% or more, moving the P/S ratio towards 1.0, together with a reduction in dietary cholesterol to around 300 mg or less per day [112-120].
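These targets translate directly into arithmetic on a day's intake. A minimal sketch computing the share of energy from fat and the P/S ratio (with trans fatty acids counted alongside the saturates, as noted in the previous section); the intake figures are illustrative:

```python
# Sketch: check a day's intake against the guideline figures in the text:
# total fat 20-30% of energy, saturates not more than 10%, P/S towards 1.0.
# All gram and kcal figures below are illustrative.

FAT_KCAL_PER_G = 9.0

def fat_energy_percent(fat_g: float, energy_kcal: float) -> float:
    return 100 * fat_g * FAT_KCAL_PER_G / energy_kcal

def ps_ratio(pufa_g: float, sfa_g: float, trans_g: float = 0.0) -> float:
    # Trans fatty acids are conventionally grouped with the saturates here.
    return pufa_g / (sfa_g + trans_g)

energy_kcal = 2200
sfa_g, mufa_g, pufa_g, trans_g = 22.0, 28.0, 18.0, 4.0
total_fat_g = sfa_g + mufa_g + pufa_g + trans_g

print(round(fat_energy_percent(total_fat_g, energy_kcal), 1))       # ~29.5% of energy
print(round(fat_energy_percent(sfa_g + trans_g, energy_kcal), 1))   # saturates+trans ~10.6%
print(round(ps_ratio(pufa_g, sfa_g, trans_g), 2))                   # P/S ~0.69
```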

Poultry Meat Versus Red Meat

Dietary guidelines sometimes include advice to substitute, at least in part, chicken for red meat. Chicken meat including its skin contains about the same amount of fat as medium-fat red meat, around twenty percent; it is important to remove the skin with the adhering subcutaneous fat, which reduces the fat content to around 5%, no lower than the figure for lean meat. However, chicken flesh has less saturated fatty acid and more PUFA (fourteen percent) than lean meat (forty-five percent saturated and four percent PUFA, respectively). Duck flesh is very fat, containing about ten percent fat, or forty-five percent when the skin and subcutaneous fat are included; only twenty-seven percent of duck fat is saturated. The meat of the game birds grouse, partridge, pheasant and pigeon contains about five, seven, nine and thirteen percent fat, respectively, of which about one quarter is saturated. Apart from the differences in the amounts and types of fatty acids in the various kinds of meat, poultry and game, their nutrient compositions are similar [76-83].
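The comparison in this paragraph reduces to two numbers per meat: total fat content and the saturated fraction of that fat. A minimal sketch using the approximate figures quoted above; where the text gives no saturated fraction (skinless chicken), an assumed value is used and marked as such:

```python
# Sketch: grams of total and saturated fat in a 100 g portion, using the
# approximate percentages quoted in the text. The chicken saturated
# fraction (0.33) is an assumed illustrative value, not from the text.

MEATS = {
    # name: (fat % of portion weight, saturated fraction of the fat)
    "chicken with skin": (20.0, 0.33),   # sat fraction assumed
    "chicken, skinless": (5.0,  0.33),   # sat fraction assumed
    "lean red meat":     (8.5,  0.45),   # 45% saturated, per the text
    "duck with skin":    (45.0, 0.27),   # 27% saturated, per the text
    "pheasant":          (9.0,  0.25),   # game birds ~one quarter saturated
}

for name, (fat_pct, sat_frac) in MEATS.items():
    fat_g = fat_pct              # grams per 100 g portion
    sat_g = fat_g * sat_frac
    print(f"{name:18s} fat {fat_g:5.1f} g, saturated {sat_g:4.1f} g per 100 g")
```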

Toxic Compounds Formed During the Processing and Cooking of Meat

While cooking is necessary to develop the desirable flavours in meat, the oxidation of the fats, especially at frying temperatures, can give rise to compounds that decompose to aldehydes, esters, alcohols and short chain carboxylic acids with undesirable flavours. Meats are particularly susceptible because the unsaturated lipids present are more readily oxidised and because of catalysis by haem and non-haem iron. The more PUFA present, the greater the likelihood of oxidation, so pork, duck and chicken are the most susceptible; other types of meat, e.g. lamb, turkey and beef, are less so. The adverse effect of these oxidation products on eating quality is well recognized, but more recently it has been suggested that some of them may be carcinogenic and may also be involved in the ageing process and in CHD. However, it is possible or even likely that the unpleasant flavours would cause rejection of the food at levels below the harmful range. Cholesterol can also be oxidized, and its oxidation product has been suggested as a possible factor in CHD [84-92].

Nitrosamines: The nitrites used in curing salts can react with the amines commonly present in food to form nitrosamines. These have been shown to be carcinogenic in all animal species examined, but it is not clear, despite years of intensive research, whether the amounts present in cured meat affect human beings. The problem is particularly difficult because nitrosamines have been found in human gastric juice, possibly formed from the nitrites and amines naturally present in the diet. As a precaution, legally enforced in some countries, there is a tendency to reduce the amount of nitrite used in the curing mixture and to add vitamin C, which inhibits the formation of nitrosamines [115-120]. Erythorbic acid and tocopherol are also effective in reducing nitrosamine formation. The problem is complex, since the curing process is designed to prevent the growth of Clostridium botulinum, which is responsible for botulism, and the risk of botulism is increased if the concentration of nitrate-nitrite is reduced too far. Moreover, cigarettes contribute far greater amounts of nitrosamines, up to one hundred times as much as cured meat [93-101].

Residues of Drugs and Pesticides: Residues of drugs, pesticides and agricultural chemicals can be found in small amounts in meat and meat products. Pesticides, for example, may be applied specifically to the animals to control insects or intestinal parasites, but may also be present in the meat as a result of exposure of the animals to chemicals used on buildings, grazing areas and crops. While there is no clear evidence that these small amounts cause harm to the consumer, they are perceived as a risk. For this reason there is widespread legislation to test for and control a range of chemical substances that may be present in meat. The problem is complicated because several hundred substances are used to treat animals, to preserve animal health and to improve animal production [110-116]. These include antimicrobial agents, beta-adrenoreceptor blocking agents, anthelmintics, tranquillizers, anti-coccidial agents, vasodilators and anaesthetics. The potential safety problems arise from the possibility of residues of these drugs and their metabolites remaining in the tissues consumed by human beings. Some tranquillizers, for example, are used in pigs in the immediate pre-slaughter period, when there is no time for their removal through normal metabolic processes; they can persist in the human body, so that repeated intakes could possibly result in accumulation of the drugs. In order to protect consumers from such risks, codes of practice for the control of the use of veterinary drugs have been established, providing guidelines for the prescription, application, distribution and control of the drugs. Where sufficient scientific information about a drug is available, an Acceptable Daily Intake is set as a measure of the amount of a veterinary drug or food additive, expressed on a body weight basis, that can be ingested over a lifetime without appreciable health risk [102-109].
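Because the Acceptable Daily Intake is expressed per kilogram of body weight, the comparison it supports is simple arithmetic: a consumer's permitted daily amount scales with weight and can be set against the residue in a portion of meat. A minimal sketch with entirely hypothetical numbers; real ADIs and residue concentrations come from regulatory tables:

```python
# Sketch: compare the residue ingested from a meat portion against the
# intake permitted by an Acceptable Daily Intake (ADI). All numeric
# values below are hypothetical illustrations.

def permitted_daily_intake_ug(adi_ug_per_kg_bw: float, body_weight_kg: float) -> float:
    """ADI is expressed per kg of body weight per day."""
    return adi_ug_per_kg_bw * body_weight_kg

def residue_ingested_ug(residue_ug_per_kg_meat: float, portion_g: float) -> float:
    return residue_ug_per_kg_meat * portion_g / 1000.0

permitted = permitted_daily_intake_ug(adi_ug_per_kg_bw=1.0, body_weight_kg=60.0)  # 60 ug/day
ingested = residue_ingested_ug(residue_ug_per_kg_meat=100.0, portion_g=200.0)     # 20 ug

print(ingested <= permitted)  # True: this hypothetical portion stays within the ADI
```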

Conclusion

Meat is not an essential part of the diet, but without animal products it is necessary to have some reasonable knowledge of nutrition in order to select an adequate diet. Even small quantities of animal products supplement and complement a diet based on plant foods so that it is nutritionally adequate, whether or not there is informed selection of the food. Side by side with these known benefits of including meat and meat products in the diet are problems associated with excessive intakes of saturated fats, the risks of food poisoning from improperly processed products, residues of the chemicals used in agriculture and animal production, and other potentially adverse aspects. Within these concepts lies the major problem of producing meat under conditions that avoid food poisoning and satisfy the economic demands of profitability while respecting the traditional, cultural and religious concerns of the community. There is a steadily increasing demand for meat in developing countries, which can be satisfied by increased domestic production and increased imports. It is thought that the major increase in domestic production will come from small producers rather than from the creation of large production units, but the former lack the essential facilities for producing safe and wholesome products. If there is to be a significant increase in meat production, it will require clear policy decisions with the necessary financial, legislative and technical support. There is considerable potential for increased supplies through better management, selection of animals, avoidance of waste and use of indigenous species. If exports are to be considered, then attention has to be paid to the strict hygiene and safety requirements involved, whatever the domestic market might tolerate.

Conflicts of Interest

The author declares no conflicts of interest.


Reaction to Azithromycin, Between Hypereosinophilia and Charcot-Leyden Crystals: Report of a Pediatric Case

Introduction

Charcot-Leyden (CL) crystals, needle-shaped bipyramidal crystals, are present in the context of high tissue infiltration by eosinophils. They are formed when galectin-10 crystallises at an intracellular or extracellular level during eosinophilic ETosis (Extracellular Trap cell death), an active cell death process involving the destruction of the cell nucleus and plasma membrane with the release of cobweb-like chromatin structures [1-3]. Hypereosinophilia, defined as an eosinophil count greater than 1.5 x 10^9/L, is typical of hypersensitivity. An exemplary case that caught our attention in July 2023 was that of M., a 4-year-old girl with gastrointestinal symptoms combined with a markedly elevated eosinophil count of 18.66 x 10^9/L and Charcot-Leyden crystals found in a stool sample.
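A minimal sketch of the threshold just quoted; the >1.5 x 10^9/L cut-off for hypereosinophilia is from the text, while the 0.5 x 10^9/L lower bound for simple eosinophilia is a commonly used convention added here as an assumption:

```python
# Sketch: classify an absolute eosinophil count (units: 10^9 cells/L).
# The 1.5 cut-off follows the text; the 0.5 bound is an assumed convention.

def classify_eosinophils(count_e9_per_l: float) -> str:
    if count_e9_per_l > 1.5:
        return "hypereosinophilia"
    if count_e9_per_l >= 0.5:
        return "eosinophilia"
    return "normal"

print(classify_eosinophils(18.66))  # 'hypereosinophilia' (the count in this case)
print(classify_eosinophils(0.3))    # 'normal'
```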

Anamnesis and Objectivity

M.'s story began in May 2023 with the intake of azithromycin and cortisone for frequent episodes of otitis and continued into the following June with a new course of azithromycin for tracheitis and fever. A few days after the end of the last cycle of antibiotic therapy, abdominal pain, episodes of vomiting, asthenia and loss of appetite began. Following the onset of bilious vomiting, on 4 July the girl was taken to the emergency room, where, on initial assessment, the only findings were bilateral laterocervical and inguinal micro-polyadenopathy and a small right supraclavicular lymph node. From an anamnestic point of view, the family and physiological history were normal. There was no evidence of recent travel or of intake of new foods or raw fish. The only notable information from the preceding six months was that the child had come into contact with her grandmother, who had arrived from Paraguay.

Diagnostic Evaluation

The first tests highlighted leucocytosis (32.8 x 10^9/L, range 5.0-17.0) with an eosinophil percentage of 57% (18.66 x 10^9/L, range 0.00-1.00), normal liver, kidney and pancreatic function, electrolytes in range and normal coagulation. Fecal calprotectin was 1300 mcg/g (negative if <40), and the urine test was within normal limits. From a diagnostic point of view, the first hypothesis investigated was an infectious/parasitological one. However, the search for microorganisms produced no findings; more specifically: negative multiplex PCR in faeces for bacteria, parasites, protozoa and gastrointestinal viruses; negative blood tests for CMV, EBV, HIV, Bartonella, Borrelia, Toxoplasma, Echinococcus, Toxocara canis and Strongyloides; negative Scotch tape test; negative Mantoux test. Microscopic examination of the faeces, conducted with the aim of identifying parasite eggs, surprisingly showed a high number of CL crystals in the absence of parasites (Figure 1). After the possible infectious/parasitic etiology had been abandoned, a haematological cause was suspected and immunophenotyping was carried out, but it did not highlight the presence of immature CD117+ populations and/or CD34+ myeloid or lymphoid blasts. From an immunological point of view, the tests for celiac disease were negative.
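The absolute eosinophil count reported above follows directly from the leucocyte count and the differential percentage; a minimal sketch reproducing the arithmetic with the values from this case:

```python
# Sketch: absolute eosinophil count = total WBC x eosinophil fraction.
# Values are those reported in this case.

def absolute_eosinophils(wbc_e9_per_l: float, eosinophil_percent: float) -> float:
    return wbc_e9_per_l * eosinophil_percent / 100.0

print(round(absolute_eosinophils(32.8, 57), 2))  # ~18.7 x 10^9/L, matching the reported 18.66
```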

Figure 1

During the assessment, radiological tests were carried out: a chest x-ray, which was normal, and a complete abdominal ultrasound, which was negative except for the finding of multiple lymphadenopathies in the mesenteric fan, which had reduced by the subsequent abdominal ultrasound. Considering the negativity of the aforementioned tests, on re-analysis of the medical history the previous intake of azithromycin was taken into consideration as the cause of the hypereosinophilia. This hypothesis was in agreement with the case described by Kobayashi et al. of eosinophilia following the intake of azithromycin in a 17-month-old patient, where all infectious, immunological and onco-haematological tests were negative and the symptoms were ascribed to a DRESS due to azithromycin and pranlukast [4], and with Schmutz and Trechot, who analysed a case of DRESS related to the use of azithromycin and characterized by fever, hypotension and hypereosinophilia during EBV infection [5]. The finding of Charcot-Leyden crystals in the faeces, initially attributed to a parasitic infection, was at the end of the diagnostic process correlated with a transient massive infiltration of the gastrointestinal mucosa by eosinophils, whose blood count had significantly increased following the intake of azithromycin (Table 1).

Table 1: Multiplex PCR performed during hospitalization.

Follow up and Results

After discharge, the child was in good clinical condition, with resumption of normal daily activities and nutrition. The blood tests carried out post-discharge on 24 July showed normalization of the leukocyte count (WBC 11.9 x 10^9/L) and of the eosinophils (1.68 x 10^9/L), which were further reduced in the tests performed on 22 August. The value of fecal calprotectin also returned to within the normal range.

Discussion

In the light of the results obtained from the various investigations carried out, the hypereosinophilia found in M. would appear to be related to the intake of azithromycin. Two findings are of the greatest importance: a particularly high eosinophil count and the finding of Charcot-Leyden crystals in the faeces, an unusual site for this finding and probably correlated with the high concentration of eosinophils in the intestinal mucosa in this case. The case therefore prompts some considerations. What at first sight could have seemed to be a parasitic gastroenteritis instead appeared to be a transient eosinophilic gastroenteropathy with hypereosinophilia related to the intake of azithromycin (as also supported by the score of 7 on the Naranjo nomogram). More specifically, azithromycin, a widely used antibiotic with a good safety profile, was in this case the most likely cause of a rare hypersensitivity reaction characterized by acute gastrointestinal symptoms accompanied by a high degree of eosinophilia and massive eosinophilic infiltration of the digestive mucosa, as shown by the finding of numerous Charcot-Leyden crystals in the faeces. This clinical case thus offers various points of analysis: in the presence of a series of gastrointestinal symptoms, it is necessary to carry out various investigations, from the search for microorganisms to haematological and immunological evaluations, as we did. On the other hand, evaluation of the medical history from the beginning is of fundamental importance, so that correlations between the intake of a specific drug and the clinical state can be highlighted. Compared with other cases in the literature, in which azithromycin-induced hypereosinophilia is related to a true DRESS, in the case we analysed the symptoms stopped at an earlier stage, without the severity of the other cases described, and therefore required a therapy based only on a pediatric electrolyte solution, without the use of steroids, resulting in complete resolution of the symptoms and haematological findings on suspension of the drug.
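The Naranjo score cited above comes from a fixed ten-question algorithm with standard scores and interpretation bands. The following is a minimal sketch of that standard scoring; the example answers are a hypothetical reconstruction chosen only to show how a total of 7 can arise, not the authors' actual worksheet:

```python
# Sketch: Naranjo adverse drug reaction probability scale. Each question is
# answered yes/no/unknown and contributes a fixed score; "unknown" scores 0.
# Bands: >=9 definite, 5-8 probable, 1-4 possible, <=0 doubtful.

QUESTIONS = [
    # (question, yes_score, no_score)
    ("Previous conclusive reports on this reaction?",                1, 0),
    ("Did the reaction appear after the suspected drug was given?",  2, -1),
    ("Did it improve when the drug was stopped?",                    1, 0),
    ("Did it reappear on readministration?",                         2, -1),
    ("Are there alternative causes?",                               -1, 2),
    ("Did it reappear on placebo?",                                 -1, 1),
    ("Was the drug detected at toxic concentrations?",               1, 0),
    ("Was it worse with higher dose or better with lower?",          1, 0),
    ("Similar reaction to the same or similar drugs before?",        1, 0),
    ("Was the reaction confirmed by objective evidence?",            1, 0),
]

def naranjo(answers):
    total = 0
    for (_, yes, no), ans in zip(QUESTIONS, answers):
        total += yes if ans == "yes" else no if ans == "no" else 0
    band = ("definite" if total >= 9 else
            "probable" if total >= 5 else
            "possible" if total >= 1 else "doubtful")
    return total, band

# Hypothetical answers broadly consistent with this case report:
answers = ["yes", "yes", "yes", "unknown", "no", "unknown",
           "unknown", "unknown", "unknown", "yes"]
print(naranjo(answers))  # (7, 'probable')
```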

Conclusion

The presentation of this clinical case aims to emphasize the importance of a correct anamnesis and the need not to exclude any diagnostic hypothesis, even in cases in which the symptoms seem to point exclusively towards a specific diagnosis. We also want to underline how certain laboratory findings can direct the diagnosis towards the correct etiology. Indeed, as highlighted in the presentation of the case, the diagnostic suspicion was supported by the use of both very modern techniques and ancient ones (such as the microscopic examination of the faeces), together with a careful anamnestic investigation. Although the limitations of this case report are the singularity of the condition analysed and the scarcity of literature, above all in the pediatric field, which does not allow a complete comparison with similar cases, further study of hypersensitivity to azithromycin, a drug particularly important in pediatric use, is of fundamental importance.


What Provokes Constant Changes in the Etiology of Pneumonia?

Opinion

Throughout the centuries-old history of acute pneumonia (AP), this disease was considered exclusively as an inflammatory process in the lung tissue, but in the second half of the 19th century the intensive development of microbiology marked the beginning of the study of the etiology of AP. The first results of the study of the pathogens of inflammation of the lung tissue already identified the main etiological features of this disease. For example, C. Gram, the founder of one of the directions of microbiological diagnostics, proved in 1884, on the basis of his work, that AP can be caused by more than one microorganism, which excluded the specificity of inflammation in this disease [1]. Three years after the publication of this article, materials appeared showing that pneumonia can be caused by opportunistic bacteria which are always present in the body, confirming the ancient postulate that people develop pneumonia rather than become infected with it [2]. And although Streptococcus pneumoniae (SP), or Pneumococcus (P), was isolated in 1886, prevailed among the pathogens of AP and received its name because of this exceptional propensity [3], the fundamental foundations of the etiology of this disease and its main properties, non-specificity and non-contagiousness, were formulated already at the dawn of the development of microbiology. It should be noted that the dominant role of SP among the pathogens of AP remained stable for a long period. Periodic statistics on the etiology of this disease consistently showed the presence of P as the causative agent in 95 percent or more of cases. Such figures were presented in 1917 [4], in 1927 [5], in 1933 [6], in 1939 [7] and in 1948 [8].

According to the statistical data presented, P remained the leader among the pathogens of AP for more than 30 years, without its prevalence falling below 95%. And if we take into account the leadership of SP in inflammation of lung tissue since it was first discovered, and the fact that it received its name due to this superiority, then the period of stable statistics on the etiology of AP extends to at least six decades.

If we focus on the preserved statistics of the etiology of AP, then from the first results of studying this characteristic of the disease in the 19th century up to the 1940s, its main causative agent was SP, which consistently maintained almost one hundred percent participation in this inflammatory process. However, as the subsequent course of events showed, the efforts of medicine to apply etiotropic therapies widely changed the usual proportions of the etiology of AP. In 1929, A. Fleming [9] reported the discovery of penicillin, but it was only in 1942 that pure penicillin was first used successfully in clinical practice [10]. Even before the use of this drug in medical practice, the development of resistance of microorganisms to it was noted and proved. In early 1940, the developers of penicillin for industrial release published data showing that a strain of E. coli is able to inactivate penicillin by producing penicillinase [11], and in 1942 information was made public about the development of resistance of four strains of Staphylococcus aureus (SA) to penicillin [12].

Although the first report of tetracycline-resistant strains of P appeared only in 1963 [13], and of penicillin-resistant strains in 1967 [14], the rapid development of resistance to penicillin of SA, as part of the body's symbionts, contributed to an increase in its aggressiveness. Small outbreaks of staphylococcal infection began to be observed as early as the late 1940s, and in the 1960s and 1970s there was a peak in inflammatory processes of staphylococcal etiology, including severe pneumonia, especially in childhood. By this time, more than 80% of SA strains were resistant to penicillin [15]. The increasing role of SA in the etiology of AP was so impressive in those years that severe forms of the disease were regarded and treated as presumptively staphylococcal even before the results of microbiological studies were received at the initial diagnosis. With the increase in the number of cases of staphylococcal pneumonia, the percentage of SP began to decrease and, from this period onward, never returned to its original level. It is very curious that in 1960 a synthetic analogue of penicillin, methicillin, appeared, to which SA had no resistance [16], but a year later a new form of the pathogen, Methicillin-Resistant Staphylococcus Aureus (MRSA) [17], was described. In this situation SA showed its extreme aggressiveness, displacing SP from its usual leading position among AP pathogens.

Moreover, if one wishes, one can find data for the period when SA reached almost one hundred percent (mainly in children) among the pathogens of AP. But we are not concerned here with presenting the details of the history of the etiology of AP, but with the causes that disturbed the primary, persistent proportions between the pathogens of acute nonspecific inflammation of the lung tissue and have since constantly maintained the dynamics of changing priorities in this list. SA was the first to break the hegemony of SP as the permanent leader of AP pathogens. In parallel with the increase in the aggressiveness of staphylococci and the increase in cases of staphylococcal pneumonia, the antibiotic resistance of other microorganisms grew, which led to increased virulence and difficulty in their neutralization. Against the background of these processes, the frequency of detection of SP was constantly decreasing, and by the end of the 1980s it had fallen to 15% [18]. However, by this time, as is known, SA had also ceased to play the role of a "leading monster", not only in the development of severe forms of AP but also in the etiology of this disease as a whole. In the preceding period, the change of leaders among the pathogens of AP and the decrease in the effectiveness of the antibiotics used required the development and release of new drugs and periodic revision of therapeutic tactics.

The classic development of events "in a spiral" began to be noted in the 2010s when, according to some statistics, the share of SP in the etiology of AP rose to a third of all bacterial pathogens, though by this period it already occupied the second position after Haemophilus influenzae [19-22]. In this last period, long-term attempts continued to solve the problem of successful treatment of AP by means of early microbiological diagnosis of the pathogen and targeted action against this disease factor using antibiotics. The stereotype that has developed over many decades about the dominant role of antimicrobial therapy has persisted and continues to dominate professional ideas about the essence of the AP problem. Only a few experts have begun to pay attention to the fact that the changes in the etiology of AP are more profound than existing impressions suggest. In fact, by this time the problem of antibiotics losing their indication in patients with AP had begun to worsen. More than two decades ago, some experts expressed concern about the growth of viral forms of AP [23-25]. Viral pneumonia was first described in 1938 [26] but long remained a rare variant of the disease. However, already a decade and a half ago, cases of viral pneumonia accounted for almost half of all cases of AP in the world [25].

Such a number of severe inflammatory processes that go beyond the traditional prescription of antibiotics, and exclude hopes for the success of such therapy, required new views on the problem and new solutions. The two coronavirus epidemics at the beginning of this century (SARS and MERS) did not lead to a revision of the treatment strategy, although the coronavirus remained on the list of pathogens in all subsequent years, and pneumonia of this etiology continued to be registered until the outbreak of the SARS-CoV-2 pandemic [27,28]. In the light of significant shifts in the etiology of AP towards viruses and a steady decrease in the effectiveness of medical care for this contingent of patients, the prolonged monitoring of the changing conditions of development of this disease, without any attempt radically to revise the decision-making strategy, is surprising and perplexing. However, the reason for the observed stagnation in solving the AP problem became more obvious with the onset of the SARS-CoV-2 pandemic and the direction of assistance towards a large flow of patients with COVID-19 pneumonia. When large numbers of patients with viral lung tissue damage were admitted, and bacterial coinfection was detected in only a few percent of cases, representatives of modern medicine found no better way to provide medical care than to continue declaring the undoubted need for antibiotics, whose prescription rate in this group of patients approached one hundred percent [29-31].

Another landmark event that makes it possible to assess the cause of the stagnation in solving the AP problem is the official declaration of antibiotic-resistant microflora as one of the global health disasters [32]. The very fact of confirming these severe consequences of prolonged antibiotic use is welcome. But, from my point of view, the time for such a statement was not chosen by chance. As noted above, the development of microflora resistance to antimicrobial drugs was known even before the clinical use of antibiotics. The entire subsequent period of antibacterial therapy consisted of the generation and release of new drugs in response to the development of resistance to previous ones and the decrease in their effectiveness, a process observed especially intensively until 1970 [33]. The reason for such painstaking work by pharmacists and microbiologists was nothing other than the effect of the antibiotics themselves: the rapidly emerging resistance of microorganisms to the drugs used, as well as the replacement of some pathogens by others, which eventually led to the entry of viruses into the arena.

Thus, not only was the fact of the development of microbial resistance known, but its specific manifestations were also constantly subject to attempted correction for at least 80 years. During this entire period, the obvious side effects of antibiotics were not taken as seriously as they were three years ago. The beginning of the SARS-CoV-2 pandemic revealed modern medicine's lack not only of effective ways to help patients with severe lung lesions but also of ideas for a way out of a situation in which the basis of treatment was auxiliary and symptomatic means. The concentration of such patients in specialized departments significantly increased the burden on their staff and at the same time deepened the feeling of the ineffectiveness of the efforts made and of the inability to avoid further deterioration and deaths during treatment. The studied and habitual hope in antibiotics, which in recent years have significantly lost their indications for use, completely disappeared. Medical journals even published articles accusing government officials of this turn of events [34], as well as an unprecedented number of publications in which authors confessed to depressive states arising in the course of their professional work [35-38]. It was a period when the opinion began to spread and strengthen that even in the most advanced health systems medicine cannot guarantee a successful outcome in case of illness. The announcement of bacterial resistance as a global catastrophe explained the sudden loss of antibiotics as the main and usual treatment for pneumonia, while at the same time allowing the "honor of the uniform" to be preserved. At the same time, the described events of recent years have shown how narrowly the ideas of modern professionals about the AP problem are constrained by the template learned at university: the complete dependence of the development of the disease on its pathogen and the decisive role of etiotropic drugs in achieving success.

Meanwhile, it is no secret that, despite significant transformations of the etiological list of AP, this disease has retained its unique features, and attempts to carry out differential diagnosis of its variants on the basis of etiological signs have failed and continue to fail [39-41]. A brief analysis of the data on the changes in the etiology of AP observed over several decades leaves no doubt that the ongoing transformations among the active pathogens of the disease are the result of the prolonged effect of antibiotics on the microflora surrounding us. The purpose of antibiotics was from the outset to eliminate only pathogenic microorganisms, not the inflammatory processes themselves; this allows us to regard antibiotics as a kind of "biological cleanser". The spectrum of action of penicillin, for example, did not extend to all varieties of microflora. Some bacteria, as biological objects, demonstrated their adaptive capabilities, avoiding complete destruction during therapy. Since a vacuum cannot exist in the wild, more viable microorganisms begin to replace the destroyed microflora. The first manifestation of this antibiotic effect was a surge in staphylococcal infections, including severe forms of staphylococcal pneumonia. Subsequently, antimicrobial therapy was constantly improved, as required by the decrease in its effectiveness and the expansion of the spectrum of resistant strains. All these efforts continued to sustain the dynamic process of change in the etiology of AP. By now, the restructuring of the primary, relatively stable pattern of the etiology of AP has reached a level at which the influence of antibiotics as the main cause of this dynamic is becoming less significant. The sharp increase in the proportion of viral variants of inflammation and their continued growth are a reaction of nature to external interference in its processes. If we return from this standpoint to the issue of microbial resistance, then this consequence of prolonged antibacterial therapy appears less significant than the observed shift in the list of leading pathogens of AP.

In addition, there are already quite a few real facts showing that the presence of antibiotic-resistant strains among the body's symbionts is not a sign of imminent disaster, and that such a parity of existence can last indefinitely. The problem with resistant strains of microorganisms arises when disease develops, when such pathogens are the cause of inflammation and etiotropic therapy continues to be considered the only option for adequate treatment. The last feature of the problem in question, in which stable and significant side effects of the drugs used are not subjected to full critical analysis, and the drugs themselves, contrary to the logic of the observed facts, continue to be regarded as a life-saving panacea, points to another consequence of the use of antibiotics: a persistent mental distortion of professional ideas about the essence of the whole AP problem. It is sad to state this, but there is simply no other explanation for several facts than a mental distortion of the problem under the influence of a firmly internalized belief in the therapeutic superiority of antibiotics. On the one hand, since the discovery of antibiotics it has been well known that they can act only against bacterial pathogens and have no direct effect on the developing inflammatory process, which in aggressive cases can progress under its own mechanisms. On the other hand, long-term attempts at differential diagnosis of AP according to the etiological principle have not brought the desired results, yet the clinical picture of AP retains its uniqueness, owing to the localization of the lesion and its effect on the function of the affected organ, whatever the pathogen; in this feature AP differs from inflammations of the same etiology at other sites. In addition, in the current professional discussions on this topic, experts are trying to combine two mutually exclusive directions.

On the one hand, measures to reduce the unjustified prescribing of antibiotics, in order to curb the further development of resistant bacterial strains, are widely discussed; on the other, the search continues for the most effective antibiotics for the treatment of patients with AP, with broad support for their use in viral pneumonia, where they have lost any indication for their prescription. If we summarize the available information on the problem raised, then such a consequence of prolonged use of antibiotics as the resistance of microorganisms seems much less of a nuisance than the didactic deformation of professional ideas. In connection with the latter, it should be particularly noted that the WHO statement on microbial resistance as a global catastrophe proposes to solve this problem by developing more advanced forms of antimicrobials [32]. In other words, it is proposed to continue developing and improving the very factors that caused the catastrophe under discussion.

Recently, despite the predominance of viral forms of AP in much of the statistical data, the search for means of rapid diagnosis of bacterial pathogens and for the most effective antibiotics to neutralize them continues, as it did many years ago [42-46]. One gets the clear impression that many experts have not grasped the deeper meaning of the observed etiological changes in AP. As long as such views dominate professional perceptions of the essence of the AP problem, the hope of finding a rational, scientifically sound solution capable of finally bringing long-awaited success in the treatment of these patients will remain an unrealized dream. Only a radical revision of the concept of the disease will make it possible to move this problem forward. A ready-made example of such a solution is the excellent results in the prevention of complications and the rapid elimination of the focus of AP that the author of these lines obtained when he radically changed his own view of the problem and gave the leading role in the treatment process to timely pathogenetic therapy [47]. That work was carried out at a time when antibiotics still retained their former prestige and viral pneumonia did not pose the challenges that have appeared today. Unfortunately, emigration delayed further research and earlier publication of the data in the public domain. Summing up this brief analysis of the facts documented in available publications regarding the variability of the etiology of AP in recent decades, it should be noted that this phenomenon undoubtedly arose with the beginning of the widespread use of antibiotics. To date, virtually the only acknowledged side effect of antibiotics remains the development of microbial resistance and a steady decline in the activity of these drugs.

A significant change in the etiology of the disease, which with every year reduces the point of using this therapy in patients with AP, is a more severe consequence of taking antibiotics, yet the causes and mechanism of this process remain outside the due attention of specialists and are not discussed. The efforts being made today to solve this ever-deepening problem repeat attempts made over many years, whose greatest success has been brief periods of slowing the development of the situation. However, in order to fully understand the scale and significance of the changes that have already occurred, it is necessary, first of all, to step back from many years of training in the clearly exaggerated importance of antibiotics. This didactic cliché today acts as the main brake on solving the problem.


Open access clinical and medical journal

Scientific and Technical Application of Cost-Effective Ginger-Bio-Medicines Use as Pandemic-Preventive Natural-Immunity-Booster ‘Family-Vaccine’!

Introduction

Monkeypox, Ebola, polio, and other infections, together with the fast-spreading, multiple-variant Omicron SARS-CoV-2 coronavirus causing the COVID-19 pandemic, have severely affected our lives, nursing, healthcare, socioeconomic conditions, and education, and the lack of vaccines has contributed to the high number of COVID-19 deaths. Recent efforts against influenza and respiratory syncytial virus (RSV) to prevent infections and keep vulnerable people out of hospital are beginning to pay off, but deploying these strategies presents new challenges [1-8], so the need remains urgent. It has been reported that the common household rhizome spice, ginger (Figure 1), is very effective against coronavirus pathogens and helps control COVID-19 by boosting natural immunity, and that it contains more than 400 bioactive pharmaceutical compounds, according to literature collected from the 'Veda' and other sources including PubMed [1-8].

Figure 1

Methodology

Study Samples (Clinical)

Here, COVID-19 case studies were conducted in a five-member vaccinated family (Figure 2), aged 78+, 67+, 54+, 50+, and 21+ years, respectively. All had suffered from ordinary COVID-19 [aged 78+, 67+, 21+] or long COVID [50+], except the 54+-year-old follow-up colorectal cancer patient [1-8].

Figure 2

Application of Bio-Medicines Doses: Zingiber officinale

High-diluted ginger biomedicines were prepared from Zingiber officinale Rosc. at 1 g/100 ml of moderately hot drinking water in a big cup and administered orally 3 to 6 times/day, depending on the intensity of infection, at intervals of 3 hours for at least 15 days, with intensive healthcare and nursing; coronavirus infection was reported cured or prevented in all treated members [1-8].
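The dosing arithmetic described above can be illustrated with a minimal Python sketch; the 8:00 first-dose time and the function names are assumptions for illustration, not part of the reported protocol:

```python
# Sketch of the dosing arithmetic above: 1 g ginger per 100 ml hot water per cup,
# 3-6 cups/day at 3-hour intervals. The 8:00 first dose is an assumed example.

def daily_ginger_grams(doses_per_day: int, grams_per_dose: float = 1.0) -> float:
    """Total raw ginger used per day at the stated dilution."""
    return doses_per_day * grams_per_dose

def dose_times(doses_per_day: int, first_hour: int = 8) -> list:
    """Clock times for doses spaced 3 hours apart."""
    return [f"{(first_hour + 3 * i) % 24:02d}:00" for i in range(doses_per_day)]

for n in (3, 6):  # mildest and most intensive schedules reported
    print(f"{n} doses/day: {daily_ginger_grams(n)} g ginger at", dose_times(n))
```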

Results and Discussion

Interestingly, the 78+-year-old comorbid patient, with cardiovascular disease, urinary and lung infections, pancreatic problems, and multiple drug-resistant fevers, recovered after staying 11 days in the ICU for critical special treatment. The weak patient then stayed 5 days in the ITU and in a general two-bed cabin until complete recovery. Sixteen days after discharge, the patient was infected with COVID-19 at home along with three other family members; all four started the high-diluted ginger biomedicines on the schedule above and overcame the disease, attributed to the presence of more than 400 bioactive pharmaceutical compounds [9-12].

Conclusion

So, it is concluded that the scientific and technical application of cost-effective ginger biomedicines can serve as a pandemic-preventive, natural-immunity-boosting 'family vaccine', including against future pathogens such as respiratory syncytial virus (RSV). These biomedicines are notable for their extremely low doses; their popular, regular use as traditional tropical medicines; their ability to tackle many diseases and complications; their use in different pharmacological preparations; their extremely low toxicity and potential efficacy; their diverse range of potential phytochemicals; their freedom from side effects; their cost-effectiveness; and the ease with which they can be prepared, obtained, manufactured, distributed equitably, marketed, and supplied. They may also support the development of high-quality biomedical and life-science information on all aspects of pharmacology, analytical nanoparticle studies, and effective, side-effect-free medicines.

Future Prospects

This work may foster collaborative research on disease mechanisms, diagnosis using biotechnological tools, therapeutic interventions, and future global health challenges. It is also worth mentioning that governments may consider 'ginger biomedicines as a family vaccine against future health diseases' in future scientific and technical research, biodiversity and green environments, prevention of future epidemics, agriculture, and the socio-economy and human health economy, ultimately providing scientific healthcare and skill development with job facilities where we live, to ensure proper living conditions for the future of India as well as the whole world and an improved world policy [1-8].

Acknowledgment

I am thankful to the eminent Senior Consultant Physician, Dr. Ranjan Mukherjee, M.B.B.S., M.D., Ex-District Coordinator, M. O., MHT, H. O. D., Cardiac Care, RTC, Reader (Pathology), MKHMCH (JKD), and Dr. Dipanitwa Malik, M.B.B.S., of the Sishu Sathi Scheme at the Department of Health and Family Welfare, India, for health check-ups.


Medical humanities

Effects of a Calcium-Deficient Diet and Resistance Exercise on Bone in Male and Female Rats of Different Biological Maturity

Introduction

Calcium (Ca) is the most abundant mineral in the body, accounting for approximately 2% of an adult's body weight, and approximately 99% of it is found in the bones. Factors that determine bone mass include body size, growth rate, and genetics [1]. It has been suggested that bone mass correlates with muscle mass, and bone density with back muscle strength [2], and that the higher the muscle mass, the higher the bone mineral density [3]. Bone mass has also been reported to be related to exercise and hormonal status [4,5]. Jumping exercise training increases bone mass to a greater extent than aerobic exercise training in growing rats [6]. It has been shown that the mineral density and mineral amount in the tibia, as well as the weight of the femur, were higher in rats fed immediately after squat exercise than in rats fed 4 h later [7]. The effect of resistance exercise on increasing bone mass has also been observed in adult rats [8]. On the other hand, Rubin et al. showed that the effect of physical stimuli on stimulating new bone formation was greater in young adult turkeys (age: 1 year) than in older turkeys (age: 3 years) [9], suggesting that the skeletal response to loading differs between young and mature animals.

A breastfeeding mother loses approximately 200 mg of Ca per day from her bones to produce milk [10], reducing her bone mineral density (BMD) by 3-9% over a 6-month period [11]. Lovelady et al. reported that 16 weeks of resistance and aerobic exercise from 4 to 20 weeks postpartum reduced BMD loss in the lumbar spine of breastfeeding mothers [10]. Fujii et al. reported that resistance exercise, which induced muscle hypertrophy, increased hemoglobin synthesis and promoted iron recycling in the body, mitigating anemia in rats fed an iron-deficient diet [12]. Collagen, a protein synthesized in the body, is one of the main components of bones. Thus, if resistance exercise stimulates bone collagen synthesis, accumulation of Ca in the bone may increase through Ca recycling in rats fed a Ca-deficient diet.

When rats were subjected to running exercise, the diameter of the tibia increased in males, while the weight and diameter of the tibia increased in females, and the effect of increasing bone weight and bone diameter was more pronounced in females than in males [13]. Muscle hypertrophy occurs in both male and female rats that perform spontaneous exercise; however, it has been reported that although females exercise more than males, muscle hypertrophy is greater in males than in females [14]. Thus, there are sex differences in the effects of exercise on bones and muscles. Therefore, the effects of a Ca-deficient diet and exercise on bones would differ between the growth and maturation stages and between males and females. Accordingly, we investigated the effects of a Ca-deficient diet and resistance exercise on rat bones in growing and mature male and female rats. We hypothesized that resistance exercise would alleviate the bone fragility that occurs due to a Ca-deficient state.

Materials and Method

Diet

Both Ca-sufficient and Ca-deficient diets were prepared according to AIN-93G [15]. The Ca content in the Ca-sufficient diet was 5 g/kg and that in the Ca-deficient diet was 1 g/kg.

Animals

Growing Stage: Twenty-two 4-week-old male Sprague-Dawley (SD) rats and 22 female rats (CLEA Japan, Osaka, Japan) were used. Room temperature was 23±1°C; the dark period was from 8:00 to 20:00 and the light period from 20:00 to 8:00. Rats of each sex were divided into a sedentary group (n = 5 for males and for females) and an exercise group (n = 6 for males and for females) fed a Ca-sufficient diet, and a sedentary group (n = 5 for males and for females) and an exercise group (n = 6 for males and for females) fed a Ca-deficient diet. The study period was four weeks. The exercise group performed climbing exercises three times a week, every other day, from 17:00 to 18:00; the exercise protocol has been reported previously [16]. The rats were fed from 18:00 to 10:00 the following day. Food was given from 18:00 because ingesting a meal immediately after exercise has been regarded as effective for exercise-induced muscle hypertrophy [17], and the rats were hungry at the end of the exercise session because they had fasted since 10:00. Deionized water was provided to prevent the rats from ingesting Ca from their drinking water. Body weight, food intake, and water consumption were recorded daily. This study was approved by the Experimental Animals Subcommittee of the Research Integrity Committee of the Osaka University of Health and Sport Sciences (Approval numbers 22-1 and 22-2).

Mature Stage: Twelve-week-old sexually mature male (n = 21) and female (n = 22) SD rats (CLEA Japan) were used in this study. All rats performed the climbing exercise described above from seven weeks of age to acclimatize to exercise, and all received a Ca-sufficient diet from 10 to 11 weeks of age. At 12 weeks of age, rats of each sex were divided into a sedentary group (n = 5 for males and for females) and an exercise group (n = 5 for males and n = 6 for females) fed a Ca-sufficient diet, and a sedentary group (n = 5 for males and for females) and an exercise group (n = 6 for males and for females) fed a Ca-deficient diet. The study period was four weeks, and the protocol was the same as that used in the growing-stage experiment. This study was approved by the Experimental Animals Subcommittee of the Research Integrity Committee of the Osaka University of Health and Sport Sciences (Approval numbers: 21-3 and 21-5).

Dissection

On the last day of the experiment, the rats were sacrificed under isoflurane anesthesia. Organs (liver, heart, kidneys, adrenals, spleen, digestive tract), skeletal muscle (flexor hallucis longus, soleus, gastrocnemius, plantaris), and adipose tissue (perirenal, parametrial, retroperitoneal, mesenteric) were excised and weighed.

The weight, length, and central axis width of the femurs on both sides were measured.

Analysis of Femur and Whole-Body Bones

The length and width of the central axis of the femur on the left and right sides were measured using a digital caliper (220-150-S; As One, Tokyo, Japan), and the rupture energy and rigidity (stiffness) were measured using a bone strength tester (TK-252C; Muromachi Kikai, Tokyo, Japan). The mean values of the left and right sides were used for the statistical analysis. For a whole-body bone analysis, after the removal of epidermal tissues, the bones of the whole body were incinerated using an oven (HTO-300S, AS ONE) at 550°C and weighed. Femora were also incinerated.

Statistics

A two-way analysis of variance (ANOVA) with diet and exercise as factors was performed. P < 0.05 was considered statistically significant (IBM SPSS Statistics Version 27.0.1.0).
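For readers who wish to mirror this analysis structure outside SPSS, here is a minimal sketch of a two-way (diet × exercise) ANOVA in Python using statsmodels; the data frame values and column names are invented placeholders, not the study's data:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical records, one row per rat; the values are invented placeholders.
df = pd.DataFrame({
    "diet":     ["sufficient"] * 4 + ["deficient"] * 4,
    "exercise": ["sedentary", "sedentary", "exercise", "exercise"] * 2,
    "femur_mg": [812, 798, 820, 835, 640, 655, 660, 671],
})

# Two-way ANOVA with diet and exercise as factors, including their interaction.
model = smf.ols("femur_mg ~ C(diet) * C(exercise)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # effects with P < 0.05 taken as significant
```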

Results

The Ca-deficient diet decreased body weight and food intake only in growing males (Table 1). An effect of exercise was observed only in mature male rats, in which it decreased both body weight and food intake. Exercise increased the weight of the FHL in growing males (Table 2). The weight of the FHL per 100 g of body weight in growing males and females, and the weight of the gastrocnemius per 100 g of body weight in growing males, were higher in the exercise group than in the sedentary group (Table 3). Exercise did not affect the other muscles. In growing rats, the Ca-deficient diet decreased the weight, central width, rupture energy, and stiffness of the femora, whereas it did not affect femur length in either sex (Table 4). Exercise did not affect these measurements, except that it decreased the rupture energy in males. In mature rats, males were not affected by diet or exercise. In females, no effects of diet or exercise were observed, except for stiffness, which was higher in the exercise group than in the sedentary group.

Table 1: Body weight and Food intake (g).

Note: Mean (SD)

Table 2: Skeletal muscle weight (g).

Note: Mean (SD).

FHL: flexor hallucis longus.

Table 3: Skeletal muscle weight (g/100 g body weight).

Note: Mean (SD).

FHL: flexor hallucis longus.

Table 4: Femur.

Note: Mean (SD).

The weight of the femur ash was decreased by the Ca-deficient diet, except for the weight per 100 g of body weight in mature males. Although exercise increased the femur ash weight per 100 g of body weight in mature males, no other effects of exercise were observed (Table 5). The femur ash weight in the Ca-deficient diet group was approximately 50% of that in the normal diet group in growing rats, but approximately 90% in mature rats, indicating that the decrease in mature rats was smaller than that in growing rats. Table 6 shows that the whole-body ash weight was lower in the Ca-deficient diet group than in the normal diet group, except in mature females. The effect of exercise was not consistent. The whole-body ash weight of the Ca-deficient group was approximately 50% of that of the normal diet group in growing rats, but 88-102% in mature rats, similar to what was observed in the femur.

Table 5: Femur ash weight.

Note: Mean (SD).

Table 6: Whole-body ash weight.

Note: Mean (SD).

Discussion

It has been reported that Ca deficiency reduces bone mineral density [18,19] and bone strength [20] in growing rats. The results of the present study are consistent with these previous reports. In mature male and female rats, by contrast, the Ca-deficient diet did not decrease femoral weight or the width of the central axis, nor did it reduce bone strength as assessed by rupture energy and stiffness. These results suggest that Ca deficiency weakens bones during the growing stage, but not during the mature stage. Bone mineral density, bone rupture energy, and stiffness are believed to be correlated. The ash weights of the femur and whole body were reduced by the Ca-deficient diet in both growing and mature rats. However, during the growth stage, the ash weight in the deficient-diet group was approximately 50% of that in the sufficient-diet group, whereas in the mature stage it was approximately 90% or more. Therefore, during the mature stage, the decrease in bone Ca due to the Ca-deficient diet was not large, and consequently the rupture energy and stiffness did not decrease.

The likely reason why bone Ca did not decrease greatly in mature rats is that, in growing rats, Ca was deficient during the active period of bone formation, whereas in mature rats Ca had already accumulated in the bones by 12 weeks of age; therefore, the Ca-deficient diet did not cause a significant decrease in bone Ca. Saville et al. reported that applying a weight load to the hindlimb increased the bone Ca concentration in the hindlimb of rats [20]. Welten et al. showed that both Ca intake and exercise load contribute to the acquisition of maximum bone mass [21]. Lovelady et al. reported that exercise during breastfeeding reduced bone density loss without increasing Ca intake [10]. Frost suggested that bones adapt to loads, such as body weight and muscle forces, by increasing their mineral content and strength [22]. Welch et al. stated that mechanical stimulation caused by exercise increases bone mass, and that mechanical stimulation above a certain threshold (the minimum effective strain) is necessary during the bone mass acquisition phase [23].

In the present study, no effects of exercise were observed in the femur, except for a lower rupture energy in growing males and increased stiffness in mature females. Dalsky et al. observed that the endocrine system is involved in the effect of exercise load on bone metabolism [24]. Estrogen transmits the stimulation caused by exercise to osteoblasts, and it has been suggested that when ovarian function declines, sufficient effects on the bones cannot be obtained even with exercise [24]. The effect of exercise on stiffness observed only in mature female rats in the present study could therefore be attributed to an estrogen-mediated effect. The exercise protocol used in this study has been reported to increase the FHL in growing male rats [16]. In this study, an exercise-induced increase in skeletal muscle weight was observed in the FHL, the FHL per 100 g of body weight, and the gastrocnemius per 100 g of body weight in growing males, whereas it was observed only in the FHL per 100 g of body weight in growing females.

Muscle hypertrophy due to exercise was thus suggested to be greater in males than in females. In mature rats, on the other hand, no increase in skeletal muscle weight due to exercise was observed in either sex. In the exercise regimen employed in this study, the heavier the body weight, the higher the load on the body. However, even in males, whose muscle weight increased with exercise during the growth stage, no increase in muscle weight due to exercise was observed in the mature stage. These results suggest that exercise-induced muscle hypertrophy is less likely to occur after maturation. Okano et al. [7] suggested that exercise-induced muscle hypertrophy may be partially responsible for the increase in bone mineral density and content observed in rats subjected to squat training. Fujii et al. [16] reported that the exercise used in the present study increased muscle mass, increased aminolevulinic acid dehydratase activity (the rate-limiting enzyme in hemoglobin synthesis), and mitigated anemia in rats fed an iron-deficient diet. In the present study, we assumed that resistance exercise would promote the synthesis of bone collagen, which is synthesized in the body in the same way as hemoglobin, resulting in the accumulation of Ca in the bones and mitigating the loss of mineral content and the bone fragility of rats fed a Ca-deficient diet. The collagen content of the bones was not measured, as priority was given to their ash weight. However, the difference between the femur weight shown in Table 4 and the femur ash weight shown in Table 5 can be considered to represent the weight of the organic matter in the femur, whose main component is collagen; this difference can thus serve as an indicator of collagen weight. No effect of exercise was observed when this organic-matter weight was calculated. Therefore, bone collagen was unlikely to have increased with exercise in the present study.
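The organic-matter estimate used in this argument is simple arithmetic; a minimal sketch follows, with invented example values rather than the study's data:

```python
def organic_matter_mg(femur_mg: float, ash_mg: float) -> float:
    """Organic fraction of the femur: total weight minus ash weight,
    used above as a rough proxy for collagen content."""
    return femur_mg - ash_mg

# Invented example values, not the study's data:
print(organic_matter_mg(femur_mg=800.0, ash_mg=520.0))  # -> 280.0 mg of organic matter
```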

In this study, we investigated the effects of a Ca-deficient diet and resistance exercise on rat bones in growing and mature male and female rats. As a result, the bones of both male and female rats were weakened by a Ca-deficient diet in growing rats but not in mature rats. Furthermore, no obvious effects of exercise on bones were observed in male or female rats, in either growing or mature animals. A possible reason why exercise did not reduce bone fragility due to the Ca-deficient diet in growing rats could be that the Ca level in the diet was too low or that the effect of exercise was insufficient. In conclusion, bone weakness caused by a Ca-deficient diet was observed in growing rats; however, exercise did not alleviate this weakness.

Authors Contributions

Kawada and Kitaguchi performed the experiments and analyses. Fujii performed the dissections, analyzed the data, and summarized the results. Okamura reviewed and wrote the paper.

Competing Interests

The authors declare that they have no competing interests.

Funding

No external funding supported this research.


Journals on Biomedical Engineering

Negative Association between Diabetes Mellitus and the Efficacy of Cancer Immunotherapy

Introduction

In recent years, immune checkpoint inhibitor (ICI) therapy has made breakthrough progress in the field of tumor treatment, demonstrating strong anti-tumor therapeutic effects in clinical practice and significantly extending the overall survival of cancer patients [1-3]. Immunotherapies target the T-cell inhibitory receptors CTLA-4 and PD-1 and restore classically defined antitumor immune responses in the tumor microenvironment [1]. However, metabolic diseases such as diabetes may negatively affect the immune system, thereby interfering with the efficacy of immune checkpoint blockade therapy [4]. In the past decades, a large number of clinical studies have evaluated the efficacy of immunotherapy in various malignancies, but few stratified analyses have been conducted specifically for patients with both tumors and diabetes, and there is no clear clinical evidence on the efficacy of tumor immunotherapy in such patients [5,6]. Therefore, the aim of this meta-analysis was to comprehensively analyze the existing relevant studies to assess the efficacy of tumor immunotherapy in prolonging the survival of patients with tumors and comorbid diabetes.

Materials and Methods

Search Strategy

We searched PubMed, Embase, and Web of Science for articles published from January 01, 2009 to January 01, 2023. Systematic reviews and meta-analyses have become increasingly important in health care: clinicians read them to keep up to date, they often serve as a starting point for developing clinical practice guidelines, and granting agencies may require one to justify further research [7]. The search was structured according to the PICOS (participants, interventions, comparisons, outcomes, and study design) framework, using the keywords ((diabetes OR diabetic) AND (immunotherapy OR immune checkpoint inhibitors OR PD-1 OR PDL1 OR CTLA4 OR atezolizumab OR nivolumab OR pembrolizumab OR ipilimumab OR durvalumab OR avelumab OR telimomab OR santolina OR tropaia) AND (cancer OR neoplasm OR tumor OR carcinoma) AND (randomized controlled trial OR controlled clinical trial OR randomized OR placebo OR clinical trials as topic OR randomly OR trial)).
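As an illustration only, a boolean query of this kind could be run against PubMed with the Biopython Entrez client; the e-mail address is a placeholder, the keyword list is abbreviated, and Embase and Web of Science would require their own interfaces:

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

# Abbreviated version of the boolean query described above.
query = (
    '(diabetes OR diabetic) AND '
    '(immunotherapy OR "immune checkpoint inhibitors" OR PD-1 OR PD-L1 OR CTLA4 '
    'OR atezolizumab OR nivolumab OR pembrolizumab OR ipilimumab OR durvalumab OR avelumab) AND '
    '(cancer OR neoplasm OR tumor OR carcinoma) AND '
    '("randomized controlled trial" OR "controlled clinical trial" OR randomized OR placebo OR trial)'
)

# Restrict to the review window by publication date.
handle = Entrez.esearch(db="pubmed", term=query, mindate="2009/01/01",
                        maxdate="2023/01/01", datetype="pdat", retmax=500)
record = Entrez.read(handle)
print(record["Count"], "records; first IDs:", record["IdList"][:5])
```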

Inclusion and Exclusion Criteria

Inclusion criteria:

1) Subjects were diagnosed with malignant tumors without limitation of tumor type.
2) Subjects had comorbid diabetes mellitus and were receiving treatments including immune checkpoint inhibitors (e.g., PD-1 inhibitors, PD-L1 inhibitors, CTLA-4 inhibitors) and other tumor immunotherapies.
3) The included studies are publicly available randomized controlled trials (RCTs), cohort studies, clinical trials, etc., and the language is limited to English.
4) Primary data are provided in the included studies, including the main efficacy indicators such as tumor remission rate, progression-free survival, and overall survival.

Exclusion Criteria:

1) Studies using animal subjects.
2) Oncology patients treated with tumor immunotherapy who were also receiving other oncology treatment regimens (e.g., chemotherapy, radiotherapy, and targeted therapy).
3) Non-original research articles, such as conference abstracts, book reviews, editorial commentaries, case reports, reviews, and meta-analyses, as well as duplicates of the published literature.
4) Incomplete data or inaccessibility of the full text of the literature.

Assessment of Risk of Bias

Two researchers used the Cochrane handbook to independently assess the risk of bias of each included article. The evaluation was performed using RevMan 5.4.1 software and encompassed random sequence generation and allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), handling of incomplete outcome data (attrition bias), selective reporting of outcomes (reporting bias), and other potential sources of bias [7,8].

Data Extraction

Articles were reviewed and screened according to inclusion and exclusion criteria. Two researchers independently extracted useful data using a standardized data extraction form, which included the following information: name of the first author, country or region of study, year of publication, number of patients, number of males and females, mean age, type of tumor, type of diabetes (e.g., type 1 diabetes, type 2 diabetes), interventions, and the primary outcome of ICI-treated overall survival (OS). The secondary outcome was progression-free survival (PFS) after ICI treatment. If disagreement occurred, it was resolved by discussion with the corresponding author.
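A standardized extraction form of this kind can be sketched as a simple data structure; the Python dataclass below mirrors the fields listed in the text, with invented example values, and is illustrative rather than the authors' actual form:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionRecord:
    """One row of the data-extraction form described above (illustrative)."""
    first_author: str
    country_or_region: str
    year: int
    n_patients: int
    n_male: int
    n_female: int
    mean_age: Optional[float]      # None when not reported (N/A)
    tumor_type: str
    diabetes_type: str             # e.g., "type 1", "type 2"
    intervention: str
    os_hr: Optional[float]         # primary outcome: overall-survival hazard ratio
    pfs_hr: Optional[float]        # secondary outcome: progression-free-survival HR

# Invented example entry, not data from an included study:
example = ExtractionRecord("Smith", "USA", 2021, 650, 380, 270, 64.2,
                           "NSCLC", "type 2", "pembrolizumab", 1.6, 1.5)
```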

Statistical Analysis

Meta-analysis was performed using RevMan 5.4.1 statistical software. The effect indicators included tumor remission rate, progression-free survival, overall survival, and quality-of-life assessment; the odds ratio, the hazard ratio (HR), and the 95% CI were used as the effect statistics, and forest plots were drawn. Following the recommendation of the Cochrane Collaboration, I2 values were used to evaluate heterogeneity among the included studies, graded as low or high according to the I2 value (<50% or ≥50%). P < 0.05 or I2 > 50% was considered to indicate significant heterogeneity. A random-effects model was used when significant heterogeneity existed; otherwise, a fixed-effects model was used. A funnel plot was used to evaluate publication bias. P values were two-tailed, and statistical significance was set at 0.05 [9,10].
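The fixed-effects pooling described here is inverse-variance weighting on the log hazard ratio scale; the following minimal Python sketch, with hypothetical study values rather than the included studies' data, shows the computation of the pooled HR, its 95% CI, and I2:

```python
import numpy as np

def fixed_effect_pool(hr, ci_low, ci_high):
    """Inverse-variance (fixed-effects) pooling of log hazard ratios,
    with Cochran's Q and I2 as the heterogeneity measures."""
    log_hr = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from the 95% CI
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (log_hr - pooled) ** 2)  # Cochran's Q
    i2 = max(0.0, (q - (len(hr) - 1)) / q) * 100 if q > 0 else 0.0
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    return np.exp(pooled), ci, i2

# Hypothetical per-study HRs and 95% CIs, not the included studies' values:
pooled, ci, i2 = fixed_effect_pool([1.6, 1.4, 1.5, 1.7], [1.3, 1.0, 1.1, 1.2], [2.0, 2.0, 2.1, 2.4])
print(f"pooled HR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 = {i2:.0f}%")
```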

Results

Study Selection

The systematic search yielded 169 results; 107 publications remained after the removal of duplicates. Of these, 84 articles were eliminated by reading titles and abstracts, and 4 articles were finally included after reading the full text of the remaining 23 and carefully checking the inclusion and exclusion criteria. A screening tree of the selection process is displayed in Figure 1.

Figure 1

In-Depth Study Characteristics Description

The four papers included in the meta-analysis comprised a total of 3065 cancer patients, of whom 1509 had diabetes mellitus in addition to cancer and 1556 did not [11-14]. Although the total population is large, the disparity in representation across the four studies is evident: one study contributes the majority of participants, accounting for 2600 cancer patients and 1395 DM patients, while the other three studies contribute only 1009 cancer patients and 304 DM patients. Because of this huge disparity, generalizing the findings to the population of cancer patients with DM on immunotherapy risks selection bias: when the overrepresented study is removed, the effect estimated from the remaining three studies changes to a degree that cannot, by itself, represent the population of DM and cancer patients on immunotherapy. The characteristics of the literature are shown in Table 1. The quality of the four included articles was evaluated using the Cochrane risk-of-bias tool; all were of grade II, and the quality evaluation of the included literature is shown in Figure 2.

Table 1: Characteristics of the included literature.

Note: ① = Overall survival; ② = Progression-free survival; N/A = not mentioned.

Figure 2

Effects of DM on OS of Patients Treated with Immunotherapy

Four studies reported the overall survival of patients, including a total of 3065 cancer patients, 1509 of whom had comorbid diabetes. The heterogeneity test showed no heterogeneity among the studies (I2 = 0%, P = 0.61), and the HR and 95% CI of OS from each study were combined using a fixed-effects model. Meta-analysis showed that overall survival was significantly shorter in cancer patients with DM (HR = 1.54, 95% CI: 1.27-1.87, P < 0.0001), as shown in Figure 3.

Figure 3

Effects of DM on PFS of Patients Treated with Immunotherapy

Four studies reported the progression-free survival of patients, including a total of 3065 cancer patients, 1509 of whom had comorbid diabetes. The heterogeneity test showed no heterogeneity among the studies (I2 = 0%, P = 0.45), and the HR and 95% CI of PFS from each study were combined using a fixed-effects model. Meta-analysis showed that PFS was significantly shorter in cancer patients with DM (HR = 1.48, 95% CI: 1.24-1.78, P < 0.0001), as shown in Figure 4. A sensitivity analysis was performed to address the concern that overrepresentation of participants by one study could introduce selection bias into the effect size. Removing the overrepresented study and analyzing the other three had minimal effect on the results (y = 488.06 in the sensitivity test with the large study included versus y = 314.08 with it removed), indicating that all four studies show a common trend in the direction of the effect.
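A sensitivity check of this kind can be sketched as a leave-one-out re-pooling; again, the hazard ratios and confidence intervals below are hypothetical, and the pooling function is a compact version of the one shown under Statistical Analysis:

```python
import numpy as np

def pooled_hr(hr, lo, hi):
    """Fixed-effects pooled hazard ratio from per-study 95% CIs (compact form)."""
    log_hr = np.log(hr)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
    w = 1.0 / se ** 2
    return float(np.exp(np.sum(w * log_hr) / np.sum(w)))

# Hypothetical per-study values, not the included studies' data.
hr = np.array([1.6, 1.4, 1.5, 1.7])
lo = np.array([1.3, 1.0, 1.1, 1.2])
hi = np.array([2.0, 2.0, 2.1, 2.4])

# Leave each study out in turn and re-pool the remaining three.
for i in range(len(hr)):
    keep = np.arange(len(hr)) != i
    print(f"without study {i + 1}: pooled HR = {pooled_hr(hr[keep], lo[keep], hi[keep]):.2f}")
```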

Figure 4

Discussion

Immune checkpoint blockade therapy is a revolutionary cancer treatment that attacks and inhibits tumor growth by activating the patient's own immune system [15,16]. This therapy has shown significant potential to inhibit tumor growth and progression while prolonging the overall survival of patients [17,18]. Although the exact cellular mechanisms that predict patient responses to immune checkpoint blockade therapy have yet to be well defined, patients with increased T-cell infiltration into the tumor microenvironment and heightened activation of intratumoral cytotoxic T lymphocytes might benefit from it [19,20]. Numerous studies have shown that diabetic patients often have abnormalities of the immune system, including alterations in the number and function of immune cells and the development of autoimmune diseases [21,22]. To date, many preclinical and clinical studies have examined the effects of hyperglycemia on the immune system [23]. These studies concluded that hyperglycemia induces T-cell dysfunction, increases M2 macrophages in the tumor microenvironment, and decreases natural killer cell-mediated tumor death [24]. Researchers have demonstrated that hyperglycemia significantly reduces the in vivo immune function of memory CD8+ T cells, which diminishes immune-cell killing of tumor cells and consequently permits tumor proliferation [22,25].

In addition, hyperglycemia may also affect the tumor microenvironment and inhibit the infiltration and activity of immune cells [26]. Impaired recruitment of CD8+ T cells has been correlated with attenuated expression of cell adhesion molecules in mice with diabetes [27]. In this study, we conducted a systematic evaluation and meta-analysis to assess the impact of hyperglycemia on the outcome of ICIs in patients with advanced cancer. The meta-analysis showed that cancer patients with diabetes mellitus had significantly shorter overall survival and progression-free survival after immunotherapy than patients without diabetes mellitus. These results suggest that diabetes has a negative effect on tumor patients placed on immunotherapy. Leshem et al. found that DM was an independent risk factor for shorter PFS and OS in patients with non-small cell lung cancer treated with pembrolizumab [28]. Tortellini et al. found that patients with type 2 diabetes mellitus and solid tumors displayed reduced overall survival benefits from immune checkpoint inhibitors [29]. Exposure to metformin, but not to other glucose-lowering medications, was associated with an increased risk of death and disease progression. A literature review also showed that diabetes mellitus has a significant impact on patients with hepatocellular carcinoma receiving immune checkpoint blockade therapy [30]. The mechanism of action of immune checkpoint blocking drugs relies on the normal functioning of the immune system; diabetes-induced impairment of immune function may therefore reduce the efficacy of the drugs and limit the effectiveness of this therapeutic strategy.

Comorbidities and complications often negatively impact the therapeutic efficacy of immune checkpoint blockade therapy [31]. Some studies have shown that diabetic patients receiving immune checkpoint blockade therapy also face a higher risk of autoimmune diabetes and of immune-related adverse effects such as immune inflammatory responses and immune-related thyroid disease, and these immune disorders may decrease responsiveness to immune checkpoint blockade therapies and diminish the effectiveness of the anti-tumor immune response [32-35]. These side effects may further exacerbate a patient's pre-existing diabetic condition, leading to the release of a variety of cytokines, some of which may affect insulin secretion and use. Therefore, insulin therapy in diabetic patients may need to be adjusted accordingly when immune checkpoint blockade therapy is administered. In view of these implications, clinicians need to pay closer attention to the management and monitoring of diabetes in diabetic patients receiving immune checkpoint blockade therapy. This includes closely monitoring blood glucose levels and making timely adjustments to insulin regimens to ensure good control of diabetes; regularly assessing immune-related side effects for early detection and management; and weighing efficacy against risk by integrating the patient's diabetic status into the treatment regimen. Confounding factors in the study, such as comorbidities, could influence patient outcomes, but their effect on treatment outcomes has not been documented.

For example, some of the patients had other comorbidities such as obesity, cardiovascular disease, hypertension, or dyslipidemia, which affect the choice of diabetes treatment and decrease quality of life by increasing the risk of multiorgan failure and even death. Because these comorbidities were not variables in the current study, their effect on the participants cannot be isolated from the outcomes; the occurrence of conditions such as cardiovascular disease, hypertension, and dyslipidemia alongside diabetes and cancer diminishes the chances of survival, depending on the stage and prognosis of each disease. In addition, the four studies do not mention the stage of the cancer, a vital component, because advanced stages of disease are associated with poor prognosis and low survival rates, and early stages coupled with comorbidities are likewise associated with poor prognosis and low survival rates. Understanding the effect of comorbidities and disease stage on cancer treatment by immunotherapy among cancer patients with diabetes would therefore have influenced the results had these components of the patient's condition been treated as variables. Hence, the confounders in the study, including the presence of comorbidities and the stage of disease, are variables that influence disease prognosis and survival outcomes, but their effects have not been assessed despite their presence being mentioned.

Conclusion

Overall, DM was significantly associated with poor survival in cancer patients receiving immunotherapy. However, the effects of cancer stage and of comorbidities need to be assessed by comparing outcome data from non-DM cancer patients on immunotherapy with those from DM patients. Further study is needed to confirm these findings.


Journals on Medical Casereports

Limiting Stress-Induced Myocardial Damage by Adapting the Organism to Physical Exertion

Introduction

The formation of a certain systemic structural "trace" of adaptation to physical load is the basis for increasing the performance of the organism. On the other hand, it is known that adaptation to physical activity has both positive and negative cross-effects. The positive cross-effects of adaptation to physical activity consist in the fact that it increases the body's resistance not only to physical activity but also to the action of other environmental factors and diseases, i.e., it is a means of preventing or correcting damage caused by these factors. This effect of adaptation has recently attracted increasing attention. However, for the successful and safe use of training to prevent exacerbations and treat diseases of the circulatory system, it is necessary, along with an understanding of the pathogenesis of a particular disease, to have a clear understanding of how suitable a particular type of training is for influencing that pathogenesis. In this regard, when considering the various cross-effects of training, some scientists have traced the connection of these effects with the peculiarities of the structural basis of adaptation. In the present communication, however, we will pay more attention to the most essential effect of training for humans, namely the increase in the body's resistance to factors that damage the heart and circulatory system, among which stressor situations and ischemia occupy an important place.

The influence of a physically active lifestyle and training on the resistance of the human organism was analyzed by comparing the stress response of trained and untrained people to 4 types of stressors: a cold stress test (immersion of the feet in ice water for 1 min); physical load, both standard and to exhaustion; passive psychological stress (watching a film causing irritation, depression, and anger); and active psychological stress (solving a problem under conflict conditions). It was found that people with better physical training had a significantly lower stress reaction than untrained people to 3 of the 4 types of stress (the reaction to the active psychological stress test did not depend on the level of training). This was expressed in a smaller increase in heart rate and blood pressure, as well as a smaller "release" of catecholamines into the blood. Significantly, in the most trained people the reduced reaction to the physical test load was observed in response to the load of standard intensity; at the maximum load ("to failure") they had, on the contrary, a more powerful "release" of catecholamines and a greater rise in heart rate than untrained people at their own load to failure. This is because the maximum physical work trained people could perform was much greater than that which untrained people could overcome, and its performance was ensured by greater mobilization of the organism. Thus, under the action of the same stressor factors, the stress response of people trained to physical loads is less pronounced than that of sedentary, untrained people, and consequently their resistance to stressor influences is higher.

The preventive effect of adaptation to physical exercise was also observed in the study of cardiac contractile dysfunctions caused by stressors. Adaptation to physical loads can play an important role as a means of preventing the disorders of cardiac contractile function associated with cationic shifts in the myocardium in cardiovascular diseases, and as a means of preventing the potentiation of these disorders under stress. When analyzing the mechanisms of this prophylactic cross-effect, it should be kept in mind that under stressor influence, the intensive and prolonged action of catecholamines on the heart leads to excessive activation of free-radical oxidation, including lipid peroxidation, the products of which damage the membranes of cardiomyocytes and of the cells of the cardiac conducting system. This disrupts the mechanisms responsible for the energy supply of cardiomyocytes and for ion transport. In particular, the resulting damage to the lipid bilayer of cardiomyocyte membranes and the loss of sialic acid by the glycocalyx lead to increased membrane permeability to Ca2+, a decrease in Ca2+ content at its phospholipid binding sites in the sarcolemma, an impaired ability of the membranes to bind Ca2+ and, in general, to impaired transport of this cation in cardiomyocytes and destabilization of the calcium homeostasis of the cardiac muscle [1]. Together, these changes disturb the excitation, contraction, and relaxation of cardiomyocytes and consequently depress the amplitude and velocity of myocardial contraction and relaxation.

In this regard, it can be assumed that the preventive anti-stressor effect of adaptation to physical exercise is primarily due to the prevention of stressor activation of free-radical oxidation of lipids. How can this activation be limited in the adapted organism? Firstly, in such an organism the stress reaction arising in response to environmental factors is expressed to a much lesser extent than in the untrained organism. Accordingly, in a trained organism the "release" of catecholamines into the blood and the executive organs, including the heart, is significantly reduced during stress, and consequently their activating effect on free-radical oxidation is limited. Secondly, in the adapted organism the capacity of antioxidant enzyme systems is increased, which can also limit the activation of lipid peroxidation under stress. Indeed, in the process of adaptation to endurance exercise, including swimming, the activity of antioxidant enzymes in skeletal muscles increases, which is accompanied by a less pronounced activation of free-radical oxidation at maximum physical loads than in untrained people. Similar relations are observed in the myocardium of trained people: the increase in the activity of antioxidant enzymes is accompanied by the absence of the cardiomyocyte membrane damage and fermentemia that naturally occur at maximum physical loads in non-adapted people. Thus, alongside the reduction in the severity of the stress response, a significant role in this preventive effect of training under stressor influences is played by the increase in the functional capability of the stress-limiting antioxidant system in the myocardium, which develops in the process of adaptation.

As shown above, stress increases the dependence of cardiomyocytes on changes in the concentration of Ca2+ in the intracellular medium and their sensitivity to competitors of Ca2+ for binding sites on membranes, i.e., stress reduces the ability of membrane mechanisms to bind and transport Ca2+ [2,3]. Since these phenomena are directly related to the state of the membrane mechanisms responsible for Ca2+ transport and binding, it can be assumed that the protective effect of adaptation is due to specific changes occurring during exercise in the lipid bilayer or glycocalyx of the sarcolemma and sarcoplasmic reticulum membranes (SRMs) of cardiomyocytes that increase the capacity of the Ca2+ binding and transport mechanisms [4]. In this context, we want to emphasize that since adaptation to exercise is capable of preventing or limiting the stress response and, consequently, its damaging effects, it can be assumed that such adaptation can also prevent stressor-induced damage to the coronary bed and impairment of its adaptive response to ischemia. This position is particularly important for understanding the protective effects of adaptation in acute ischemia and myocardial infarction. Summarizing the above, we can conclude that certain components of the branched structural "trace" of this adaptation underlie the cross-prophylactic effect of adaptation to physical loads in cardiac contractile dysfunction caused by stressor exposure.

This is, first of all, the adaptive restructuring of central and peripheral regulatory mechanisms, leading to more economical functioning of the stress-realizing adrenergic system under extreme influences and, as a consequence, to limitation of the stress response. This restructuring leads, in particular, to increased activity of the opioid peptide system, an important stress-limiting system, which also contributes to limiting the stress response in adapted individuals. Second is the increased capacity of the antioxidant system in the myocardium, which inhibits lipid peroxidation and thus limits the activation of this process and the damaging effects of its products on cardiomyocytes. Third are the structural changes formed in the process of adaptation at the level of cardiomyocyte membranes, which increase the power of the mechanisms responsible for the binding and transport of Ca2+ and increase the resistance of the membranes to ionic loads and to the damaging effect of the products of lipid peroxidation activation. The preventive effect of exercise training in cardiovascular diseases is characterized by two main features:

1) Preliminary adaptation of the organism to physical loads can contribute to an easier course of the disease, for example, already “accomplished” myocardial infarction or acute transient ischemia, and faster recovery;

2) Training is a factor that prevents the very occurrence of the disease, which is statistically expressed by a higher incidence of cardiovascular and other diseases among persons untrained in physical loads [5].

These features of adaptation are associated to a large extent with a decrease in the number of patients with cardiovascular and other diseases. Participation in leisure-time physical activity, even below recommended levels, has been shown to be associated with a lower risk of mortality compared with no leisure-time physical activity [6]. It has also been shown that individuals who regularly engage in dynamic leisure-time physical activity have half as many first myocardial infarctions and fatal cases of acute coronary insufficiency as those who rarely spend their leisure time actively [7]. Because in ischemic lesions caused by myocardial infarction an increased load falls on the non-ischemic parts of the heart, the protective effect of adaptation in ischemic heart damage can be realized by increasing the resistance of the non-ischemic myocardial sections to this increased load. Thus, the preventive cross-effect of training in ischemic heart injury is provided mainly by the following components of the structural "trace" of adaptation. First, by the enhancement of its anti-stressor components, which limit the stressor link in the pathogenesis of these lesions. It should be emphasized that the increased power of the antioxidant system in the myocardium limits the activation of free-radical oxidation caused both by the stress reaction accompanying ischemic impact and by the ischemia itself. Secondly, a significant place in the preventive effect of training is occupied by the components of the "trace" that increase the power of the mechanisms responsible for the blood supply of the heart muscle and its energy supply.

These are, first of all, the increase in the density of coronary vessels per unit volume of myocardium and the growth of coronary channel capacity, which are realized in the process of adaptation through the new formation of arterioles, capillaries, and collaterals. As a result, the coronary reserve increases significantly in the adapted organism; during occlusion of small coronary vessels the ischemic areas will be smaller, and during occlusion of large vessels the increase in blood flow in the myocardial areas bordering the ischemic zone will be significantly greater than in the non-adapted organism. An important role is also played by the increased content in the "adapted" myocardium of myoglobin, the protein responsible for oxygen binding and transport, as well as by the increased capacity of the systems of aerobic and anaerobic energy conversion and energy utilization in such myocardium. Mitochondria are the centers of energy metabolism, and decreased mitochondrial function may play a key role in myocardial damage caused by impaired energy metabolism [8]. It is these changes, formed in the process of adaptation in the mitochondria, the glycogenolysis and glycolysis apparatus, and the system of energy-utilizing enzymes in the contractile apparatus of cardiomyocytes, that increase the resistance of the cardiac muscle to oxygen deficiency and, consequently, to hypoxic and ischemic effects. An important place in the preventive effect of training in ischemic lesions is occupied by the increased resistance of the "adapted" myocardium to increased load.

This feature increases the ability of the non-ischemic parts of the heart to carry out compensatory hyperfunction. This advantage of the "adapted" myocardium is based on all the components of the structural "trace" of adaptation that increase the contractile capabilities of cardiomyocytes and of the heart as a whole, namely, structural changes in the systems responsible for the blood supply of the heart, energy conversion and utilization, and ion transport. Even more important is the fact that exercise training leads to various changes in cardiovascular function, including decreased heart rate, decreased blood pressure, increased maximal myocardial oxygen consumption, and adaptations affecting skeletal muscle, heart muscle, circulating blood volume, and various metabolic modifications [9]. The second important feature of the preventive effect of training in cardiovascular diseases is its ability to prevent the very occurrence of these diseases. Apart from its direct effects on the heart, it is well established that regular exercise leads to a better quality of life and a longer life [10]. This effect is determined by a decreased probability of the development of risk factors in trained people, which currently include atherosclerosis, disorders of carbohydrate metabolism including changes in carbohydrate tolerance, disorders of fat metabolism and obesity, and hypercholesterolemia. In addition, stressful conditions are associated with higher rates of smoking and sleep disorders [11]. When evaluating the above positive cross-effects of adaptation, it should be borne in mind that they are realized only with rational dosing and adequate selection of physical loads.

When adapting to loads excessive for a given organism, a general biological regularity is fully realized: all adaptive reactions of the organism have only relative expediency, i.e., even stable adaptation to physical load can have its biological "price", which can manifest itself in two different forms: 1) direct "wear and tear" of the functional system that bears the main load during adaptation; and 2) negative cross-adaptation, i.e., impairment of the adapted organism's ability to respond to other loads and environmental factors. Direct functional insufficiency can develop under acute high load, in which direct damage to heart structures and other changes are described, resulting both from the overload itself and from the stress response arising in this case. This "price" of urgent adaptation is clearly manifested at the first loads of untrained people and is eliminated as training develops. Thus, the above suggests that adaptation to dosed exercise is an important factor in preventing or limiting stress-induced cardiac injury.


Journal on medical genetics

Introducing Electronic Health Records to Automate Medical Research

Introduction

As the scientific enterprise has grown and diversified, we need empirical evidence on the research process to test and apply interventions that make it more efficient and its results more reliable. Meta-research is the study of research using research methods. Also known as "research on research", it aims to reduce waste and increase research quality in all fields. Meta-research concerns itself with improving research efficiency and detecting bias, methodological flaws, and other errors and inefficiencies [1]. Medical research (or biomedical research), also known as experimental medicine, encompasses a wide array of research, extending from fundamental research (involving basic scientific principles that may apply to a preclinical understanding) to clinical research, which involves studies of people who may be subjects in clinical trials [2]. Medical research often draws on fundamental sciences such as mathematics, physics, chemistry, and philosophy throughout its lifecycle. Medical research typology is either interventional or non-interventional: research is said to be interventional if it interferes with patient management or requires an additional or unusual monitoring or diagnostic procedure. Interventional research involves biomedical research and healthcare interventions; non-interventional research often applies statistical inference, in the form of prospective, retrospective, and clinical trial designs [3].

Each of these typologies is conducted upon data collected from, or centered on, patients, which makes data collection and data quality the utmost priority of every medical research project; data quality also determines the quality of the research, how long it will take to complete, and how much it will cost [4]. Health informatics is the interdisciplinary study of the design, development, adoption, and application of IT-based innovations in healthcare services delivery, management, and planning. Healthcare informatics includes sub-fields of clinical informatics, such as pathology informatics, clinical research informatics, imaging informatics, public health informatics, community health informatics, home health informatics, nursing informatics, medical informatics, consumer health informatics, clinical bioinformatics, pharmacy informatics, and informatics for education and research in health and medicine. Several systems exist that can be classified as clinical informatics; these systems overlap in their concerns and help improve patient care by digitizing paper trails, thereby offering more transparency, legibility, portability, and accessibility, and they can assert certain data-quality guarantees depending on the implementation. Health information systems are available to and accessed by healthcare professionals, including those who deal directly with patients, clinicians, and public health officials. Healthcare professionals collect data and compile it to make healthcare decisions for individual clients, client groups, and the public.

Objectives

Automating clinical research is a multi-faceted task; most importantly, we aimed to shorten the data lifecycle by applying data governance guidelines and asserting data-quality guarantees. A continuous clinical research pipeline would result in shorter iteration cycles, lower financial budgets, greater research output, faster feedback loops, and lower churn ratios.

System Containers: The system would eventually consist of these major containers:

• Electronic Health Record System.

• Clinical Data Warehouse.

• Hospital Management System.

• Practice Management System.

Continuous clinical research requires more than basic data governance infrastructure; it would also require multilingual capabilities (mainly English publishing ability), publishing features, security assertions, and conformity to local laws and regulations.

Key Objectives: The key objectives for such systems would be:

• Shorten the lifecycle of clinical research.

• Minimize data capture duration.

• Capture data accurately.

• Increase clinical research output.

• Ensure data-quality guarantees.

• Offer a frictionless experience to lower the barrier of entry for researchers.

• Ensure the ease of data exploitation; democratize and secure access to clinical data.

• Anonymize access to clinical data.

• Lower the linguistic overhead and offer a multilingual experience.

• Ensure conformance with personal data protection regulations and laws.

• Support highly customizable data queries.

• Provide multiple charting and data visualization solutions.

• Be exhaustive in clinical data capture.

• Provide schema flexibility, given that clinical data are always subject to updates.

Materials and Methods

Software Development Lifecycle

Systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation.

Agile Methodology

In software development, agile practices approach requirements discovery and solution development through the collaborative effort of self-organizing, cross-functional teams and their stakeholders. Agile advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages flexible responses to change.

Results

Visualizing a software architecture and decomposing it into components can be tedious; the C4 model is one of the most widely used models for visualizing hierarchies in software engineering. The C4 model is an “abstraction-first” approach to diagramming software architecture, based upon abstractions that reflect how software architects and developers think about and build software. The small set of abstractions and diagram types makes the C4 model easy to learn and use.

Level 1: System Diagram

A System Context diagram is a good starting point for diagramming and documenting a software system, allowing you to step back and see the big picture. Draw a diagram showing your system as a box in the center, surrounded by its users and the other systems it interacts with.

• Core Context: Universal content management system (CMS), which theoretically can be tailored to any need and accommodate any schema.

• Major Contexts: include hospital management system, practice management system, electronic health record, and clinical data warehouse contexts.

• Person: the intended audience is the whole medical and paramedical staff.

• Technology: the technologies used are mainly web technologies. Following current practice, we used TypeScript – a statically type-checked superset of JavaScript – for both backend (through NodeJS) and frontend development; NPM (Node Package Manager) was used to manage external dependencies (Table 1 & Figure 1).

Table 1: Major application contexts.

Figure 1

Level 2: Container Diagram

The Container diagram shows the high-level shape of the software architecture and how responsibilities are distributed across it. It also shows the major technological choices and how the containers communicate with one another. It is a simple, high-level, technology-focused diagram that is useful for software developers and support/operations staff alike.

A. Parsing Container

Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, whether in a natural language, a computer language, or a data structure, according to the rules of a formal grammar. Parsing consists of several steps before the abstract syntax tree is obtained: tokenization, lexing, parsing, and desugaring. The parsing container is the core of every operation in our application; it is used to parse datatypes, expressions, schemes, and language templates. Parsing syntax is an intensely repetitive task, and therefore it must be optimized, cached, and predictable. Our parser is a top-down LL parser built from parser combinators that syntactically analyzes expressions using custom grammar trees (Figure 2).
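
As an illustration only (the production parser is not published, so every name below is ours, not the system's), a minimal TypeScript sketch of the parser-combinator idea: small matching functions are composed into grammar rules, and each rule consumes a prefix of the input and returns a value plus the remaining text.

```typescript
// A minimal sketch of a top-down, LL parser-combinator core in TypeScript.
// All identifiers here are illustrative assumptions.

type ParseResult<T> = { ok: true; value: T; rest: string } | { ok: false };
type Parser<T> = (input: string) => ParseResult<T>;

// Match an exact literal token at the start of the input.
const literal = (s: string): Parser<string> => (input) =>
  input.startsWith(s)
    ? { ok: true, value: s, rest: input.slice(s.length) }
    : { ok: false };

// Match a regular expression anchored at the start of the input.
const regex = (re: RegExp): Parser<string> => (input) => {
  const m = input.match(new RegExp(`^(?:${re.source})`));
  return m ? { ok: true, value: m[0], rest: input.slice(m[0].length) } : { ok: false };
};

// An integer literal, built on top of the regex combinator.
const integer: Parser<number> = (input) => {
  const r = regex(/\d+/)(input);
  return r.ok ? { ok: true, value: parseInt(r.value, 10), rest: r.rest } : r;
};

// A tiny grammar rule: addition := integer ("+" integer)?
const addition: Parser<number> = (input) => {
  const left = integer(input);
  if (!left.ok) return left;
  const plus = literal("+")(left.rest);
  if (!plus.ok) return left; // a bare integer is a valid expression
  const right = integer(plus.rest);
  return right.ok
    ? { ok: true, value: left.value + right.value, rest: right.rest }
    : { ok: false };
};

console.log(addition("2+40")); // { ok: true, value: 42, rest: "" }
```

Because each rule is an ordinary function, such combinators are straightforward to memoize and cache, which matches the requirement above that parsing be optimized and predictable.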

Figure 2

B. Datatype Container

A data type, or simply type, is an attribute of data that tells the compiler or interpreter how the programmer intends to use the data. A data type constrains the values that an expression, such as a variable or a function, might take; it defines the operations that can be done on the data, the meaning of the data, and how values of that type can be stored. In short, a data type provides a set of values from which an expression (i.e., variable, function, etc.) may take its values. Datatypes are an essential concept in our application: they are the cornerstone of expressions, segments, and conditions.

The datatype engine is supplied by metatype, a proprietary engine that parses datatypes, validates data against them, generates realistic fake data based on them, generates empty deterministic placeholders, and can be used to infer datatypes and apply arithmetic on them, among other features (Figure 3). It comes with native support for 22 primitive types, compound types such as arrays and objects, and merge types such as union and intersection. Primitive types are extended with custom attributes, pattern names, and flags, with over 40 attributes and 60 patterns supported. For example, to define a datatype that matches emails between 10 and 60 characters, one can write “string (min=10, max=60) as email”. It supports type coercion, normalization, sanitization, synonym downsampling, and fault tolerance through custom tolerance flags. Type coercion stands for type conversion between inter-coercible types, such as the coercion of a string of digits into an integer. Data normalization, or canonicalization, is the operation of normalizing similar data into a common pattern, such as lower-casing all letters; normalization can be contextual, e.g., name cases must not be normalized, whereas usernames must be all lower-cased. Synonym downsampling is the process of normalizing synonyms of the same concept into one concept, e.g., “true,” “yes,” “on,” and “valid” would be downsampled into the TRUE Boolean value. Sanitization is the process of stripping out unwanted characters to clean up data, e.g., removing white space from passwords, emails, or usernames. Type arithmetic consists of union and intersection merges and subset and superset checks. Metatype also supports file manipulation, including file transcoding, file validation, and asynchronous file transformation and coercion of image, video, audio, and document files.

Every feature of metatype has been put to use in our application; for example, type arithmetic and type checking were used in the inference engine and the validation engine, respectively, of the Expression container. The file manipulation capabilities of metatype were mandatory to allow file upload. Metatype also exposes datatype metadata through access to the raw parsed data; this property makes it possible to generate input and form interfaces in complete synchrony with their datatype counterparts in real time.
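
To make the description concrete, here is a hedged TypeScript sketch of how a declaration like “string (min=10, max=60) as email” could be parsed and enforced, with coercion, normalization, synonym downsampling, and sanitization. The real metatype engine is proprietary; every identifier below is an assumption made for illustration.

```typescript
// Illustrative sketch of a datatype engine; not the actual metatype code.
type Base = "string" | "integer" | "boolean";

interface Datatype {
  base: Base;
  attributes: Record<string, number>;
  pattern?: string;
}

// Parse declarations of the form: base (k=v, k=v) as pattern
function parseDatatype(decl: string): Datatype {
  const m = decl.match(/^(\w+)\s*(?:\(([^)]*)\))?\s*(?:as\s+(\w+))?$/);
  if (!m) throw new Error(`Invalid datatype: ${decl}`);
  const attributes: Record<string, number> = {};
  for (const pair of (m[2] ?? "").split(",").filter((s) => s.trim())) {
    const [k, v] = pair.split("=").map((s) => s.trim());
    attributes[k] = Number(v);
  }
  return { base: m[1] as Base, attributes, pattern: m[3] };
}

const TRUE_SYNONYMS = new Set(["true", "yes", "on", "valid"]); // synonym downsampling

// Validate (and coerce) a raw value against a datatype.
function validate(dt: Datatype, raw: string): string | number | boolean {
  if (dt.base === "boolean") return TRUE_SYNONYMS.has(raw.trim().toLowerCase());
  if (dt.base === "integer") {
    const n = Number(raw); // type coercion: digit string -> integer
    if (!Number.isInteger(n)) throw new Error("not an integer");
    return n;
  }
  let value = raw.trim(); // sanitization: strip surrounding whitespace
  if (dt.pattern === "email") {
    value = value.toLowerCase(); // normalization: emails are case-insensitive
    if (!/^\S+@\S+\.\S+$/.test(value)) throw new Error("not an email");
  }
  const { min, max } = dt.attributes;
  if (min !== undefined && value.length < min) throw new Error("too short");
  if (max !== undefined && value.length > max) throw new Error("too long");
  return value;
}

// Usage, matching the example from the text:
const email = parseDatatype("string (min=10, max=60) as email");
console.log(validate(email, "  John.Doe@Example.org ")); // "john.doe@example.org"
```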

Figure 3

Discussion

In academic publishing, a paper is an academic work that is usually published in an academic journal. It contains original research results or reviews of existing results. Such a paper is also called an article [5]. The production process, controlled by a production editor or publisher, then takes an article through copy editing, typesetting, inclusion in a specific issue of a journal, and then printing and online publication. Academic copy editing seeks to ensure that an article conforms to the journal’s house style, that all of the referencing and labeling are correct, and that the text is consistent and legible; often, this work involves substantive editing and negotiating with the authors [6]. The author will review and correct proofs at one or more stages in the production process. The proof correction cycle has historically been labor-intensive as handwritten comments by authors and editors are manually transcribed by a proofreader onto a clean version of the proof. In the early 21st century, this process was streamlined by the introduction of e-annotations in Microsoft Word, Adobe Acrobat, and other programs, but it remained a time-consuming and error-prone process. The full automation of the proof correction cycles has only become possible with the onset of online collaborative writing platforms, such as Authorea, Google Docs, and various others, where a remote service oversees the copy-editing interactions of multiple authors and exposes them as explicit, actionable historical events [7].

Several technologies exist that can automate the report-writing process, such as document formatting standards like LaTeX or Markdown, which enable creating standardized rich documents using plain-text instructions. Markdown is a lightweight markup language with plain-text formatting syntax, created in 2004 by John Gruber with Aaron Swartz. Markdown is often used for formatting readme files, for writing messages in online discussion forums, and to create rich text using a plain-text editor [8]. LaTeX is a software system for document preparation. When writing, the writer uses plain text instead of the formatted text found in “What You See Is What You Get” word processors like Microsoft Word, LibreOffice Writer, and Apple Pages. The writer uses markup tagging conventions to define the general structure of a document (such as article, book, and letter), to stylize text throughout a document (such as bold and italics), and to add citations and cross-references [9]. Markdown documents allow the automated extraction of headlines, tables, and figures; tables and figures can be numbered automatically according to any numbering format, and a table of contents can be generated automatically with links to the corresponding paragraphs. LaTeX can display any mathematical formula, in addition to being the publishing standard for many journals. Natural language processing algorithms can be used to write paragraphs; summarize headlines and abstracts; detect plagiarism, speech tone, and voice; compute metrics such as reading time, reading difficulty, and word count; auto-complete sentences; spell check; suggest synonyms; simplify phrases; etc. (Figure 4).
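
As a sketch of the kind of extraction Markdown makes possible (not code from the system described; the “Figure:” caption convention below is assumed purely for illustration), the following TypeScript pulls headlines out of a document and renumbers figure captions automatically.

```typescript
// Extract "#"-style Markdown headings as (level, text) pairs.
function extractHeadings(markdown: string): { level: number; text: string }[] {
  return markdown
    .split("\n")
    .map((line) => line.match(/^(#{1,6})\s+(.*)$/))
    .filter((m): m is RegExpMatchArray => m !== null)
    .map((m) => ({ level: m[1].length, text: m[2] }));
}

// Renumber figure captions ("Figure: caption" lines) sequentially.
function numberFigures(markdown: string): string {
  let n = 0;
  return markdown.replace(/^Figure:\s*/gm, () => `Figure ${++n}: `);
}

const doc = "# Title\n## Methods\nFigure: QQ plot\nFigure: Histogram";
console.log(extractHeadings(doc)); // [{level:1,text:"Title"},{level:2,text:"Methods"}]
console.log(numberFigures(doc));   // captions become "Figure 1: ...", "Figure 2: ..."
```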

Figure 4

Citations Management

Enumerative bibliographies are based on a unifying principle such as creator, subject, date, topic, or other characteristics. An entry in an enumerative bibliography provides the core elements of a text resource, including a title, the creator(s), publication date, and place of publication [10]. A bibliography may be arranged by the author, topic, or some other scheme. Annotated bibliographies give descriptions about how each source is useful to an author in constructing a paper or argument. These descriptions, usually a few sentences long, summarize the source and describe its relevance. Reference management software may be used to track references and generate bibliographies as required [10]. Reference management software, citation management software, or bibliographic management software is software for scholars and authors to use for recording and utilizing bibliographic citations (references) and managing project references either as a company or an individual. Once a citation has been recorded, it can be used repeatedly to generate bibliographies, such as lists of references in scholarly books, articles, and essays. The development of reference management packages has been driven by the rapid expansion of scientific literature [10]. These software packages typically consist of a database in which full bibliographic references can be entered, plus a system for generating selective lists of articles in the different formats required by publishers and scholarly journals. Modern reference management packages can usually be integrated with word processors so that a reference list in the appropriate format is produced automatically as an article is written, reducing the risk that a cited source is not included in the reference list. They will also have a facility for importing the details of publications from bibliographic databases [10].
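
A minimal TypeScript sketch, assumed rather than taken from any particular package, of the core idea behind reference management software: store each reference once as structured data and render the bibliography in whatever format a publisher requires.

```typescript
// One structured record per cited source (illustrative fields only).
interface Reference {
  authors: string[];
  year: number;
  title: string;
  journal: string;
}

// Render a numbered reference list in a simple author-year style;
// swapping this function swaps the citation style for the whole list.
function renderBibliography(refs: Reference[]): string {
  return refs
    .map(
      (r, i) =>
        `[${i + 1}] ${r.authors.join(", ")} (${r.year}). ${r.title}. ${r.journal}.`
    )
    .join("\n");
}

const refs: Reference[] = [
  { authors: ["Doe J"], year: 2020, title: "An example study", journal: "Example Journal" },
];
console.log(renderBibliography(refs)); // "[1] Doe J (2020). An example study. Example Journal."
```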

Research Quality and Peer-Reviewing

Quality research most commonly denotes the scientific process, including all aspects of study design; in particular, it relates to the judgment regarding the match between the methods and the questions, the selection of subjects, the measurement of outcomes, and the protection against systematic bias, nonsystematic bias, and inferential error. Principles and standards for quality research designs are commonly found in texts, reports, essays, and guides to research design and methodology [11]. Besides, quality assessment plays a vital role in the research community. It informs crucial decisions on the funding of projects, teams, and whole institutions, on how research is conducted, on recruitment and promotion, on what is published or disseminated, and on what researchers and others choose to read. It builds trust in the work of the research community. Quality is, of course, not a straightforward concept. The Oxford English Dictionary (OED) defines it as the nature or standard of something as measured against other things of a similar kind, and especially the degree of excellence it possesses [11]. Research investigates ideas and uncovers useful knowledge, but it can be undermined by poor assessment of research work. An assessment process implies a review, involving human judgments and/or quantitative scores, which may find work of varying quality, from the poor or mediocre to the excellent. Hence there are guidelines and standards for research quality [11].

Standards for Assessing the Quality of Research

• Pose a significant, important question that can be investigated empirically and that contributes to the knowledge base.

• Define the research topic well and state a clear hypothesis.

• Test questions that are linked to relevant theory.

• Apply the methods that best address the research questions of interest.

• Base research on transparent chains of inferential reasoning, supported and justified by complete coverage of the relevant literature.

• Provide the information necessary to reproduce or replicate the study.

• Ensure the study design, methods, and procedures are sufficiently transparent, and ensure an independent, balanced, and objective approach to the research.

• Provide a sufficient description of the sample, the intervention, and any comparison groups.

• Use appropriate and reliable conceptualization and measurement of variables.

• Evaluate alternative explanations for any findings.

• Ensure data are fit for their intended use and are reliable, valid, relevant, and accurate.

• Write the findings of the study in a way that brings clarity to important issues.

• Provide tables and graphics that are clear, accurate, and understandable, with appropriate labeling of data values, cut points, and thresholds.

• Include both statistical significance results and effect sizes when possible.

• Keep conclusions and recommendations logical and consistent with the findings.

• Assess the possible impact of systematic bias.

• Submit research to a peer-review process.

• Adhere to quality standards for reporting (i.e., clear, cogent, complete).

• Be respectful to people with other perspectives.

• Provide adequate references.

• Attempt to present all perspectives honestly.

Natural-language processing algorithms can be used to compute metrics such as voice, tone, formality, and conciseness; NLP algorithms can also detect plagiarism, classify deception, spot fraud, and flag copied or fabricated sentences. Citations can be checked for their style, and the publisher’s impact factor can be checked against a predefined threshold, etc.
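
As a hedged illustration in TypeScript of the simplest such manuscript metrics (word count, estimated reading time, and average sentence length as a crude readability proxy); the 200-words-per-minute constant is an assumption, not a value from the text.

```typescript
// Compute basic manuscript metrics from raw text.
function textMetrics(text: string, wordsPerMinute = 200) {
  const words = text.split(/\s+/).filter((w) => w.length > 0);
  const sentences = text.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  return {
    wordCount: words.length,
    readingMinutes: words.length / wordsPerMinute,
    // Longer average sentences roughly correlate with harder reading.
    avgSentenceLength: words.length / Math.max(sentences.length, 1),
  };
}

console.log(textMetrics("Short sentence. A slightly longer second sentence."));
// { wordCount: 7, readingMinutes: 0.035, avgSentenceLength: 3.5 }
```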

Overlooked Features for Assessing the Quality of Research [12]

• Research questions are designed to reach a particular conclusion.

• Alternative perspectives or contrary findings are ignored or suppressed.

• Data and analysis methods are biased.

• Conclusions are based on faulty logic.

• Limitations of analysis are ignored, and the implications of results are exaggerated.

• Critical data and analysis details are unavailable for review by others.

• Researchers are unqualified and unfamiliar with specialized issues.

• Citations are primarily from special interest groups or popular media, rather than from peer-reviewed professional and academic organizations.

Conclusion

Every medical research project makes use of and is centered on data, particularly patient record data; these data are then explored, queried, visualized, and used to draw new conclusions. Data is an essential aspect of any research, and automating the data life cycle helps shorten the research life cycle and thus helps establish a continuous research pipeline. To automate the data life cycle, it is mandatory to define data quality metrics to assert quality guarantees; those assertions can then be checked by assessing data-quality dimensions. Current medical record formats, which are mainly paper-based, are proven to suffer from the drawbacks of free-entry systems: in addition to being error-prone, they may also be illegible. Our objective was to develop a system with four major containers: hospital management system, practice management system, electronic medical record, and clinical data warehouse. We used the “meta” suite to create a multi-purpose content management system and a universal data management system, and we encoded content schemes to instruct visual components. The resulting system consists of eight major containers: hospital management system, practice management system, electronic medical record, multi-purpose content management system, universal data management system, identity and access management system, and internationalization system, in addition to the ontology container.

We developed an ontology container to augment our entry system capabilities; it represents concepts in a tree-like structure, and each concept is encoded with a prefix string following a Trie data structure, which enables efficient prefix-based lookups. We added 270,516 concepts to the database for anatomy, diagnoses, findings, interventions, procedures, medications, organisms, and substances, in addition to a dozen other attributes. The software was thoroughly tested using unit tests, integration tests, and acceptance tests; we conducted an end-to-end acceptance test in which we submitted 19 real medical records picked from a two-month period and carried out a data exploitation and analysis trial. Our solution can be considered a Big Data solution as it asserts all its attributes. Automation has a beneficial impact on research and healthcare and can be applied through many research processes: data quality and governance, headline suggestion, data querying and exploration, report writing, citations management, and research quality. The major automation frameworks are data quality, artificial intelligence, and natural language processing.
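
A minimal TypeScript sketch of the Trie idea described above (the system's actual ontology encoding is not published, so this is an assumption for illustration); lookup cost grows with the length of the concept code, not with the number of concepts stored.

```typescript
// One node per character of a concept code.
class TrieNode {
  children = new Map<string, TrieNode>();
  concept?: string; // concept stored at this code, if any
}

class Trie {
  private root = new TrieNode();

  insert(code: string, concept: string): void {
    let node = this.root;
    for (const ch of code) {
      if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
      node = node.children.get(ch)!;
    }
    node.concept = concept;
  }

  // Lookup runs in O(length of the code), regardless of how many
  // concepts are stored (270,516 in the database described above).
  find(code: string): string | undefined {
    let node = this.root;
    for (const ch of code) {
      const next = node.children.get(ch);
      if (!next) return undefined;
      node = next;
    }
    return node.concept;
  }
}

const ontology = new Trie();
ontology.insert("A01", "Anatomy: head"); // hypothetical code and label
console.log(ontology.find("A01"));       // "Anatomy: head"
```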


Journals on Biomedical Science

The Effectiveness of Laboratory Interventions on TAT

Introduction

The emergency department (ED) provides clinical services to anyone who needs immediate care, regardless of the ability to pay. EDs around the United States increasingly serve as the shelter for medically underserved patients, a population responsible for much of the sharp increase in visits from 1997 to 2007 (Tang, et al. [1]). The increased visits brought ED overcrowding and long wait times that became major concerns affecting throughput nationwide. The Centers for Disease Control and Prevention (CDC) reported that the U.S. had 139.0 million ED visits in 2017, representing 43.3 visits per 100 persons [2]. U.S. hospitals need to continually adjust to the challenges at the usual patient entry point, which is the ED and its supporting ancillary departments. Most laboratories support the ED by timely reporting of laboratory results and by monitoring the turnaround time (TAT) from the time the laboratory received the specimens to the time the Clinical Laboratory Technologists (CLTs) posted the results in the HIS. The clinical and anatomic departments of laboratory medicine are major contributors to the diagnosis and treatment of patients. In 2004, Clinical Laboratory News reported statements from Forsman of the Mayo Clinic that the laboratory represents only 5% of healthcare costs, yet it contributes to between 60 and 70% of all critical decisions (Hallworth [3]). There are three stages of laboratory testing, namely:

a) Pre-analytical,
b) Analytical, and
c) Post-analytical

The pre-analytical phase involves all aspects of specimen collection, specimen labeling, test order entry, and delivery to the testing station before laboratory testing can take place (Dasgupta [4]). Many research findings indicate that the pre-analytical phase of the lab testing workflow has the highest rate of errors, resulting in a longer TAT of the specimen workflow (Baer, et al. [5-15]), which in turn may prolong the ED length of stay (LOS) (Blick [16-21]). The Institute of Medicine (IOM) has recommended, through three reports, the nationwide use of health information systems (HIS) or devices to improve patient safety and reduce medical errors (Aspden [22,23]). Yet, there has been little effort to prevent simple pre-analytical errors from recurring. The ED of Mount Sinai Queens (MSQ), New York, completed the major construction of a new ED in 2016, further raising the importance of efficiency in the ED and other hospital services. The clinical and anatomic laboratories of the pathology department at MSQ are among the several ancillary services that support the ED to reduce ED LOS and eventually increase throughput.

A recent study observed a positive correlation between laboratory TAT and ED LOS; based on calculations from its efficiency model, TAT reductions of 5, 10, and 15 minutes could potentially allow an annual total of 127, 256, and 386 additional patients to be admitted, respectively (Kaushik [19]). Unfortunately, laboratory data on specimen TAT commence only when the specimens arrive in the lab. The TAT on specimens from collection to delivery to the lab is not a standard laboratory metric because the procurement process happens in a department outside the laboratory’s jurisdiction. Both the ED and the laboratory conduct thorough investigations on any delays between specimen collection at the ED and specimen delivery to the laboratory. Even though most cases were reconciled and attributed to common causes of delay (nurses who were distracted and left specimens in a pocket, collectors who forgot to activate the pneumatic tube system (PTS), and lab clerks who incorrectly prioritized the influx of specimens), the damage had already been done. Delays of specimen submission in this pre-analytical phase prompted the conduct of this field experiment, which allowed the laboratory personnel to intervene when ED specimens took an unusually long time to arrive at the laboratory. This study's focus was the collaborative effort between the ED and the laboratory to improve the TAT between specimen collection and specimen delivery to the lab in the pre-analytical phase. This study's research question was: Is there a statistically significant difference in the average receive time of specimens from the ED and the average time of disposition by ED providers between the periods of the field experiment?

Methods and Materials

Research Design

We conducted the study at the ED and laboratory departments of Mount Sinai Hospital at Queens (MSQ), a community hospital in a middle-class, commercial neighborhood of western Queens, New York, and part of the eight-hospital Mount Sinai Health System. As the only hospital in Queens designated by both the New York State Department of Health and the Joint Commission as a primary stroke center, MSQ became the first choice of stroke patients, so response time and throughput are closely monitored. A four-month (August to November 2018) joint field experiment between the ED and the lab was conducted at MSQ, in which laboratory clerks documented each call to ED nurses about any collected specimen that remained unreceived after 21 minutes, using real-time data posted in the HIS by the Epic Rover once ED HCWs collected the specimens. Through secondary data collection, we used a pretest-posttest research design to determine the effectiveness of calling ED nurses when the lab had not received specimens within 21 minutes of collection. The laboratory monitored the ED patient visit time in minutes from arrival to ED provider disposition and from specimen collection at the ED to receive time in the laboratory. We chose to use the time from the moment the patient arrives in the ED to the time the ED provider makes a disposition. The decision to use the time of disposition instead of the length of stay was made because several other factors that impact the length of stay, such as delayed services in radiology, cardiology, pharmacy, and respiratory care, and the availability of specialists, were not of interest in this study.

Furthermore, we selected the same range of months as the interventions in the previous year (pre-intervention) and in the year after the interventions (post-intervention) to obtain the TAT data. According to the National Hospital Ambulatory Medical Care Survey of 2017 conducted by the CDC, ED visits varied by season, with winter having the highest number of visits in 2017 at over 43 million, followed by summer (Kang, 2017). The seasonal variation in ED visits was the main reason we chose to compare data from different years over the same range of months.

The Conceptual Model

The conceptual model of the study was the brain-to-brain loop concept of laboratory testing (Figure 1), adapted from Plebani, et al. [14] and summarized by Dasgupta [4]. There are eight steps in the brain-to-brain loop concept of laboratory testing. In step 1, the right question is asked of the patient by the clinician or physician. In step 2, the proper test is ordered by the physician. In step 3, the ED HCW uses the Epic Rover to positively identify the patient and the corresponding lab orders. In step 4, the right sample is collected at the correct time, with appropriate patient preparation. In step 5, the proper technique is used to collect the sample to avoid contamination with intravenous fluids, tissue damage, prolonged venous stasis, or hemolysis. In step 6, the sample is transported adequately to the laboratory, stored at the right temperature, processed for analysis, and analyzed in a manner that avoids artifactual changes in the measured analyte levels. In step 7, the lab clerk processes the specimens in preparation for analysis. In step 8, the analytical assay measures the concentration of the analyte corresponding to its “true” level (compared to a “gold standard” measurement) within a clinically acceptable margin of error, also known as the total acceptable analytical error (TAAE). Also at step 8, the raw data are verified by a lab tech so that the report reaching the clinician contains the right result, together with interpretative information, such as a reference range and other comments, aiding clinicians in the decision-making process.

Figure 1

Instruments

The laboratory used real-time collection information transmitted by the Epic Rover to the HIS and displayed it on a laboratory monitor refreshed every three minutes. The Epic Rover is a cell-phone device with a mounted barcode scanner that the treatment team uses to positively identify patients and, in real time, document medication administration, update vital signs, review charts, and document specimen collection, among many other features. The sizeable flat-screen monitor mounted on the wall at the Central Processing Area (CPA) shows every specimen collected by ED staff and changes color depending on how long a collected specimen has remained un-received at the laboratory. Laboratory clerks used the ED nurse user code to communicate directly with ED staff via the Vocera phones regarding any specimen collected but not received in the laboratory after 21 minutes.

Study Participants

We filtered study subjects using secondary data based on two requirements:

1) Patients seen in the ED between August to November of three different years (2017, 2018, and 2019) who required only laboratory tests based on clinical manifestations and did not have any service from ancillary departments such as cardiology, pharmacy, physical therapy, or radiology.

2) Laboratory results from patients that met requirement number one and also generated a critical value that prompted notification to an ED treatment team member. After the exclusion of incomplete records on acuity level and arrival method, the final data set contained 552 patients, wherein 177, 194, and 181 were seen in the ED in 2017 (pre-intervention), 2018 (intervention), and 2019 (post-intervention), respectively.

Study Variables

This study focused on the TAT from the time of collection to the time the specimen was received in the laboratory. Johnson [18] reported that laboratories consistently met the target TAT 93% of the time, while MSQ consistently met the TAT target 92% of the time; however, that study overlooked the variables in the pre-analytical phase, specifically the times from patient arrival to provider order computer entry (door to order). Previously, Holland [17] concluded that the elimination of batch testing, using an automated line that continuously processed specimens, improved the laboratory TAT, decreasing its contribution to extended ED LOS. Even though both previous studies addressed the pre-analytical phase, the difference with our study was the focus on the delayed specimen handoffs (outliers) from the ED collectors to the laboratory. Thus, in this study, there were two dependent variables (both continuous variables):

a. TAT from test order entry to laboratory specimen receipt (TAT on COLREC), i.e., the time span (in minutes) from the time the ED provider placed the lab order to the time the Central Processing Area (CPA) department of the lab received the specimens, and

b. TAT from ED arrival to disposition (TAT on DISPO), i.e., the time span (in minutes) from the arrival of the ED patient to the time the ED provider made a disposition. The hospital information system (HIS) provided TAT in minutes from ED arrival to disposition, and the Laboratory Information System (LIS) provided TAT in minutes from test order entry to laboratory specimen receipt.

The independent variable was period, a categorical variable with 3 levels:

1) Pre-intervention,
2) Intervention, and
3) Post-intervention.

The control variables were:

a. Mode of arrival, a categorical variable with two levels (EMS vs. walk-in or self),
b. Type of test, a categorical variable with four levels (CMP, ETOH, CBC/Coag, and other tests, which comprise lactic acid and troponin), and
c. Acuity level, a categorical variable with four levels (1 = immediate, 2 = emergent, 3 = urgent, and 4 = less urgent, representing acuity level from high to low).

Statistical Analysis

Data were imported into and analyzed using SPSS version 23 (IBM Corp., Armonk, NY). We examined the data for missing values. Subjects with missing values in any of the study variables were excluded from the data analysis. Frequency tables (for categorical variables) and descriptive statistics (for continuous variables) were used to summarize the data. Histogram plots were used to examine the distribution of the dependent variables.

To answer the research question, a multivariate analysis of variance (MANOVA) (Johnson [24,25]) was proposed, as MANOVA can be used to determine the relationship between multiple dependent variables and independent variables. There were two dependent variables:

a. TAT on DISPO and
b. TAT on COLREC, one independent variable (period), and three control variables (mode of arrival, type of test, and acuity level).

As suggested by Olson [25,26], the Pillai-Bartlett trace statistic is more robust than other multivariate statistics and hence was used as the test statistic in this study to test whether there was a relationship between the independent variable and the dependent variables after controlling for the control variables. A p-value < 0.05 indicated significance at the 0.05 level. If the multivariate test results were significant, two analyses of variance (ANOVA), one for each dependent variable, were to be conducted to investigate the effects of the independent variable on each dependent variable. To ensure the validity of the analysis results, we examined the following three assumptions of the MANOVA:

1) Independence of observations,
2) Multivariate normality of the dependent variables, and
3) Equality of variance-covariance matrices of the dependent variables (Johnson [18,25]). For this study, we observed data from three different periods (pre-intervention, intervention, and post-intervention), and hence it was reasonable to assume the independence of observations.
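
For reference, the Pillai-Bartlett trace used above is the standard textbook multivariate statistic; this definition is general background, not taken from the study itself:

$$V \;=\; \operatorname{tr}\!\left[\mathbf{H}\,(\mathbf{H}+\mathbf{E})^{-1}\right] \;=\; \sum_{i=1}^{s}\frac{\lambda_i}{1+\lambda_i},$$

where $\mathbf{H}$ is the hypothesis sums-of-squares-and-cross-products (SSCP) matrix, $\mathbf{E}$ is the error SSCP matrix, $\lambda_i$ are the eigenvalues of $\mathbf{H}\mathbf{E}^{-1}$, and $s$ is the number of nonzero eigenvalues. Larger values of $V$ indicate a stronger multivariate effect.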

To achieve normality, we performed data transformations on both dependent variables. Specifically, a square-root transformation was applied to TAT on DISPO, and a log transformation was applied to TAT on COLREC. For transformed TAT on DISPO, the skewness was 0.94 and the kurtosis was 0.91, indicating the data were very close to normal. Figure 2 shows these features in the histogram plot for transformed TAT on DISPO. The QQ plot for transformed TAT on DISPO (Figure 3) suggested that the data were normally distributed, as the data points were very close to the 45-degree line. For transformed TAT on COLREC, the skewness was 0.04 and the kurtosis was 0.72, indicating the data were also very close to normal. Figure 4 shows these features in the histogram plot for transformed TAT on COLREC. The QQ plot for transformed TAT on COLREC (Figure 5) also suggested that the data were normally distributed, as the data points were very close to the 45-degree line. Therefore, we concluded that the data for transformed TAT on DISPO and transformed TAT on COLREC were normally distributed, and hence univariate normality was attained.

Since the univariate normality of each dependent variable was attained, multivariate normality was then assessed via chi-squared QQ plots based on the squared Mahalanobis distances. According to Burdenski [27], Mahalanobis distances are the generalized squared distances of the data points from the means. When the data are multivariate normally distributed, the squared Mahalanobis distances follow the chi-squared distribution with p degrees of freedom (p = 2, as there were two dependent variables). The data points in the chi-squared QQ plot (Figure 6) formed an approximate line, and hence it was concluded that multivariate normality was attained for transformed TAT on DISPO and transformed TAT on COLREC (the dependent variables after data transformation).

The Box’s M value of 11.465 was associated with p = 0.077 (Table 1), which was interpreted as nonsignificant based on Hair [25]. Thus, the covariance matrices between the groups were assumed to be equal for the MANOVA. Thus, all three model assumptions for MANOVA,

a. Independence of observations,
b. Multivariate normality of the dependent variables, and
c. Equality of variance-covariance matrices of the dependent variables, were satisfied, and it was adequate to analyze the data using MANOVA.
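
The normality screening described above can be sketched outside SPSS; the following TypeScript, using illustrative data only (not the study's records), applies the study's square-root and log transformations and computes sample skewness and excess kurtosis, which should be near zero for approximately normal data.

```typescript
// Sample skewness and excess kurtosis from central moments
// (simple biased estimators; adequate for a screening sketch).
function moments(data: number[]) {
  const n = data.length;
  const mean = data.reduce((a, b) => a + b, 0) / n;
  const devs = data.map((x) => x - mean);
  const m2 = devs.reduce((a, d) => a + d * d, 0) / n;
  const m3 = devs.reduce((a, d) => a + d ** 3, 0) / n;
  const m4 = devs.reduce((a, d) => a + d ** 4, 0) / n;
  return {
    skewness: m3 / Math.pow(m2, 1.5),
    kurtosisExcess: m4 / (m2 * m2) - 3,
  };
}

// Same transformations as the study: sqrt for TAT on DISPO, log for TAT on COLREC.
const tatDispo = [120, 250, 318, 400, 610].map(Math.sqrt); // illustrative minutes
const tatColrec = [5, 12, 20, 31, 90].map(Math.log);       // illustrative minutes
console.log(moments(tatDispo), moments(tatColrec));
```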

Figure 2

Figure 3

Figure 4

Figure 5

Figure 6

Table 1: Box’s M Test of Equality of Covariance Matrices.

Note: F = F-statistic; df1 = numerator degrees of freedom for the F-statistic; df2 = denominator degrees of freedom for the F-statistic; p = p-value.

Results

Characteristics of Patients

MSQ retained almost 80% (79/100) of the nurses during the intervention period until 2019, according to the MSQ payroll department. After the exclusion of incomplete records on acuity level and arrival method, the final data set contained 552 patients, wherein approximately one-third of the patients were in each intervention period (32.1% pre-intervention, 35.1% intervention, and 32.8% post-intervention), as shown in Table 2. Table 2 also presents the characteristics of these patients in terms of the study variables of interest, including:

a. Mode of arrival,
b. Type of test, and
c. Acuity level.
Nearly 60% of the subjects arrived with EMS (59.2%). Slightly under half of the subjects underwent the ETOH test (48.0%). Approximately half of the subjects were considered urgently in terms of the acuity level (50.7%).

Table 2: Summary of Categorical Study Variables on the Effectiveness of Laboratory Interventions.

Note: n = number of patients seen in the Emergency Department who only had laboratory services and generated critical laboratory values.

Descriptive Statistics of the Dependent Variables

Summary statistics of TAT on DISPO and TAT on COLREC are presented in Table 3. Overall, before data transformation, the average TAT on DISPO was 318.51 minutes (SD = 165.46), and the average TAT on COLREC was 20.00 minutes (SD = 21.73). When examining the TAT on DISPO by period, the average TAT on DISPO was highest in the pre-intervention period (M = 350.12, SD = 178.26) and lowest in the intervention period (M = 291.94, SD = 151.35). Similarly, the average TAT on COLREC was highest in the pre-intervention period (M = 31.61, SD = 23.27) and lowest in the intervention period (M = 14.07, SD = 14.90).

Table 3: Summary of Statistics of Time of Disposition by ED Providers (in minutes) and Receive Time of Specimens from ED.

Results of MANOVA

We conducted a MANOVA with two dependent variables (transformed TAT on DISPO and transformed TAT on COLREC), one independent variable (period), and three control variables (mode of arrival, type of test, and acuity level) to examine whether there was a relationship between the dependent variables and the intervention period. Table 4 shows the multivariate test results for the effects of period, arrival method, type of test, and acuity level on the two dependent variables. According to the MANOVA results (Table 4), the effect of period on the dependent variables was statistically significant (Pillai’s Trace = 0.211, F(4, 1092) = 32.124, p < 0.001, multivariate η2 = 0.105). The effect of arrival method on the dependent variables was not statistically significant (Pillai’s Trace = 0.010, F(2, 545) = 2.732, p = 0.066, multivariate η2 = 0.010). The effect of type of test on the dependent variables was not statistically significant (Pillai’s Trace = 0.002, F(2, 545) = 0.571, p = 0.565, multivariate η2 = 0.002). The effect of acuity level on the dependent variables was not statistically significant (Pillai’s Trace = 0.007, F(2, 545) = 1.969, p = 0.141, multivariate η2 = 0.007). Note that multivariate η2 represents the effect size of each variable; for period, multivariate η2 = 0.105 indicates that approximately 10.5% of the multivariate variance of the dependent variables was associated with period.

Results of ANOVA

Because the results of the MANOVA were significant, we next examined the univariate ANOVA results for each dependent variable. Two ANOVAs were conducted, one for each dependent variable (transformed TAT on DISPO and transformed TAT on COLREC). Each ANOVA included one independent variable (period) and three control variables (mode of arrival, type of test, and acuity level).

Results of ANOVA for Transformed TAT on DISPO

We conducted an ANOVA with transformed disposition time as the dependent variable, one independent variable (period), and three control variables (mode of arrival, type of test, and acuity level). Tables 5-7 present the analysis results. The R2 = 0.089 (Adjusted R2 = 0.074; Table 5) indicated that 8.9% of the total variation in the dependent variable, transformed TAT on DISPO, can be explained by the variables in the model: period, mode of arrival, type of test, and acuity level. Partial eta squared (Table 5) represents the effect size, which measures the amount of the variability in the dependent variable (transformed TAT on DISPO) attributed to each variable in the model. The partial eta squared was 0.019, 0.001, 0.032, and 0.023 for period, mode of arrival, type of test, and acuity level, respectively, indicating that 1.9%, 0.1%, 3.2%, and 2.3% of the variability in the dependent variable (transformed TAT on DISPO) could be explained by period, mode of arrival, type of test, and acuity level, respectively.

Effects of Intervention Period. According to the ANOVA results, there was a statistically significant difference in the dependent variable (transformed TAT on DISPO) based on period (F(2, 542) = 5.254, p = 0.005; Table 5). The estimated marginal means of transformed disposition time were 15.917, 14.407, and 15.187 for pre-intervention, intervention, and post-intervention, respectively (Table 6). These values can be transformed back to the original scale by squaring (Altman [28]); for example, the pre-intervention marginal mean corresponds to 15.917², or about 253 minutes.

Table 4: Multivariate Correlation Between the Effects of Period, Arrival Method, Type of Test, Acuity Level on the Dependent Variables (Transformed TAT on DISPO and Transformed TAT on COLREC).

Note: F = F-statistic; df1 = numerator degrees of freedom for the F-statistic; df2 = denominator degrees of freedom for the F-statistic; p = p-value; η2 = Partial Eta squared.

Table 5: Tests of Between-Subjects Effects (Transformed TAT on DISPO).

Note: R2 = 0.089 (Adjusted R2 = 0.074); df = degrees of freedom; MS = mean square: F = F-statistic; η2 = Partial Eta squared; transformed data.

Table 6: Estimated Marginal Means of the Dependent Variable (Transformed TAT on DISPO).

Table 7: Results of Pairwise Comparisons on the Dependent Variable (Transformed TAT on DISPO).

Note: SE= standard error; p = p-value; CI = confidence interval; lower = lower bound of CI; upper = upper bound of the CI. P-values were adjusted using Bonferroni’s method for pairwise comparisons; transformed data.

Table 8: Tests of Between-Subjects Effects (Transformed TAT on COLREC).

Note: R2 = 0.209 (Adjusted R2 = 0.196); df = degrees of freedom; MS = mean square: F = F-statistic; η2 = Partial Eta squared; transformed data.

Nonetheless, according to the results of the pairwise comparisons presented in Table 7, the transformed TAT on DISPO was statistically significantly higher in pre-intervention than in intervention (Mean difference = 1.511, SE = 0.467, p = 0.001). There was no statistically significant difference in transformed disposition time between pre-intervention and post-intervention (p = 0.130) or between intervention and post-intervention (p = 0.093).

Effects of Arrival Method. There was no statistically significant difference in transformed TAT on DISPO based on arrival method (F(1, 542) = 0.471, p = 0.493; Table 5). The estimated marginal means of transformed TAT on DISPO were 15.325 and 15.016 for patients who arrived via EMS and walk-in, respectively (Table 6).

Effects of Type of Test. There was a statistically significant difference in the dependent variable (transformed TAT on DISPO) based on the type of test (F(3, 542) = 6.033, p < 0.001; Table 5). The estimated marginal means of transformed TAT on DISPO were 15.284, 16.231, 16.671, and 12.496 for patients with different types of tests, namely CMP, ETOH, CBC plus Coag, and other tests (Lac/Trop), respectively (Table 6). According to the results of pairwise comparisons presented in Table 7, the transformed TAT on DISPO was statistically significantly lower for patients with critical CMP results than for patients with critical ETOH results (Mean difference = -0.947, SE = 0.466, p = 0.043); statistically significantly lower for patients with critical lactic acid or troponin results than for patients with critical CMP results (Mean difference = -2.788, SE = 0.974, p = 0.004); statistically significantly lower for patients with critical lactic acid or troponin results than for patients with critical ETOH results (Mean difference = -3.736, SE = 0.999, p < 0.001); and statistically significantly lower for patients with critical lactic acid or troponin results than for patients with critical CBC or Coag results (Mean difference = -4.175, SE = 1.123, p < 0.001). There was no statistically significant difference in transformed disposition time between patients with critical CMP results and patients with critical CBC or Coag results (p = 0.053) or between patients with critical ETOH results and patients with critical CBC or Coag results (p = 0.551).

Effects of Acuity Level. There was a statistically significant difference in the dependent variable (transformed TAT on DISPO) based on acuity level (F(3, 542) = 4.234, p = 0.006; Table 5). The estimated marginal means of transformed TAT on DISPO were 12.743, 16.879, 16.760, and 14.300 for patients with different levels of acuity, namely immediate, emergent, urgent, and less urgent, respectively (Table 6). According to the results of pairwise comparisons (Table 7), the transformed TAT on DISPO was statistically significantly lower for immediate patients than for emergent patients (Mean difference = -4.136, SE = 1.536, p = 0.007); statistically significantly lower for immediate patients than for urgent patients (Mean difference = -4.016, SE = 1.528, p = 0.009); statistically significantly higher for emergent patients than for less urgent patients (Mean difference = 2.579, SE = 1.077, p = 0.017); and statistically significantly higher for urgent patients than for less urgent patients (Mean difference = 2.459, SE = 1.061, p = 0.021). There was no statistically significant difference in transformed TAT on DISPO between emergent patients and urgent patients (p = 0.766) or between immediate patients and less urgent patients (p = 0.394).

Results of ANOVA for Transformed TAT on COLREC

We conducted an ANOVA with transformed TAT on COLREC as the dependent variable, one independent variable (period), and three control variables (mode of arrival, type of test, and acuity level). Tables 8-10 present the analysis results. The R2 = 0.209 (Adjusted R2 = 0.196; Table 8) indicated that 20.9% of the total variation in the dependent variable, transformed TAT on COLREC, can be explained by the variables in the model: period, mode of arrival, type of test, and acuity level.

Effects of Intervention Period. According to the ANOVA results, there was a statistically significant difference in the dependent variable (transformed TAT on COLREC) based on period (F(2, 542) = 65.220, p < 0.001; Table 8). The estimated marginal means of transformed TAT on COLREC were 1.337, 0.937, and 0.913 for pre-intervention, intervention, and post-intervention, respectively (Table 9). These values can be transformed back to the original scale by taking antilogs (Altman [28]); for example, assuming base-10 logarithms, the pre-intervention marginal mean corresponds to 10^1.337, or about 21.7 minutes. Nonetheless, according to the results of pairwise comparisons (Table 10), transformed TAT on COLREC was statistically significantly higher in pre-intervention than in intervention (Mean difference = 0.400, SE = 0.041, p < 0.001). Transformed TAT on COLREC was also statistically significantly higher in pre-intervention than in post-intervention (Mean difference = 0.424, SE = 0.042, p < 0.001). There was no statistically significant difference in transformed TAT on COLREC between intervention and post-intervention (p = 0.562).

Table 9: Estimated Marginal Means of the Dependent Variable (Transformed TAT on COLREC).

Table 10: Results of Pairwise Comparisons on the Dependent Variable (Transformed TAT on COLREC).

Note. SE = standard error; p = p-value; CI = confidence interval; lower = lower bound of CI; upper = upper bound of the CI. P-values were adjusted using Bonferroni’s method for pairwise comparisons; transformed data.

Effects of Arrival Method: There was no statistically significant difference in transformed TAT on COLREC based on arrival method (F(1, 542) = 0.716, p = 0.398; Table 8). The estimated marginal means of transformed TAT on COLREC were 1.079 and 1.046 for patients who arrived via EMS and walk-in, respectively (Table 9).

Effects of Type of Test: There was no statistically significant difference in the dependent variable (transformed TAT on COLREC) based on the type of test (F (3, 542) = 0.187, p = 0.905; Table 8). The estimated marginal means of transformed TAT on COLREC were 1.075, 1.068, 1.088, and 1.017 for patients with different types of tests, including CMP, ETOH, CBC plus Coag, and other tests (Lac/ Trop), respectively (Table 9).

Effects of Acuity Level: There was no statistically significant difference in the dependent variable (transformed TAT on COLREC) based on acuity level (F(3, 542) = 1.331, p = 0.263; Table 8). The estimated marginal means of transformed TAT on COLREC were 0.915, 1.064, 1.109, and 1.161 for patients with different levels of acuity, namely immediate, emergent, urgent, and less urgent (Table 9).

Discussion

Due to the laboratory's interventions, both the mean TAT from patient arrival to disposition and the mean TAT from specimen collection to receipt in the laboratory significantly improved, from 350.1 minutes to 291.9 minutes and from 31.6 minutes to 14.1 minutes, respectively, between the pre-intervention and the intervention periods. In other words, the average TAT in minutes of both the time from collection to delivery to the laboratory (TAT on COLREC) and the time from patient arrival at the ED to provider disposition (TAT on DISPO) significantly improved during the months when laboratory clerks reached out to ED nurses once more than 21 minutes had passed since specimen collection without delivery to the laboratory. This finding confirmed the results of past research that maximizing the use of information technology in the pre-analytical phase could reduce the TAT of the specimen workflow (Baer [5-21]). Inter-departmental collaboration improved processes that eventually led to better patient outcomes. An efficient process of specimen collection and transportation in the pre-analytical phase at the ED put the workflow on a faster track toward generating the clinical data necessary for a provider's disposition. The study also showed that the laboratory intervention made a lasting impact, in that TAT on both variables increased only slightly in the year after the lab stopped the intervention.

As Hammerling [10] pointed out, interdepartmental communication and cooperation played a significant role in the testing process. Both improper collection and delayed specimen delivery affect the accuracy of laboratory results. The laboratory must share a steady flow of information on proper collection techniques to reduce pre-analytical errors and reduce TAT between specimen collection and specimen delivery to the lab in the pre-analytical phase (Ghaedi [29-31]).

Conclusion

In this study, several key reasons were identified for failing the 21-minute TAT requirement: Epic Rover training of HCWs, use of paper requisitions, lack of adequate staff, and pneumatic tube system failure. Of these four factors, the most actionable is the lack of committed use of the Epic Rover. A suggested follow-up is to construct a weekly metric in which every outlier specimen is matched with a nonuse of the Epic Rover and the involved HCW. This scorecard should be presented to ED leadership as a tool to drive the improvement of HCW skill training. Upgrading the specimen collection process through the implementation of technical devices enabled the collaboration of the ED and the laboratory to reduce delays or TAT outliers and improve patient outcomes. Automation in the pre-analytical phase, such as the Epic Rover, PTS, and Vocera phones, can significantly improve TAT in the pre-analytical phase. However, there is still the human factor, such as being distracted during the specimen collection process, which negates the advantages of modern technology. Such distractions of ED HCWs delayed specimen transport to the lab, affecting the total TAT of the specimen workflow. Even though the laboratory could not intervene on every single outlier, the few phone calls early in the shift set the tone for non-laboratory HCWs on timely specimen submission. The study also raised awareness that the outliers worsened a year after the intervention but remained far better than the pre-intervention data.


Journals on Medical Microbiology

Root Coverage Using the Tunnel Technique Associated with Connective Tissue Graft or Leukocyte- and Platelet-Rich Fibrin Membranes: A Randomized Clinical Trial

Introduction

Gingival recession is described as the apical migration of the gingival margin beyond the cementoenamel junction, exposing the tooth root surface [1]. It is a very prevalent problem affecting adults and children [2], and the most common etiological factors associated with gingival recessions are anatomical conditions [3,4], pathological conditions [5], iatrogenic factors [6], and mechanical trauma [7]. Besides the aesthetic problems, gingival recessions can cause dentin hypersensitivity, difficulty in dental cleaning, carious and non-carious cervical lesions, and periodontal attachment loss [8]. The success of gingival recession treatment is directly related to several factors that can influence root coverage, such as the classification of the gingival recession and the amount of proximal attachment loss, interventional factors, systemic factors, professional experience, and post-operative complications [9-11]. The connective tissue graft (CTG) technique described by Edel [12] is based on the fact that the connective tissue carries a genetic message that induces the epithelium to become keratinized. Studies have shown that the coronally repositioned flap technique associated with CTG is the gold standard for root coverage treatment [1,13]. However, the main disadvantage of this technique is the need for a second surgical site (the palate is usually the donor area), increasing patient discomfort. Because of these limitations, investigations regarding tissue substitutes, such as autogenous membranes from platelet concentrates, have been carried out over the last twenty years [14].

Since the removal of anticoagulants and the modification of centrifugation protocols, many essential aspects of tissue regeneration with platelet-rich plasma (PRP) have been described. Additionally, the introduction of leukocytes into platelet concentrates had a significant impact on the use of platelet-rich fibrin (PRF) for tissue regeneration and wound healing. PRF membranes are easy to prepare and manipulate and, when used for the treatment of gingival recessions, restore the functional properties of the gingiva and promote the maintenance and integrity of the keratinized gingival tissue [15]. According to a systematic review by Miron & Choukroun [16], a split-mouth randomized clinical trial comparing root coverage with connective tissue graft or PRF showed statistically significant results in the degree of root coverage after six and 12 months for both approaches, with no significant difference between the two groups. However, when assessing the improvement in keratinized tissue, studies have shown that the CTG promoted a higher gain compared to the PRF membranes [17,18]. The objective of this split-mouth randomized clinical trial is to compare the results obtained using the tunnel technique – a flap design that does not include the papillae, promoting superior aesthetic results – associated with CTG or PRF membranes in terms of root coverage, gingival thickness (assessed by high-quality tomographic examination), and keratinized tissue gain in Miller Class I and II gingival recessions.

Materials and Method

Study Design and Patient Selection

The local Ethical Committee of São Leopoldo Mandic School of Dentistry (Campinas, Brazil) approved the study (number 92738518.0.0000.5374), and it was registered at the Brazilian Clinical Trials Registry (REBEC) under the number RBR-4msz4x. Eleven patients between 28 and 62 years of age, classified as ASA I (healthy patients, according to the physical status classification system of the American Society of Anesthesiologists – ASA) and presenting Miller class I and II gingival recessions (without interproximal attachment loss), were selected at a private clinic in Porto Alegre, Brazil. The sample consisted of 59 teeth (29 sites treated with CTG and 30 sites treated with PRF).

The inclusion criteria were the presence of Miller class I and II vestibular recessions in at least one tooth in each quadrant of the maxilla, from the central incisor to the second premolar, in vital teeth without bleeding on probing. Smokers, pregnant or lactating women, and patients with systemic diseases (ASA II) were excluded from the study. This is a split-mouth, single-blind, randomized clinical trial, and group randomization was performed immediately before the surgical procedure. The recessions were divided into a test group (treated with the tunnel technique + PRF; n = 30) and a control group (treated with the tunnel technique + CTG; n = 29). At the postoperative visit, the patients completed a visual analogue scale (VAS) – a numerical scale ranging from zero to 10 – to describe the pain intensity observed in the recessions treated in the test or control groups. The CTG was removed from the palatal area on the same side that received the control treatment, favoring the accuracy of the patients’ responses on the VAS scale.

Clinical Examinations

The clinical examination was performed by two experienced and calibrated examiners (EJ and CFS), blinded to the treatments. The clinical parameters examined in each gingival recession were:

a) Visible plaque index: presence or absence of plaque deposits;

b) Clinical attachment level and probing depth at six sites per tooth;

Those measurements were performed at baseline and at three and six months after the surgical procedure, using a 15 mm North Carolina periodontal probe (Hu-Friedy®).

c) Gingival thickness and amount of attached gingiva: measured at the central point of the buccal surface. The gingival thickness was measured using cone beam computed tomography (Prexion tomograph, 0.10 mm voxel) with soft tissue retraction. A linear measurement of the soft tissue was performed 1 mm above the bone crest at its most cervical point. The tomographic examinations were performed at the same radiological center and evaluated by a single examiner at baseline (before the surgical procedures) and six months after surgery. The measurement is described in Figure 1.

Figure 1

PRF Preparation

The PRF was prepared according to the protocol described by Choukroun, et al. [14]. Immediately before the beginning of the surgical procedure, 10 ml of blood was collected by venipuncture into sterile glass tubes without anticoagulant. The blood was collected quickly, and the tubes were centrifuged for ten minutes at a relative centrifugal force (RCF) of 200 × g in the Montserrat Fibrinfuge 25® centrifuge (Zenith Lab Co, Changzhou, Jiangsu, China). Due to the difference in density, the tube content separated into three basic parts after centrifugation: red cells at the bottom, acellular plasma at the surface, and PRF in the middle of the tube. After the centrifugation cycle, autologous leukocyte- and platelet-rich fibrin clots were obtained in the tubes (Figure 2). They were removed from the tubes with Dietrich tweezers, separated from the red clot, placed in a metal box, and pressed to obtain 1 mm thick membranes. The fibrin in the liquid phase (monomeric) was collected from the centrifuge tube with a sterile 3 ml transfer pipette (Shandong Weigao, Weihai, China) and used to glue the fibrin matrices. Three membranes were positioned on top of each other, agglutinated with the liquid-phase fibrin, and pressed in the metal box to form the final membrane. This process is described in Figure 3.

Figure 2

Figure 3
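
Because the protocol reports spin force as RCF rather than a dial speed, reproducing it on another centrifuge requires the standard RCF-to-rpm conversion, RCF = 1.118 × 10⁻⁵ × r × N² (r = rotor radius in cm, N = speed in rpm). A minimal sketch follows; the 8 cm rotor radius is a hypothetical value, and the actual radius of the device should be taken from its specification.

import math

rcf_g = 200.0  # relative centrifugal force from the protocol, in multiples of g
r_cm = 8.0     # assumed rotor radius in cm; hypothetical, check the device manual

# Standard conversion RCF = 1.118e-5 * r(cm) * rpm^2, solved for rpm.
rpm = math.sqrt(rcf_g / (1.118e-5 * r_cm))
print(f"Spin at about {rpm:.0f} rpm to reach {rcf_g:.0f} x g")  # ~1495 rpm here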

The Donor Area

For intra- and extraoral disinfection, 0.12% and 2% chlorhexidine digluconate solutions were used, respectively. Local infiltrative anesthesia was performed using articaine 4% with epinephrine 1:100,000 (Articaine®, DFL-Brasil). The CTG was harvested from the palate in the region between the first premolar and the second molar, depending on the needs of each patient. The CTG was removed with the bilaminar technique using a Swann Morton® blade [15]. The first horizontal incision was made 2 mm from the gingival margin, perpendicular to the palatal surface, with 1.5 to 2 mm of depth. The second horizontal incision was parallel to the first one at a distance of 2 to 3 mm. The graft was removed with mesial and distal vertical incisions, with a uniform thickness of 1.5 to 2 mm (Figure 4). The CTG was de-epithelialized (Figure 5) and kept in saline solution until its use in the receptor area. The donor area was protected with PRF membranes and sutured with 5.0 nylon thread (Ethicon®).

Figure 4

Figure 5

Receptor Area

The procedures were performed bilaterally on the maxilla, with one side allocated to the control group and the other to the test group. Surgical procedures were performed by a single operator (TCU) with a magnifying loupe (2.5x, Surgitel Dent-All Innovation, Netherlands) and a headlamp. After local infiltrative anesthesia, root preparation was performed by scaling and root planing of the gingival recession with Gracey curettes (Hu-Friedy®, RJ, Brazil), allowing smear layer removal. This procedure produced a flat or negative root surface, favoring the positioning of the CTG or PRF membrane under the flap with less tension. The area was irrigated with saline.

The subperiosteal incision was performed with a Beaver microblade (Swann Morton®, Sheffield, England) at the gingival margin of the treated and adjacent teeth, without releasing the interdental papillae and favoring a full-thickness horizontal flap. Tunneling instruments (Hu-Friedy®, RJ, Brazil) were used to create a tunnel and extend the flap beyond the mucogingival junction. Apically, the flap was divided from the mucogingival line to ensure coronal mobilization, tension release, and passive coronal displacement of the flap.

The PRF membrane (three agglutinated membranes) or the CTG was inserted and adapted under the flap using the tunneling technique, completely covering the root recessions. Stabilizing simple sutures were placed on the mesial and distal portions of the graft using 5.0 nylon thread (Ethicon®, Johnson & Johnson). Suspensory sutures involving the contact point in the interproximal area were performed to advance the flap coronally and to obtain root coverage without tension, also with 5.0 nylon thread (Ethicon®, Johnson & Johnson). Composite resin without acid conditioning was added to the contact points before suturing to assist the mechanical stabilization of the sutures (Figures 6 & 7).

Figure 6

Figure 7

Postoperative Care

Each patient was instructed to apply an ice pack externally to the operated area to minimize edema and postoperative bleeding. Amoxicillin 875 mg (every 12 hours for seven days), ibuprofen 600 mg (every 12 hours for three days), and paracetamol 750 mg (every six hours for as long as there was pain) were prescribed. Mouthwash with 0.12% chlorhexidine digluconate (Periogard, Colgate-Palmolive), twice a day, was recommended for two weeks. The sutures were removed after 12 days, and after that the patients were instructed to perform oral hygiene using a soft toothbrush and reduced force. Healing occurred naturally, and patients were reevaluated one, three, and six months after surgery.

Statistical Analysis

After the patients’ characterization, probing depth and clinical attachment level were evaluated with descriptive analysis to determine the periodontal health of the subjects included in the research. The efficacy of root coverage with CTG or PRF across the time points (baseline, three months, and six months), as well as the interaction between these factors, was analyzed with two-way repeated-measures analysis of variance (ANOVA). For multiple comparisons, Tukey’s tests were used. Friedman tests were applied to compare the percentage of root coverage obtained with the treatments, and also to evaluate whether the treatment with PRF affected the postoperative pain assessed by the VAS. The statistical analysis was performed using SPSS 23 (SPSS Inc., Chicago, IL, USA), with a significance level of 5%.
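
The study performed these analyses in SPSS; as an illustrative stand-in, the same pipeline can be sketched with Python’s statsmodels and scipy on synthetic data (all variable names and values below are hypothetical, not the trial data).

import numpy as np
import pandas as pd
from scipy.stats import friedmanchisquare
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Synthetic long-format data: one recession depth (mm) per patient,
# treatment (CTG/PRF), and time point -- illustrative only.
rows = []
for patient in range(1, 12):
    for treatment in ("CTG", "PRF"):
        for i, time in enumerate(("baseline", "3m", "6m")):
            rows.append((patient, treatment, time, 3.0 - 0.9 * i + rng.normal(0, 0.3)))
df = pd.DataFrame(rows, columns=["patient", "treatment", "time", "recession_mm"])

# Two-way repeated-measures ANOVA: treatment x time, with patient as subject.
res = AnovaRM(df, depvar="recession_mm", subject="patient",
              within=["treatment", "time"]).fit()
print(res.anova_table)

# Friedman test across the three time points for the CTG-treated sites.
ctg = df[df.treatment == "CTG"].pivot(index="patient", columns="time",
                                      values="recession_mm")
stat, p = friedmanchisquare(ctg["baseline"], ctg["3m"], ctg["6m"])
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")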

Results

Of the 11 patients included in the research, 59 maxillary teeth were treated: 3.4% central incisors; 10.2% lateral incisors; 35.6% canines; 33.9% first premolars; and 16.9% second premolars. In one patient, one tooth was treated with PRF and the contralateral tooth with CTG. In three patients, two teeth were treated with PRF and the two contralateral teeth with CTG. Three patients had three teeth treated with PRF and the three contralateral teeth with CTG. One patient had four teeth treated in each group. Three patients did not have the same number of teeth treated in each group (two teeth treated with PRF and three with CTG, or four teeth with PRF and three with CTG). The probing depth and clinical attachment level of the treated patients are described in Table 1. For the clinical parameter of gingival recession, the two-way repeated-measures ANOVA demonstrated a statistically significant interaction between treatment and time point (p = 0.003). Both treatments presented a reduction in gingival recession over time, with no statistically significant difference between the three- and six-month measures. Although the test group presented a lower gingival recession at baseline (even with randomization before the surgical procedure), there was no statistically significant difference between the treatments after three months. At six months, a lower gingival recession was achieved by the CTG group (Table 2 & Figure 8).

Table 1: Means and standard deviations of the probing depth and the clinical attachment level (in millimeters) of the teeth treated in the study.

Table 2: Means and standard deviations of the gingival recession and the amount of attached gingiva, for each treatment and time point.

Figure 8

Means followed by different capital letters indicate a statistically significant difference between time points within each group. Means followed by different lower-case letters indicate a significant difference between treatments at each time point. General means followed by different capital letters indicate a statistically significant difference between treatments, regardless of time. General means followed by different lower-case letters indicate a statistically significant difference between time points, regardless of treatment.

Regarding the tomographic measures of gingival thickness, a significant interaction between treatment and time point was observed (two-way repeated-measures ANOVA, p = 0.010). The PRF group showed no statistically significant change in gingival thickness over time, while the treatment with CTG promoted an increase in gingival thickness after six months. Although there was no difference between the groups regarding gingival thickness at baseline, at six months the CTG group presented higher tissue thickness (Table 3 & Figure 9).

Table 3: Gingival thickness (in millimeters) for each treatment and time point.

Figure 9

There was no statistically significant difference (Friedman test, p = 0.194) in the percentage of root coverage between CTG (mean: 84.3%; standard deviation: 19.6%; median: 100%) and PRF (mean: 63.4%; standard deviation: 33.6%; median: 67%). Considering the frequency of sites with or without complete root coverage, there was no statistically significant difference between the groups (Fisher’s exact test, p = 0.119) (Table 4). Regarding postoperative pain (Figure 10), lower VAS scores were observed in the PRF group (Friedman test, p = 0.003). While the PRF sites experienced an average pain score of one (minimum and maximum scores of zero and six, respectively), the CTG sites presented an average pain score of four (minimum and maximum scores of one and eight, respectively).

Table 4: Percentage of sites with complete root coverage after six months.

Figure 10
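
For reference, the coverage percentages above follow the convention, standard in the root coverage literature, of expressing the reduction in recession depth relative to baseline, and the complete-coverage comparison is a 2 × 2 Fisher’s exact test. The sketch below is illustrative: the formula is assumed rather than quoted from the authors, and the 2 × 2 counts are reconstructed from the reported percentages (55.2% of 29 CTG sites; 33.3% of 30 PRF sites).

from scipy.stats import fisher_exact

# Assumed standard definition: root coverage as a percentage of baseline depth.
def root_coverage_pct(baseline_mm: float, final_mm: float) -> float:
    return (baseline_mm - final_mm) / baseline_mm * 100

# Hypothetical site: 3.0 mm recession at baseline, 0.5 mm residual at six months.
print(f"{root_coverage_pct(3.0, 0.5):.1f}% coverage")  # 83.3%

# 2 x 2 table of complete root coverage (counts reconstructed from the text).
table = [[16, 13],   # CTG: complete, not complete
         [10, 20]]   # PRF: complete, not complete
odds_ratio, p = fisher_exact(table)
print(f"Fisher's exact p = {p:.3f}")  # compare with the reported p = 0.119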

Discussion

The high prevalence of gingival recessions in adults and the greater demand for root coverage treatment stimulate researchers to investigate alternatives to the CTG, since this approach requires a second surgical site and, consequently, entails higher morbidity. In the present study, PRF was evaluated as an alternative to CTG due to its fibro-promoting and regenerative characteristics. Both methodologies used for the treatment of gingival recessions showed favorable results, considering that there was no statistical difference between the treatments after three months. However, after six months, the sites treated with CTG obtained a roughly 20% higher degree of root coverage compared to the sites covered with PRF. This difference was also observed in the percentage of complete root coverage: 55.2% for the CTG group and 33.3% for the PRF group. A split-mouth, randomized, controlled clinical trial [19], comparing the degree of root coverage using the coronally advanced flap (CAF) technique associated with CTG (n = 30) or PRF (n = 30), obtained 84% root coverage in the CTG group and 77.12% in the PRF group after six months. The percentage of sites with complete coverage was 60% for the CTG and 50% for the PRF, with no statistical difference between groups. Similar results were found by Tunali, et al. [18] in a split-mouth randomized study comparing the root coverage obtained with CAF associated with CTG (n = 22) or PRF (n = 22) after 12 months. That study achieved a degree of coverage of 77.36% and 76.63% for CTG and PRF, respectively, with no statistical difference between groups.

However, in the present study, the sites treated with CTG demonstrated superior results regarding the degree of root coverage and the increase in gingival thickness. A possible explanation for this difference could be the surgical technique used in both groups. The study used the tunnel technique in association with the CTG or PRF grafts. This technique has the advantage of not including the papillae in the flap, which, besides promoting superior aesthetic results, also allows an increase in gingival thickness and in the band of keratinized tissue. Tomographic results were used to evaluate the gingival thickness: when the PRF was used, there was no significant difference over time, while the CTG promoted an increase in gingival thickness after six months. Öncü, et al. [19] also described a statistical difference favorable to the CTG group regarding gingival thickness. Both studies used the bilaminar technique to remove the CTG, a methodology in which the CTG is removed with the epithelium and de-epithelialized outside the mouth.

This favors the removal of a higher-quality CTG, with less adipose tissue and higher density, which promotes less contraction of the graft during healing (Figure 11). Regarding the amount of attached gingiva, both treatments produced an increase in this parameter over time; however, favorable results were observed for CTG when the groups were compared. This result corroborates previous systematic reviews [16,20,21]. Tunali, et al. [18] described no significant differences in the amount of attached gingiva after 12 months. The increase in attached gingiva in the CTG group can be explained by the fact that CTG maintains the genetic expression of the donor site, thus increasing keratinization and gingival thickness in the receptor area [22]. Recent studies with long-term follow-up (over five years) have shown that gingival margins with at least 2 mm of attached gingiva have better stability and can prevent gingival recession [23,24]. The lower keratinization observed in the PRF group is explained by the fact that autologous fibrin stimulates angiogenesis, inducing neovascularization and new tissue formation in the recipient area. For a successful procedure, the quality of the receptor area is crucial: if a band of keratinized tissue is available, it will stimulate the formation of more keratinized tissue.

Figure 11

On the other hand, if only non-adherent mucous tissue is present, it will stimulate the formation of tissue with the same properties, of low quality and without attached gingiva [25]. Randomized clinical trials with more than twenty years of follow-up have emphasized the importance of the tissue biotype for gingival margin stability over time [24]. Techniques using the submerged CTG have obtained higher gains in gingival thickness and in the band of keratinized tissue, modifying the tissue biotype; it is a characteristic of connective tissue to express its genetic condition on the epithelium [26,27]. This was also observed in the present study, where the gain in gingival thickness with CTG, measured through tomographic examinations, was significantly higher (0.4 mm) after six months, while PRF promoted no significant gain in gingival thickness over time (0 mm of thickness gain after six months). The present study used the protocol proposed by Pinto, et al. [28] to prepare the PRF membrane, according to which three PRF membranes are positioned one over the other and agglutinated with autologous fibrin in the liquid phase before insertion into the grafted area. The agglutination of PRF membranes is a relevant step of this technique, since the quality and quantity of soft tissue obtained after root coverage with PRF membranes are directly related to the amount of fibrin matrix grafted [29].

It was demonstrated, through quantitative histomorphometric analysis, that fibro-promotion can be clinically predictable when using three to four PRF membranes per pair of teeth [29]. Scientific evidence indicates that PRF significantly improves the healing of hard and soft tissues [25,30], and clinical procedures benefit from the use of autologous fibrin [28]. In the present study, the side treated with PRF-associated root coverage, which did not require a second donor site, presented more comfort and less pain in the postoperative period, with significantly lower values (median = 0 and scores ranging from 0 to 6) when compared to the side receiving CTG with its second donor area (median = 4 and scores ranging from 1 to 8). It is important to observe that, in this study, the CTG was always removed from the same side that received the treatment with CTG [31]. The primary objectives of periodontal plastic surgery in the treatment of gingival recessions are to improve aesthetics, to reduce dentin hypersensitivity, and to enlarge the band of keratinized tissue by covering the exposed roots. The choice of surgical technique is directly related to several factors, such as the anatomy and location of the defect, the amount of adjacent attached gingiva, the number of teeth to be treated, the donor area, the professional’s technical skills, and the patient’s pain threshold.

Therefore, it is necessary to evaluate all these factors together, in light of the best available scientific evidence, for decision making. Recent scientific evidence indicates that the evolution of surgical instruments, less invasive surgical techniques, and the quality of sutures over the last two decades have converged positively on periodontal plastic surgery results. Chambrone & Pini Prato [13], together with systematic reviews [31] considered of high scientific quality by the 2015 American Academy of Periodontology Workshop, support the evidence that complete root coverage is directly related to the gingival recession depth (cervicoapical length) and to techniques that use the CTG, considered the gold standard procedure due to its morphogenetic characteristics. However, some factors limit its usage, such as the patient’s refusal to undergo a second surgical site to obtain the CTG. Therefore, alternative strategies are necessary in cases where it is not possible to use it. PRF is one option due to its regenerative and fibro-promoting properties and its ability to induce the formation of new fibroblasts when the receptor area presents a band of keratinized tissue.

However, in the present study, the increase in gingival thickness and the gain in keratinized tissue were restricted to the cases treated with CTG. Within the limitations of the present study, both grafts (PRF and CTG) used for root coverage of Miller class I and II gingival recessions showed favorable results. There was a significant reduction in the extent of the recession in both groups, with no statistically significant difference between them after three months; after six months, however, the results were significantly favorable to the CTG group. The average percentage of coverage was 84.3% for the CTG group and 63.4% for the PRF group. The percentage of sites with complete coverage was 55.2% for the CTG group and 33.3% for the PRF group. Regarding the attached gingiva, a significantly higher gain was associated with the CTG group. The tomographic results showed no significant change in gingival thickness for the PRF group, while CTG promoted a significant increase in gingival thickness after six months. Although several randomized clinical trials have been carried out to find tissue substitutes for the CTG in the treatment of gingival recessions, CTG remains the gold standard due to its biological and morphological characteristics. These properties are seen in the postoperative period, in which a thicker, denser, and keratinized gingival tissue is observed.

The use of the PRF membrane for the treatment of gingival recessions is limited. Its benefits are related to healing factors observed clinically in the donor area, accelerating revascularization and healing of the site and reducing the patient’s postoperative pain and discomfort during wound repair. During the follow-up of gingival recessions treated with PRF, there was no change in tissue biotype, and most cases remained with thin tissue and reduced keratinized gingiva, which is more susceptible to gingival recession in the presence of etiological factors.
