
Stiff Person Syndrome: Taxonomic Analysis Supports Use of Large Data Methods to Appraise Major Comorbidities of a Rare Disorder

Background

Stiff-person syndrome (SPS) is a painful, rare, neuro-immunological condition developing in mid-life. Clinically, SPS includes severe spasms, disabling neuromuscular stiffness, anxiety, impaired ambulation, and frequent falls. First described in 1954 as ‘stiff man syndrome’, SPS was linked in 1988 to auto-antibodies binding glutamic acid decarboxylase-65 (GAD65Abs); in 1993 a paraneoplastic variant was associated with amphiphysin auto-antibodies [1-4]. In a clinical appraisal of an early large laboratory-based cohort of SPS patients with high-titer GAD65Ab, we noted markedly elevated auto-antibody levels even after decades of symptoms [5]. In 2008, we described distinctive anatomical patterns of SPS symptoms with GAD65Abs vs. amphiphysin antibodies: GAD65Ab-SPS prominently affects thoraco-lumbar structures and amphiAb-SPS the cervico-thoracic region [6]. The extent to which SPS affects physiological systems more broadly, its impact on access to care, and even its basic prevalence are not well established. Recently, a natural history study observing patients not receiving immunomodulating therapy noted worsening over time with progression to disability [7]. Care settings, healthcare access, disparities, and utilization have received limited attention [8]. Complexities aside, in conditions as rare as SPS, substantive clinical cohorts are uncommon; as a result, the characterization of SPS, including comorbid conditions, has progressed slowly [9-12].

Even the prevalence of SPS remains subject to some controversy [13]. Importantly, GAD65Abs are not unique to SPS: high-titer GAD65Abs also associate with cerebellar ataxia, while low-titer GAD65Abs are quite prevalent and associated with type I and adult-onset autoimmune diabetes; clinical syndrome appraisal therefore remains vitally important [9,10]. In late 2016, the use of specific ICD-10 codes became mandatory for all Medicare providers; this, in concert with wider adoption of electronic health records, means that extensive and detailed clinical coding data are now available for analysis [14-16]. The purpose of this study was to use and validate data from CMS databases, nationally representative of older adults and inclusive of younger adults with disabling medical conditions, applying large-data analytical methods and case-level validation, to improve understanding of SPS.

Methods

Study Design and Population

This was a cross-sectional study of persons diagnosed with SPS (ICD-10 code G25.82) utilizing two sources of Medicare (CMS) data. Two approaches were utilized because SPS is rare enough that the standard (smaller) data set offered limited statistical power, whereas the larger available data set did not contain important demographic information; internal and external validation assessments supported this approach, see below [17,18]. The two approaches are summarized in Table 1. Phase 1 involved systematic case-count extraction from several data files based on a sample population of over 6 million beneficiaries in the 20% sample of CMS 2016 data (CMS-20) administered by the Chronic Condition Data Warehouse (CCDW) [19]. Phase 2 involved Carrier claim file records for a sample population of about 1.5 million beneficiaries in the 5% sample of CMS 2017 data (CMS-5), accessed under a formal Data Use Agreement administered by the VA (VIReC). Validation was performed on CMS-5 data by comparing those with a primary diagnosis of SPS by neurologists to those diagnosed by others. Medicare provides healthcare coverage to older adults, including over 99% of legal residents of the United States aged 65 years or older. CMS files also include those younger than 65 receiving coverage on the basis of disability or diagnosis with “qualifying conditions” such as renal failure [20].

Table 1: Comparison of Phase 1 and Phase 2 data sources, research plan development, and hypothesis testing.

In phase 1, case counts were extracted systematically for each code or group of codes of a priori interest to the study. When two codes are submitted simultaneously, this retrieves the number of beneficiaries coded for either condition. We then constructed concrete Boolean algebra algorithms to calculate the ‘intersection’ of two diagnoses, i.e., the number of patients having both of two specified conditions [21]. Utilizing a system of Boolean equations, we created profiles documenting presence or absence of several conditions simultaneously.
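The intersection arithmetic described above follows directly from inclusion-exclusion: querying two codes together returns the union count, so the count of beneficiaries with both conditions is recoverable from the two individual counts. A minimal sketch (the counts are hypothetical):

```python
def intersection_count(count_a, count_b, count_a_or_b):
    """Number of beneficiaries coded for BOTH conditions, given each
    condition's count and the union count returned when the two codes
    are submitted together (inclusion-exclusion: |A∩B| = |A|+|B|-|A∪B|)."""
    return count_a + count_b - count_a_or_b

# Hypothetical counts: 500 beneficiaries with code A, 300 with code B,
# and 720 returned when both codes are queried simultaneously.
both = intersection_count(500, 300, 720)  # 80 beneficiaries carry both codes
```

Chaining such equations over several codes yields the presence/absence profiles described above.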

In phase 2, data included individual claims records from CMS-5, including claim-by-claim information: ICD-10 codes (up to 12 per visit), provider data, and selected beneficiary demographics for each fee-based encounter. We defined a ‘coding event’ as code inclusion in a claim for a specific patient. We then utilized SAS and SAS SQL to establish a normalized relational database structure for the data, with beneficiaries as the primary key, associating demographic data and all 2017 ICD-10 coding events.
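The normalized structure described above can be illustrated with a small relational sketch (SQLite standing in for SAS SQL; table names, fields, and rows are hypothetical): one row per beneficiary keyed by ID, and a separate coding-event table with one row per ICD-10 code per claim.

```python
import sqlite3

# Minimal sketch of the normalized structure: beneficiaries as the
# primary key with demographic fields, plus a table of coding events.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE beneficiary (
    bene_id INTEGER PRIMARY KEY,
    age     INTEGER,
    sex     TEXT
);
CREATE TABLE coding_event (
    event_id   INTEGER PRIMARY KEY AUTOINCREMENT,
    bene_id    INTEGER REFERENCES beneficiary(bene_id),
    icd10      TEXT,   -- up to 12 codes per visit arrive as separate rows
    claim_date TEXT
);
""")
# Hypothetical rows
conn.execute("INSERT INTO beneficiary VALUES (1, 67, 'F')")
conn.executemany(
    "INSERT INTO coding_event (bene_id, icd10, claim_date) VALUES (?,?,?)",
    [(1, "G25.82", "2017-03-01"), (1, "R76.0", "2017-03-01")],
)
# All 2017 coding events for each beneficiary, joined via the primary key
rows = conn.execute(
    "SELECT b.bene_id, b.age, e.icd10 FROM beneficiary b "
    "JOIN coding_event e ON e.bene_id = b.bene_id"
).fetchall()
```

Keying every coding event to the beneficiary ID is what makes per-patient profiles (e.g., counting G25.82 events per beneficiary) a simple aggregation query.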

Identification of Cases

Cases were defined as those beneficiaries in the database with one or more coding events during 2017 specifying G25.82 (SPS). For validation, cases were classified according to the position of SPS as primary or secondary, and whether the claim-filing healthcare provider was a neurologist (NPI primary taxonomy code 2084N0400X) or other.

Non-SPS Population and Confounding Variable Analysis

The non-SPS population for this study was all those in the database not coded for SPS. In phase 1, to ensure an appropriate comparator group, we first needed to construct a condition ‘master list’ that would generate the estimated total ‘active’ beneficiaries in CMS-20. The ‘master list’ codes included common conditions, anticipated SPS comorbidities, diverse neurological conditions, and autoimmune conditions; additional conditions were added until adding another condition did not increase total beneficiaries by >0.01%. We also required that the ‘master list’ conditions (minus G25.82) generate a population from which all 409 SPS-coded beneficiaries could be retrieved. The resulting codes are shown in Table 2. Phase 2 ‘active’ presence was defined as having one or more ICD-10 diagnoses in CMS-5.
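The stopping rule above (stop adding codes once the marginal gain falls to ≤0.01%) can be sketched as a greedy accumulation over sets of covered beneficiaries. This toy version shows only the marginal-gain rule; the study additionally required that all SPS-coded beneficiaries remain retrievable. The codes and ID sets are hypothetical:

```python
def build_master_list(code_members, threshold=1e-4):
    """Greedy sketch of the 'master list' construction: add candidate
    codes (each mapping to a set of beneficiary IDs) until a new code
    grows the covered population by <= threshold (0.01% by default)."""
    covered, chosen = set(), []
    # Consider codes in descending order of how many beneficiaries they cover.
    for code, members in sorted(code_members.items(),
                                key=lambda kv: -len(kv[1])):
        new = members - covered
        if covered and len(new) <= threshold * len(covered):
            break  # marginal gain too small; stop adding conditions
        covered |= members
        chosen.append(code)
    return chosen, covered

# Hypothetical toy data: a common code, a heavily overlapping code,
# and a rare code adding only one beneficiary out of ~10,000.
codes = {"I10": set(range(0, 8000)),
         "E11": set(range(4000, 10000)),
         "G25.82": {10001}}
chosen, covered = build_master_list(codes)
```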

Table 2: ICD-10 codes utilized in Phase 1 ‘a priori’ search strategy.

Note: Agoraphobia, social phobia, visuospatial, psychomotor, frontal lobe and executive deficits were not coded at significant rates in those diagnosed with SPS.

Covariates

Covariates (e.g., comorbid conditions, clinical features, and symptoms) for Phase 1 were selected a priori, Table 3. The primary purpose of Phase 1 was to examine evidence regarding associations between SPS and known covariates, e.g. spasms; we also included contrast conditions as ‘negative’ controls, e.g. hypertension. A secondary aim of Phase 1 was to appraise the clinical care settings, e.g. inpatient care. Search options for ‘setting of care’ were selected in CCDW; Boolean algorithms determined group membership. For Phase 2, no ICD-10 coded conditions were selected a priori as covariates because the purpose was to identify co-diagnoses in an unbiased manner. Conditions that occurred in fewer than 10% of SPS individuals (i.e., n≤5) were excluded because of the unreliability of estimates from small samples with large denominators. As SPS is very rare, race and location data were too sparse for analysis.

Table 3: Case counts, mean age, and standard error of mean age by age and gender groups for SPS-diagnosed beneficiaries in the 5% 2017 Medicare data sample.

Note: SPS patients in this sample totaled 97. Counts, mean age, and standard error are also provided for the total CMS-5 sample for those over 65, and for those SPS patients over age 64 who were diagnosed with SPS as a primary condition by a neurologist (SPS (1°, NEUROLOGY) ≥65). The total sample below age 65 is not provided as this group is not representative of the broader population. Cell sizes less than 11 are suppressed per the data use agreement (Supp.).

Statistical Methods

Case counts, proportions, and relative rates were computed and compared using the z-test of proportions, p-values, and confidence intervals; statistics were adjusted for planned comparisons with the Bonferroni-Dunn method, yielding an effective p-value threshold of 0.05 [22]. The study of very rare conditions in a representative sample population has specific features that distinguish the analysis from the standard case-control observational study; these features may mean that applications to rare diseases merit further consideration. Firstly, even when diagnostic certainty is modest, the specificity of the diagnosis is dominated by the very low rate of the diagnosis in the population; i.e., if 50% of those diagnosed with SPS were mis-diagnosed (prevalence 3 per 100,000), specificity approaches 1. Secondly, there are implications regarding sensitivity. Given the marked rarity of SPS, there is negligible risk that the clinical features of the total (comparator) population will be ‘contaminated’ by the features of SPS even if a relatively large percentage of SPS patients are undiagnosed (i.e., low sensitivity): if 50% of SPS patients (at a prevalence of 3 per 100,000) were undiagnosed, and they all experienced muscle spasms, this would increase the estimate of muscle spasms in the total population by only 0.0015%.
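The two arithmetic points above can be checked directly. This sketch (a simplification assuming one code per person) computes specificity when half of the coded cases are wrong, and the ‘contamination’ of a symptom rate when half of true cases are undiagnosed:

```python
def specificity_of_rare_code(coded_prevalence, false_positive_fraction):
    """Specificity of a diagnostic code when 'coded_prevalence' of the
    population carries the code and a given fraction of those codes are
    false positives. Specificity = TN / (TN + FP)."""
    fp = coded_prevalence * false_positive_fraction   # wrongly coded
    true_cases = coded_prevalence - fp                # correctly coded
    non_cases = 1.0 - true_cases                      # everyone without disease
    tn = non_cases - fp                               # non-cases without the code
    return tn / non_cases

# 50% misdiagnosis at a coded prevalence of 3 per 100,000:
spec = specificity_of_rare_code(3e-5, 0.5)   # ~0.999985, i.e. approaches 1

# Undiagnosed-case 'contamination': 50% of SPS patients missed, all with spasms,
# raises the population spasm estimate by 1.5 per 100,000 = 0.0015%.
contamination = 3e-5 * 0.5
```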

Thus, of these two, validation of diagnostic certainty (true positives) is the more impactful concern. In Phase 1, we compared SPS and non-SPS groups. Rates were not adjusted owing to the non-availability of demographic information. In Phase 2 we segmented cases by age. For beneficiaries below age 65, adjusted prevalence was not determinable. For beneficiaries age 65 and over (older adults), we found the age distribution mirrored the U.S. population and adjustment was not required [23]. We compared older adult SPS and non-SPS populations, adjusting for multiple comparisons utilizing the false discovery rate (FDR) method of Benjamini and Hochberg [24]. To balance false detection error rates with false positive rates, the FDR was set equal to 0.15. On this basis, 27 clinical codes met criteria. Analysis with a volcano plot indicated that all 27 conditions were significant using p < 0.05 criteria [25]. Hierarchical analysis was performed on the common co-morbid diagnoses (all CMS-5 patients, N = 97) using Gower’s distance algorithm in R [26]. Comparison of Phase 1 and Phase 2 was performed. Fisher’s exact test was used to compare SPS coding events in CMS-20 and CMS-5 data; no difference was observed (Fisher’s exact p = 0.9106). To appraise use of CMS-20 data despite the inclusion of both younger and older adults, we compared the diagnostic phenotypes of older and younger SPS groups in the CMS-5 data, including all conditions present in at least 10% of the total SPS population (N=83), using a t-test, and found no difference. Additionally, condition-by-condition testing for differences in proportions was performed, adjusted for multiple comparisons; no single condition differed. Unbiased analysis of the CMS-5 data using the Benjamini-Hochberg false detection method, even with a liberal FDR of 25%, did not find any significant differences [24].
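The Benjamini-Hochberg step-up procedure used above can be sketched in a few lines: sort the p-values, find the largest rank i with p(i) ≤ (i/m)·FDR, and declare every hypothesis up to that rank significant. The p-values below are hypothetical:

```python
def benjamini_hochberg(pvalues, fdr=0.15):
    """Benjamini-Hochberg step-up procedure: return the indices of
    hypotheses declared significant at the given false discovery rate."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank with p_(rank) <= (rank / m) * fdr
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * fdr:
            k = rank
    return sorted(order[:k])  # indices of significant hypotheses

# Hypothetical p-values from condition-by-condition comparisons
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.9]
hits = benjamini_hochberg(pvals, fdr=0.15)  # first four conditions pass
```

Note the step-up character: a p-value may pass even if it exceeds its own threshold, provided a larger-ranked p-value passes its threshold.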

Validation by Taxonomic Analysis

NPI numbers in CMS-5 data were cross-walked to primary taxonomy codes of the National Uniform Claim Committee to determine whether SPS diagnostic coding events were initiated by neurologists versus other specialists and general practitioners. The rates of diagnosis with elevated antibody titers (R76.0) and other codes in the R-code chapter, i.e. signs, symptoms, and syndromic diagnoses, were assessed.
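The cross-walk amounts to a look-up from each claim's NPI to its NUCC primary taxonomy code, followed by a test against the neurology code given above. A minimal sketch (the NPI numbers and the family-medicine taxonomy mapping are hypothetical):

```python
# Neurology primary taxonomy code, as specified in the Methods
NEUROLOGY = "2084N0400X"

# Hypothetical NPI -> NUCC primary taxonomy look-up table
npi_to_taxonomy = {
    "1234567890": NEUROLOGY,
    "9876543210": "207Q00000X",  # family medicine (non-neurologist)
}

def initiated_by_neurologist(claim_npi):
    """Classify a claim by the filing provider's primary taxonomy."""
    return npi_to_taxonomy.get(claim_npi) == NEUROLOGY

flags = [initiated_by_neurologist(n) for n in ("1234567890", "9876543210")]
```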

Standard Protocol Approvals, Registrations, and Patient Consents

This study was performed under a protocol reviewed and approved by the University of Maryland Institutional Review Board.

Results

Case Counts and Prevalence in Older Adults

We identified 409 SPS-coded persons in the 2016 20% Medicare sample (CMS-20). CMS reports 11,963,696 beneficiaries in CMS-20, 84.4% of whom are over age 64; however, to reduce potential record-selection bias, i.e., bias arising when comparing a population with coding events to a population that also includes those without coding events, we defined the reference population as beneficiaries with one or more ICD-10 coding events, i.e. a diagnosis-criteria heuristic, Table 1 [27]. The number of reference CMS-20 beneficiaries for 2016 was 6,192,830. In the 2017 5% CMS data sample (CMS-5), we identified 97 SPS-coded persons (SPS); this was consistent with the CMS-20 sample. Mean age for SPS was 61.25 (SE = 1.49), not different in females and males, Table 3. For further study, the CMS-5 population was divided into those age 65 and above (older adults, n=48) and those below age 65 (n=49). CMS-5 contained records from a total of 1,557,061 older adult beneficiaries with one or more coding events: 896,967 females and 660,094 males. For females and males aged 65 and above, the rate of SPS diagnosis by all healthcare providers (diagnostic prevalence, mean +/- 95% C.I.) was 3.01 (+/- 1.38) and 3.18 (+/- 1.66) per 100,000, respectively. Diagnostic prevalence for older adults did not differ by gender and overall was 3.08 (+/- 1.06) per 100,000. Age effects on diagnostic prevalence were assessed by segmenting older SPS adults into 5-year age cohorts; diagnostic prevalence monotonically decreased with age, however the oldest and youngest older-adult cohorts were not different, p = .242.
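The overall point estimate above is reproducible from the reported counts (48 older-adult cases among 1,557,061 beneficiaries with coding events). The paper does not state how its confidence intervals were computed; the sketch below uses a simple normal approximation, which gives a somewhat narrower interval than the ±1.06 reported, so the exact method is an assumption here:

```python
import math

def prevalence_per_100k(cases, population):
    """Point estimate and normal-approximation 95% CI half-width for a
    diagnostic prevalence, expressed per 100,000 beneficiaries."""
    p = cases / population
    half_width = 1.96 * math.sqrt(p * (1 - p) / population)
    return p * 1e5, half_width * 1e5

# 48 SPS-coded older adults among 1,557,061 with >= 1 coding event
rate, ci = prevalence_per_100k(48, 1_557_061)  # rate ~3.08 per 100,000
```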

Clinical Features and Settings of Care in Aggregate Data

SPS clinical characteristics were assessed in CMS-20 utilizing an a priori search strategy informed by literature review; settings of healthcare delivery were also assessed. Use of diagnostic codes for neurological signs and symptoms typical of SPS was elevated, Table 4. Several affective and cognitive diagnoses, especially anxiety, depression, and mild cognitive impairment, were increased, Table 4. Autoimmune diabetes has been closely associated with SPS: type I diabetes diagnoses (rate-ratio 3.23, p<0.001) and unspecified diabetes diagnoses (2.14, p<0.001), but not type II diabetes diagnoses (1.00, p=.395), were increased in SPS. Hypothyroidism diagnosis (E03.9) was also increased, 2.83, p<0.001. Regarding the setting of care, SPS diagnosis was coded in the outpatient setting in 194 (47%) of patients and in an inpatient setting in 82 (20%); 42 (10.2%) patients received both inpatient and outpatient care, half of whom did not receive skilled nursing care, home health, or hospice. A small minority of patients, 9 (2.2%), received hospice care, while 38 (9.3%) received home health care; 19 (4.6%) required skilled nursing, with a similar number requiring durable medical equipment. 156 (38%) patients were ‘on record’ only, not receiving care in any specified setting. For the non-SPS population, 3,737,224 (60.13%) received outpatient care and 1,026,642 (16.52%) received inpatient care.

Table 4: Phase 1: Symptoms and signs of SPS in Medicare beneficiaries with SPS.

Note: 2016 20% Medicare data sample: total SPS patients with specified signs and symptoms, unadjusted prevalence in the non-SPS population, (unadjusted) rate ratios, and statistical significance. SPS patients in this sample totaled 409; the total number of patients with one or more diagnoses per protocol was 6,192,830.

Coding Events for Individuals With SPS

In CMS-5, 1189 unique ICD-10 codes were utilized in the 48 older adults diagnosed with SPS. Of these, 1075 were ‘diagnostic’, i.e. ICD-10 Chapters A-Y; 98 of these were utilized in 5 or more SPS-diagnosed persons, representing 10% of the SPS population, and were included. Of these, 3 were not coded in 5 or more non-SPS beneficiaries and were excluded. Per our data use agreement, cell values below 5 cannot be reported. The number of SPS coding events per beneficiary ranged from 1 to 89, with a median of 3. For 22 patients, G25.82 (SPS) was the most frequently used ICD-10 code during 2017; for nine others, G25.82 was coded ≥10 times during the year, exceeded in frequency only by diagnoses commonly associated with SPS, e.g. abnormalities of gait, or ‘high-utilization’ conditions, e.g. renal failure. We divided the SPS population into tertiles based on the number of G25.82 coding events: 32 individuals with one G25.82 coding event, another 32 with 2-5 coding events, and 33 individuals with 6-89 coding events. Diagnosis rates across all ICD-10 codes were compared between tertiles using tests of proportions, and no significant differences were identified. The smallest p-value identified pertained to diagnosis with R76.0, raised antibody titer, comparing the top and bottom tertiles, unadjusted p = 0.00013. Those diagnosed with raised antibody titers (R76.0) demonstrated diagnoses typically associated with GAD65 antibodies (number): gait abnormalities (9), muscle spasms (8), ataxia (5), and type 1 diabetes (5). For further study, records with single G25.82 coding events were not excluded.

Comparison of Older and Younger Adults with SPS in CMS-5 Data

The CMS-5 data were used in a sensitivity analysis of age and sex characteristics as potential confounders of major clinical features in the larger sample (CMS-20), for which demographic data were not available. To do this, we generated diagnostic phenotyping profiles, consisting of diagnosis rates for all possible ICD-10 codes, for the older and younger SPS-coded populations. We compared the relative diagnosis rates for older and younger SPS populations and found no significant differences. The phenotype profile comparison for the older adult SPS-diagnosed and non-SPS populations is shown in Figure 1A. This illustrates that SPS has a substantial disease burden in the areas of endocrine, psychiatric, neurologic, cardiopulmonary, and musculoskeletal conditions, as well as a marked increase in symptom burden (R-series codes).

Figure 1

Unbiased Analysis of Distinctive Clinical Characteristics

To identify clinical characteristics that may not be established a priori, we utilized an unbiased ‘large data’ approach to identify differences between the SPS and non-SPS populations in CMS-5. Conditions with increased prevalence were identified by calculating prevalence in SPS compared with the non-SPS population, followed by a Benjamini-Hochberg false discovery rate adjustment for multiple comparisons, a statistical method widely applied in gene expression analysis [24]. The list of identified conditions is provided in Table 5 with ICD-10 codes and annotation [16]. Through this analysis, we observed that those with SPS have higher rates of diagnoses in several categories including autoimmune, endocrine, mental health, neurological, and gastrointestinal disorders, Table 5. Raised antibody titers, gait abnormalities, ataxia, and muscle spasms demonstrated the highest risk ratios for SPS. In addition, SPS patients have high rates of symptom-focused clinical coding events, including weakness, shortness of breath, fatigue, and headache. Identified comorbid conditions included commonly recognized comorbidities such as low back pain, depression, and anxiety, but also obstructive sleep apnea, nausea, and dysphagia. Variation in relative rates and statistical significance is efficiently depicted using volcano plots, shown here in Figure 1B, visually identifying statistically significant rates of diagnosis in SPS compared with non-SPS. Hierarchical analysis of conditions commonly co-diagnosed with SPS demonstrated that spasms and abnormal gait were the diagnoses most strongly associated with raised antibody titers in SPS; the relationships are illustrated in a dendrogram, Figure 1C.
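For the presence/absence diagnosis profiles clustered here, Gower's distance between two conditions reduces to the fraction of patients on which the two profiles disagree (the symmetric-binary case of the general formula). A minimal sketch with hypothetical profiles across five patients:

```python
def gower_binary(a, b):
    """Gower's distance for symmetric binary features: the fraction of
    positions on which two presence/absence profiles disagree."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical presence/absence profiles of three codes across 5 patients
spasms   = [1, 1, 1, 0, 1]
gait_abn = [1, 1, 0, 0, 1]
titer    = [1, 1, 0, 0, 1]  # R76.0, raised antibody titer

d_spasm_titer = gower_binary(spasms, titer)    # 1 mismatch / 5 = 0.2
d_gait_titer  = gower_binary(gait_abn, titer)  # 0.0: merged first in clustering
```

In agglomerative clustering over such a distance matrix, the pair with the smallest distance (here, gait abnormality and raised titer) merges first, which is how a dendrogram like Figure 1C is built.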

Table 5: Phase 2: Frequencies and rates for SPS and Total populations from CMS-5 data for conditions identified through unbiased testing. Significance determined utilizing Benjamini-Hochberg method, with P-value for difference of proportions, and risk ratios, sorted by risk ratio from greatest to least. Total population is 1,557,061.

Note: P-values are calculated based on difference in proportions, values were ranked and selected using false-detection protocols [Benjamini, 1995]. Annotation was generated using automated sorting of CDC annotation codes and verified manually [CDC, 2017]. The SPS population includes all those coded with G25.82 in the 5% CMS sample population, see row 1.

Case-By-Case Analysis with Provider Taxonomy

A case-by-case validation was performed in the older adults with SPS (CMS-5, ≥65 years old) by linking provider taxonomic classification through an NPI look-up table. We found that 13 of 48 older adults received a coded diagnosis of SPS (G25.82) in the primary position by neurologists; of these, 8 were also diagnosed with raised antibody titers (R76.0), Table 6. The age and gender characteristics of those diagnosed with SPS as a primary diagnosis by a neurologist are reported in Table 3. The remainder included 10 older adults with SPS as a primary diagnosis claimed by non-neurologist providers; 20 with SPS as a secondary diagnosis by non-neurologist providers; and 5 others. None of those with either a primary or secondary diagnosis of SPS by non-neurologist providers (in the absence of a neurological primary or secondary diagnosis) were diagnosed with raised antibody titers, suggesting a strong linkage between neurologist diagnosis of SPS and appraisal of antibody titers.

Table 6: Taxonomy, coding position of SPS diagnosis, and documentation of a raised antibody titer.

Note: 1) Other medical clinicians had taxonomic classifications from family medicine, emergency medicine, acute and critical care medicine, and pulmonary medicine. 2) Other primary diagnoses included G20, Parkinson’s disease; G80.1, spastic diplegic cerebral palsy; M54.17, lumbosacral radiculopathy; C71.9, unspecified malignant brain neoplasm; M06.9, unspecified rheumatoid arthritis.

Discussion

This study applies multiple approaches to estimate the prevalence of SPS diagnosis and appraise commonly comorbid conditions and symptoms. We observed a diagnostic prevalence of 3 per 100,000 in a large, diverse population, indicating that SPS is diagnosed infrequently. Using taxonomic analysis, we observed that 13 of 48 older adults diagnosed with SPS had the diagnosis recorded by a neurologist and located in the primary diagnostic position. This would mean that neurological diagnostic prevalence is estimated here to be 8 per million; restricting the diagnosis further to those both diagnosed with SPS by a neurologist as a primary condition and with diagnosed raised antibody titers would further lower the estimate to 5 per million. Determining the prevalence of a rare condition is challenging, and there is potential for both over- and under-estimation. We acknowledge that many of the providers filing claims with the ICD-10 code for SPS were unlikely to possess expert knowledge of this condition, suggesting possible over-diagnosis. By contrast, SPS is thought to be so rare that many professionals will not recognize it when they see it, suggesting possible under-diagnosis. Galli and colleagues used a stringent approach to identifying database records that had a high degree of probability for SPS [13]. Excluding 63 of 78 records identified, they estimated point prevalence for GAD65Ab-associated SPS as 2.06/million.

An important limitation of that study was that the reference population had profound male predominance; if SPS is female predominant, this could bias towards underestimation [27]. In addition, that population had a broad age distribution; given that onset of SPS typically occurs in the mid-40s, this would also contribute to under-estimation relative to the at-risk population. The clinical and research diagnostic challenges pertaining to SPS are substantive; limiting study to increased GAD65Abs per se does not ensure diagnosis of SPS, as patients with type 1 diabetes often have increased GAD65Abs and are far more prevalent and potentially overlapping [3,10]. The prevalence of low-titer GAD65 antibodies is relatively high in the general population; for these reasons, low-titer antibodies should not be treated as pathognomonic of SPS, and caution is warranted. Criteria must recognize uncertainty and inaccuracy in diagnosis, e.g. SPS mimics; the cumulative nature of the coding data, which results in the persistence of imprecise, tentative, and rejected diagnoses; and possible failure to distinguish GAD-associated SPS from GAD-associated cerebellar ataxia. Knowledge of key SPS features as highlighted in this work may aid clinicians in coding SPS only when appropriate.

This report also provides new Bayesian estimates of comorbid conditions, common symptoms, and neurological signs. We utilized two distinctive approaches to analyzing data, applied separately: a traditional a priori approach and an unbiased ‘large data’ approach. The results of these approaches were complementary and together provide a detailed profile of SPS. We identified increased prevalence of weight loss, weakness, and OSA, along with a more comprehensive and detailed characterization of known symptoms and comorbidities such as low back pain, neck pain, headache, depression, anxiety, hypothyroidism, dyspnea, and epilepsy. To some extent, SPS features have been described phenomenologically over several years of clinical study [1-13]. By combining directed and unbiased search methodologies, we observe that while SPS produces high rates of muscle spasms and pain, especially in the axial musculature, headache was prevalent as well. Neurological signs associated with SPS were confirmed through targeted searching and showed increased rates of abnormal reflexes, meningismus, abnormal gait, and repeated falls. In short, this appraisal of older adults with SPS used population-level Medicare records to construct a clinical portrait that extends prior studies and yields useful detail regarding co-morbid conditions and associated clinical symptoms. The finding that weight loss, OSA, and dysphagia occurred at increased rates in older SPS patients compared to unaffected older adults indicates that respiratory and digestive functions may need to be addressed to improve quality of life and avoid compounding morbidity in patients with SPS.
These findings are not contrary to known features of SPS: co-contraction of agonist and antagonist muscles is a prominent electrophysiological finding in SPS, and a systematic assessment study conducted at NIH found that many patients exhibited breathing problems and rigidity of the abdominal wall, as well as expected lumbar muscle spasms, global stiffness, and frequent falls [5,7,9,11]. Limitations of this study include the extent of data available through records of the Centers for Medicare Services, which in this study are constrained to diagnostic coding, site of care, and demographic features of disease. The nature of International Classification of Disease (ICD) coding data as a research tool has not been without controversy, and certainly earlier versions functioned as administrative instruments [14]. With the widespread adoption of electronic health records and the engagement of physicians and other clinicians in code development, ICD-10 codes are qualitatively different from prior versions [15,16].

Diagnostic coding is now widely integral to clinical thought and practice: used to communicate diagnostic reasoning, qualify patients for tests and treatments, and ensure that practice quality metrics are attained, as envisioned by the World Health Organization [28]. Research methods to learn from these data are advancing [14,26,29]. Nonetheless, ICD coding remains limited by both the skill and engagement of the diagnosing provider and electronic health record factors. Although the sensitivity and specificity of ICD-10 codes for rare and common diseases are critically important to a study of this nature, literature on this topic is limited. The existing literature on U.S. ICD-10 validation is especially sparse because conversion to the ICD-10 system was only finalized in late 2016. Nonetheless, international research literature indicates that although sensitivity is moderate, it is generally in the acceptable range; specificity is almost universally very high, approaching 99% for most conditions [30]. Technically, specificity here is very high as a consequence of the rarity of the condition; it is notable that the positive predictive value of SPS diagnosis may warrant further study. SPS diagnosis remains grounded in clinical discernment of signs and symptoms, augmented by diagnostic testing, e.g., markedly elevated GAD65 antibodies [5,7,10].

Given the role of elevated GAD65 auto-antibodies in autoimmune SPS, our finding that only a minority of older adults with ICD-10 coding for SPS were also diagnosed with elevated antibody titers suggests that raising awareness of antibody testing for SPS, and of the critical importance of identifying marked elevation in contrast to the moderate elevations typical of diabetes, may improve clinical practice [3,5]. Most anticipated observations in this study were consistent with prior studies; however, one important exception was noted [2,3]. SPS patients have high rates of diabetes, but this was not identified by our unbiased Phase 2 search strategy. Close examination of individual claims data led us to observe that multiple distinct diabetes codes were used by providers. This may be a consequence of regional variation in coding, as these patients are geographically dispersed. By contrast, weakness is a feature that has not been prominently described previously, but which may be important.

Conclusion

SPS affects adults even in later life and is rarely diagnosed. Unbiased analysis suggests muscle spasms, abnormal gait, and raised antibody titers are key features. Clinical features may include obstructive sleep apnea, weakness, and dysphagia. We conclude that improved recognition of core SPS features and additional study are both needed. In conclusion, stiff-person syndrome is diagnosed in U.S. older adults at a rate of 3 per 100,000, but this falls to 8 per million for those diagnosed by neurologists as a primary claim diagnosis. SPS is principally a rare neuro-immunological condition associated with severe muscle spasms, disabling gait disturbance, and markedly raised antibodies to GAD65. The syndrome may variably include pain, respiratory and gastrointestinal compromise, depression, anxiety, diabetes, hypothyroidism, weakness, and cognitive changes. Multimorbidity likely contributes to increased hospitalization, and patients typically require expert care. Improved recognition of core features will advance more rapid and precise SPS diagnosis, whereas recognition of the broader disease phenotype may further optimize clinical care for this complex, disabling, and painful condition.



Extreme Few-View Tomography without Training Data

Introduction

Extreme few-view tomography refers to the situation where the number of tomographic measurement views is less than 10 [1-3]. If we model the data acquisition as a system of linear equations, the system is extremely under-determined for extreme few-view tomography. Constraints are vital in shrinking the solution space [4]. Iterative algorithms are better than analytical algorithms when the imaging system is under-determined [5-9]. The total-variation (TV) norm of the gradient of an image is a good indicator of the piecewise-constant character of the image, and TV minimization is a popular method for few-view tomography [10-15]. Extreme few-view tomography requires more information about the target image in addition to the piecewise-constant constraint. In the era of machine learning, a large amount of information can be learned from images similar to the image to be reconstructed [16-22]. This paper assumes that similar images are not available, so we must seek other information. In transmission tomography, we roughly know the values of the attenuation coefficients for the materials in the objects being imaged. We use these known values as the constraints in this paper, as described in the next section.

Methods

There are many approaches to developing an image reconstruction algorithm. One approach is to set up an objective function, typically containing a data-fidelity term and one or more Bayesian terms, each Bayesian term representing a constraint; an algorithm that minimizes this objective function minimizes all the terms simultaneously. Another approach is the projections-onto-convex-sets (POCS) approach, in which the main algorithm consists of two or more sub-algorithms that work separately and sequentially, each with its own goal. For a POCS algorithm, convergence is not easy to study; however, it is easy to fine-tune each sub-algorithm independently and to adjust the balance between them. The POCS approach is adopted in this paper and is described in Figure 1. The POCS algorithm used in this paper consists of three sub-algorithms. The first sub-algorithm takes care of image reconstruction: any iterative image reconstruction algorithm can potentially be used to minimize the discrepancy between the forward projection of the reconstructed image and the line-integral measurements. In Figure 1, the image reconstruction algorithm ① is chosen to be the well-known maximum-likelihood expectation-maximization (MLEM) algorithm [23]:

where p_m is the m-th projection, a_{i,j,m} is the projection contribution from pixel (i, j) to projection bin m, and k is the iteration index. In fact, the user can choose any justifiable iterative image reconstruction algorithm for algorithm ①, for example a transmission EM algorithm [24] or a least-squares minimization algorithm [5]. The second sub-algorithm is a gradient descent algorithm that minimizes the TV norm of the reconstructed image. The gradient descent algorithm ② in Figure 1 is given as
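As a minimal NumPy sketch (not the authors' code), one MLEM update can be written with the coefficients a_{i,j,m} collected into a system matrix A acting on a flattened image:

```python
import numpy as np

def mlem_step(x, A, p, eps=1e-12):
    """One MLEM iteration: x is the current image x^(k) (flattened,
    strictly positive), A is the system matrix (bins x pixels) holding
    a_{i,j,m}, and p is the measured projection vector (p_m)."""
    forward = A @ x                       # forward projection of current image
    ratio = p / np.maximum(forward, eps)  # measured / estimated projections
    backproj = A.T @ ratio                # backproject the ratio
    sensitivity = A.sum(axis=0)           # per-pixel sum over bins of a_{i,j,m}
    return x * backproj / np.maximum(sensitivity, eps)
```

Iterating this step drives the forward projection A @ x toward the measurements p; the starting image must be strictly positive, since a zero pixel is never updated by the multiplicative rule.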

Figure 1

where η is the step size used to update the reconstructed image X. We use an extremely small step size η to ensure the stability of the algorithm, and we repeat this step 5000 times so that the TV minimization remains effective. The TV norm can be defined as

TV(X) = Σ_{i,j} √[(x_{i+1,j} − x_{i,j})² + (x_{i,j+1} − x_{i,j})²]
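As a sketch, assuming the standard isotropic TV definition with forward differences (the paper's own equation is not reproduced here), the TV norm and one small-step gradient descent update might look like:

```python
import numpy as np

def tv_norm(X, eps=1e-8):
    """Isotropic total variation of a 2D image (forward differences);
    eps smooths the norm at zero gradient."""
    gx = np.zeros_like(X); gy = np.zeros_like(X)
    gx[:-1, :] = X[1:, :] - X[:-1, :]   # vertical finite difference
    gy[:, :-1] = X[:, 1:] - X[:, :-1]   # horizontal finite difference
    return np.sqrt(gx**2 + gy**2 + eps).sum()

def tv_gradient_step(X, eta=1e-4, eps=1e-8):
    """One gradient descent step on the smoothed TV norm, with a very
    small step size eta, as the text prescribes."""
    gx = np.zeros_like(X); gy = np.zeros_like(X)
    gx[:-1, :] = X[1:, :] - X[:-1, :]
    gy[:, :-1] = X[:, 1:] - X[:, :-1]
    mag = np.sqrt(gx**2 + gy**2 + eps)
    g = (-gx - gy) / mag                 # derivative of the local TV term
    g[1:, :] += (gx / mag)[:-1, :]       # contribution from the pixel above
    g[:, 1:] += (gy / mag)[:, :-1]       # contribution from the pixel to the left
    return X - eta * g
```

Repeating this step many times with a tiny η mirrors the paper's choice of a very small step size and a large inner iteration count.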

One can combine algorithm ① and algorithm ② into one Bayesian algorithm [11].

The third sub-algorithm enforces the reconstructed image pixels to take pre-specified values. Sub-algorithm ③ is the new attempt in this paper: it simply moves each image pixel value to the closest default value. For torso imaging, the default values can be set to the linear attenuation coefficients of air, soft tissue, and bone. In fact, this sub-algorithm is nothing but segmentation. Notice that this step is skipped for most iterations, as enumerated by the variable ‘Count’: we activate it only every 100 counts, as dictated by the remainder condition mod(Count, 100) = 0, where mod(Count, 100) is the remainder of Count/100. We only know the approximate potential values in the image, so we must downplay this ‘known values’ constraint and give the overall POCS algorithm a chance to converge to the true values, which may differ from our ‘set values.’ It is therefore important not to terminate the POCS with sub-algorithm ③. The computer simulations in this paper consider a two-dimensional (2D) parallel-beam imaging system, with an image array of 256 × 256 pixels, a detector of 256 bins, and 8 views (over 180°). The projection line integrals were calculated analytically. The POCS algorithm used 1009 iterations; notice that 1009 is not a multiple of 100, which gives the pixel values in the reconstructed image a chance to move away from the segmented values set by the third sub-algorithm. Both noiseless and noisy data were used in the computer simulations; the noise was Gaussian with a mean of 0 and a variance of 5.
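The overall loop, including the mod(Count, 100) gating of sub-algorithm ③, can be sketched as follows; the three step functions are placeholders standing in for the sub-algorithms described above, and the 5000-step inner loop and 1009 outer iterations follow the text:

```python
def pocs_reconstruct(x0, recon_step, tv_step, segment_step,
                     n_iter=1009, n_tv=5000):
    """Schematic POCS loop: sub-algorithm 1 once per iteration,
    sub-algorithm 2 n_tv times, sub-algorithm 3 only when the iteration
    count is a multiple of 100. Note that the default 1009 is
    deliberately not a multiple of 100, so the loop never ends on the
    segmentation step."""
    x = x0
    for count in range(1, n_iter + 1):
        x = recon_step(x)            # sub-algorithm 1 (e.g. one MLEM update)
        for _ in range(n_tv):        # sub-algorithm 2: TV gradient descent
            x = tv_step(x)
        if count % 100 == 0:         # sub-algorithm 3: snap to known values
            x = segment_step(x)
    return x
```

The three callables are hypothetical hooks for illustration; any justifiable reconstruction, TV, and segmentation steps can be plugged in.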

Figure 2

Results

Figure 2 shows the true phantom used in the computer simulations. The large disc has a value of 0.5. There are 8 small discs: discs 1-5 have a value of 1.5, and discs 6-8 have a value of 1.0. In our implementation of sub-algorithm ③, the potential pixel values were set at 0.51, 1.01, and 1.51. The corresponding pseudocode to update the pixel x(i, j) is as follows.

When ‘Count’ is a multiple of 100, execute:
if (0.25 < x(i, j) ≤ 0.75) then x(i, j) = 0.51;
if (0.75 < x(i, j) ≤ 1.25) then x(i, j) = 1.01;
if (x(i, j) > 1.25) then x(i, j) = 1.51.
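In NumPy, the same thresholding can be written as one vectorized operation (an illustrative sketch; pixels at or below 0.25 are deliberately left unchanged rather than forced to zero):

```python
import numpy as np

def snap_to_levels(x):
    """Move each pixel to its nearest pre-set value (0.51, 1.01, 1.51),
    mirroring the pseudocode for sub-algorithm 3. Values <= 0.25 are
    left untouched, since MLEM cannot update a zero pixel."""
    y = x.copy()
    y[(x > 0.25) & (x <= 0.75)] = 0.51
    y[(x > 0.75) & (x <= 1.25)] = 1.01
    y[x > 1.25] = 1.51
    return y
```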

We do not force any pixel to a hard zero in sub-algorithm ③, because the MLEM algorithm cannot update a pixel whose value is zero. Figure 3 shows two MLEM reconstructions, one with noiseless data and one with noisy data; here, sub-algorithms ② and ③ of Figure 1 are disabled. Figure 4 shows two TV reconstructions, one with noiseless data and one with noisy data; here, sub-algorithm ③ of Figure 1 is disabled. Figure 5 shows two reconstructions by the proposed POCS algorithm, one with noiseless data and one with noisy data. All images are displayed in the gray-scale window of [0, 1.59]. The structural similarity (SSIM), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR) are compared for the reconstructions. Table 1 compares the results using the noiseless data; Table 2 compares the results using the noisy data. The comparison shows that the rough knowledge used in sub-algorithm ③ is helpful in obtaining better reconstructions (Figures 4 & 5).

Figure 3

Figure 4

Figure 5

Table 1: Comparison studies with noiseless data.

Table 2: Comparison studies with noisy data.

Discussion and Conclusion

One may ask: “What is the objective function of sub-algorithm ③?” We do not need one. If one insists on having one, an objective function such as (4) can be set up; however, we do not suggest a gradient-based algorithm to minimize (4). In our proposed POCS algorithm, we do not simply alternate between the sub-algorithms sequentially. Within each POCS iteration, we execute sub-algorithm ① once, sub-algorithm ② 5000 times, and sub-algorithm ③ on average 1/100 times. We run sub-algorithm ② 5000 times because we choose a very small step size for the gradient descent that minimizes the TV norm: a larger step size produces worse images, while a very small step size alone has almost no effect on the reconstructed image. To overcome this difficulty, we use a very small step size together with a large iteration number for sub-algorithm ②. As for sub-algorithm ③, it is executed once per 100 global POCS iterations; in other words, it is skipped 99 times out of every 100. This is equivalent to re-setting the initial image every 100 global POCS iterations. One may argue that it is wrong to use the emission MLEM algorithm when the noise is Gaussian: the emission MLEM algorithm was originally derived for Poisson noise, and it would be more proper to use a transmission EM algorithm or a least-squares error minimization algorithm for sub-algorithm ①. Our use of an emission MLEM algorithm is intended to demonstrate that TV minimization makes the noise model less important; the Bayesian information dominates the noise model in the maximum likelihood. When the measurements are incomplete, any prior information and corresponding constraints will help. The piecewise-constant feature of the objects makes TV norm minimization effective, and rough knowledge of the image values, as demonstrated in this paper, is also effective. We believe there are other features of the images that can be used to supplement the incomplete data. Machine learning is an effective way to explore the common features of a group of similar images.



An Optimal Combination of Inositol and Phytic Acid Effectively Promotes Hair Growth

Introduction

Many people suffer from alopecia because of aging, genetic predisposition, stress, and other causes. Epidemiological studies show that 85% of men and 40–50% of women are affected by alopecia during their lives [1]. Various attempts have thus been made to develop effective hair growth agents. Minoxidil, a hair growth agent that stimulates vasodilation, is used to treat a wide variety of alopecia and telogen effluvium cases, including androgenetic alopecia in men and women. Previous studies have demonstrated that vascular endothelial growth factor (VEGF) stimulates hair growth by increasing hair follicular angiogenesis in mice [2]. Moreover, increased production of (1) VEGF, (2) fibroblast growth factor (FGF), and (3) insulin-like growth factor (IGF)-1 [3,4], all of which increase vascularization, is part of minoxidil-mediated hair growth, suggesting that stimulation of vascularization and/or production of VEGF, FGF, and/or IGF-1 is a therapeutic strategy for various types of alopecia. However, because minoxidil has side effects with long-term use [5], a safe alternative hair growth agent is needed. Rice bran, a by-product of milling brown rice into white rice, contains various functional ingredients. Its oil-soluble components include γ-oryzanol, ferulic acid, tocopherol, and tocotrienol, and its water-soluble components include inositol (IN) and phytic acid (PA) [6].

Much information regarding the safety of each of these components, including dietary and usage experience, is available, and many reports on their physiological activity have been published. Among these, IN exists in cell membranes mainly in the form of inositol phospholipids and exerts various physiological functions as a vitamin B-like substance. Phosphatidylinositol is known to be degraded into inositol trisphosphate and diacylglycerol upon activation of receptors in cell membranes by hormones and neurotransmitters, and to function as a second messenger in intracellular signal transduction pathways [7,8]. PA, also called inositol hexaphosphate, is a phosphorylated compound with six phosphate groups attached to IN. We hypothesized that these components can exert hair growth-promoting effects. Therefore, we conducted a test to examine whether IN and PA, water-soluble components derived from rice bran, have hair growth-promoting effects.

Materials and Methods

The test substances were rice bran-derived IN and PA commercialized by Tsuno Food Industrial Co., Ltd.

Cell Culture

Human follicle dermal papilla cells (HFDPCs) purchased from PromoCell GmbH (Heidelberg, Germany) were grown in papilla cell growth medium (Follicle Dermal Papilla Cell Growth Medium (Ready-to-use); PromoCell GmbH, Heidelberg, Germany), as described previously [9,10]. Cells were cultured at 1×10^5 cells/well in a 24-well plate or 1×10^4 cells/well in a 96-well plate in the medium, and incubated with IN, PA, or their mixture (IP mix) for 24 h. Cell proliferation was measured using a CCK-8 kit (Dojindo Molecular Technologies, Inc., Kumamoto, Japan) with a water-soluble tetrazolium salt that produces orange formazan.

Quantitative Real-Time Polymerase Chain Reaction

Total RNA was isolated from cell lysates using the guanidine isothiocyanate–phenol–chloroform method [11] with ISOGEN (Nippon Gene Co., Ltd., Tokyo, Japan), following the manufacturer’s instructions. cDNA was then prepared using the PrimeScript RT Reagent Kit (TaKaRa, Kyoto, Japan). Real-time PCR was performed on the StepOne system (Thermo Fisher Scientific, Waltham, MA, USA) with specific primers and was used to detect the expression of glyceraldehyde 3-phosphate dehydrogenase (GAPDH), IGF-1, and VEGF using PowerUp SYBR Green Master Mix (Takara Bio, Shiga, Japan). The expression levels of the target genes were calculated relative to that of the housekeeping gene GAPDH using the ΔΔCT method. Primers for GAPDH, VEGF, and IGF-1 were designed based on reports by Adachi et al. (2015) [12] and Nakamura et al. (2018) [13] as follows: GAPDH, F 5′-CTCCTGTTCGACAGTCAGCC-3′ and R 5′-TCGCCCCACTTGATTTTGGA-3′; VEGF, F 5′-CTACCTCCACCATGC-3′ and R 5′-ATGATTCTGCCCTCCTCC-3′; and IGF-1, F 5′-TTTCAAGCCACCCATTGACC-3′ and R 5′-GCGGGTACAAGATAAATATCCAAAC-3′.
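The ΔΔCT normalization can be illustrated with a short calculation (the CT values below are hypothetical, chosen only to show the arithmetic):

```python
def fold_change_ddct(ct_target_treated, ct_gapdh_treated,
                     ct_target_control, ct_gapdh_control):
    """Relative expression by the 2^-(ddCT) method: each target CT is
    first normalized to the reference gene (here GAPDH), then treated
    vs. control dCT values are compared."""
    dct_treated = ct_target_treated - ct_gapdh_treated
    dct_control = ct_target_control - ct_gapdh_control
    ddct = dct_treated - dct_control
    return 2.0 ** (-ddct)

# Hypothetical example: if a target gene's CT drops by 2 cycles relative
# to GAPDH after treatment, expression has increased about 4-fold.
```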

Human Clinical Study

Twenty-four healthy females with thinning hair were recruited and randomly divided into two groups (12 per group, IP mix and placebo) at Nikoderm Research Inc. (Osaka, Japan). Ultimately, 23 participants were included; 1 participant (placebo group) was excluded because she declined participation owing to personal circumstances (Figure 1). A lotion containing the IP mix or placebo was applied to the scalp for 18 weeks (Table 1). Participants did not use other hair growth reagents for at least 2 months before starting the study. The vertex scalp was photographed using an EOS Kiss X7 digital camera (Canon, Inc., Tokyo, Japan). Hair density and diameter were objectively assessed by phototrichogram [14] before, and 9 and 18 weeks after, first applying the test lotion. This study was conducted in accordance with the Declaration of Helsinki and the Ethical Guidelines for Medical and Health Research Involving Human Subjects of the Japan Ministry of Health, Labour and Welfare. All participants provided written informed consent before commencement of the study, which was approved by the Ethics Committee of Nikoderm Research Inc.

Table 1: Placebo and IP mix formulation.

Figure 1

Statistical Analysis

Results were statistically analysed using Dunnett’s test or Tukey’s test (in vitro cell culture study) and the paired t-test and Student’s unpaired t-test (human clinical study).

Results

HFDPC Cell Growth

HFDPC growth was significantly increased following incubation with 0.06–1% IN (Figure 2a) and 0.016–0.25% PA (Figure 2b) compared with that in the vehicle control.

Figure 2

Gene Expression Levels of IGF-1 and VEGF in Cultured HFDPCs

IGF-1 gene expression on RT-qPCR was significantly increased in cells treated with 0.5% PA, but not changed with IN, compared with that in untreated cells (Figure 3). VEGF gene expression was significantly increased in cells treated with 0.5% IN and 0.5% PA compared with that in untreated cells. Furthermore, the increase in VEGF gene expression was greater in cells treated with the IP mix, prepared with a 1:3 weight ratio of IN to PA, than in cells treated with IN or PA alone (Figure 4a). Treatment with mixtures of IN and PA prepared at ratios of 1:2, 1:3, and 1:4 showed that the 1:3 mixture was the most effective at increasing VEGF expression (Figure 4b).

Figure 3

Figure 4

Hair Growth-Stimulating Effects of the IP Mix in Female Participants

We next assessed the effect of the IP mix, with the 1:3 weight ratio of IN to PA, on hair growth in women. The mean age of the participants at the start of the study was 56.8 ± 4.8 years in the placebo group and 54.3 ± 7.7 years in the IP mix group. All participants applying the placebo and IP mix lotions completed the study without exhibiting any side effects, such as hirsutism or facial hair growth, during the study period. Phototrichogram analysis showed that hair density was significantly increased after applying the IP mix (Figure 5). However, the hair diameter of the IP mix group was slightly decreased (Figure 6a), and the rate of decrease in anagen hair was slower in the IP mix group than in the placebo group (Figure 6b). Photographs of the vertex scalp also confirmed a grossly visible increase in hair density in the IP mix group (Figure 7). Participant A showed an increase in overall hair volume (Figure 7a); Participants B and C showed increased hair density in the areas indicated by arrows (Figures 7b & 7c); and Participant D showed a narrower parting (Figure 7d) after 18 weeks of using the IP mix lotion.

Figure 5

Figure 6

Figure 7

Discussion

IN and PA are contained in the seed coats of plants, such as rice bran. Early growth of rice is known to be improved by using seeds with high PA content [15], and myo-inositol plays an important role in seed maturation and germination [16]. These components have thus been shown to be factors involved in plant growth. In human cells, IN is necessary for growth and survival [17], and inositol triphosphate, an intermediate between IN and PA, has many functions as an intracellular signal transmitter [18]. Therefore, we hypothesized that inositol and phytic acid can promote hair growth, and that these molecules may work synergistically. In this study, we demonstrated that IN and PA derived from rice bran stimulated VEGF production in HFDPCs. The effect of IN and PA was strongest when they were mixed at a ratio of 1:3. Furthermore, the IP mix increased hair density in female participants. The decrease observed in hair diameter may be attributed to increased new hair growth. The anagen hair ratio decreased (although not significantly) in the placebo group, whereas almost no change was found in the IP mix group, indicating that the decrease was suppressed by the IP mix.

Several studies have reported synergistic effects of the combination of IN and PA in areas other than hair growth. When used in combination, IN and PA have been reported to inhibit colorectal cancer progression more strongly than either agent alone [19]. Further, ingesting a mixture of IN and PA at a mass ratio of 220:800 (molar ratio approximately 1:1) has been reported to improve abnormalities in carbohydrate and lipid metabolism [20]. In this study, the mass ratio of IN to PA that produced the strongest effect was 1:3. The molar equivalent of this ratio is approximately 1:1, and the total ratio of inositol skeletons to phosphate groups is 1:3. The type 3 receptor for IP3, an inositol with three phosphate groups, is reported to be expressed in the hair follicles of the skin and to play an important role in regulating the hair cycle [21]. Additionally, inositol triphosphate receptor expression is increased by compounds that inhibit stress-induced damage in human hair follicles, suggesting a link with hair growth and hair loss [22]. These reports thus suggest that treatment with a mixture of IN and PA at a 1:3 mass ratio (approximately 1:1 molar ratio) can efficiently generate inositol triphosphate and potentially improve hair growth and prevent hair loss. In conclusion, this study demonstrated that the IN and PA mixture at a 1:3 mass ratio increases VEGF production in HFDPCs. We further showed that the IN and PA mix improves hair growth in women. IN and PA are water-soluble ingredients derived from rice bran; therefore, they can be easily incorporated into dosage forms such as lotions. The combination of these ingredients is thus expected to contribute to the development of safe and effective hair growth agents.
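The mass-to-molar conversion behind these ratios can be checked with a short calculation; the molecular weights used (about 180 g/mol for myo-inositol and 660 g/mol for phytic acid) are standard reference values, not taken from the paper:

```python
MW_IN = 180.16   # g/mol, myo-inositol (standard value, an assumption here)
MW_PA = 660.04   # g/mol, phytic acid (standard value, an assumption here)

def molar_ratio(mass_in, mass_pa):
    """Moles of IN per mole of PA for a given mass ratio."""
    return (mass_in / MW_IN) / (mass_pa / MW_PA)

# A 1:3 mass ratio of IN to PA works out to roughly a 1:1 molar ratio,
# and a 1:1 molar mix carries 2 inositol skeletons per 6 phosphate
# groups, i.e. a 1:3 skeleton-to-phosphate ratio.
```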



Usage of Artificial Intelligence in Gallbladder Segmentation to Diagnose Acute Cholecystitis

Introduction

Cholelithiasis is currently the most common gallbladder condition, with around 10% of patients with cholelithiasis developing acute cholecystitis. In the United States, acute cholecystitis affects on average 200,000 people annually [1]. While the pathogenesis and treatment of acute cholecystitis are well understood, diagnosis remains complicated, requiring multiple physical findings and laboratory tests. Radiological scanning methods include ultrasonography, CT, and hepatobiliary iminodiacetic acid (HIDA) scanning. In addition to radiological scanning, physical and pathological findings such as fever, Murphy’s sign, and an elevated WBC count are also important for the diagnosis of acute cholecystitis. The Tokyo Guidelines, initially introduced in 2007, were presented as a method to streamline and standardize the diagnosis of acute cholecystitis. The guidelines define three criteria, two of which must be met to diagnose a patient with acute cholecystitis: findings on sonographic imaging, localized inflammation of the right upper quadrant of the abdomen, and pathological findings. However, even after two revisions of the Tokyo Guidelines, performance, and specificity in particular, still fluctuates drastically across validation studies.

As such, there has been demand to improve the accuracy of ultrasonography in diagnosing acute cholecystitis. Commonly examined findings include thickening of the gallbladder wall, pericholecystic fluid, and gallbladder distension. In particular, measuring gallbladder distension, given the correct parameters, can yield high diagnostic accuracy. A review by Kiewiet et al., pooling 57 studies (5859 patients), found that ultrasonography for diagnosing acute cholecystitis has a sensitivity of 81% and a specificity of 83% [2]. Previous research demonstrating the diagnostic power of ultrasound in acute cholecystitis includes the work of Shaish et al., who quantifiably defined a width of 3.5 cm (with 83% sensitivity and 88% specificity) as a reasonable cutoff for considering the diagnosis of acute cholecystitis [3]. In the past decade, the use of artificial intelligence (AI) in radiology has increased tremendously and has been applied to the diagnosis of diseases in many different organs [4,5]. Chang et al. used various machine learning methods to detect differences between biomarkers of patients with and without Alzheimer’s disease [6], while Yoo et al. used deep learning to diagnose bladder tumors with results ≥ 98% for low-grade benign tumors [7].

One of the most influential machine learning architectures is U-Net, developed in 2015 by researchers at the University of Freiburg; it is a convolutional neural network frequently used in biological image segmentation. The method proved accurate and efficient, with segmentation of a 512×512 image taking only around one second on a graphics processing unit [8]. However, there has yet to be an application of AI in this setting. Given the prevalence of acute cholecystitis and the high potential for misdiagnosis, the condition is an important target for innovation. The goal of the current study is to use segmentation of 2D ultrasound to estimate the total volume of the gallbladder to diagnose acute cholecystitis. U-Net segmentation was used to auto-segment ultrasound images of gallbladders and subsequently to automatically measure gallbladder length, width, and other variables. Support vector machines were applied to determine which features have the highest predictive power, and different kernels were tested to assess the accuracy of volume as a predictor of acute cholecystitis. Results showed up to 95.71% mean intersection over union (IOU) for the U-Net segmentation, and 80.49% accuracy and sensitivity for KNN and SVM with a radial basis function kernel on the test cohort for diagnosing acute cholecystitis.

Methods

Ultrasound Image Segmentation and Annotation by U-Net

The dataset used in this study contains scans from 127 patients. For each patient, one longitudinal and one transverse scan of the gallbladder were chosen at random and manually segmented to show the outline of the gallbladder, excluding the gallbladder wall. The annotated images were set as ground truths and used to train a neural network to automatically annotate gallbladder ultrasound scans. Specifically, the research utilized a modified version of U-Net, a commonly used convolutional neural network designed for biological image segmentation. The modified U-Net comprised one input layer; 42 hidden convolution, max-pooling, and concatenation layers; and one output layer, with both input and output of size [512, 512, 1]. Data were split into a 75% training, 20% validation, and 5% testing ratio. The hyperparameter tuned in this experiment was the learning rate, ranging from 0.0003 to 0.001. The U-Net epoch number for each learning rate (the number of passes through the training data before the model reached sufficient predictive power) was based on the early stopping method, to reduce overfitting on the validation set. Throughout the training process, various optimizers were tested to refine node weights and epoch number; these included Adam, RMSprop, SGD, Nadam, and Adadelta, with only Adam, RMSprop, and Nadam functioning properly as optimizers. Given these optimizers, the Dice score and mean IOU were calculated to assess the accuracy of the U-Net under the different optimizers.
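The two segmentation metrics can be computed from binary masks as follows (a generic sketch, not the study's code):

```python
import numpy as np

def iou_and_dice(pred, truth):
    """Intersection-over-union and Dice score between binary
    segmentation masks: IOU = |A∩B| / |A∪B|, Dice = 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = inter / union if union else 1.0       # empty masks count as perfect
    dice = 2.0 * inter / total if total else 1.0
    return iou, dice
```

Both scores reward overlap between the auto-segmented mask and the ground truth; Dice weights the intersection more heavily than IOU.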

Extraction of Gallbladder Features

The automatically segmented gallbladder ultrasound scans were used to calculate the volume of the gallbladder. A script was created to measure the longest axis of both the longitudinal and transverse scans of a patient using the annotation’s coordinates. In total, nine features were recorded for each pair of scans: longitudinal long axis length, longitudinal short axis length, longitudinal 2D area, longitudinal eccentricity, transverse long axis length, transverse short axis length, transverse 2D area, transverse eccentricity, and 3D volume.
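The paper does not state its exact volume formula; a common sonographic approach is the ellipsoid approximation, sketched here purely as an assumption:

```python
import math

def ellipsoid_volume(length, width, height):
    """Ellipsoid approximation V = (pi/6) * L * W * H, widely used to
    estimate organ volume from two orthogonal ultrasound planes.
    Illustrative assumption only, not the study's stated method."""
    return math.pi / 6.0 * length * width * height
```

For example, a gallbladder measuring 8 × 3 × 3 cm would be estimated at roughly 37.7 mL under this formula.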

Machine Learning Based Diagnosis

A random forest test was performed in R to determine the predictive power of each of the nine variables via mean decrease in Gini coefficient. In the final step, multiple machine learning algorithms were applied to the 127-patient data with all nine variables: SVM (linear, RBF, polynomial, and sigmoid kernels), KNN, logistic regression, and naïve Bayes. Data were split into a standard 80% training to 20% testing proportion. Accuracy, precision, and sensitivity were recorded for both training and testing data.
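As a minimal illustration of one of the classifiers compared, here is a pure-NumPy k-nearest-neighbours sketch (not the study's actual R/Python pipeline):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test row by majority vote among its k nearest
    training rows under Euclidean distance."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in np.asarray(X_test, dtype=float):
        dist = np.linalg.norm(X_train - x, axis=1)   # distance to every training row
        nearest = y_train[np.argsort(dist)[:k]]      # labels of the k closest rows
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])      # majority vote
    return np.array(preds)
```

In practice, feature scaling matters for distance-based models like KNN and RBF-kernel SVM, since features such as 2D area and eccentricity live on very different scales.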

Results

Ultrasound Image Segmentation and Annotation by U-Net

The modified U-Net model produced automatically segmented gallbladders with high accuracy relative to the corresponding ground truth images. A representative original ultrasound image, the auto-segmented image from the modified U-Net, and the manually segmented ground truth image are shown in Figure 1; the auto-segmented image closely resembled the ground truth (Figure 1). Mean IOU scores, which represent the overlap between the auto-segmented and ground truth images, ranged from 0.8803 to 0.9571, with all but one score between 0.9435 and 0.9571; the outlying score was associated with the Nadam optimizer (Table 1).

Table 1: Summary of the Performance of U-Net Auto Segmentation.

Figure 1

Extraction of Gallbladder Parameters

The gallbladder features from the longitudinal scan (long axis, short axis, 2D area, eccentricity) and transverse scan (long axis, short axis, 2D area, eccentricity), together with 3D volume, were extracted by a Python algorithm. The relationships between the different variables are displayed in a pair plot (Figure 2).

Figure 2

Figure 3

Machine Learning Based Diagnosis

The mean decrease in Gini coefficient was calculated to determine the predictive power of each variable; 3D volume and the short axis of the longitudinal scan had the highest power (4.411284 and 4.307068, respectively) (Table 2 & Figure 3). Of the machine learning algorithms tested, KNN and SVM with an RBF kernel performed best. Training data produced accuracies of 0.907 and 0.843 for KNN and SVM with RBF, respectively (Table 3), while testing data produced very similar results, with an accuracy of 0.841 for both. An accuracy of around 84.1%-90.7% is higher than that of the pooled study on standard ultrasound diagnosis, which recorded a sensitivity of 81% and a specificity of 83% (Table 4).

Table 2: Mean decrease in Gini coefficients for every variable.

Table 3: Results for various machine learning methods in diagnosis of acute cholecystitis for training data.

Table 4: Results for various machine learning methods in diagnosis of acute cholecystitis for testing data.

Discussion

The modified U-Net model produced automatically segmented gallbladders with high accuracy compared with the corresponding ground truth images. Mean IOU scores ranged from 0.8803 to 0.9571, with all but one score between 0.9435 and 0.9571; the outlying score was associated with the Nadam optimizer. Nadam also had a low epoch value of 150, which may have affected its mean IOU, and there may be a case for excluding this optimizer from future experiments. In general, the U-Net segmentations overlapped in area with the majority of the segmented ground truth images. Strong, accurate U-Net segmentation is critical to the overall effectiveness of the machine learning method for predicting acute cholecystitis. The mean decrease in Gini coefficient was calculated to determine the predictive power of each variable; 3D volume and the short axis of the longitudinal scan had the highest power (4.411284 and 4.307068). We originally planned to use these two variables to develop our predictive models, believing that excessive variables would cause overfitting. However, we first tested the model with all nine variables and found that accuracy, sensitivity, and precision were very similar between the training and test datasets, with only a 0%-6.7% decrease in accuracy from training to test data.

Of the different machine learning models tested, KNN and SVM with an RBF kernel performed best: training accuracies were 0.907 and 0.843, respectively, and testing accuracies were very similar, at 0.841 for both. An accuracy of around 84.1%-90.7% exceeds that of the pooled study on standard ultrasound diagnosis, which recorded a sensitivity of 81% and a specificity of 83%. The current dataset included only 127 patients, limited by the number of manual annotations; once split into testing, training, and validation cohorts, there were very few cases with a positive diagnosis of acute cholecystitis. For future work, the segmentation will be fully automated by U-Net to speed up data extraction, and the dataset could then be expanded to include more patients and generate results that reflect a larger population.

Conclusions and Future Work

This research developed methods both to segment gallbladder ultrasound images and to diagnose acute cholecystitis with multiple machine learning methods. The work incorporated novel 3D volume measurements of the gallbladder, with accuracy, precision, and sensitivity comparable to similar 2D work and to statistics from traditional diagnosis by manual ultrasound examination. More robust models may be achievable with an expanded dataset containing more patients. Furthermore, given the success of KNN and SVM with an RBF kernel, further research may construct more complex versions of these models to raise accuracy. Other variables besides gallbladder size may also be considered, such as thickening of the gallbladder wall and the presence of pericholecystic fluid. With these advancements, machine learning methods for diagnosing acute cholecystitis could potentially be applied in clinical settings.



Resolution of the Coexisting Sarcoidosis After Treatment of Papillary Thyroid Metastases with Radioactive Iodine. A Case Report

Introduction

The coexistence of sarcoidosis (SA) with thyroid malignancy has been reported in many cases [1-3]. Sarcoid-like reactions have also been seen either within the vicinity of the tumor itself or within the regional lymph nodes draining the primary tumor [4]. An abnormal immune response has been suggested for SA and/or its reactions when it coexists with thyroid diseases [5]. The presence of SA or sarcoid-like lymph node and soft-tissue infiltrations alongside thyroid cancer makes the diagnosis of malignant recurrence and/or metastasis difficult, and a thorough investigation should be done to properly distinguish recurrence and/or metastasis from coexisting SA or sarcoid-like manifestations. However, the prognosis and clinical course of SA or sarcoid-like reactions after treatment of thyroid malignancy have never been described in the literature. We report a case of a 54-year-old woman diagnosed with metastatic papillary thyroid carcinoma with coexisting lymph node and pulmonary SA infiltrations. The patient had almost complete resolution of all SA manifestations after treatment of her metastatic papillary thyroid cancer with high-dose radioactive iodine.

Case Description

A 54-year-old woman with a history of long-standing diabetes mellitus and hypertension was referred to our hospital with chronic shortness of breath, dry cough, and hilar lymphadenopathy. Contrast-enhanced computed tomography (CECT) of the chest at presentation is displayed in Figures 1-3: innumerable bilateral pulmonary nodules (black arrows) with faint right-upper-lobe interstitial opacities highly suspicious of sarcoid activity, and numerous pathologically enlarged mediastinal lymph nodes up to 3.4 cm in size (white arrows). The patient underwent mediastinoscopy and biopsy of mediastinal, neck, and paratracheal lymph nodes. Histopathological results showed metastatic papillary thyroid cancer (PTC) in the lymph nodes above the thyroid, and granulomatous inflammation with focal necrosis and fibrosis in the mediastinal and paratracheal groups.

Fine needle aspiration of a thyroid gland nodule showed papillary thyroid carcinoma. The patient had no history of tuberculosis contact, and the full tuberculosis work-up came back negative, so tuberculosis was excluded. After total thyroidectomy and lymph node dissection, pathologic examination confirmed multifocal papillary thyroid carcinoma with multiple lymph node metastases and granulomatous inflammation with negative fungal and tuberculosis stains. The post-thyroidectomy I-123 whole-body SPECT/CT scan is presented in Figure 4: focal avid uptake of the tracer at the thyroid gland with multiple left neck nodes (black arrow) and diffuse bilateral miliary pulmonary metastases (white arrows). Non-stimulated thyroglobulin (Tg) was not increased. The patient then received a 200 mCi radioactive iodine treatment, and her I-131 post-therapy scan confirmed the pre-therapy I-123 dosimetric findings. Follow-up CECT seven months later (Figures 5-7) showed almost complete resolution of the sarcoid lymph nodes and lung infiltrations, as well as of the iodine-avid peripheral lung nodules and mediastinal lesions.

Figure 1

Figure 2

Figure 3

Figure 4

Figure 5

Figure 6

Figure 7

Discussion

Sarcoidosis is a well-known granulomatous disorder of unknown etiology [1]. The coexistence of SA and/or sarcoid-like reactions with PTC has been reported in several studies [1-3], making the diagnosis of malignant recurrence and/or metastasis difficult without thorough investigation and multiple biopsies [4]. The impaired immune response associated with SA and/or sarcoid-like reactions can also yield an unreliable serum thyroglobulin (Tg) result, the key marker for surveillance of recurrence in differentiated thyroid cancers [5]. In our case, the primary PTC was associated with neck lymph node metastases and likely miliary bilateral pulmonary metastases. SA was diagnosed via pathologic examination of the biopsied mediastinal lymph nodes, transbronchial biopsy of the pulmonary infiltrate, and the staging CECT criteria. The accumulation of the pre-therapy dosimetry I-123 at the residual functioning thyroid tissue in the neck, the neck lymph nodes, and the bilateral lung nodules directed treatment towards high-dose I-131 after surgical resection, in spite of the non-diagnostic Tg level. Seven months after the radioactive iodine treatment, the follow-up CECT showed almost complete resolution of the neck and mediastinal lymph nodes and the pulmonary manifestations of SA, as well as of the bilateral miliary lung nodules. This observation underscores the association between PTC and SA, whether as a paraneoplastic syndrome or as a sarcoid-like reaction to the primary tumor, and points to the importance of treating the primary PTC even in the absence of an abnormal serum Tg, given that the impaired immune response may compromise Tg as a recurrence marker.


Climate Change is a Mental Health Emergency

Introduction

Climate change is no longer a distant threat looming on the horizon; it’s a pressing reality with immediate and far-reaching consequences. The World Health Organization recognises climate change as the greatest threat to human health in the 21st century. Extreme weather events, exacerbated by climate change, wreak havoc on individuals and communities alike. Recent data from the Climate Council revealed that 80% of Australians have experienced some form of extreme weather disaster since 2019, leading to physical and psychological impacts. According to the Black Dog Institute, for every person physically injured in a natural hazard, 40 experience psychological impacts such as anxiety, depression, PTSD, sleep disruption, and suicidal ideation. In the aftermath of climate-related disasters, accessing mental health support becomes crucial. Priority populations, including rural and remote communities, First Nations peoples, individuals with existing health and mental health issues, older people, children, people from culturally and linguistically diverse backgrounds, and those in sea-level geographical locations, are disproportionately impacted. Additionally, there’s a documented increase in family violence post-disaster, exacerbating the need for timely intervention and support.

Mental health support is crucial not only in providing treatment to individuals but also for timely intervention and prevention of violence. Consulting with psychiatrists and mental health professionals is essential for the development of strategies aimed at building resilience, prevention, and effective treatments for those who have experienced these events. In December 2022, the Royal Australian and New Zealand College of Psychiatrists (RANZCP) established the Climate and Sustainability Steering Group. This group was tasked with recommending a possible framework for future College action on climate and sustainability during 2024 and 2025. Broadly, the Steering Group has identified three priority areas: member resources and education; disaster recovery and preparedness; and advocacy and partnerships. Addressing workforce shortages and providing specialised training for mental health professionals, including psychiatrists, to effectively respond to the mental health challenges posed by climate change is a key area identified by the group. RANZCP’s focus extends beyond advocacy to practical measures aimed at bolstering mental health services, particularly in disaster-prone areas. However, these efforts cannot be achieved without government will and intervention.

It is imperative that all governments work together to develop a comprehensive strategy on climate, health, and well-being. Without this, as the severity and frequency of natural disasters increase, and as global temperatures rise, the mental health of the community will continue to pay the price. In its submission to the Department of Health and Aged Care National Health and Climate Strategy, RANZCP outlined immediate priorities for Australia. These include prioritising mental health, widening objectives to comprehensively address climate change impacts on health, enhancing data collection and monitoring, providing resources for training and implementation of strategies, increasing health workforce funding, prioritising vulnerable communities, and involving Indigenous and rural communities in strategy design. The climate crisis is not just an environmental issue; it’s a mental health emergency. Together, we must prioritise prevention, support, and resilience-building efforts to ensure that climate-related disasters do not leave a lasting legacy of mental health challenges and violence.


Teaching and Research Necessary Educational Process in the Academic Units of Health Sciences, UAZ

Introduction

Along with the new economic, political, social, health, and even wartime circumstances that the new millennium has brought, the educational field has had to rethink the process and work that teachers carry out with their students in the classroom. In particular, the educational work carried out in the Academic Units of Health Sciences of the Autonomous University of Zacatecas (ACS/UAZ) is the one that most requires rethinking and readjustment in relation to educational policy. The purposes pursued are, in general terms, to train competent and competitive health professionals who are capable of meeting the demands and needs of the population that requests their services, in addition to contributing to the development of the state and the country. In this sense, the ACS/UAZ is an educational field that, owing to its long-standing traditional, Napoleonic, and vicarious educational model, requires a reconceptualization of what to teach and how to teach it, so as to form students with creativity, critical perception, the capacity to analyze and synthesize, rigor, discipline, objectivity and responsibility in what they undertake, perseverance, relevance, and other qualities and skills, in addition to instilling in them a taste for exploration, inquiry, discovery, and truth (Porlán Ariza R [1]).

For this, it is essential to have teachers with a new profile: teachers capable of teaching while also learning from their students, interested in understanding what they are doing, how they are doing it, and why they do their pedagogical work. In short, the teacher must be a teacher-researcher who investigates whether what he teaches is grasped and understood by his students, or just the opposite, while continually rethinking his daily work in the classroom; who sees himself as the fundamental mediator between what he knows, masters, and wants to teach (theory) and the educational practice he carries out (learning). The characteristics and function of his work therefore entail an activity that both regulates and transforms the initiatives and the internal and external factors that in one way or another affect classroom dynamics. This intervention must be carried out through a double process. On the one hand, the teacher's cognitive dimension functions as a sieve, based on his beliefs, which allows him to use his own knowledge to decipher and evaluate the variables that affect both him and his students, to the benefit or detriment of the process. On the other hand, he must conduct his class as a practitioner capable of observing himself, so that he can make decisions about his own behavior when the results he obtains are not what he expected, and make modifications to reorient the process (Solís Muñoz JB [2]).

This double procedure, although influenced by the teacher's system of beliefs and opinions, does not adapt mechanically or automatically, since it results from the interaction of multiple variables in the specific teaching-learning context, in a process that largely escapes conscious control. The teacher, in his capacity as mediator, must therefore be able to actively and continuously investigate what is happening in the classroom. This requires, in addition to being a disciplinary professional within the health area, being by conviction an explorer of his own professional, pedagogical, and didactic knowledge, analyzing its relationship to his classroom performance, recognizing his deficiencies and limitations, and being willing to commit to correcting them. Currently, the role played by the majority of teachers in the ACS/UAZ appears to be that of a passive, authoritarian teacher who mechanically delivers the content he is responsible for teaching. This runs contrary to current demands: the teacher should be an active, versatile, and innovative agent in his disciplinary field, a modeler of the content he teaches and of the codes and links that structure it, thereby shaping the entire range of his students' learning; he should treat pedagogical work as experimentation and inquiry, constantly questioning the meaning and nature of his educational practice. For this, three conditions must be met: Wanting, Knowing, and Power (Santos Guerra MA [3]).

Wanting is understood as the will and the need to make decisions within the teacher's range of autonomy. Knowing is the ability and wisdom to carry out the explorations and investigations that lead to discovery as the basis of teaching, since wanting alone is not enough. Power means possessing the academic, pedagogical, and administrative capabilities and conditions that allow the teacher to make clear, decisive, and well-founded decisions, and to apply, in theory and practice, what could improve his work. If it is recognized that the creation of knowledge is hard, difficult, and complicated work, then learning to do research is an even more critical, arduous, and delicate process for both teacher and student. Establishing a teaching-research link in the classroom of the ACS/UAZ therefore urgently requires deep and significant curricular changes to make it viable. These transformations should be proposed or rethought at several levels of concreteness: the level of social structures, the institutional level, organizational functioning, and, most importantly, the level of the teachers themselves. Without good teachers who are capable, trained, and up to date enough to establish a true teaching-research link, it is impossible to expect the creation of professional graduates for this new century with the skills and competencies to face and solve, or contribute to solving, the various health and social problems they will encounter (Verónica R Indacochea González, et al. [4]).

On the other hand, the required changes in the school, the classroom, and academia point towards the production of knowledge; towards the administrative, organizational, and active exercise of democracy, the latter being related to morality and civic respect; and towards the competent and effective performance of graduates in the many professional and social services. Others refer to the development of a new culture, and to the liberation and emancipation of new health professionals from the old stereotyped models within each disciplinary field of the health area. It is clear that the substantive functions of higher education (teaching, research, extension, and community linkage) do not all carry the same value and place within higher education institutions. At the ACS/UAZ in particular, the teaching-research pairing is considered fundamental to subsequently achieving an extension and connection with society that is successful, competitive, and well regarded, although over time this connection has been given different meanings, conceptions, and values that blur the original conception of each function of higher education. Research and teaching acquire meaning as specific expressions of the ways knowledge is produced and disseminated in these institutions, but in recent years, and as reflected in educational policies both internal and external to the institution, there is an urgent need to link them holistically in order to enhance their purpose and function, although this has barely been analyzed from a socio-educational point of view.

Unfortunately, the little importance that the university teaching community and the respective educational authorities attach to the great need to move, day by day, towards this teaching-research link has delayed the fundamental task of bringing it to the level of discussion, and has therefore produced little reflection on the matter, a situation that conditions and limits its current state. The few statements made so far that justify and defend its necessity have amounted only to good intentions within the political-educational pronouncements of the authorities, since no actions are demanded to materialize these formulations, nor are the necessary conditions established for their realization; generally the link is only mentioned in political statements and at specific moments. It is therefore clear that this situation is more historical and political in character than academic, and for this reason part of the work ahead is the presentation of structural and operational proposals that favor the realization and effectiveness of this interrelation within the ACS/UAZ.

Differentiation between Teacher and Researcher

To understand and assess the need to move university teaching practice towards a new one based on a real and close link between teaching and research, and thereby enhance the educational process, some brief clarifications are necessary. Although there are both great and subtle differences between teaching and research, they also share similarities: in a university institution both partially or totally share materials, resources, infrastructure, and knowledge, and yet they differ in schedules, actions, attitudes, values, interests, and skills. While the teacher has under his responsibility and control a group of students, small or large, with whom he must try to communicate consistently under varying situations and conditions, the researcher interacts with only one peer or a very small group of peers. The teacher must possess and display eloquence, argumentation, clarity, objectivity, and patience to lead his students to the understanding of a pre-established topic (the curriculum or study plan), while the researcher often does not initiate dialogue, instead showing many episodes of silence and focusing only on presenting his work, although in many of those moments of silence it is understood, or rather believed, that he is reflecting on what has been presented or asked of him.

Generally, the teacher must gather strength, will, and disposition to understand, plan, and explain to his students the processes, themes, and content previously established in the study plan, while the researcher sets his own times, rhythms, spaces, and conditions for sharing his findings and results. For this reason, the teacher needs an adequate verbal culture, with a wide vocabulary of teaching jargon; the researcher, by contrast, displays his ability to express himself in writing, within a dialogue (often a monologue) made up of meanings and codes specific to his research field (Herrera González JD [5]). We could continue pointing out other differences, but the important thing here is that despite them there are shared spaces, contexts, and knowledge, such as the demand for a certain level of rigor and scientific grounding in what they say and do, and likewise the need to seek, present, and demonstrate the truth of what is said, as part of the need both have to demonstrate their pedagogical, didactic, academic, and scientific authority. Both must therefore make a great and unavoidable effort of systematization, organization, and discipline in their productive tasks, on pain of putting their work, credibility, and even privileges at risk.

Teaching Research Link

Any didactic model that aims to explain and direct the educational process must consider, as an essential element of its structure, the professional skills that the teacher must have. Defining a proposal for linking teaching and research implies, from the outset, characterizing the specific tasks of the teacher that make it viable, so that the model maintains the highest degree of coherence among the psychological, sociological, and specifically didactic principles that define and defend it. In that sense, the teacher is constructed as a facilitator of his students' learning and at the same time as a researcher of classroom processes, promoting essential aspects such as the constructivist conception of learning, the identification and importance of representations and conceptual errors in the construction of knowledge, the role of communication in the classroom, the influence of ecological and social exchanges on learning processes, and the development in the student of attitudes, behaviors, and values typical of scientific and critical thinking. Reflection on the linkage proposal should discuss the relevance and viability of including, among the professional tasks of the teacher, research into the situations in which he is immersed as a teacher. The idea of incorporating research into the teacher's work is not new; it dates back to the 1970s, when it began to be given importance in both theory and practice (Furió and Gil, 1984), and Stenhouse (1981) was one of the first theorists to raise it.

This proposal arises from the vision of applying new curricular approaches in the ACS/UAZ, promoting a teacher model in which the teacher investigates in the classroom to solve specific problems while reflecting on, theorizing about, and progressively reconstructing both his teaching and the content he teaches. One of the problems of greatest interest for analysis is how to introduce teachers to the behaviors, attitudes, and methodologies of classroom research. Some general ideas and strategies for carrying out the teaching-research activity follow. Obtain more systematic general information about the class, not just the teacher's intuitive impressions; for this it is necessary to keep a class diary. Evaluate in an investigative manner one or several aspects of the teacher's programming; to do so, sporadically apply questionnaires to the students about content of interest to them, and apply observation guides. Analyze and reflect on specific problems in a timely manner; to do this, apply questionnaires and carry out a sociogram with the students. Occasionally invite an outside professional to teach a topic and, together with the teacher, form a team to address and learn about the difficulties encountered in the group during and outside classes; for this the teacher should draw on his diary, his observations, and the analysis of the questionnaires and observation guides, and, together with the professional's perspective, triangulate the information collected (López de Parra L [6]).

Conclusion

Reflecting on the future of society's health and disease problems and needs in this no-longer-new 21st century poses enormous difficulties, owing to the ever-growing complexity of the contexts, situations, factors, and variables that condition them with increasing speed and breadth. In the general educational context, and in that of the ACS/UAZ in particular, there has long been a strong tendency to concentrate the entire educational present in the here and now, without contextualizing the process in the dimensions of globalization and multiculturalism. This trait has a significant impact on the teaching-learning process: on the one hand, it successfully reproduces the traditional, Napoleonic, and vicarious training model; on the other, it forgets in many ways that education must evolve progressively and in parallel with the social, political, geographical, and economic times, contexts, and dimensions that frame its primary task. That task consists of transmitting a specific, disciplinary professional training that gives students the elements, capacities, skills, and values so that, when they graduate, they do so with a truly humanistic, holistic, critical, reflective, and innovative vision of their immediate and future tasks in the society where they will work, to face the present and an already uncertain future.

Thinking about the future of education, and of the students and graduates of the ACS/UAZ, in a context as complicated as the current one is a difficult task. Paradoxically, it is when the teacher feels most secure in his work that he most needs to think and act with a vision of the future, given the recent educational and social crisis in the state and the country, which has been analyzed as the main symptom of the exhaustion of a traditional educational model that can no longer be sustained in the medium and long term without profound changes in both teaching and learning. Without entering into the details of these debates, the truth is that, for educational, academic, ethical, moral, political, social, and health-related reasons linked in one way or another to the training of professional, disciplined, competent, and competitive graduates, it is urgent to promote a true link in the area between teaching and research. The imperative is the construction of a new educational process with higher levels of academic, disciplinary, and social achievement, with the patterns of behavior and attitudes proper to a health professional as required by the cosmopolitan and globalized society of the 21st century, with more equitable possibilities of acceptance, with a more democratic, critical, and empathetic vision and action, and with high initiative to participate in decision-making as health professionals and as citizens in the environments where they work.
In short, we face a great challenge: to build a new higher education with higher levels of learning and of generation of knowledge, supportive experiences, inter-professional cohesion, and intergenerational ethical and moral responsibility. The big question will be whether the intention of building this new type of teaching-research link has sufficient potential to generate new horizons of scholastic, academic, political, and social recognition for the Autonomous University of Zacatecas, and in particular for its Area of Health Sciences; only time will tell.


Evaluation of Edible Coatings Containing Black Pepper (Piper Nigrum) and Turmeric (Curcuma Longa L) for Enhancement of Shelf Life of Jaggery

Introduction

Jaggery, a plant product high in sugar and used all over the world, is traditionally made by condensing sugarcane (Saccharum officinarum) juice. It is highly beneficial nutritionally and medicinally: in Indian Ayurvedic medicine it is regarded as a therapeutic sugar, used to heal lung and throat infections. In vivo studies have reported that dietary supplementation with jaggery exhibits health benefits [1]. Being a minimally processed sugar, jaggery retains phenolics and other phytochemicals with potent biological activities such as antioxidant, cytoprotective, and anthelmintic activity [2]. The physical and chemical composition of jaggery, as well as its storage environment, play a significant role in determining product quality. During storage, solid jaggery liquefies and darkens in color, owing to moisture absorption and microbial attack [3,4]. Physically, it dissolves and liquefies, disturbing its texture; it also dilutes the sugars and lowers the sweetness. Chemically, moisture promotes inversion of sucrose, which in turn leads to loss of texture, structure, and body hardness. Moisture gain also encourages microbial spoilage and degradation, lowering quality and economic value [5]. To prevent microbial deterioration, edible films or coatings can be applied to the food. An ideal edible coating is a thin, consumable layer of material that provides a barrier to oxygen, external microbes, and moisture, and prevents solute movement from the food [6,7]. It can also reduce decay without affecting the quality of fresh fruits and vegetables, extending their storage life without causing anaerobiosis [8]. The preservative effects of black pepper (Piper nigrum) and turmeric (Curcuma longa L.) extracts on jaggery have been reported earlier [9,10]. Turmeric, of the family Zingiberaceae, contains as its main biochemical compounds curcumin and curcuminoids, commonly used as spices and coloring agents in food.
Turmeric also has anti-inflammatory, antifungal, antimicrobial, antioxidant, and anti-proliferative properties [10,11]. Traditional antibacterial remedies make considerable use of black pepper (Piper nigrum), in which several piperidine and pyrrolidine alkamides are recognized; its main bioactive component, piperine, is well known to have antibacterial qualities [9]. Hence, the present investigation was undertaken to evaluate the physico-chemical properties, microbial characteristics, antioxidant activity, and antibacterial activity of carboxymethyl cellulose (CMC)-based black pepper and turmeric coatings on jaggery.

Materials and Methods

Sample Collection and Materials

Fresh jaggery samples (prepared from sugarcane variety ‘Co 0238’) were procured from a local small-scale jaggery manufacturing unit situated in Muzaffarnagar, India. Black pepper (Piper nigrum) liquid extract (BRM Chemicals, India) and turmeric (Curcuma longa L.) water extract (95%) (Purenso, India) were purchased from a local pharmacy store. Food-grade CMC (99.9%) with an average molecular weight of 41,000 g/mol, glycerol (analytical grade), and other reagents were purchased from Himedia Laboratories, Mumbai, India. Initial analysis of the jaggery was carried out in April 2023, and again after six months of storage, in September 2023.

Edible Coating Preparation and Sample Storage

Carboxymethyl cellulose (1.5 g) was dissolved in 100 mL distilled water and stirred at 75°C until the mixture became clear, and 5% (w/v) glycerol was added as plasticizer. The solutions were then cooled to 50°C, and the antimicrobial agents (black pepper and turmeric extracts) were added at 2% (w/v) concentration with constant stirring [12]. Three types of coating solution were prepared: turmeric-coated (TC), black pepper-coated (PC), and turmeric plus black pepper (in equal concentration) coated (TPC); the control received no coating. Jaggery samples (100 g cubes) were coated by dipping them into the pre-prepared coating solutions for 120 s at room temperature, then dried for 60 s at room temperature under filtered air in a laminar air flow cabinet. Both non-coated (NC) and coated (TC, PC, TPC) samples were stored in aluminium pouches for six months for further analysis.

Physico-Chemical Characterization

Both coated and non-coated stored jaggery samples were subjected to physico-chemical characterization (pH, color, turbidity, filterability, insoluble solids, water activity) as described by Guerra and Mujica [13]. Samples were dissolved in sterile distilled water and the pH was determined using a digital pH meter (Labman, India). Color (5% w/v jaggery solution) was determined by measuring OD at 540 nm using a visible spectrophotometer (Labman, India). Water activity was measured by placing the samples in a water activity meter (Labtron, India) and measuring the equilibrium relative humidity. Turbidity (5% w/v jaggery solution) was determined by measuring transmittance at 740 nm using the visible spectrophotometer. Filterability (%) was calculated as the ratio of the volumes of a sucrose solution (28° Brix) and of a 5% w/v jaggery sample solution (100 mL each) that passed through filter paper in 3 min. Insoluble solids were determined by drying and weighing the residue left on the filter paper after filtering 1 g of jaggery sample solution.
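Two of the determinations above reduce to simple ratios. The following is a minimal sketch of those calculations with invented readings (the volumes and residue weights are hypothetical, not data from the study):

```python
# Illustrative sketch of two physico-chemical calculations described above.
# All numeric readings are hypothetical.

def filterability_pct(jaggery_ml: float, sucrose_ml: float) -> float:
    """Filterability (%): filtered jaggery volume / filtered sucrose volume x 100,
    both filtered through filter paper for the same 3 min."""
    return 100.0 * jaggery_ml / sucrose_ml

def insoluble_solids_pct(residue_g: float, sample_g: float) -> float:
    """Insoluble solids (%): dried filter-paper residue relative to sample weight."""
    return 100.0 * residue_g / sample_g

# Hypothetical readings for one sample
print(round(filterability_pct(jaggery_ml=82.0, sucrose_ml=95.0), 1))  # 86.3
print(round(insoluble_solids_pct(residue_g=0.012, sample_g=1.0), 2))  # 1.2
```

A higher filterability and lower insoluble solids would indicate a cleaner, less degraded sample, which is why both are tracked across the storage period.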

Moisture, protein, ash, reducing sugar, and sucrose contents of Jaggery were determined following official AOAC methods [14]. Moisture content (%) was determined from the weight loss after drying 1 g of Jaggery sample for 24 h in a hot air oven. Reducing sugars (%) were determined by titrating the Jaggery sample solution against a known volume of Fehling's solution. Sucrose (%) was determined by the same titration method, but after inversion of the sample solution with acid followed by neutralization with alkali. Ash content was determined by incinerating 10 g of Jaggery sample in a muffle furnace and comparing the weight with that of the air-dried Jaggery sample. For protein content measurements, 100 μL of sample solution and 5 mL of Bradford reagent were mixed and incubated for 5 min; absorbance at 595 nm was recorded and compared against a standard curve of bovine serum albumin.
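The gravimetric determinations above amount to simple weight ratios; a minimal sketch, with assumed weighings (not the study's data):

```python
def moisture_pct(wet_g: float, dried_g: float) -> float:
    """Moisture (%): weight lost after 24 h of oven drying, per g of sample."""
    return 100.0 * (wet_g - dried_g) / wet_g

def ash_pct(sample_g: float, ash_g: float) -> float:
    """Ash (%): weight of incinerated residue relative to the sample weight."""
    return 100.0 * ash_g / sample_g

# Assumed weighings for illustration:
print(round(moisture_pct(1.000, 0.942), 2))  # → 5.8
print(round(ash_pct(10.0, 0.21), 2))         # → 2.1
```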

Phyto-Chemical Characterization

The total flavonoid, tannin, saponin, total alkaloid, and total phenol contents of Jaggery were determined; flavonoids were measured using the aluminium chloride method [15] and total phenols using the Folin-Ciocalteu method [16]. For total phenol content, 2 mL of Folin-Ciocalteu reagent and 2 mL of 10% sodium bicarbonate solution were mixed with 500 µL of the sample and incubated for 1 h at room temperature. Absorbance was recorded at 765 nm. Gallic acid was used as the standard, and total phenol content was expressed as mg gallic acid equivalent (GAE)/gram of sample. For total flavonoids, the reaction mixture was prepared by adding 5 mL of 10% aluminium chloride solution to 5 mL of sample solution, and absorbance at 415 nm was recorded after incubation for 30 min at room temperature. Catechin was used as the standard, and total flavonoid content was expressed as mg catechin per gram of sample (mg/g). For total alkaloids, the reaction mix was prepared by adding 100 µL of sample to 40 mL of 10% acetic acid in ethanol and incubating for 4 h at room temperature. Ammonium hydroxide was then added dropwise to the mix and the residue was allowed to settle for 1 h. The residue was then filtered, dried, and weighed. Total tannin content was determined by preparing a reaction mix of 1 mL sample with 7.5 mL distilled water, 0.5 mL of Folin-Ciocalteu reagent, and 1 mL of 35% sodium carbonate solution. After 1 h, absorbance was recorded at 760 nm. Total tannin content was expressed as tannic acid equivalent in mg/gram of sample. Saponins were determined by purifying 5 mL of sample solution with ethanol and diethyl ether and concentrating with n-butanol.
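Expressing an absorbance reading as mg GAE per gram of sample requires a gallic acid standard curve plus the extract volume and sample mass; a hedged sketch (the standards, absorbances, and sample values below are invented for illustration):

```python
import numpy as np

# Sketch of converting A765 to mg gallic acid equivalents (GAE) per g sample.
# Standards and all sample values are assumed, not the study's data.
std_ug_ml = np.array([0, 20, 40, 60, 80, 100])        # gallic acid standards
a765      = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])  # their absorbances

slope, intercept = np.polyfit(std_ug_ml, a765, 1)     # linear standard curve

def gae_mg_per_g(sample_a765, extract_volume_ml, sample_mass_g, dilution=1.0):
    """Invert the standard curve, then scale by extract volume and mass."""
    conc_ug_ml = (sample_a765 - intercept) / slope * dilution
    return conc_ug_ml * extract_volume_ml / 1000.0 / sample_mass_g  # µg → mg

# e.g. A765 = 0.25 from a 10 mL extract of a 1 g sample:
print(round(gae_mg_per_g(0.25, 10.0, 1.0), 2))  # → 0.5
```

The same pattern (standard curve, then dilution and mass scaling) applies to the catechin and tannic acid equivalents.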

Anti-Oxidant Activity

The Jaggery's capacity to scavenge DPPH radicals was assessed using the methodology outlined by Yamaguchi, et al. [17]. One mL of sample solution was mixed with the standard BHT at varying concentrations. Three mL of DPPH was added to the mix and incubated for 30 min in the dark. Absorbance was recorded at 517 nm, and DPPH radical scavenging was calculated as I% = (A_control − A_sample)/A_control × 100. The effective concentration for 50% scavenging of DPPH (EC50) was also calculated. Further, the reducing power of Jaggery was determined according to the method reported earlier by Yen and Chen [18]. Different concentrations of Jaggery sample solution were mixed with the standard antioxidant Trolox, and equal volumes of 0.2 M phosphate buffer and 1% potassium ferricyanide were added. The reaction mix was incubated at 50°C for 30 min. The mix was then centrifuged at 3000 rpm after adding 10% trichloroacetic acid. The supernatant was collected and mixed with 1% ferric chloride solution and sterile water. Absorbance was recorded at 700 nm.
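The I% formula and the EC50 readout can be sketched as follows; the dose-response points below are assumed values for illustration, with EC50 obtained by linear interpolation between the two points bracketing 50% inhibition:

```python
import numpy as np

# Sketch of the DPPH calculation: inhibition I% from control and sample
# absorbances at 517 nm, and EC50 interpolated from an assumed dose-response.

def inhibition_pct(a_control: float, a_sample: float) -> float:
    """I% = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Assumed dose-response points (concentration in mg/mL vs. measured I%):
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ipct = np.array([18.0, 34.0, 49.0, 63.0, 74.0])

# EC50: concentration giving 50% inhibition, interpolated between points.
ec50 = np.interp(50.0, ipct, conc)
print(round(float(ec50), 3))  # → 3.071
```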

Microbiological and Statistical Analysis

For microbiological analysis, standard plate counts and yeast and mould counts were made using Nutrient Agar medium (NAM) for bacteria and Potato Dextrose Agar (PDA) for moulds, while yeasts were isolated using Yeast Agar medium [9]. Data were subjected to statistical analysis for significance using the Analysis of Variance (ANOVA) technique.
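The one-way ANOVA used for significance testing reduces to a ratio of between-group to within-group mean squares; a minimal sketch, with invented replicate values standing in for the treatment groups:

```python
import numpy as np

# Minimal one-way ANOVA F statistic for comparing treatment means (e.g. the
# NC/TC/PC/TPC groups). The replicate values below are assumed, not measured.
def one_way_anova_F(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

groups = [np.array([5.0, 5.2]), np.array([2.2, 2.4]),
          np.array([3.2, 3.4]), np.array([2.9, 3.1])]  # assumed ×10³ cfu/g
print(round(float(one_way_anova_F(groups)), 2))  # → 142.25
```

A large F relative to the F(df_between, df_within) critical value indicates a significant treatment effect.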

Results and Discussion

Physico-Chemical Characterization of Coated Jaggery

The results for the physical properties of non-coated and coated Jaggery are presented in Table 1. The pH of coated and non-coated Jaggery was in the range of 5.7-5.9, in accordance with Guerra and Mujica [13]. The low pH of Jaggery may be due to a deficiency of lime added at the time of juice clarification. Color is found to be the primary factor in consumer preference and marketability of Jaggery, and depends on the dark compounds formed during processing. Browning of Jaggery can be caused by the Maillard reaction, oxidation of phenolic chemicals, alkaline breakdown of sucrose, or caramelization of sugars [19]. Coated Jaggery showed elevated absorbance at 540 nm (TPC>TC>PC) compared to non-coated Jaggery (NC). NC had a golden brown color, while a darkened color resulted in all coated Jaggery samples. Moisture content and water activity are two important parameters determining the quality, stability, and shelf-life of foods during storage. TPC and PC showed a marked increase (0.9%) in moisture content, but only a very slight increase in moisture content was observed for TC. This shows that coating the Jaggery samples helped retain moisture content to some extent. Water activity (aw) represents the water status in the food system and governs microbial growth [20]. Coating of the Jaggery samples produced significant (p≤0.01) differences in water activity, as shown by the marked decreasing trend in values from non-coated to coated samples. The results indicated that TPC and PC could offer better shelf-life and promising quality for Jaggery during storage. However, aw in the range 0.60-0.65 is optimal for the growth of osmophilic and xerophilic microbes such as Aspergillus and Saccharomyces, and thus supports their growth on Jaggery and results in spoilage [20].

Table 1: Physico-chemical properties of non-coated and coated stored Jaggery.

Turbidity of all coated Jaggery showed a gradual increase (TPC<PC<TC) with respect to NC Jaggery; an increase in turbidity of about 8-9% was observed for PC and TC, respectively, compared to NC. A marginal increase in filterability was seen between NC and TC Jaggery. However, the results showed a remarkable initial increase in filterability (6 and 15%) in PC and TPC, respectively, whereas the ash content differed by only 0.01% across all coated Jaggery.

The results for the chemical properties are presented in Table 1. Sucrose and reducing sugar contents showed only a very marginal increase in coated Jaggery (TPC>TC>PC). The protein content of TC and PC showed no significant difference from NC Jaggery, but TPC showed an increase of about 0.4 mg/g in protein content. An increase in total phenol, tannin, and flavonoid contents resulted in all coated Jaggery (TPC>PC>TC). TC, PC, and TPC exhibited increases of 11.1, 12.0, and 16.5% in phenol; 15.4, 14.6, and 16.2% in tannin; and 10.6, 6.7, and 7.7% in flavonoid contents, respectively, relative to NC Jaggery. Because of their distinct functional groups, flavonoids are the most prevalent form of dietary polyphenols with antioxidant potential. Both the flavonoid and total phenol results are in accordance with the previously reported study by Ahmad, et al. [21]. Thus, our results indicate that edible-coated Jaggery may be used as a source of antioxidants. Saponin and total alkaloid contents were not significantly different between samples and showed only a marginal increase in coated samples as compared to non-coated.

Antioxidant Activity

Antioxidant activity of coated Jaggery was measured by two in vitro assays, i.e., DPPH radical scavenging ability and the reducing power assay. DPPH is a stable free radical that, in its radical form, absorbs at 517 nm; this absorption decreases after acceptance of an electron or hydrogen atom from an antioxidant, owing to formation of its non-radical form DPPH-H [22]. The degree of decolorization of DPPH is a stoichiometric measure of the antioxidant potential of test samples. The scavenging abilities of TPC, TC, and PC are expressed in terms of EC50 values in Table 2. All coated Jaggery showed concentration-dependent free radical scavenging activity, with lower EC50 values than non-coated Jaggery. TC, PC, and TPC had EC50 values of 3.098, 3.076, and 3.038 mg/mL, respectively. The EC50 of BHT, used as the standard, was 0.0075 mg/mL; both coated and non-coated Jaggery therefore showed roughly 450-fold higher EC50 concentrations than the standard BHT. Results of the DPPH radical assay showed positive correlations (r = 0.92, 0.87, and 0.88) with the total phenolics of TC, PC, and TPC Jaggery, respectively. The high correlations between total phenolics and DPPH radical scavenging indicate that the polyphenols present in the coated Jaggery are the main antioxidants. Further, the reducing capacity assay provides a measure of a compound's ability to donate electrons and reduce the oxidized intermediates formed in the peroxidation process. The assay is based on the reduction of the Fe³⁺-ferricyanide complex, monitored at 700 nm; a rise in absorbance signifies a rise in reductive capacity [23].

Table 2: DPPH radical scavenging activity (standard BHT) and reducing power (standard Trolox) of non-coated and coated Jaggery.

Since the reducing power of a compound serves as a significant indicator of its antioxidant activity [24], the coated Jaggery was assayed for reducing power. As shown in Table 2, coated Jaggery exhibited increasing in-vitro ferric reducing potential; the absorbance of coated Jaggery at 700 nm was higher than that of non-coated Jaggery. Trolox, used as a standard, showed an absorbance of 1.37 at 50 µg/mL. The reducing potential of TC, PC, and TPC increased by 23.22, 26.00, and 24.53%, respectively, over non-coated Jaggery. Natural antioxidants play an important role in the prevention and interception of oxidative damage and have a great impact on the safety and acceptability of the food system. They keep food stable against oxidation and act as potent preservatives by controlling microbial growth. Antioxidants have many health benefits, including preserving biological function and guarding against diseases such as cirrhosis, diabetes, heart disease, gastropathy, chronic renal disease, and cancers [25]. In addition, the antioxidant activity of plants is often associated with polyphenols, whose hydrogen-donating capacity inhibits free radical-induced oxidation [18]. The phenolic compounds of sugarcane juice exhibit antioxidant potential [26] and confer various biological activities. Antioxidant compounds extracted from Jaggery showed stronger antioxidant potential than BHT in earlier reports [27].

Studies on in vitro stability and antioxidant capacity have shown that when curcumin is embedded in an O/W SFME, its stability and antioxidant activity are significantly improved [28]. It has also been reported that naturally occurring antioxidants found in black pepper are useful as food additives to increase the shelf life of foods because of their medicinal properties [9]. In the present investigation, the black pepper and turmeric edible coating resulted in a synergistic increase in both the total phenolic content and the antioxidative potential of Jaggery; the combination of nutritional and medicinal benefits thus qualifies it as a functional food.

Microbial Characterization

The total viable counts (TVC) in cfu/g (colony forming units per gram) in NC, TPC, TC, and PC Jaggery after six months of storage were 5.1×10³, 2.98×10³, 2.3×10³, and 3.3×10³, respectively. Coating of the Jaggery samples with the edible coating significantly (p≤0.01) affected microbial counts, as shown by the marked difference in TVC between uncoated and coated samples. The results suggest that coating Jaggery samples with an edible coating may reduce microbial deterioration of Jaggery to some extent. Similar findings were reported previously [29,30]. Priya and Garg [9,31] reported that coatings with powders of black pepper, turmeric, garlic, and cumin enhanced the shelf life of potato, tomato, taro roots, and bottle gourd. They further found a synergistic effect of turmeric and black pepper that significantly reduced the microbial spoilage of food stored in a refrigerator.

Antibacterial Activity

The antibacterial activity of non-coated and coated Jaggery was determined by measuring the diameter of the inhibition zone, as shown in Table 3 and Figure 1. Among the coated Jaggery, only TPC and PC were effective in inhibiting the growth of gram-positive bacteria compared to NC Jaggery. Against gram-negative bacteria, TPC, PC, and TC Jaggery all significantly inhibited growth compared to NC Jaggery. Based on the diameter of the inhibition zone, gram-positive bacteria were more sensitive to the coated Jaggery samples than gram-negative bacteria. The polyphenols and antioxidant properties of the Jaggery may be responsible for the antibacterial activity [30]. In earlier studies, curcumin has been reported to inhibit the growth of antibiotic-resistant Pseudomonas aeruginosa, Acinetobacter baumannii, Klebsiella pneumoniae, Firmicutes, Bacillus subtilis, E. coli, Staphylococcus carnosus, and Mycobacterium smegmatis [11,32,33]. Curcumin has been shown to have a modest level of effectiveness against the parasites Leishmania and Plasmodium falciparum [10,34]. Antimicrobial properties of an aqueous extract of Curcuma longa have been used by Roy and Garg [34] to enhance the refrigerated shelf life of common foods. Black pepper inhibits the growth of various Firmicutes and Bacteroidetes and has proven beneficial in enhancing cell morphology, capsule processing, and lowering urease activity [35]. Black pepper has shown antibacterial and antifungal properties, as well as inhibiting foodborne yeasts and the production of aflatoxins and other mycotoxins [9]. The present study also showed that antibacterial activity is proportional to the concentration of phenolic compounds and flavonoids and thus to the antioxidant performance of the coated Jaggery.

Table 3: Zone of inhibition (mm) of coated and non-coated Jaggery against selected bacterial strains (values are means of two replicates).

Figure 1

Conclusion

The results of the present investigation suggest that turmeric-plus-black-pepper and black pepper coatings enhanced the shelf-life of Jaggery and preserved its quality over six months of storage, as they reduced microbial deterioration and kept the Jaggery in good quality, equivalent to fresh. Jaggery is a major cash product for Indian farmers, who often have to sell it at a low price in the production season because long storage at room temperature spoils its quality and lowers its market value. Coating Jaggery with edible CMC + black pepper + turmeric can enhance its shelf life and provides farmers with a solution for long-term storage. Black pepper and turmeric are commonly used in India as herbal products for the treatment of various ailments and are part of every Indian kitchen as spices. Further, coating Jaggery with these herbal extracts increases its phenolic content and antioxidant potential. Hence, turmeric and black pepper coated Jaggery may be used as a substitute for regular Jaggery, with additional health benefits.

Conflict of Interests: The authors declare that they have no conflict of interests.

For More Articles: Biomedical Journal Impact Factor: https://biomedres.us


Morphing from the TV-Norm to the l0-Norm

Introduction

Compressed sensing is a technology that uses under-sampled measurements to solve an inverse problem [1-4]. When the measurements are insufficient, the imaging system is underdetermined, and the object to be imaged is not well defined. Extra information is required for a useful reconstructed image. For certain objects, extra information is available. The sparseness of an object is an important piece of information. An image array is said to be sparse if most of the array elements are zero. For example, the edge image of a piecewise constant image is sparse. Let us consider the following situation. The desired solution, X, of an inverse problem is sparse after a certain sparsifying transformation ψ, that is, ψ(X) is sparse. The sparseness is measured by the l0 norm, ‖∙‖0. If X is a vector,

‖X‖0 = the total number of nonzero elements in X. (1)

If X is a scalar, ‖X‖0 is a binary number:

‖X‖0 = 0 if X = 0, and ‖X‖0 = 1 if X ≠ 0. (2)

An under-determined inverse problem has infinitely many solutions. A compressed sensing solution is a solution with a minimum ‖ψ(X)‖0 value.

Mathematically speaking, the l0 norm is not really a norm, nor even a pseudo-norm, because, in general, it is not absolutely homogeneous:

‖αX‖0 ≠ |α|·‖X‖0 for a scalar α. (3)

In this paper, we use the term 'norm' loosely. The difficulty of using the l0-norm is that it is nondifferentiable. Many researchers have attempted to tackle the l0-norm minimization problem. The work in [5] converted the l0-norm minimization problem into an unconstrained augmented Lagrange problem. The work in [6] solved the l0-norm minimization problem by introducing auxiliary variables. The l0-norm minimization solution used in [7] involved hard thresholding, linearization, and proximal point techniques. The l0-norm minimization is usually achieved by solving subproblems [8]. Due to the difficulty of minimizing the l0-norm, the total variation (TV) norm minimization, which is the l1-norm of the gradient, has been given more attention [9-11]. This is because the l1-norm optimization is much easier to implement than the l0-norm optimization. The main purpose of this current paper is to present a simple way to implement the l0-norm optimization.

Methods

The l0-Norm and Proposed Meta l0-Norm

In our two-dimensional (2D) image reconstruction applications, we choose the finite differences in the row and column directions as the sparsifying transformation ψ. Let the 2D image be X and its pixels be denoted x_{i,j}, where i is the row index and j is the column index. At each pixel, ψ(x_{i,j}) is a finite difference gradient, which is a 2D vector

ψ(x_{i,j}) = ( x_{i,j} − x_{i−1,j}, x_{i,j} − x_{i,j−1} ). (4)

The optimization metric (i.e., the objective function) can be set up as the sum of the l0-norms over all pixels in the image:

M(X) = Σ_{i,j} ‖ψ(x_{i,j})‖0. (5)

One difficulty of minimizing (5) is that the l0-norm is not differentiable and is not easy to optimize. One common method to mitigate the difficulty is to use the l1-norm approximation [9-13]. The famous total variation (TV) method uses the l1-norm of the finite difference gradient to replace the l0-norm of the finite difference gradient. For a 2D image X, the isotropic TV norm is defined as

TV_iso(X) = Σ_{i,j} sqrt( (x_{i,j} − x_{i−1,j})² + (x_{i,j} − x_{i,j−1})² ), (6)

and the anisotropic TV norm is defined as

TV_aniso(X) = Σ_{i,j} ( |x_{i,j} − x_{i−1,j}| + |x_{i,j} − x_{i,j−1}| ). (7)

This paper proposes an alternative 'norm' to replace the l0-norm. We refer to the proposed 'norm' as the meta l0-norm, which is defined for a scalar X as

‖X‖meta0 = 1 − exp(−a|X|), (8)

where 0 < a < ∞ is a user-defined parameter. When X = 0, ‖X‖meta0 = ‖X‖0 = 0. When X ≠ 0, ‖X‖meta0 ≈ ‖X‖0 = 1 if a is chosen large enough. Figure 1 illustrates the approximation of ‖X‖0 by the proposed ‖X‖meta0.
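As a sketch only, assuming the scalar meta l0-norm takes the exponential form 1 − exp(−a|X|) (which is 0 at X = 0, tends to the l0 value 1 for X ≠ 0 as a grows, and behaves like an l1-type penalty a|X| for small a, matching the TV-to-l0 morphing behaviour described in the Results):

```python
import numpy as np

# Sketch of the scalar meta l0-norm, assuming the form 1 - exp(-a|X|):
# it vanishes at X = 0, approaches the l0 value 1 for X != 0 as 'a' grows,
# and behaves like a|X| (an l1/TV-like penalty) for small 'a'.
def meta_l0(x, a):
    return 1.0 - np.exp(-a * np.abs(x))

x = 0.5
print(meta_l0(0.0, 10.0))       # → 0.0 (agrees with the l0-norm at zero)
print(meta_l0(x, 1000.0))       # → 1.0 (approximates ||x||_0 = 1)
print(meta_l0(x, 0.01) / 0.01)  # ≈ 0.499, i.e. ~|x|: small-a limit is l1-like
```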

Figure 1

For a 2D vector V = (v1, v2), we define the anisotropic meta l0-norm as

‖V‖anisoMeta0 = (1 − exp(−a|v1|)) + (1 − exp(−a|v2|)), (9)

and define the isotropic meta l0-norm as

‖V‖isoMeta0 = 1 − exp(−a·sqrt(v1² + v2²)). (10)

For a 2D image X, we replace (5) by the anisotropic metric

M_anisoMeta(X) = Σ_{i,j} ‖ψ(x_{i,j})‖anisoMeta0, (11)

or the isotropic metric

M_isoMeta(X) = Σ_{i,j} ‖ψ(x_{i,j})‖isoMeta0. (12)

The subdifferential exists for the proposed metric (11); at pixel (i, j) it can be written as

∂M_anisoMeta(X)/∂x_{i,j} = a·sign(x_{i,j} − x_{i−1,j})·exp(−a|x_{i,j} − x_{i−1,j}|) + a·sign(x_{i,j} − x_{i,j−1})·exp(−a|x_{i,j} − x_{i,j−1}|) − a·sign(x_{i+1,j} − x_{i,j})·exp(−a|x_{i+1,j} − x_{i,j}|) − a·sign(x_{i,j+1} − x_{i,j})·exp(−a|x_{i,j+1} − x_{i,j}|), (13)

where the sign function in (13) is defined as

sign(X) = −1 for X < 0, 0 for X = 0, and 1 for X > 0. (14)

The subdifferential given in (13) readily leads to a gradient descent algorithm to minimize the metric given in (11). The subdifferential for (12) can be similarly obtained.
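Assuming the scalar form 1 − exp(−a|t|) for each finite difference t (whose derivative is a·sign(t)·exp(−a|t|), consistent with the sign-and-exponential structure described above), the gradient descent can be sketched in NumPy; the step size and all parameter values are illustrative, not the paper's:

```python
import numpy as np

# Sketch of gradient descent on the anisotropic meta-l0 metric, assuming
# meta(t) = 1 - exp(-a|t|) per finite difference t; d/dt = a*sign(t)*exp(-a|t|).
def d_meta(t, a):
    return a * np.sign(t) * np.exp(-a * np.abs(t))

def metric_gradient(X, a):
    """Subgradient of sum_ij meta(x[i,j]-x[i-1,j]) + meta(x[i,j]-x[i,j-1])."""
    g = np.zeros_like(X)
    dr = np.diff(X, axis=0)       # row differences x[i,j] - x[i-1,j]
    dc = np.diff(X, axis=1)       # column differences x[i,j] - x[i,j-1]
    g[1:, :] += d_meta(dr, a)     # +dM/dx at the 'current' pixel
    g[:-1, :] -= d_meta(dr, a)    # -dM/dx at the neighbour
    g[:, 1:] += d_meta(dc, a)
    g[:, :-1] -= d_meta(dc, a)
    return g

def smooth_step(X, a=10.0, eta=1e-3, n_iter=100):
    """A few descent steps that flatten small differences (piecewise-constant prior)."""
    X = X.copy()
    for _ in range(n_iter):
        X -= eta * metric_gradient(X, a)
    return X
```

Because each difference term contributes equal and opposite amounts to the two pixels involved, the gradient sums to zero and the descent preserves the image mean while shrinking the differences.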

Application: Few-View Tomography

Here we consider a 2D parallel-beam imaging system, where the image array was 256 × 256, the detector contained 256 bins, and the number of views over 180° was 30. The noiseless projection line integrals were generated analytically (without using pixels). A method based on 'projections onto convex sets' (POCS) was selected to reconstruct the image. This method alternated between the maximum-likelihood expectation-maximization (MLEM) image reconstruction algorithm and a gradient descent algorithm that minimized the metric M_anisoMeta(X) or M_isoMeta(X). There were 200 iterations in the POCS algorithm, and at each POCS iteration there were 5000 iterations of the gradient descent algorithm. A flow chart of the algorithm is illustrated in Figure 2. In the computer simulations, the Shepp-Logan phantom was used [14]. In addition to the noiseless data set, a noisy data set was also generated with zero-mean Gaussian noise added.

In Figure 2, the image reconstruction algorithm ① is the MLEM algorithm:

Figure 2

x_{i,j}^{(k+1)} = [ x_{i,j}^{(k)} / Σ_m a_{i,j,m} ] Σ_m a_{i,j,m} p_m / [ Σ_{i′,j′} a_{i′,j′,m} x_{i′,j′}^{(k)} ],

where p_m is the mth projection, a_{i,j,m} is the projection contribution from the pixel (i, j) to the projection bin m, and k is the iteration index. In fact, the user can choose any iterative image reconstruction algorithm for algorithm ①. The gradient descent algorithm ② in Figure 2 is given as

x_{i,j}^{(k+1)} = x_{i,j}^{(k)} − η ∂M_anisoMeta(X)/∂x_{i,j},

where η = 2 × 10⁻⁷ in our computer simulations, and the gradient ∂M_anisoMeta(X)/∂x_{i,j} is defined by (13), or by its isotropic counterpart for (12). The number of iterations shown in Figure 2 serves as an example only. A TV-constrained image reconstruction algorithm was also implemented for comparison purposes. The TV implementation followed the same format as depicted in Figure 2, except that the gradient was calculated from the TV norm (6) or (7).
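The MLEM half of the alternation can be illustrated on a toy system; the system matrix, data, and iteration counts below are invented for demonstration, and a real POCS loop would interleave the meta-norm gradient descent (algorithm ②) after each MLEM pass:

```python
import numpy as np

# Toy illustration of the MLEM update used as algorithm (1). The system
# matrix A, the true image, and the iteration count are assumed values.
rng = np.random.default_rng(0)
A = rng.random((6, 4))              # a_{m,n}: contribution of pixel n to bin m
x_true = np.array([0.0, 0.0, 2.0, 2.0])
p = A @ x_true                      # noiseless projections

def mlem_update(x, A, p):
    """x <- (x / sum_m a_m) * A^T (p / Ax): the classical MLEM step."""
    return x / A.sum(axis=0) * (A.T @ (p / (A @ x)))

x = np.ones(4)
for _ in range(200):                # MLEM alone here; a POCS loop would run
    x = mlem_update(x, A, p)        # the meta-norm descent between passes
print(np.round(x, 3))
```

Note the multiplicative form keeps the estimate nonnegative, and a solution of Ax = p is a fixed point of the update.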

Results

The reconstructions via MLEM, anisotropic/isotropic TV, and the proposed anisotropic/isotropic meta l0 methods are shown in Figures 3 & 4 for the noiseless and noisy data, respectively. Tables 1-4 show quantitative comparison studies with the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR). When the parameter 'a' is small, the proposed meta l0-norms behave like TV norms. On the other hand, when the parameter 'a' is large, the meta l0-norms behave like the l0-norm. Thus, the proposed meta l0-norms are snapshots of the morphing from the TV norm to the l0-norm for different values of 'a'. When a = 10000, the numerical values of the exponential function underflow; the constraints are then not effective and are ignored. It is observed that the l0-norms give better SSIM results, whereas the TV norms give better PSNR and SNR results. Therefore, we cannot conclude whether the TV norms or the l0-norms are the better choice for sparse solutions.

Figure 3

Figure 4

Table 1: Comparison studies with noiseless data (anisotropic).

Table 2: Comparison studies with noiseless data (isotropic).

Table 3: Comparison studies with noisy data (anisotropic).

Table 4: Comparison studies with noisy data (isotropic).

Discussion and Conclusions

Simple meta l0-norms are suggested in this paper to replace the l0-norm when searching for a sparse solution to an inverse problem with under-sampled data. These meta norms have a user-defined parameter 'a'. The meta l0-norms behave like the l0-norm when 'a' is large. The most significant advantage of the meta l0-norms is that they are subdifferentiable; therefore, it is straightforward to derive a gradient descent algorithm to minimize them. Computer simulations in this paper show that the meta l0-norms are effective in producing a piecewise constant solution. The meta norms are snapshots between the TV norms and the l0-norms. The choice of a better norm is task dependent, and the l0-norms are not always preferred. We must point out that the l0-norm minimization problem is far from being solved. The l0-norm minimization problem is NP-hard, and the gradient descent algorithm to minimize the meta l0-norms is merely a greedy approach. The l0-norm minimization problem has multiple minima, and any gradient-based method can only find a local minimum.



Association between Type II Diabetes Mellitus and Pre-DM with the Triglyceride-to-HDL-Cholesterol Ratios in Korean Older Adults: KNHANES VIII-1

Introduction

Older persons are more likely to have diabetes mellitus, and continued trends in this direction are expected in the upcoming decades. According to Sesti, et al. [1], 51% of older persons (over 65) in the US have prediabetes and over 25% have diabetes. In South Korea, three of ten people over 65 had diabetes mellitus with inadequate glycemic and risk factor management (Jung, et al. [2,3]). Diabetes mellitus is especially significant among the elderly population for a number of reasons. First, diabetes becomes more common as people age, and the risk continues to increase with advancing age (Sesti, et al. [1]). Second, diabetes can result in a number of complications, including kidney disease, nerve damage, heart disease, stroke, and visual issues (Sinclair, et al. [4]), and can increase the chances of conditions to which older people may already be more susceptible. Third, diabetes is rarely the only health issue that older persons have, so effective diabetes management becomes crucial for maintaining general health and preventing additional health issues. Fourth, there is evidence suggesting a possible connection between diabetes and cognitive deterioration: diabetes may make older adults more vulnerable to illnesses such as dementia, and proper diabetes care can help preserve cognitive abilities. Fifth, diabetes care is essential to preserving physical function and avoiding disability in older persons; among frail older persons, diabetes is associated with premature death, multiple comorbidities, and functional impairment.

Valid and accurate risk factors are crucial to lowering the burden of diabetes in the elderly population, since good diabetes treatment can dramatically improve the quality of life for these individuals. Although unhealthy diets, physical inactivity, and being overweight or obese have all been linked to the development of diabetes, a recent study found that non-traditional lipid parameters, such as the ratio of triglycerides to high-density lipoprotein cholesterol (THR), as well as traditional lipid parameters, such as TG, TC, HDL-C, and LDL-C, are also closely linked to the development of diabetes. Numerous research investigations have reported that THR is indicative of insulin resistance (Yang, et al. [5,6]), associated with ischemic heart disease (Bertsch, et al. [7]), and associated with the incidence of type 2 diabetes in a cohort of men (Vega, et al. [8]). Overall, the TG/HDL-C cut-offs for men and women are approximately >3.5 and >2.5, respectively. Nonetheless, the majority of research examining these associations has been carried out mostly on younger or Caucasian populations; the connection between TG/HDL-C and the risk of diabetes mellitus (DM) in the older Asian population has not yet been thoroughly studied. The purpose of this study was to examine the association between THR and Type 2 DM and pre-DM in the elderly Korean population.

Method

Study Population

This research is a secondary analysis of data acquired from the KNHANES VIII-1, 2018. The KNHANES has been used since 1998 to assess the nutritional and general health of the Korean people. A complex, multistage, stratified probability cluster survey design was employed, covering a representative sample of South Korea's non-institutionalized population. About 10,000-12,000 people make up the survey's yearly sample, and 4,600 households are chosen from a panel and polled. The nutrition survey, health examination, and health interview survey make up the KNHANES VIII-1. Out of 16,489 people who completed the KNHANES VIII-1, we used data from 3,192 people over 65 who had triglyceride, HDL-C, and diabetes measurements (Figure 1).

Figure 1

Ethics Statement and Data Access

The Korea Centers for Disease Control and Prevention granted permission for access to the KNHANES VIII-1 data. This study was exempt from IRB approval because it is a secondary analysis that used and evaluated previously collected 2018 KNHANES VIII-1 data.

Data Collection

The data for this study comprise participants in the 2018 KNHANES VIII-1. Sociodemographic characteristics included age, gender, education level, household income, marital status, and frequency of exercise. Education was categorized as finished elementary school or less, middle school (nine years or fewer), high school (10-12 years), or college (thirteen years or more). Monthly household income was divided by the total number of family members and classified into quartiles. For marital status, those who were married were classified as "with spouse," those who had divorced as "divorced," those widowed as "widow/widower," and those who had never married as "single." Based on fasting glucose values and history of DM, participants were classified into three groups: 1) normoglycemic (NG) (fasting glucose < 100 mg/dL (5.6 mmol/L)); 2) pre-diabetes mellitus (Pre-DM) (fasting glucose 100-125 mg/dL (5.6-6.9 mmol/L)); and 3) diabetes mellitus (DM) (fasting glucose ≥ 126 mg/dL (7.0 mmol/L) or a history of DM). Body mass index (BMI) was calculated as weight in kilograms divided by the square of height in meters (kg/m²) and categorized using the Asian-Pacific cutoff points (Pan, et al. [9]): obese (≥25 kg/m²), overweight (23-24.9 kg/m²), normal (18.5-22.9 kg/m²), and underweight (<18.5 kg/m²).
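The glycemic-status and BMI categorizations can be sketched directly from the stated cutoffs (function names are ours, not from the study; fasting glucose in mg/dL, BMI with the Asian-Pacific cutoffs):

```python
# Sketch of the classifications described above; names are ours, for illustration.
def glycemic_status(fasting_glucose_mg_dl: float, history_of_dm: bool = False) -> str:
    """NG / Pre-DM / DM from fasting glucose (mg/dL) and DM history."""
    if history_of_dm or fasting_glucose_mg_dl >= 126:
        return "DM"
    if fasting_glucose_mg_dl >= 100:
        return "Pre-DM"
    return "NG"

def bmi_category(weight_kg: float, height_m: float) -> str:
    """Asian-Pacific BMI categories from weight (kg) and height (m)."""
    bmi = weight_kg / height_m ** 2
    if bmi >= 25:
        return "obese"
    if bmi >= 23:
        return "overweight"
    if bmi >= 18.5:
        return "normal"
    return "underweight"

print(glycemic_status(110))      # → Pre-DM (100-125 mg/dL)
print(bmi_category(70.0, 1.70))  # → overweight (BMI ≈ 24.2)
```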

Data Analysis

SPSS 24.0 (IBM Corp., Armonk, NY) was used to analyze the data. Subjects were stratified by TG/HDL-C quartiles. Continuous values were presented as means and standard deviations, and categorical variables as frequencies or percentages. The chi-square test, one-way ANOVA, or Kruskal-Wallis H tests were performed to identify significant differences between group proportions and means. Multivariate logistic regression analyses for THR in categorized form (Q1 to Q4) were conducted; statistical significance was set at the 5% level (p < .05).
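The quartile-comparison logic behind such an analysis can be illustrated with an unadjusted odds ratio from a 2×2 table (highest vs. lowest THR quartile against DM status); unlike the study's adjusted logistic model, this sketch uses invented counts purely for demonstration:

```python
import math

# Unadjusted OR and Wald 95% CI from a hypothetical 2x2 table. The counts
# below are invented for demonstration and are NOT the study's data.
def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for [[a, b], [c, d]]: (exposed, cases/non-cases; unexposed, ...)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: Q4 with DM, Q4 without, Q1 with DM, Q1 without
or_, lo, hi = odds_ratio_ci(180, 620, 90, 710)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Adjusted estimates like those in Table 2 would instead come from a multivariate logistic regression with the covariates entered alongside the THR quartiles.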

Results

Table 1 displays the baseline demographic characteristics of the participants. A total of 3,192 participants (men: 1,379; women: 1,813) were included in the analysis; the mean age of the population was 72.83±5.04 years. Participants with NG, Pre-DM, and DM comprised 45.52% (n=1,453), 37.69% (n=1,203), and 16.79% (n=536), respectively. The mean TG/HDL-C was 3.19±3.11, and the mean fasting plasma glucose and BMI were 109.19±27.91 mg/dL and 24.22±3.19 kg/m², respectively. We found that age, gender, and systolic blood pressure were not associated with seniors' diabetes status. Participants who had lower household income, larger waist circumference, higher BMI, or diagnosed hypertension, hyperlipidemia, or hypercholesterolemia were more likely to be in the Pre-DM or DM groups. Further, we found that the highest TG/HDL-C group was more likely to have Pre-DM or DM. After adjusting for the full model (age, BMI, DBP, household income, waist circumference, and associated disease), the relationship remained for Pre-DM (OR=3.404, 95% CI: 2.514-4.610) and DM (OR=2.308, 95% CI: 1.697-3.139) (Table 2).

Table 1: Baseline characteristics of participants and by diabetic status.

Note: BMI: Body Mass Index (kg/m²)

Table 2: Relationship between TG/HDL-C and the diabetic status.

Discussion

Our findings indicated that, after controlling for covariates, TG/HDL-C was positively linked with the incidence of diabetes even in later life, in both men and women. Previous studies reported that a high TG/HDL-C is associated with insulin resistance in American (Vega, et al. [8]), Korean (Sung, et al. [10,11]), and Chinese adults (Chen, et al. [12]), but evidence in the elderly population has been lacking. The results of this study are similar to those of other Asian studies: TG/HDL-C was found to be positively correlated with the risk of diabetes by Chen, et al. [12], and the present study is significant in that we found similar results in exclusively older subjects. In contrast, in an Iranian study, TG/HDL-C was not a robust predictor of type II diabetes in high-risk individuals (Janghorbani, et al. [13]). Additionally, our findings are consistent with past research showing that the TG/HDL-C ratio is a surrogate metabolic indicator that can predict the onset of type II diabetes in individuals with pre-hypertension or even normal blood pressure (Wagner, et al. [14]). Cheng, et al. [15] showed a dose-response relationship in 4,173 Chinese men and 6,568 women in which the TG/HDL-C ratio was an independent risk factor for type 2 diabetes, with the association particularly pronounced in women. According to our findings, women likewise had a greater OR of DM than men. Diabetes in older adults is linked to an increased absolute risk of cardiovascular (CV) or microvascular disease, even though young adults with early-onset type 2 diabetes and hyperglycemia have a higher relative CV risk. These consequences include a higher death rate, an increase in hospital admissions and institutionalization, and a greater social and financial burden. It is as yet unknown how precisely a high TG/HDL-C ratio contributes to the onset of type 2 diabetes. Insulin resistance and decreased insulin production are two characteristics of type II diabetes (Pantoja-Torres, et al. [16]).

Therefore, one theory for the connection between a high TG/HDL-C ratio and the onset of type II diabetes and pre-DM could be malfunctioning, aging pancreatic beta-cells. Reduced HDL-C results in decreased cholesterol export, which builds up cholesterol in pancreatic beta cells; this is accompanied by higher levels of nitric oxide, ceramide, and blood glucose. Elevated TG levels may also cause beta cell death (Levy, et al. [17-19]). Future research should take our study's possible shortcomings into account. First, because this study is based on a secondary analysis of national health data, variables that were not part of the original dataset, such as interleukin-5, could not be adjusted for, even though they may have an impact on TG and HDL-C levels. Similarly, characteristics such as hip circumference and tumor necrosis factor that are not part of the dataset could not be included.

Despite its limitations, this study has certain advantages:

1) This study's sample size was comparatively large when compared to earlier, comparable research;

2) We managed the independent variable through secondary analysis of nationally representative data.

Conclusion

In conclusion, among community-dwelling Korean older adults, a high TG/HDL-C ratio independently predicts the future incidence of Type II diabetes; this prediction is not influenced by other related variables.
