journal-title | pmid | pmc | doi | article-title | abstract | related-work | references | reference_info |
|---|---|---|---|---|---|---|---|---|
Scientific Reports | 27578529 | PMC5006166 | 10.1038/srep32404 | Large-Scale Discovery of Disease-Disease and Disease-Gene Associations | Data-driven phenotype analyses on Electronic Health Record (EHR) data have recently yielded benefits across many areas of clinical practice, uncovering new links in the medical sciences that can potentially affect the well-being of millions of patients. In this paper, EHR data is used to discover novel relationships between diseases by studying their comorbidities (co-occurrences in patients). A novel embedding model is designed to extract knowledge from disease comorbidities by learning from a large-scale EHR database comprising more than 35 million inpatient cases spanning nearly a decade, revealing significant improvements in disease phenotyping over current computational approaches. In addition, the use of the proposed methodology is extended to discover novel disease-gene associations by including valuable domain knowledge from genome-wide association studies. To evaluate our approach, its effectiveness is compared against a held-out set where, again, it revealed very compelling results. For selected diseases, we further identify candidate gene lists for which disease-gene associations were not studied previously. Our approach thus provides biomedical researchers with new tools to filter genes of interest, reducing costly lab studies. | Background and related work: In the treatment of ailments, the focus of medical practitioners can be roughly divided between two complementary approaches: 1) treating the symptoms of already sick patients (reactive medicine); and 2) understanding disease etiology in order to prevent manifestation and further spread of the disease (preventative medicine).
In the first approach, the disease symptoms are a part of a broader phenotype profile of an individual, with phenotype being defined as the presence of a specific observable characteristic in an organism, such as blood type, response to administered medication, or the presence of a disease13. The identification of useful, meaningful medical characteristics and insights for the purposes of medical treatment is referred to as phenotyping14. In the second approach, researchers identify the genetic basis of disease by discovering the relationship between exhibited phenotypes and the patient’s genetic makeup in a process referred to as genotyping15. Establishing a relationship between a phenotype and its associated genes is a major component of gene discovery and allows biomedical scientists to gain a deeper understanding of the condition and a potential cure at its very origin16. Gene discovery is a central problem in a number of published disease-gene association studies, and its prevalence in the scientific community is increasing steadily as novel discoveries lead to improved medical care. For example, results in the existing literature show that gene discovery allows clinicians to better understand the severity of patients’ symptoms17, to anticipate the onset and path of disease progression (particularly important for cancer patients in later stages18), or to better understand disease processes on a molecular level, enabling the development of better treatments19. As suggested in previous studies20, such knowledge may be hidden in vast EHR databases that are yet to be exploited to their fullest potential. Clearly, both phenotyping and gene discovery are important steps in the fight for global health, and advancing tools for these tasks is a critical part of this battle.
The emerging use of gene editing techniques to precisely target disease genes21 will require such computational tools at precision medicine’s disposal. EHR records, containing abundant information relating to patients’ phenotypes that has been generated from actual clinical observations and physician-patient interactions, present an unprecedented resource and testbed for applying novel phenotyping approaches. Moreover, the data is complemented by large amounts of gene-disease associations derived from readily available genome-wide association studies. However, current approaches for phenotyping and gene discovery using EHR data rely on highly supervised rule-based or heuristic-based methods, which require manual labor and often a consensus of medical experts22. This severely limits the scalability and effectiveness of the process3. Some researchers proposed to combat this issue by employing active learning approaches to obtain a limited number of expert labels used by supervised methods2324. Nevertheless, the state-of-the-art is far from optimal, as the labeling process can still be tedious, and models require large numbers of labels to achieve satisfactory performance on noisy EHR data3. Therefore, we approach this problem in an unsupervised manner. Early work on exploiting EHR databases to understand human disease focused on graphical representations of diseases, genes, and proteins. Disease networks were proposed in Goh et al.25, where certain genes play a central role in the human disease interactome, which is defined as all interactions (connections) of diseases, genes, and proteins discovered in humans. Follow-up studies by Hidalgo et al.26 proposed human phenotypic networks (commonly referred to as comorbidity networks) to map to disease networks derived from EHR datasets, which were shown to successfully associate a higher connectivity of diseases with higher mortality.
Based on these advances, a body of work linked predictions of disease-disease and disease-gene networks627 even when only a moderate degree of correlation (~40%, also confirmed on data used in this study) was detected between disease and gene networks, indicating potential causality between them. Such studies provided important evidence for modeling disease and human interactome networks to discover associated phenotypes. Recently, network studies of the human interactome have focused on uncovering patterns28 and, as the human interactome is incomplete, discovering novel relationships5. However, it has been suggested that network-based approaches to phenotyping and discoveries of meaningful concepts in medicine have yet to be fully exploited and tested29. This study offers a novel approach to represent diseases and genes by utilizing the same sources of data as network approaches, but in a different manner, as discussed in greater detail in the section below. In addition, to create more scalable, effective tools, recent approaches distinct from networks have focused on the development of data-driven phenotyping with minimal manual input and rigorous evaluation procedures33031. Part of the emerging field of computational phenotyping includes the method of Zhou et al.32, which formulates EHRs as temporal matrices of medical events for each patient and proposes an optimization-based technology for discovering temporal patterns of medical events as phenotypes. Further, Ho et al.33 formulated patient EHRs as tensors, where each dimension is represented by a different medical event, and used non-negative tensor factorization to identify phenotypes. Deep learning has also been applied to the task of phenotyping30, as well as graph mining31 and clustering34, used to identify patient subgroups based on individual clinical markers.
Finally, Žitnik et al.35 conducted a study on non-negative matrix factorization techniques for fusing various molecular data to uncover disease-disease associations and showed that available domain knowledge can help reconstruct known associations and obtain novel ones. Nonetheless, the need for a comprehensive procedure to obtain manually labeled samples remains one of the main limitations of modern phenotyping tools14. Although state-of-the-art machine learning methods have been utilized to automate the process, current approaches still exhibit degraded performance in the face of limited availability of labeled samples that are manually annotated by medical experts36. In this paper, we compare representatives of the above approaches against our proposed approach in a fair setup and, overall, demonstrate the benefits of our neural embedding approach (described below) on several tasks in a quantifiable manner. | [
"21587298",
"22955496",
"24383880",
"10874050",
"26506899",
"11313775",
"21941284",
"23287718",
"21269473",
"17502601",
"25038555",
"22127105",
"24097178",
"16723398",
"25841328",
"2579841"
] | [
{
"pmid": "21587298",
"title": "Using electronic health records to drive discovery in disease genomics.",
"abstract": "If genomic studies are to be a clinically relevant and timely reflection of the relationship between genetics and health status--whether for common or rare variants--cost-effective ways... |
Frontiers in Psychology | 27721800 | PMC5033969 | 10.3389/fpsyg.2016.01429 | Referential Choice: Predictability and Its Limits | We report a study of referential choice in discourse production, understood as the choice between various types of referential devices, such as pronouns and full noun phrases. Our goal is to predict referential choice, and to explore to what extent such prediction is possible. Our approach to referential choice includes a cognitively informed theoretical component, corpus analysis, machine learning methods and experimentation with human participants. Machine learning algorithms make use of 25 factors, including referent’s properties (such as animacy and protagonism), the distance between a referential expression and its antecedent, the antecedent’s syntactic role, and so on. Having found the predictions of our algorithm to coincide with the original almost 90% of the time, we hypothesized that fully accurate prediction is not possible because, in many situations, more than one referential option is available. This hypothesis was supported by an experimental study, in which participants answered questions about either the original text in the corpus, or about a text modified in accordance with the algorithm’s prediction. Proportions of correct answers to these questions, as well as participants’ rating of the questions’ difficulty, suggested that divergences between the algorithm’s prediction and the original referential device in the corpus occur overwhelmingly in situations where the referential choice is not categorical. | Related Work: As was discussed in Section “Discussion: Referential Choice Is Not Always Categorical”, referential variation and non-categoricity are clearly gaining attention in the modern linguistic, computational, and psycholinguistic literature. Referential variation may be due to the interlocutors’ perspective taking and their efforts to coordinate cognitive processes, see e.g., Koolen et al. (2011), Heller et al.
(2012), and Baumann et al. (2014). A number of corpus-based and psycholinguistic studies explored various factors involved in the phenomenon of overspecification, occurring regularly in natural language (e.g., Kaiser et al., 2011; Hendriks, 2014; Vogels et al., 2014; Fukumura and van Gompel, 2015). Kibrik (2011, pp. 56–60) proposed to differentiate between three kinds of speaker’s referential strategies, differing in the extent to which the speaker takes the addressee’s actual cognitive state into account: egocentric, optimal, and overprotective. There is a series of recent studies addressing other aspects of referential variation, e.g., as a function of individual differences (Nieuwland and van Berkum, 2006), depending on age (Hughes and Allen, 2013; Hendriks et al., 2014) or gender (Arnold, 2015), under high cognitive load (van Rij et al., 2011; Vogels et al., 2014) and even under left prefrontal cortex stimulation (Arnold et al., 2014). These studies, both on production and on comprehension of referential expressions, open up a whole new field in the exploration of reference. We discuss a more general kind of referential variation, probably associated with an intermediate level of referent activation. This kind of variation may occur in any discourse type. In order to test the non-categorical character of referential choice, we previously conducted two experiments based on the materials of our text corpus. Both of these experiments were somewhat similar to the experiment from Kibrik (1999), described in Section “Discussion: Referential Choice Is Not Always Categorical” above. In a comprehension experiment, Khudyakova (2012) tested the human ability to understand texts in which the predicted referential device diverged from the original text. Nine texts from the corpus were randomly selected, such that they contained a predicted pronoun instead of an original full NP; text length did not exceed 250 words.
In addition to the nine original texts, nine modified texts were created in which the original referential device (proper name) was replaced by the one predicted by the algorithm (pronoun). Two experimental lists were formed, each containing nine texts (four texts in an original version and five in a modified one, or vice versa), so that original and modified texts alternated between the two lists. The experiment was run online on the Virtual Experiments platform3 with 60 participants with an expert-level command of English. Each participant was asked to read all nine texts one at a time, and answer a set of three questions after each text. Each text appeared in full on the screen, and disappeared when the participant was presented with three multiple-choice questions about referents in the text, beginning with a WH-word. Two of those were control questions, related to referents that did not create divergences. The third question was experimental: it concerned the referent in question, that is, the one that was predicted by the algorithm differently from the original text. Questions were presented in a random order. Each participant thus answered 18 control questions and nine experimental questions. In the alleged instances of non-categorical referential choice, allowing both a full NP and a pronoun, experimental questions to proper names (original) and to pronouns (predicted) were expected to be answered with a comparable level of accuracy. The accuracy of the answers to the experimental questions to proper names, as well as to the control questions, was found to be 84%. In seven out of nine texts, experimental questions to pronouns were answered with a comparable accuracy of 80%. We propose that in these seven instances we are dealing precisely with non-categorical referential choice, probably associated with an intermediate level of referent activation.
The two remaining instances may result from the algorithm’s errors. The processes of discourse production and comprehension are related but distinct, so we also conducted an editing experiment (Khudyakova et al., 2014), imitating referential choice as performed by a language speaker/writer. In the editing experiment, 47 participants with an expert-level command of English were asked to read several texts from the corpus and choose all possible referential options for a referent at a certain point in discourse. Twenty-seven texts from the corpus were selected for that study. The texts contained 31 critical points, in which the choice of the algorithm diverged from the one in the original text. At each critical point, as well as at two other points per text (control points), a choice was offered between a description, a proper name (where appropriate), and a pronoun. Neither critical nor control points included syntactically determined pronouns. The participants edited from 5 to 9 texts each, depending on the texts’ length. The task was to choose all appropriate options (possibly more than one). We found that in all texts at least two referential options were proposed for each point in question, both critical and control ones. The experiments on comprehension and editing demonstrated the variability of referential choice characteristic of the corpus texts. However, a methodological problem with these experiments was that each predicted referential expression was treated independently, whereas in real language use each referential expression depends on the previous context and creates a context for the subsequent referential expressions in the chain. In order to create texts that are more amenable to human evaluation, in the present study we introduce a flexible prediction script. | [
"18449327",
"11239812",
"16324792",
"22389109",
"23356244",
"25068852",
"22389129",
"22389094",
"22496107",
"16956594",
"3450848",
"25911154",
"22389170",
"25471259"
] | [
{
"pmid": "18449327",
"title": "The effect of additional characters on choice of referring expression: Everyone counts.",
"abstract": "Two story-telling experiments examine the process of choosing between pronouns and proper names in speaking. Such choices are traditionally attributed to speakers strivi... |
Journal of Cheminformatics | 28316646 | PMC5034616 | 10.1186/s13321-016-0164-0 | An ensemble model of QSAR tools for regulatory risk assessment | Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a cut-off parameter that allows selecting the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) to predict carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals).
Leave-one-out cross-validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8 % and 80.4 %, and balanced accuracy: 80.6 % and 80.8 %) and the highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides an additional control to regulators in grading a chemical based on the severity of the toxic endpoint under study. Electronic supplementary material: The online version of this article (doi:10.1186/s13321-016-0164-0) contains supplementary material, which is available to authorized users. | Related work: There are studies that investigate methods for combining predictions from multiple QSAR tools to gain better predictive performance for various toxic endpoints: (1) Several QSAR models were developed and compared using different modeling algorithms (multiple linear regression, radial basis function neural network and support vector machines) to develop hybrid models for bioconcentration factor (BCF) prediction [17]; (2) QSAR models implementing cut-off rules were used to determine a reliable and conservative consensus prediction from two models implemented in VEGA [18] for BCF prediction [19]; (3) The predictive performance of four QSAR tools (Derek [20, 21], Leadscope [22], MultiCASE [23] and Toxtree [24]) was evaluated and compared to the standard Ames assay [25] for mutagenicity prediction. Pairwise hybrid models were then developed using AND combinations (accepting positive results when both tools predict a positive) and OR combinations (accepting positive results when either one of the tools predicts a positive) [25–27]; (4) A similar AND/OR approach was implemented for the validation and construction of a hybrid QSAR model using MultiCASE and MDL-QSAR [28] tools for carcinogenicity prediction in rodents [29].
The work was extended using more tools (BioEpisteme [30], Leadscope PDM, and Derek) to construct hybrid models using majority consensus predictions in addition to AND/OR combinations [31]. The results of these studies demonstrate that: (1) None of the QSAR tools performs significantly better than the others, and they also differ in their predictive performance based upon the toxic endpoint and the chemical datasets under investigation; (2) Hybrid models have improved overall predictive performance in comparison to individual QSAR tools; and (3) Consensus-positive predictions from more than one QSAR tool improve the identification of true positives. The underlying idea is that each QSAR model brings a different perspective on the complexity of the modeled biological system, and combining them can improve the classification accuracy. However, consensus-positive methods have a conservative bias, tending to discard a potentially non-toxic chemical based on a false positive prediction. Therefore, we propose an ensemble learning approach for combining predictions from multiple QSAR tools that addresses the drawbacks of consensus-positive predictions [32, 33]. Hybrid QSAR models using ensemble approaches have been developed for various biological endpoints like cancer classification and prediction of ADMET properties [34–36], but not for toxic endpoints. In this study, a Bayesian ensemble approach is investigated for carcinogenicity prediction, which is discussed in more detail in the next section. | [
"17643090",
"22771339",
"15226221",
"18405842",
"13677480",
"12896862",
"15170526",
"21504870",
"8564854",
"12896859",
"22316153",
"18954891",
"23624006",
"1679649",
"11128088",
"768755",
"3418743",
"21534561",
"17703860",
"20020914",
"15921468",
"21509786",
"23343412",
... | [
{
"pmid": "17643090",
"title": "The application of discovery toxicology and pathology towards the design of safer pharmaceutical lead candidates.",
"abstract": "Toxicity is a leading cause of attrition at all stages of the drug development process. The majority of safety-related attrition occurs preclin... |
JMIR Medical Education | 27731840 | PMC5041364 | 10.2196/mededu.4789 | A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education | Background: Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, has been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research on Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective: The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, transforming data into information to support educators’ decision making. Methods: A deductive case study approach was applied to develop the conceptual model. Results: The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions: The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach. | Related Work: Educational Informatics is a multidisciplinary research area that uses Information and Communication Technology (ICT) in education. It has many sub-disciplines, a number of which focus on learning or teaching (eg, simulation), and others that focus on administration of educational programs (eg, curriculum mapping and analytics).
Within the area of analytics, it is possible to identify work focusing on the technical challenges (eg, educational data mining), the educational challenges (eg, Learning analytics), or the administrative challenges (eg, Academic- and Action analytics) [8]. The Academic- and Learning analytics fields emerged in early 2005. The major factors driving their development are technological, educational, and political. Development of the necessary techniques for data-driven analytics and decision support began in the early 20th century. Higher education institutions are collecting more data than ever before. However, most of these data are not used at all, or they are used for purposes other than addressing strategic questions. Educational institutions face bigger challenges than ever before, including increasing requirements for excellence, internationalization, the emergence of new sciences, new markets, and new educational forms. The potential benefits of analytics for applications such as resource optimization and automatization of multiple administrative functions (alerts, reports, and recommendations) have been described in the literature [9,10]. | [
"2294449",
"11141156",
"15523387",
"20054502",
"25160372"
] | [
{
"pmid": "20054502",
"title": "Recommendations of the International Medical Informatics Association (IMIA) on Education in Biomedical and Health Informatics. First Revision.",
"abstract": "Objective: The International Medical Informatics Association (IMIA) agreed on revising the existing international ... |
Scientific Reports | 27686748 | PMC5043229 | 10.1038/srep34181 | Accuracy Improvement for Predicting Parkinson’s Disease Progression | Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools for Parkinson’s disease built on medical data are highly desirable, as they can help people discover their risk of the disease at an early stage and thereby alleviate its symptoms. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are respectively employed to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply an Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with early detection of Parkinson’s disease. | Related Work: For effective diagnosis of Parkinson’s Disease (PD), different types of classification methods were examined by Das30. The computation of the performance score of the classifiers was based on various evaluation methods. According to the results of application scores, they found that the Neural Networks (NNs) classifier obtains the best result, with 92.9% accuracy. Bhattacharya and Bhatia31 used the data mining tool Weka to pre-process the dataset, on which they used a Support Vector Machine (SVM) to distinguish people with PD from healthy people. They applied LIBSVM to find the best possible accuracy with different kernel values for the experimental dataset.
They measured the accuracy of models using Receiver Operating Characteristic (ROC) curve variation. Chen et al.13 presented a PD diagnosis system using Fuzzy K-Nearest Neighbor (FKNN). They compared the results of the developed FKNN-based system with those of SVM-based approaches. They also employed PCA to further improve the PD diagnosis accuracy. Using 10-fold cross-validation, the experimental results demonstrated that the FKNN-based system significantly improves the classification accuracy (96.07%) and outperforms SVM-based approaches and other methods in the literature. Ozcift32 developed a classification method based on SVM and obtained about 97% accuracy for the prediction of PD progression. Polat29 examined Fuzzy C-Means (FCM) Clustering-based Feature Weighting (FCMFW) for the detection of PD. The author used a K-NN classifier for classification and applied it to the experimental dataset with different values of k. Åström and Koker33 proposed a prediction system based on parallel NNs. The output of each NN was evaluated by a rule-based system for the final decision. The experiments on the proposed method showed that a set of nine parallel NNs yielded an improvement of 8.4% in the prediction of PD compared to a single network. Li et al.34 proposed a fuzzy-based non-linear transformation method to extend classification-related information from the original data attribute values for a small data set. Based on the newly transformed data set, they applied Principal Component Analysis (PCA) to extract the optimal subset of features and SVM for predicting PD. Guo et al.35 developed a hybrid system using Expectation Maximization (EM) and Genetic Programming (GP) to construct learning feature functions from voice features in the PD context.
Using projection-based learning for a meta-cognitive Radial Basis Function Network (PBL-McRBFN), Babu and Suresh (2013) implemented a gene-expression-based method for the prediction of PD progression. The capabilities of the Random Forest algorithm were tested by Peterek et al.36 for the prediction of PD progression. A hybrid intelligent system was proposed by Hariharan et al.24 using clustering (Gaussian mixture model), feature reduction and classification methods. Froelich et al.23 investigated the diagnosis of PD on the basis of characteristic features of a person’s voice. They classified individual voice samples as belonging to a sick or a healthy person using decision trees, and then used a threshold-based method for the final diagnosis of a person through the previously classified voice samples. The value of the threshold determines the minimal number of individual voice samples (indicating the disease) required for the reliable diagnosis of a sick person. Using real-world data, the classification accuracy achieved was 90%. Eskidere et al.25 studied the performance of SVM, Least Square SVM (LS-SVM), Multilayer Perceptron NN (MLPNN), and General Regression NN (GRNN) regression methods for remote tracking of PD progression. Results of their study demonstrated that the best accuracy is obtained by LS-SVM in relation to the other three methods, and that it outperforms the latest regression methods published in the literature. In a study by Guo et al.10 in the Central South of Mainland China, sixteen Single-Nucleotide Polymorphisms (SNPs) located in 8 genes and/or loci (SNCA, LRRK2, MAPT, GBA, HLA-DR, BST1, PARK16, and PARK17) were analysed in a cohort of 1061 PD patients and 1066 normal healthy participants. This study established that Rep1, rs356165, and rs11931074 in the SNCA gene, G2385R in the LRRK2 gene, rs4698412 in the BST1 gene, rs1564282 in PARK17, and L444P in the GBA gene have independent and combined significant effects on PD.
As a final point, this study reported that SNPs in these 4 genes have a more pronounced effect on PD. From the literature on the prediction of PD progression, we found that at the moment there is no implementation of Principal Component Analysis (PCA), a Gaussian mixture model with Expectation Maximization (EM), and prediction methods together in PD diagnosis. This research accordingly tries to develop an intelligent system for PD diagnosis based on these approaches. Hence, in this paper, we incorporate robust machine learning techniques and propose a new hybrid intelligent system using PCA, a Gaussian mixture model with EM, and prediction methods. Overall, in comparison with research efforts found in the literature, in this research: A comparative study is conducted between two robust supervised prediction techniques, Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR). EM is used for data clustering; the clustering problem has been addressed in many disease diagnosis systems1337, which reflects its broad appeal and usefulness as one of the steps in exploratory health data analysis. In this study, EM clustering is used as an unsupervised classification method to cluster the data of the experimental dataset into similar groups. ANFIS and SVR are used for prediction of PD progression. PCA is used for dimensionality reduction and for dealing with the multi-collinearity problem in the experimental data; this technique has been used in developing many disease diagnosis systems to eliminate redundant information in the original health data272829. A hybrid intelligent system is proposed using EM, PCA and the prediction methods ANFIS and SVR for prediction of PD progression. | [
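The hybrid pipeline described above (PCA for dimensionality reduction, EM-fitted Gaussian mixture clustering, then a supervised regressor per cluster) can be sketched with scikit-learn; the synthetic data, component counts, and hyperparameters below are illustrative placeholders, not those of the original study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                       # placeholder voice features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)  # placeholder progression score

# 1) PCA reduces dimensionality and mitigates multi-collinearity
pca = PCA(n_components=5).fit(X)
Z = pca.transform(X)

# 2) EM (Gaussian mixture) clusters the samples into similar groups
gmm = GaussianMixture(n_components=2, random_state=0).fit(Z)
labels = gmm.predict(Z)

# 3) One SVR per cluster predicts progression for samples assigned to it
models = {k: SVR().fit(Z[labels == k], y[labels == k]) for k in np.unique(labels)}

def predict(X_new):
    """Route each new sample to its cluster's regressor."""
    Z_new = pca.transform(X_new)
    ks = gmm.predict(Z_new)
    return np.array([models[k].predict(Z_new[i:i + 1])[0] for i, k in enumerate(ks)])
```

ANFIS has no standard scikit-learn implementation, so SVR stands in for the prediction stage in this sketch.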
"20082967",
"23711400",
"27184740",
"22387368",
"23154271",
"21556377",
"26618044",
"25623333",
"22387592",
"25064009",
"12777365",
"22656184",
"22733427",
"22502984",
"23182747",
"24485390",
"21547504",
"21493051",
"26019610",
"26828106"
] | [
{
"pmid": "20082967",
"title": "Predicting Parkinson's disease - why, when, and how?",
"abstract": "Parkinson's disease (PD) is a progressive disorder with a presymptomatic interval; that is, there is a period during which the pathologic process has begun, but motor signs required for the clinical diagn... |
Scientific Reports | 27694950 | PMC5046183 | 10.1038/srep33985 | Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images | Nuclei detection is often a critical initial step in the development of computer aided diagnosis and prognosis schemes in the context of digital pathology images. While over the last few years a number of nuclei detection methods have been proposed, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) method for nuclei detection which is specifically geared towards images with poor quality staining and noise on account of tissue preparation artifacts. The MPAV utilizes the symmetric property of the nuclear boundary and adaptively selects gradients from edge fragments to perform voting for a potential nucleus location. The MPAV was evaluated in three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, where most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV was able to achieve a superior performance, with an area under the precision-recall curve (AUC) of 0.73. Additionally, MPAV also outperformed three state-of-the-art nuclei detection methods: a single pass voting method, a multi-pass voting method, and a deep learning based method. | Previous Related Work and Novel Contributions: Table 1 enumerates some recent techniques for nuclei detection. Most approaches typically tend to use image derived cues, such as color/intensity2528293031, edges192124323334, texture35, self-learned features1336, and symmetry22242737. The color and texture-based methods require consistent color/texture appearance for the individual nuclei in order to work optimally. The method presented in ref. 31 applied the Laplacian of Gaussian (LoG) filter to detect the initial seed points representing nuclei.
However, due to the uneven distribution of nuclear stain, the response of the LoG filter may not reflect the true nuclear center. Filipczuk et al. applied the circular Hough transform to detect the nuclear center34; however, the circular Hough transform assumes that the shape of the underlying region of interest can be represented by a parametric function, i.e., a circle or ellipse. In poorly stained tissue images, the circular Hough transform is likely to fail due to the great variations in appearance of nuclear edges and the presence of clusters of edge fragments. Recently, there has been substantial interest in developing and employing DL based methods for nuclei detection in histology images1336. DL methods are supervised classification methods that typically employ multiple layers of neural networks for object detection and recognition. They can be easily extended and employed for multiple different classification tasks. Recently a number of DL based approaches have been proposed for image analysis and classification applications in digital pathology1336. For instance, Xu et al. proposed a stacked sparse autoencoder (SSAE) to detect nuclei in breast cancer tissue images. They showed that the DL scheme was able to outperform hand-crafted features on multi-site/stain histology images. However, DL methods require a large number of dedicated training samples, since the learning process involves a large number of parameters. These approaches therefore tend to be heavily biased and sensitive to the choice of the training set. The key idea behind voting based techniques is to cluster circular symmetries along the radial line/inverse gradient direction on an object's contour in order to infer the center of the object of interest. An illustrative example is shown in Fig. 2(a,b). Figure 2(a) shows a synthetic phantom nucleus with the foreground in grey and the background in white.
A few sample pixels/points on the nuclear contour with their inverse gradient directions are shown as blue arrows in Fig. 2. Figure 2(b) illustrates the voting procedure with three selected pixels on the contour. Note that for each pixel, a dotted triangle is used to represent an active voting area. The region where the three voting areas converge can be thought of as a region with a high likelihood of containing a nuclear center. Several effective symmetric voting-based techniques have been developed employing variants of the same principle. Parvin et al.27 proposed a multi-pass voting (MPV) method to calculate the centroid of overlapping nuclei. Qi et al.22 proposed a single pass voting (SPV) technique followed by a mean-shift procedure to calculate the seed points of overlapping nuclei. In order to further improve the efficiency of the approach, Xu et al.24 proposed a technique based on an elliptic descriptor and improved single pass voting for nuclei via a seed point detection scheme. This initial nuclear detection step was followed by a marker-controlled watershed algorithm to segment nuclei in H&E stained histology images. In practice, the MPV procedure tends to yield more accurate results than the SPV procedure in terms of nuclei detection. The SPV procedure may help improve the overall efficiency of nuclear detection24; however, it needs an additional mean-shift clustering step to identify the local maxima in the voting map. This additional clustering step requires estimating additional parameters and increases overall model complexity. Since existing voting-based techniques typically utilize edge features, nuclei with hollow interiors can result in incorrect voting and hence in the generation of spurious detection results. One example is shown in Fig. 2(c), where we can see a color image, its corresponding edge map, and one of the nuclei, denoted as A.
Nucleus A has a hollow interior, so it has two contours, an inner and an outer contour, which results in two edge fragments in the edge map (see the second row of Fig. 2(c)). For the outer nuclear contour, the inverse gradients point inwards, whereas for the inner nuclear contour, the inverse gradients point outwards. As one may expect, the inverse gradient obtained from the inner contour contributes minimally towards identifying the nuclear centroid (because the active voting area falls outside the nucleus, while the nuclear center should be within the nucleus). Another synthetic example of a nucleus with a hollow interior is shown in Fig. 2(c), and a few inverse gradient directions are drawn on the inner contour. In most cases, those inverse gradients from the inner contour will lead to spurious results in regions of clustered nuclei. In Fig. 2(e), three synthetic nuclei with hollow regions are shown. It is clear that, due to the vicinity of these three nuclei, the highlighted red circle region has received a large number of votes and thus could lead to a potential false positive detection. In a later section, we will show that in real histopathologic images, existing voting-based techniques tend to generate many false positive detection results. In this paper, we present a Multi-Pass Adaptive Voting (MPAV) method. The MPAV is a voting based technique which adaptively selects and refines the gradient information from the image to infer the location of nuclear centroids. The schematic for the MPAV is illustrated in Fig. 3. The MPAV consists of three modules: gradient field generation, refinement of the gradient field, and multi-pass voting. Given a color image, a gradient field is generated by using image smoothing and edge detection. In the second module, the gradient field is refined: gradients whose direction leads away from the center of the nuclei are removed or corrected.
The refined gradient field is then utilized in a multi-pass voting module to guide each edge pixel in generating the nuclear voting map. Finally, a global threshold is applied to the voting map to obtain candidate nuclear centroids. The details of each module are discussed in the next section, and the notations and symbols used in this paper are summarized in Table 2. | [
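The voting step itself can be illustrated in a few lines: each edge pixel casts votes at several radii along its inverse gradient direction into an accumulator, and thresholding the accumulator yields candidate centroids. This is a simplified single-pass illustration of the idea, not the authors' full MPAV (no adaptive gradient refinement, no cone-shaped active voting areas, no multi-pass narrowing):

```python
import numpy as np

def vote_centroids(edge_points, gradients, shape, r_min=2, r_max=8, thresh=None):
    """Each edge pixel votes at radii r_min..r_max along its inverse gradient."""
    votes = np.zeros(shape, dtype=float)
    for (y, x), (gy, gx) in zip(edge_points, gradients):
        n = np.hypot(gy, gx)
        if n == 0:
            continue
        dy, dx = -gy / n, -gx / n  # inverse gradient: points into the nucleus
        for r in range(r_min, r_max + 1):
            vy, vx = int(round(y + r * dy)), int(round(x + r * dx))
            if 0 <= vy < shape[0] and 0 <= vx < shape[1]:
                votes[vy, vx] += 1.0
    if thresh is None:
        thresh = 0.5 * votes.max()  # simple global threshold on the voting map
    ys, xs = np.where(votes >= thresh)
    return votes, list(zip(ys.tolist(), xs.tolist()))
```

On a synthetic circular nucleus with outward-pointing gradients, the votes concentrate at the true center.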
"26186772",
"26167385",
"24505786",
"24145650",
"23392336",
"23157334",
"21333490",
"20491597",
"26208307",
"22614727",
"25203987",
"22498689",
"21383925",
"20172780",
"22167559",
"25192578",
"24608059",
"23221815",
"20359767",
"19884070",
"20656653",
"21947866",
"2391249... | [
{
"pmid": "26186772",
"title": "Feature Importance in Nonlinear Embeddings (FINE): Applications in Digital Pathology.",
"abstract": "Quantitative histomorphometry (QH) refers to the process of computationally modeling disease appearance on digital pathology images by extracting hundreds of image feature... |
Scientific Reports | 27703256 | PMC5050509 | 10.1038/srep34759 | Feature Subset Selection for Cancer Classification Using Weight Local Modularity | Microarray is recently becoming an important tool for profiling the global gene expression patterns of tissues. Gene selection is a popular technology for cancer classification that aims to identify a small number of informative genes, from thousands of genes that may contribute to the occurrence of cancers, in order to obtain a high predictive accuracy. This technique has been extensively studied in recent years. This study develops a novel feature selection (FS) method for gene subset selection by utilizing the Weight Local Modularity (WLM) in a complex network, called the WLMGS. In the proposed method, the discriminative power of a gene subset is evaluated by using the weight local modularity of a weighted sample graph in the gene subset, where the intra-class distance is small and the inter-class distance is large. A higher local modularity of the gene subset corresponds to a greater discriminative power of the gene subset. With the use of a forward search strategy, a more informative gene subset as a group can be selected for the classification process. Computational experiments show that the proposed algorithm can select a small subset of predictive genes as a group while preserving classification accuracy. | Related Work: Owing to the importance of gene selection in the analysis of the microarray dataset and the diagnosis of cancer, various techniques for gene selection problems have been proposed. Because of the high dimensionality of most microarray analyses, fast and efficient gene selection techniques such as univariate filter methods8910 have gained more attention. Most filter methods consider the problem of FS to be a ranking problem. The solution is provided by selecting the top scoring features/genes while the rest are discarded.
Scoring functions represent the core of ranking methods and are used to assign a relevance index to each feature/gene. The scoring functions mainly include the Z-score11 and Welch t-test12 from the t-test family, the Bayesian t-test13 from the Bayesian scoring family, and the Info gain14 method from the theory-based scoring family. However, filter-ranking methods ignore the correlations within a gene subset, so the selected gene subset may contain redundant information. Thus, multivariate filter techniques have been proposed to capture the correlations between genes. Some of these filter techniques are correlation-based feature selection (CFS)15, the Markov blanket filter method16, and mutual information (MI) based methods, e.g. mRMR17, MIFS18, MIFS_U19, and CMIM20. In recent years, the metaheuristic technique, which is a type of wrapper technique, has gained extensive attention and has been proven to be one of the best-performing techniques for solving gene selection problems2122. Genetic algorithms (GAs) are generally used as the search engine for feature subsets, combined with classification methods. Some examples are the estimation of distribution algorithm (EDA) with SVM232425, the genetic algorithm support vector machine (GA-SVM)26, and the K nearest neighbors/genetic algorithms (KNN/GA)27. However, most of the existing methods, such as the mutual information based methods17181920, only choose genes that are individually strong for the target class but ignore the weak genes, which possess a strong discriminatory power as a group while being weak as individuals3. Over the past few decades, complex network theories have been applied in areas such as biological, social, technological, and information networks. In the present study, a novel method is proposed to search for the 'weak' genes by using a sequential forward search strategy.
In the proposed method, an efficient discrimination evaluation criterion for a gene subset as a group is presented, based on the weight local modularity (WLM) in a complex network. The method exploits a property that most networks exhibit: they are composed of communities or groups within which the distances between nodes are locally small, while the distances between different communities are relatively large28. By constructing the weighted sample graph (WSG) in a gene subset, a large weight local modularity value means that the samples in the gene subset are easily separated locally, and that the gene subset is more informative for classification. Therefore, the proposed method has the capability to select an optimal gene subset with a stronger discriminative power as a group. The effectiveness of the method is validated by conducting experiments on several publicly available microarray datasets. The proposed method performs well on gene selection and cancer classification accuracy. | [
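The sequential forward search can be sketched as follows, with a simple between-class/within-class scatter ratio standing in for the weight local modularity score (the paper's actual criterion is computed on a weighted sample graph, which this sketch does not build):

```python
import numpy as np

def separability(X, y):
    """Ratio of between-class to within-class scatter (proxy for the WLM score)."""
    mu = X.mean(axis=0)
    within, between = 0.0, 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        within += ((Xc - mc) ** 2).sum()
        between += len(Xc) * ((mc - mu) ** 2).sum()
    return between / (within + 1e-12)

def forward_select(X, y, k):
    """Greedily add the gene that most improves the subset score as a group."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_gene, best_score = None, -np.inf
        for g in remaining:
            s = separability(X[:, selected + [g]], y)
            if s > best_score:
                best_gene, best_score = g, s
        selected.append(best_gene)
        remaining.remove(best_gene)
    return selected
```

Because each candidate gene is scored jointly with the genes already selected, a gene that is weak alone can still be picked if it improves the group's score.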
"23124059",
"17720704",
"16790051",
"11435405",
"15327980",
"11435405",
"22149632",
"15680584",
"16119262",
"18515276",
"15087314",
"20975711"
] | [
{
"pmid": "23124059",
"title": "Selection of interdependent genes via dynamic relevance analysis for cancer diagnosis.",
"abstract": "Microarray analysis is widely accepted for human cancer diagnosis and classification. However the high dimensionality of microarray data poses a great challenge to classi... |
Frontiers in Neuroscience | 27774048 | PMC5054006 | 10.3389/fnins.2016.00454 | Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection | Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 participants out of 12, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. | 2. Related work: This section presents the most relevant studies related to the scope of this paper. We focus on target selection tasks, in particular on existing gaze- and SSVEP-based methods for target selection. 2.1. Target selection: According to Foley et al. (1984), any interaction task can be decomposed into a small set of basic interaction tasks. Foley proposed six types of interaction tasks for human–computer interaction: select, position, orient, path, quantify, and text. Depending on the interaction context, other basic interaction tasks have been proposed since then. The select interaction task is described as: “The user makes a selection from a set of alternatives” (Foley et al., 1984).
This set can be a group of commands, or a “collection of displayed entities that form part of the application information presentation.” In human–computer interaction, selection is often performed with a point-and-click paradigm, generally driven by a computer mouse. The performance of an interaction technique for selection is usually measured by Fitts' law. This law is a descriptive model of human movement: it predicts that the time required to rapidly move to a target area is a function of the ratio between the distance to the target and the width of the target. This model is well suited to measuring pointing speed, and has thus been widely used for point-and-click selection methods where the “pointing” is critical while the “clicking” is not. In the specific context of hands-free interaction, other input devices need to be used. Among them, gaze tracking has shown promising results (Velichkovsky et al., 1997; Zhu and Yang, 2002). Speech recognition and BCIs are other alternatives for hands-free interaction (Gürkök et al., 2011). Hands-free interaction methods can rely on a point-and-click paradigm, but in this specific context the “clicking” is often as problematic as the “pointing” (Velichkovsky et al., 1997; Zander et al., 2010). Gaze tracking, speech recognition, and BCIs all share the particularity of presenting a relatively high error rate compared to, for example, a keyboard or a mouse. 2.2. Gaze-based interaction: In order to improve dwell-based techniques, several methods have been proposed, such as fish-eye methods (Ashmore et al., 2005). Fish-eye methods magnify (zoom in on) the area around the gaze position, thus decreasing the required selection precision, but without addressing the Midas touch problem. However, the omnipresence of the visual deformation can degrade the exploration of the graphical interface. A potential solution is to zoom in only when potential targets are available (Ashmore et al., 2005; Istance et al., 2008).
Another solution relies on designing user interfaces specifically suited for gaze-based selection, such as hierarchical menus (Kammerer et al., 2008). 2.3. SSVEP-based BCIs: When the human eye is stimulated by a flickering stimulus, a brain response can be observed in the cortical visual areas, in the form of activity at the frequency of stimulation as well as at the harmonics of this frequency. This response is known as the Steady-State Visually Evoked Potential (SSVEP). SSVEP interfaces are frequently used for brain–computer interaction (Legeny et al., 2013; Quan et al., 2013), as SSVEP-based BCIs have a high precision and information transfer rate compared to other BCIs (Wang et al., 2010). The classical usage of SSVEP-based BCIs is target selection (Quan et al., 2013; Shyu et al., 2013). In order to select a target, the user has to focus on the flickering target she wants to select, each visible target being associated with a stimulation at a different frequency. The SSVEP response is detected in the brain activity of the user through analysis of the EEG data, and the corresponding target is selected. Most of the time, SSVEP-based interfaces are limited to a small number of targets (commonly three), although some attempts have been successful at using more targets in a synchronous context (Wang et al., 2010; Manyakov et al., 2013; Chen et al., 2014). 2.4. Gaze and EEG based hybrid interaction: The concept of the hybrid BCI was originally introduced in Pfurtscheller et al.
(2010), where it was defined as a system “composed of two BCIs, or at least one BCI and another system” that fulfills four criteria: “(i) the device must rely on signals recorded directly from the brain; (ii) there must be at least one recordable brain signal that the user can intentionally modulate to effect goal-directed behavior; (iii) real time processing; and (iv) the user must obtain feedback.” In the past few years, it has been proposed to combine BCIs with a keyboard (Nijholt and Tan, 2008), a computer mouse (Mercier-Ganady et al., 2013), or a joystick (Leeb et al., 2013). Several types of BCIs can also be used at the same time (Li et al., 2010; Fruitet et al., 2011). In Gürkök et al. (2011), participants can switch at will between an SSVEP-based BCI and a speech recognition system. For a more complete review of hybrid BCIs, the interested reader can refer to Pfurtscheller et al. (2010). All these contributions can be broadly classified into two categories: sequential or simultaneous processing (Pfurtscheller et al., 2010). Hybrid BCIs based on sequential processing use two or more inputs to accomplish two or more interaction tasks, each input being responsible for one task. Hybrid BCIs based on simultaneous processing can fuse several inputs in order to achieve a single interaction task (Müller-Putz et al., 2011). 2.4.1. Gaze and BCI-based hybrid interaction: Although the idea of combining BCI and gaze-tracking has already been proposed, it has been only marginally explored. Existing works have mainly focused on P300 (Choi et al., 2013) and motor imagery (Zander et al., 2010) BCIs. Regarding P300 paradigms, Choi et al. (2013) combined gaze tracking with a P300-based BCI for a spelling application. Compared to a P300 speller, the number of accessible characters and the detection accuracy are improved. In contrast, Zander et al. proposed to control a 2D cursor with the gaze, and to emulate a mouse “click” with a motor-imagery based brain switch (Zander et al., 2010).
They found that interaction using only gaze tracking was slightly faster, but that a BCI-based click is a reasonable alternative to dwell time. Later, Kos'Myna and Tarpin-Bernard (2013) proposed to use both gaze tracking and an SSVEP-based BCI for a selection task in the context of a videogame. Gaze tracking was used for a first selection task (selecting an object), followed by BCI-based selection for a second task (selecting a transformation to apply to the previously selected object). The findings of this study indicate that selection based only on gaze was faster and more intuitive. So far, attempts at creating hybrid interfaces using EEG and gaze tracking inputs for target selection have focused on sequential methods and proposed ways to separate the selection into secondary tasks. Zander et al. (2010) separate the task (selection attribute) into pointing and clicking, while both Choi et al. (2013) and Kos'Myna and Tarpin-Bernard (2013) use a two-step selection. In this paper, we propose a novel hybrid interaction technique that simultaneously fuses information from gaze tracking and an SSVEP-based BCI at a low level of abstraction. | [
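A minimal sketch of such probabilistic input fusion: per-target probabilities from gaze (here a Gaussian model of distance to the gaze point, an assumption for illustration) and from an SSVEP classifier are multiplied and renormalized, and the most probable target is selected. The target layout, sigma, and classifier outputs below are illustrative numbers, not the paper's models:

```python
import numpy as np

def gaze_probs(gaze_xy, targets_xy, sigma=50.0):
    """Gaussian likelihood of each target given the gaze point (in pixels)."""
    d2 = ((targets_xy - gaze_xy) ** 2).sum(axis=1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    return p / p.sum()

def fuse(p_gaze, p_ssvep):
    """Naive-Bayes style fusion: multiply per-target probabilities, renormalize."""
    p = p_gaze * p_ssvep
    return p / p.sum()

targets = np.array([[100, 100], [400, 100], [250, 300]], dtype=float)
p_g = gaze_probs(np.array([120.0, 110.0]), targets)
p_s = np.array([0.4, 0.5, 0.1])  # placeholder SSVEP classifier output
p = fuse(p_g, p_s)
selected = int(np.argmax(p))
```

Even when the SSVEP classifier slightly favors the wrong target, the gaze evidence can correct the fused decision, which is the intuition behind combining the two inputs.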
"23486216",
"23594762",
"22589242",
"20582271",
"8361834",
"16933428"
] | [
{
"pmid": "23486216",
"title": "Enhanced perception of user intention by combining EEG and gaze-tracking for brain-computer interfaces (BCIs).",
"abstract": "Speller UI systems tend to be less accurate because of individual variation and the noise of EEG signals. Therefore, we propose a new method to co... |
JMIR Medical Informatics | 27658571 | PMC5054236 | 10.2196/medinform.5353 | Characterizing the (Perceived) Newsworthiness of Health Science Articles: A Data-Driven Approach | Background: Health science findings are primarily disseminated through manuscript publications. Information subsidies are used to communicate newsworthy findings to journalists in an effort to earn mass media coverage and further disseminate health science research to mass audiences. Journal editors and news journalists then select which news stories receive coverage and thus public attention. Objective: This study aims to identify attributes of published health science articles that correlate with (1) journal editor issuance of press releases and (2) mainstream media coverage. Methods: We constructed four novel datasets to identify factors that correlate with press release issuance and media coverage. These corpora include thousands of published articles, subsets of which received press release or mainstream media coverage. We used statistical machine learning methods to identify correlations between words in the science abstracts and press release issuance and media coverage. Further, we used a topic modeling-based machine learning approach to uncover latent topics predictive of the perceived newsworthiness of science articles. Results: Both press release issuance for, and media coverage of, health science articles are predictable from corresponding journal article content. For the former task, we achieved average areas under the curve (AUCs) of 0.666 (SD 0.019) and 0.882 (SD 0.018) on two separate datasets, comprising 3024 and 10,760 articles, respectively. For the latter task, models realized mean AUCs of 0.591 (SD 0.044) and 0.783 (SD 0.022) on two datasets, in this case containing 422 and 28,910 pairs, respectively.
We reported the most-predictive words and topics for press release or news coverage. Conclusions: We have presented a novel data-driven characterization of content that renders health science “newsworthy.” The analysis provides new insights into the news coverage selection process. For example, it appears that epidemiological papers concerning common behaviors (eg, alcohol consumption) tend to receive media attention. | Motivation and Related Work: The news media are powerful conduits by which to disseminate important information to the public [8]. There is a chasm between the constant demand for up-to-date information and shrinking budgets and staff at newspapers around the globe. Information subsidies such as press releases are often looked to as a way to fill this widening gap. As a standard of industry practice, public relations professionals generate packaged information to promote their organization and to communicate aspects of interest to the target public [9]. Agenda setting has been used to explain the impact of the news media on the formation of public opinion [10]. The theory posits that the decisions made by news gatekeepers (eg, editors and journalists) in choosing and reporting news play an important part in shaping the public's reality. Information subsidies are tools public relations practitioners use to participate in the building process of the news media agenda [11,12]. In the area of health, journalists rely more heavily on sources and experts because of the technical nature of the information [12,13]. Tanner [14] found that television health-news journalists reported relying most heavily on public relations practitioners for story ideas. Another study of science journalists at large newspapers revealed that they work through public relations practitioners and also rely on scientific journals for news of medical discoveries [15].
Viswanath and colleagues [4] found that health and medical reporters and editors from small media organizations were less likely to use government websites or scientific journals as resources, but were more likely to use press releases. In other studies, factors such as newspaper circulation, publication frequency, and community size were shown to influence publication of health information subsidies [16-18]. This study focuses on media coverage of developments in health science and scientific findings. Previous research has highlighted factors that might promote press release generation for, and news coverage of, health science articles. This work has relied predominantly on qualitative approaches. For instance, Woloshin and Schwartz [19] studied the press release process by interviewing journal editors about the process of selecting articles for which to generate press releases. They also analyzed the fraction of press releases that reported study limitations and related characteristics. Tsfati et al [20] argued through content analysis that scholars' beliefs in the influence of the media increase their motivation and efforts to obtain media coverage, in turn influencing the actual amount of media coverage of their research. In this study, we present a complementary approach using data-driven, quantitative methods to uncover the topical content that correlates with both news release generation and mainstream media coverage. Our hypothesis is that there exist specific topics, for which words and phrases are proxies, that are more likely to be considered “newsworthy.” Identifying such topics will illuminate latent biases in the journalistic process of selecting scientific articles for media coverage. | [
"15249264",
"16641081",
"19051112",
"22546317",
"15253997",
"12038933",
"25498121"
] | [
{
"pmid": "15249264",
"title": "Health attitudes, health cognitions, and health behaviors among Internet health information seekers: population-based survey.",
"abstract": "BACKGROUND\nUsing a functional theory of media use, this paper examines the process of health-information seeking in different doma... |
BioData Mining | 27777627 | PMC5057496 | 10.1186/s13040-016-0110-8 | FEDRR: fast, exhaustive detection of redundant hierarchical relations for quality improvement of large biomedical ontologies | Background: Redundant hierarchical relations refer to such patterns as two paths from one concept to another, one with length one (direct) and the other with length greater than one (indirect). Each redundant relation represents a possibly unintended defect that needs to be corrected in the ontology quality assurance process. Detecting and eliminating redundant relations would help improve the results of all methods relying on the relevant ontological systems as a knowledge source, such as the computation of semantic distance between concepts and ontology matching and alignment. Results: This paper introduces a novel and scalable approach, called FEDRR – Fast, Exhaustive Detection of Redundant Relations – for quality assurance work during ontological evolution. FEDRR combines the algorithmic ideas of dynamic programming and topological sort for exhaustive mining of all redundant hierarchical relations in ontological hierarchies, in O(c·|V|+|E|) time, where |V| is the number of concepts, |E| is the number of relations, and c is a constant in practice. Using FEDRR, we performed an exhaustive search for all redundant is-a relations in two of the largest ontological systems in biomedicine: SNOMED CT and the Gene Ontology (GO). 372 and 1609 redundant is-a relations were found in the 2015-09-01 version of SNOMED CT and the 2015-05-01 version of GO, respectively. We have also performed FEDRR on over 190 source vocabularies in the UMLS, a large integrated repository of biomedical ontologies, and identified six sources containing redundant is-a relations.
Randomly generated ontologies have also been used to further validate the efficiency of FEDRR.ConclusionsFEDRR provides a generally applicable, effective tool for systematically detecting redundant relations in large ontological systems for quality improvement. | Related workThere has been related work on exploring redundant relations in biomedical ontologies or terminologies [24–28]. Bodenreider [24] investigated the redundancy of hierarchical relations across biomedical terminologies in the UMLS. Different from Bodenreider’s work, FEDRR focuses on developing a fast and scalable approach to detect redundant hierarchical relations within a single ontology.Gu et al. [25] investigated five categories of possibly incorrect relationship assignments, including redundant relations, in FMA. The redundant relations were detected based on the interplay between the is_a and other structural relationships (part_of, tributary_of, branch_of). A review of 20 samples of possible redundant part_of relations confirmed 14 as errors, a 70 % error-confirmation rate. FEDRR differs from this work in two ways. Firstly, FEDRR aims to provide an efficient algorithm to identify redundant hierarchical relations from large ontologies with 100 % accuracy. Secondly, FEDRR can be used for detecting redundant relations in all DAGs with the transitivity property.Mougin [26] studied redundant relations as well as missing relations in GO. The identification of redundant relations was based on combinations of relationships, including is_a and is_a, is_a and part_of, part_of and part_of, and is_a and positively_regulates. FEDRR’s main focus is to provide a generalizable and efficient approach to detecting redundant hierarchical relations in any ontology, which has been illustrated by applying it to all the UMLS source vocabularies. 
Moreover, the redundant hierarchical relations detected by FEDRR were evaluated by human experts, whereas [26] reported only the number of redundant relations, without validation by human annotators.Mougin et al. [27] exhaustively examined multiply-related concepts within the UMLS, i.e., concepts associated through multiple relations. They explored whether such multiply-related concepts were inherited from source vocabularies or introduced by the UMLS integration. About three quarters of multiply-related concepts in the UMLS were found to be caused by the UMLS integration. Additionally, Gu et al. [28] studied questionable relationship triples in the UMLS in four cases: conflicting hierarchical relationships, redundant hierarchical relationships, mixed hierarchical/lateral relationships, and multiple lateral relationships. It was reported in [28] that many examples indicated that questionable triples arose from the UMLS integration process.Bodenreider [29], Mougin and Bodenreider [30], and Halper et al. [31] studied various approaches to removing cyclic hierarchical relations in the UMLS. Although no cycles have been detected in the current UMLS in terms of the AUI, such approaches ([29–31]) to detecting and removing cyclic relations are needed before FEDRR can be applied, because FEDRR is based on topological sorting, which requires an acyclic graph. | [
"17095826",
"26306232",
"22580476",
"18952949",
"16929044",
"19475727",
"25991129",
"23911553"
] | [
{
"pmid": "17095826",
"title": "SNOMED-CT: The advanced terminology and coding system for eHealth.",
"abstract": "A clinical terminology is essential for Electronic Health records. It represents clinical information input into clinical IT systems by clinicians in a machine-readable manner. Use of a Clin... |
BMC Medical Informatics and Decision Making | 27756371 | PMC5070096 | 10.1186/s12911-016-0371-7 | Predicting influenza with dynamical methods | BackgroundPrediction of influenza weeks in advance can be a useful tool in the management of cases and in the early recognition of pandemic influenza seasons.MethodsThis study explores the prediction of influenza-like-illness incidence using both epidemiological and climate data. It uses Lorenz’s well-known Method of Analogues, but with two novel improvements. Firstly, it determines internal parameters using the implicit near-neighbor distances in the data, and secondly, it employs climate data (mean dew point) to screen analogue near-neighbors and capture the hidden dynamics of disease spread.ResultsThese improvements result in the ability to forecast, four weeks in advance, the total number of cases and the incidence at the peak with increased accuracy. In most locations the total number of cases per year and the incidence at the peak are forecast with less than 15 % root-mean-square (RMS) Error, and in some locations with less than 10 % RMS Error.ConclusionsThe use of additional variables that contribute to the dynamics of influenza spread can greatly improve prediction accuracy. | Related workA survey of influenza forecasting methods [3] yielded 35 publications organized into categories based on the epidemiological application – population-based, medical facility-based, and forecasting regionally or globally. Within these categories, the forecasting methods varied along with the types of data used to make the forecast. Roughly half of the publications used statistical approaches without explicit mechanistic models and the other half used epidemiological models. Three of these models used meteorological predictors.In this study, we model directly from the data (time series consisting of weekly incidence geographically aligned with multiple facilities) and use meteorological data to enrich the model. 
None of the models surveyed in [3] used both the Method of Analogues and meteorological data to forecast influenza in a population.Typically data on the current number of influenza cases reported by the Centers for Disease Control ([4]; one of the more accurate geographically tagged data sets) has a one-week lag. In order to predict 4 weeks ahead of the current date, one uses data up to one week before the current date. This translates, in reality, to a 5-week prediction horizon for a prediction 4 weeks in the future. For the remainder of the paper we will refer to this as a 4-week prediction. Similarly, most climate data for the current date is not available in a format for which acquisition can be automated immediately; for most there is a lag of about one week. Our goal is to predict influenza incidence (number of influenza cases/total number of health-care visits) 4 weeks ahead of the current date, using only data available up to the current time, that is, using both incidence and climate data from the week before.This study was part of a team effort to predict the height of the peak, the timing of the peak and the total cases in an influenza season. This paper addresses the height of the peak and the total cases in a season. Another paper (see [5]) uses machine-learning methods to predict the timing of the peak. | [
"27127415"
] | [
{
"pmid": "27127415",
"title": "Prediction of Peaks of Seasonal Influenza in Military Health-Care Data.",
"abstract": "Influenza is a highly contagious disease that causes seasonal epidemics with significant morbidity and mortality. The ability to predict influenza peak several weeks in advance would al... |
BioData Mining | 27785153 | PMC5073928 | 10.1186/s13040-016-0113-5 | Developing a modular architecture for creation of rule-based clinical diagnostic criteria | BackgroundWith recent advances in computerized patient record systems, there is an urgent need for producing computable and standards-based clinical diagnostic criteria. Notably, constructing rule-based clinical diagnostic criteria has become one of the goals of the International Classification of Diseases (ICD)-11 revision. However, few studies have been done on building a unified architecture to support the need for diagnostic criteria computerization. In this study, we present a modular architecture for enabling the creation of rule-based clinical diagnostic criteria leveraging Semantic Web technologies.Methods and resultsThe architecture consists of two modules: an authoring module that utilizes a standards-based information model and a translation module that leverages Semantic Web Rule Language (SWRL). In a prototype implementation, we created a diagnostic criteria upper ontology (DCUO) that integrates the ICD-11 content model with the Quality Data Model (QDM). Using the DCUO, we developed a transformation tool that converts QDM-based diagnostic criteria into SWRL representation. We evaluated the domain coverage of the upper ontology model using randomly selected diagnostic criteria from broad domains (n = 20). We also tested the transformation algorithms using 6 QDM templates for ontology population and 15 QDM-based criteria data for rule generation. As a result, the first draft of DCUO contains 14 root classes, 21 subclasses, 6 object properties and 1 data property. Investigation Findings and Signs and Symptoms are the two most commonly used element types. 
All 6 HQMF templates are successfully parsed and populated into their corresponding domain specific ontologies and 14 rules (93.3 %) passed the rule validation.ConclusionOur efforts in developing and prototyping a modular architecture provide useful insight into how to build a scalable solution to support diagnostic criteria representation and computerization.Electronic supplementary materialThe online version of this article (doi:10.1186/s13040-016-0113-5) contains supplementary material, which is available to authorized users. | Related workPrevious studies have been conducted in integrating and formally expressing diagnostic rules from different perspectives. These rules are usually extracted from free-text-based clinical guidelines or diagnostic criteria, and integrated into computerized decision support systems to improve clinical performance and patient outcomes [12, 13]. The related studies mainly include as follows.Clinical guideline computerization and Computer Interpretable Guideline (CIG) Systems. Various computerized clinical guidelines and decision support systems that incorporate clinical guidelines have been developed. Researchers have tried different approaches on computerization of clinical practice guidelines [12, 14–18]. Since guidelines cover many complex medical procedures, the application of computerized guideline in real-world practice is still very limited. However, the methods used to computerize guidelines are valuable in tackling the issues in diagnostic criteria computerization.Formalization method studies on clinical research data. Previous studies investigated the eligibility criteria in clinical trial protocol and developed approaches for eligibility criteria extraction and semantic representation, and used hierarchical clustering for dynamic categorization of such criteria [19]. 
For example, EliXR provided a corpus-based knowledge acquisition framework that used the Unified Medical Language System (UMLS) to standardize eligibility-concept encoding and to enrich eligibility-concept relations for clinical research eligibility criteria from text [20]. QDM-based phenotyping methods used for identification of patient cohorts from EHR data also provide valuable reference for our work [21].
However, few studies are directly related to building a unified architecture to support the goal of diagnostic criteria formalization. In particular, the lack of a standards-based information model has been recognized as a major barrier for achieving computable diagnostic criteria [22]. Fortunately, current efforts in the development of international recommendation standard models in clinical domains provide valuable references for modeling and representing computable diagnostic criteria. The notable examples include the ICD-11 content model [5, 23] and the National Quality Forum (NQF) QDM [21, 24, 25]. | [
"24500457",
"23523876",
"22874162",
"22462194",
"24975859",
"12509357",
"23806274",
"18639485",
"15196480",
"15182844",
"21689783",
"21807647",
"23304325",
"17712081",
"23601451",
"23304366"
] | [
{
"pmid": "23523876",
"title": "An ontology-driven, diagnostic modeling system.",
"abstract": "OBJECTIVES\nTo present a system that uses knowledge stored in a medical ontology to automate the development of diagnostic decision support systems. To illustrate its function through an example focused on the... |
Frontiers in Neuroscience | 27833526 | PMC5081358 | 10.3389/fnins.2016.00479 | Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network | Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes as small as the size of an atom and locate the acoustic stimuli with an accuracy of within 13° based on their neuronal anatomy. We present here a prototype sound source localization system, inspired by this impressive performance. The system presented utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature. | Related workPrior work on bioinspired acoustic surveillance units (ASUs; inspired by flies), such as that of Cauwenberghs et al., used spatial and temporal derivatives of the field over a sensor array of MEMS microphones, power series expansion, and Independent Component Analysis (ICA) for localizing and separating mixtures of delayed sources of sound (Cauwenberghs et al., 2001). This work showed that the number of sources that can be extracted depends strongly on the number of resolvable terms in the series.Similar work was also done by Sawada et al. using ICA for estimating the number of sound sources (Sawada et al., 2005a) and localization of multiple sources of sound (Sawada et al., 2003, 2005b).Julian et al. compared four different algorithms for sound localization using MEMS microphones and signals recorded in a natural environment (Julian et al., 2004). The spatial-gradient algorithm (SGA) showed the best accuracy. The implementation requires a sampled-data analog architecture able to adaptively solve a standard least-mean-square (LMS) problem. 
With a low-power CMOS VLSI design, the system achieves a bearing-angle estimation error on the order of 1°, with a similar standard deviation (Cauwenberghs et al., 2005; Julian et al., 2006; Pirchio et al., 2006). A very-low-power implementation for interaural time delay (ITD) estimation without delay lines, using the same ASU, is reported by Chacon-Rodriguez et al., with an estimation error in the low single-digit range (Chacon-Rodriguez et al., 2009).Masson et al. used a data fusion algorithm to estimate the position based on measurements from five nodes, each with four MEMS microphones (Masson et al., 2005). The measurements come from a single fixed source emitting a 1 kHz signal.Zhang and Andreou used cross-correlation of the received signals and a zero-crossing point to estimate the bearing angle of a moving vehicle (Zhang and Andreou, 2008). The hardware was an ASU with four MEMS microphones. | [
"19115011",
"10991021"
] | [
{
"pmid": "19115011",
"title": "Brian: a simulator for spiking neural networks in python.",
"abstract": "\"Brian\" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networ... |
Frontiers in Neuroinformatics | 27867355 | PMC5095137 | 10.3389/fninf.2016.00048 | Methods for Specifying Scientific Data Standards and Modeling Relationships with Applications to Neuroscience | Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat. | 2. Background and related workThe scientific community utilizes a broad range of data formats. 
Basic formats explicitly specify how data is laid out and formatted in binary or text data files (e.g., CSV, BOF, etc). While such basic formats are common, they generally suffer from a lack of portability, scalability and a rigorous specification. For text-based files, languages and formats, such as the Extensible Markup Language (XML) (Bray et al., 2008) or the JavaScript Object Notation (JSON) (JSON, 2015), have become popular means to standardize documents for data exchange. XML, JSON and other text-based standards (in combination with character-encoding schema, e.g., ASCII or Unicode) play a critical role in practice in the exchange of usually relatively small, structured documents but are impractical for storage and exchange of large scientific data arrays.For storage of large scientific data, HDF5 (The HDF Group, 2015) and NetCDF (Rew and Davis, 1990) among others, have gained wide popularity. HDF5 is a data model, library, and file format for storing and managing large and complex data. HDF5 supports groups, datasets, and attributes as core data object primitives, which in combination provide the foundation for data organization and storage. HDF5 is portable, scalable, self-describing, and extensible and is widely supported across programming languages and systems, e.g., R, Matlab, Python, C, Fortran, VisIt, or ParaView. The HDF5 technology suite includes tools for managing, manipulating, viewing, and analyzing HDF5 files. HDF5 has been adopted as a base format across a broad range of application sciences, ranging from physics to bio-sciences and beyond (Habermann et al., 2014). Self-describing formats address the critical need for standardized storage and exchange of complex and large scientific data.Self-describing formats like HDF5 provide general capabilities for organizing data, but they do not prescribe a data organization. 
The structure, layout, names, and descriptions of storage objects, hence, often still differ greatly between applications and experiments. This diversity makes the development of common and reusable tools challenging. VizSchema (Shasharina et al., 2009) and XDMF (Clarke and Mark, 2007) among others, propose to bridge this gap between general-purpose, self-describing formats and the need for standardized tools via additional lightweight, low-level schema (often based on XML) to further standardize the description of the low-level data organization to facilitate data exchange and tool development.Application-oriented formats then generally focus on specifying the organization of data in a semantically meaningful fashion, including but not limited to the specification of storage object names, locations, and descriptions. Many application formats build on existing self-describing formats, e.g., NeXus (Klosowski et al., 1997) (neutron, x-ray, and muon data), OpenMSI (mass spectrometry imaging) (Rübel et al., 2013), CXIDB (Maia, 2012) (coherent x-ray imaging), or NetCDF (Rew and Davis, 1990) in combination with CF and COARDS metadata conventions for climate data, and many others. Application formats are commonly described by documents specifying the location and names of data items and often provide application-programmer interfaces (API) to facilitate reading and writing of format files. Some formats are further governed by formal, computer-readable, and verifiable specifications. For example, NeXus uses the NXDL (NeXus International Advisory Committee, 2016) XML-based format and schema to define the nomenclature and arrangement of information in a NeXus data file. 
On the level of HDF5 groups, NeXus also uses the notion of Classes to define the fields that a group should contain in a reusable and extensible fashion.The critical need for data standards in neuroscience has been recognized by several efforts over the course of the last several years (e.g., Sommer et al., 2016); however, much work remains. Here, our goal is to contribute to this discussion by providing much-needed methods and tools for the effective design of sustainable neuroscience data standards and demonstration of the methods in practice toward the design and implementation of a usable and extensible format with an initial focus on electrophysiology data. The developers of the Klustakwik suite (Kadir et al., 2013, 2015) have proposed an HDF5-based data format for storage of spike sorting data. Orca (also called BORG) (Keith Godfrey, 2014) is an HDF5-based format developed by the Allen Institute for Brain Science designed to store electrophysiology and optophysiology data. The NIX (Stoewer et al., 2014) project has developed a set of standardized methods and models for storing electrophysiology and other neuroscience data together with their metadata in one common file format based on HDF5. Rather than an application-specific format, NIX defines highly generic models for data as well as for metadata that can be linked to terminologies (defined via odML) to provide a domain-specific context for elements. The open metadata Markup Language odML (Grewe et al., 2011) is a metadata markup language based on XML with the goal of defining and establishing an open and flexible format to transport neuroscience metadata. NeuroML (Gleeson et al., 2010) is also an XML-based format with a particular focus on defining and exchanging descriptions of neuronal cell and network models. 
The Neurodata Without Borders (NWB) (Teeters et al., 2015) initiative is a recent project with the specific goal “[…] to produce a unified data format for cellular-based neurophysiology data based on representative use cases initially from four laboratories—the Buzsaki group at NYU, the Svoboda group at Janelia Farm, the Meister group at Caltech, and the Allen Institute for Brain Science in Seattle.” Members of the NIX, KWIK, Orca, BRAINformat, and other development teams have been invited and contributed to the NWB effort. NWB has adopted concepts and methods from a range of these formats, including from the here-described BRAINformat. | [
"20585541",
"21941477",
"25149694",
"22543711",
"22936162",
"24087878",
"26590340"
] | [
{
"pmid": "20585541",
"title": "NeuroML: a language for describing data driven models of neurons and networks with a high degree of biological detail.",
"abstract": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectiv... |
JMIR Public Health and Surveillance | 27765731 | PMC5095368 | 10.2196/publichealth.5901 | Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis | BackgroundTraditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered to complement traditional methods.ObjectiveThe objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia will best correspond with CDC ILI data as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter.MethodsPublicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC’s change points were compared with the CDC data as the gold standard. 
All analyses were conducted using the R package “bcp” version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia.ResultsDuring the 2012-2015 influenza seasons, Google had a high sensitivity of 92% and a PPV of 85%. Twitter had a low sensitivity of 50% and a low PPV of 43%. Wikipedia had the lowest sensitivity, 33%, and the lowest PPV, 40%.ConclusionsOf the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. | Related WorkAs the number of Internet users has increased [11], researchers have identified the use of Google, Twitter, and Wikipedia as novel surveillance approaches to complement traditional methods. Google Flu Trends, which monitors Google users’ searches for information related to influenza, has shown correlation with CDC influenza data, while delivering estimates 1 to 2 weeks ahead of CDC reports [8,12]. Although initially successful, the system has not been without its issues in more recent years. Google Flu Trends overestimated influenza activity during the 2012-2013 influenza season and underestimated it during the 2009 H1N1 influenza pandemic [13-16]. One study found that both the original (2008) and revised (2009) algorithms for Google Flu Trends were not reliable on city, regional, and national scales, particularly in instances of varying intensity in influenza seasons and media coverage [16]. 
Due to issues with its proprietary algorithm, Google Flu Trends was discontinued in August 2015 [17].Influenza-related posts on Twitter, a social networking platform for disseminating short messages (tweets), have shown high correlation with reported ILI activity in ILINet [18,19]. Studies have found that Twitter data highly correlate with national- and city-level ILI counts [20]. Signorini et al (2011) also demonstrated that tweets could be used to estimate ILI activity at regional and national levels within a reasonable margin of error [21]. Moreover, studies have found that Twitter data perform better than Google data. Nagar et al (2014) conducted a study showing that tweets better reflected city-level ILI incidence in comparison with Google search queries [22]. Aramaki et al discovered that a Twitter-based model outperformed a Google-based model during periods of normal news coverage, although the Twitter model performed less optimally during the periods of excessive media coverage [23]. Moreover, geographic granularity can affect the performance of Twitter data. Broniatowski et al (2015) found that city-level Twitter data performed better than state- and national-level Twitter data, although Google Flu Trends data performed better at each level [24].Wikipedia page view data have proven valuable for tracking trending topics as well as disease monitoring and forecasting [25,26]. McIver and Brownstein (2014) reported that increases in the quantity of visits to influenza-related Wikipedia articles allowed for the estimation of influenza activity up to 2 weeks before ILINet, outperforming Google Flu Trends estimates during abnormal influenza seasons and periods of high media reporting [27]. 
One study found that Wikipedia page view data have suitable forecasting value up until the peak of the influenza seasons [26], whereas another study also reported that Wikipedia page view data are suitable for forecasting using a 28-day analysis as well as for nowcasting, or monitoring current disease incidence [25]. However, as a disadvantage, the signal-to-noise ratio of Wikipedia data can be problematic [25] as Wikipedia has become a preferred source for seeking health information whether an individual is ill or not [28,29]. In addition, unlike the granularity flexibility of Google and Twitter data, Wikipedia does not have such capability of evaluating influenza activity at local or regional levels because it only provides counts of page views and no accompanying location or user information in its publicly available data. | [
"25835538",
"20798667",
"15714620",
"23896182",
"19329408",
"22844241",
"19020500",
"21886802",
"23407515",
"24626916",
"24146603",
"25406040",
"24349542",
"21573238",
"25331122",
"27014744",
"25392913",
"25974758",
"24743682",
"19390105",
"21827326",
"23760189",
"2489816... | [
{
"pmid": "25835538",
"title": "Estimating influenza attack rates in the United States using a participatory cohort.",
"abstract": "We considered how participatory syndromic surveillance data can be used to estimate influenza attack rates during the 2012-2013 and 2013-2014 seasons in the United States. ... |
Frontiers in Neuroscience | 27877107 | PMC5099523 | 10.3389/fnins.2016.00508 | Training Deep Spiking Neural Networks Using Backpropagation | Deep spiking neural networks (SNNs) hold the potential for improving the latency and energy efficiency of deep neural networks through data-driven event-based computation. However, training such networks is difficult due to the non-differentiable nature of spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are treated as noise. This enables an error backpropagation mechanism for deep SNNs that follows the same principles as in conventional deep networks, but works directly on spike signals and membrane potentials. Compared with previous methods relying on indirect training and conversion, our technique has the potential to capture the statistics of spikes more precisely. We evaluate the proposed framework on artificially generated events from the original MNIST handwritten digit benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic vision sensor, in which the proposed method reduces the error rate by a factor of more than three compared to the best previous SNN, and also achieves a higher accuracy than a conventional convolutional neural network (CNN) trained and tested on the same data. We demonstrate in the context of the MNIST task that thanks to their event-driven operation, deep SNNs (both fully connected and convolutional) trained with our method achieve accuracy equivalent to conventional neural networks. In the N-MNIST example, equivalent accuracy is achieved with about five times fewer computational operations. | 1.1. Related workGradient descent methods for SNNs have not been deeply investigated because both spike trains and the underlying membrane potentials are not differentiable at the time of spikes. 
The most successful approaches to date have used indirect methods, such as training a network in the continuous rate domain and converting it into a spiking version. O'Connor et al. (2013) pioneered this area by training a spiking deep belief network based on the Siegert event-rate approximation model. However, on the MNIST handwritten digit classification task (LeCun et al., 1998), which is nowadays almost perfectly solved by ANNs (0.21% error rate in Wan et al., 2013), their approach only reached an accuracy of around 94.09%. Hunsberger and Eliasmith (2015) used the softened rate model, in which the hard threshold in the response function of the leaky integrate-and-fire (LIF) neuron is replaced with a continuous differentiable function to make it amenable to use in backpropagation. After training an ANN with the rate model they converted it into an SNN consisting of LIF neurons. With the help of pre-training based on denoising autoencoders they achieved 98.6% in the permutation-invariant (PI) MNIST task (see Section 3.1). Diehl et al. (2015) trained deep neural networks with conventional deep learning techniques and additional constraints necessary for conversion to SNNs. After training, the ANN units were converted into non-leaky spiking neurons and the performance was optimized by normalizing weight parameters. This approach resulted in the current state-of-the-art accuracy for SNNs of 98.64% on the PI MNIST task. Esser et al. (2015) used a differentiable probabilistic spiking neuron model for training and statistically sampled the trained network for deployment. In all of these methods, training was performed indirectly using continuous signals, which may not capture important statistics of spikes generated by real sensors used during processing.
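The softened-rate idea can be sketched in a few lines. The following is a minimal, illustrative Python version (the constants are arbitrary and this is not Hunsberger and Eliasmith's implementation): the LIF neuron's hard rectification at threshold is replaced by a smooth softplus, so the resulting rate function is differentiable everywhere and can be used inside backpropagation.

```python
import numpy as np

def lif_rate_hard(i_in, tau=0.02, t_ref=0.002, v_th=1.0):
    """Steady-state firing rate of a LIF neuron with a hard threshold:
    zero below v_th, hence non-differentiable exactly at threshold."""
    rate = np.zeros_like(i_in, dtype=float)
    above = i_in > v_th
    rate[above] = 1.0 / (t_ref + tau * np.log(i_in[above] / (i_in[above] - v_th)))
    return rate

def lif_rate_soft(i_in, tau=0.02, t_ref=0.002, v_th=1.0, gamma=0.03):
    """Softened rate: the hard rectification (i - v_th) is replaced by the
    smooth softplus gamma * log(1 + exp((i - v_th) / gamma)), giving a
    response function that is differentiable everywhere."""
    z = (np.asarray(i_in, dtype=float) - v_th) / gamma
    # numerically stable softplus: linear for large arguments
    softplus = gamma * np.where(z > 30.0, z, np.log1p(np.exp(np.minimum(z, 30.0))))
    return 1.0 / (t_ref + tau * np.log1p(v_th / softplus))
```

For inputs well above threshold the soft and hard rates coincide; below threshold the soft rate decays smoothly toward zero instead of cutting off, which is what makes gradient-based training possible.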
Even though SNNs are well-suited for processing signals from event-based sensors such as the Dynamic Vision Sensor (DVS) (Lichtsteiner et al., 2008), the previous SNN training models require removing time information and generating image frames from the event streams. Instead, in this article we use the same signal format for training and processing deep SNNs, and can thus train SNNs directly on spatio-temporal event streams considering non-ideal factors such as pixel variation in sensors. This is demonstrated on the neuromorphic N-MNIST benchmark dataset (Orchard et al., 2015), achieving higher accuracy with a smaller number of neurons than all previous attempts that ignored spike timing by using event-rate approximation models for training. | [
"25910252",
"26617512",
"27199646",
"26941637",
"27651489",
"26177908",
"25333112",
"26017442",
"17305422",
"25104385",
"24574952",
"24115919",
"26217169",
"26635513",
"19548795",
"18439138",
"27683554"
] | [
{
"pmid": "25910252",
"title": "Turn Down That Noise: Synaptic Encoding of Afferent SNR in a Single Spiking Neuron.",
"abstract": "We have added a simplified neuromorphic model of Spike Time Dependent Plasticity (STDP) to the previously described Synapto-dendritic Kernel Adapting Neuron (SKAN), a hardwa... |
PLoS Computational Biology | 27835647 | PMC5105998 | 10.1371/journal.pcbi.1005113 | Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons | Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. 
In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. | Related workA number of methods have been used previously to examine the spectrotemporal sensitivity of auditory cortical neurons. Previous studies have attempted to extend the application of the LN model to auditory cortical data, mostly using maximum-likelihood methods. Indeed, several studies have used approaches that have fundamental similarities to the one we explore here, in that they combine or cascade several linear filters in a nonlinear manner. One such body of work that improved predictions over the LN model is based on finding the maximally-informative dimensions (MID) [20,21,34,43–46] that drove the response of auditory cortical neurons. This method involves finding usually one or two maximally informative linear features that interact through a flexible 1D or 2D nonlinearity, and is equivalent to fitting a form of LN model under assumptions of a Poisson model of spiking variability [46–48]. When this method was applied to neurons in primary auditory cortex it was found that the neurons’ response properties are typically better described using two features rather than one [20,34], in contrast to midbrain neurons which are well fitted using a single feature [43]. That result thus seems consistent with ours, in that we found NRFs fitted to cortical responses most commonly evolved to have two effective HUs (or input features). 
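The contrast between a single-feature LN model and a multi-feature cascade of the kind discussed here can be made concrete with a toy sketch (all dimensions, filters, and weights below are hypothetical illustrations, not the authors' fitting code): each hidden unit applies a linear spectrotemporal filter to a window of stimulus history followed by a sigmoid, and an output unit nonlinearly combines the hidden-unit outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def ln_predict(stim, strf, bias):
    """LN model: linear projection of the recent stimulus history
    followed by a static sigmoid output nonlinearity.
    stim: (T, F) spectrogram; strf: (H, F) spectrotemporal filter."""
    H, T = strf.shape[0], stim.shape[0]
    rates = np.zeros(T)
    for t in range(H, T):
        drive = np.sum(stim[t - H:t] * strf) + bias
        rates[t] = 1.0 / (1.0 + np.exp(-drive))  # sigmoid nonlinearity
    return rates

def nrf_predict(stim, strfs, hu_biases, w_out, out_bias):
    """Cascade (NRF-style): several hidden units, each an LN unit with its
    own filter, combined by an output unit with its own sigmoid."""
    hu = np.stack([ln_predict(stim, f, b) for f, b in zip(strfs, hu_biases)])
    drive = w_out @ hu + out_bias
    return 1.0 / (1.0 + np.exp(-drive))

stim = rng.normal(size=(200, 16))            # toy spectrogram: 200 bins, 16 bands
strf = rng.normal(scale=0.1, size=(10, 16))  # 10-bin history window
r_ln = ln_predict(stim, strf, 0.0)
r_nrf = nrf_predict(stim, [strf, -strf], [0.0, 0.0], np.array([1.0, 1.0]), -1.0)
```

Fitting such a model to real data would additionally require an optimizer and regularization (e.g., an L1 penalty to prune unused hidden units); the sketch only shows the forward prediction.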
Another approach, which has been found to improve predictions of auditory cortical responses, is to apply a multi-linear model over the dimensions of frequency, sound level, and time lag, and for the extended multi-linear model also over dimensions involved in multiplicative contextual effects [21]. However, the above studies in auditory cortex [20,34,43] did not use natural stimuli, and hence might not have been in the right stimulus space to observe some complexities, as STRFs measured with natural stimuli can be quite different from those measured with artificial stimuli [49]. An advantage of the NRF model is that its architecture is entirely that of traditional feedforward models of sensory pathways, in which activations of lower-level features simply converge onto model neurons with sigmoidal input-firing rate functions. NRFs can therefore be interpreted in a context that is perhaps simpler and more familiar than that of, for example, maximally informative dimension models [20,44]. Other developments on the standard LN model have included model components that can be interpreted as intraneuronal rather than network properties, such as a post-spike filter [22] or synaptic depression [23], and these have also been shown to improve predictions. Pillow and colleagues [50,51] applied a generalized linear model (GLM) to the problem of receptive field modelling. Their approach is similar to the basic LN model in that it involves a linear function of stimulus history combined with an output nonlinearity. However, unlike in LN models, the response of their GLM also depends on the spike history (through a post-spike filter). This post-spike filter may reflect intrinsic refractory characteristics of neurons, but could also represent network filter effects.
A GLM has been applied to avian forebrain neurons [22], where it was shown to significantly improve predictions of neural responses over a linear model, but not over an LN model. Although they have not yet been applied to auditory cortical responses, it is worth mentioning two extensions to GLMs. First, GLMs can be extended so that model responses depend on the history of many recorded neurons [50], representing interconnections between recorded neurons. While this approach is thus also aimed at modeling network properties, it is quite different from our NRF model, where we infer the characteristics of hidden units. Second, the extension of the GLM approach investigated by Park and colleagues [52] included sensitivity to more than one stimulus feature. Thus, like our NRF or the multi-feature MID approach, this “generalized quadratic model” (GQM) has an input stage comprising several filters which are nonlinearly combined, in this case using a quadratic function. One might argue that our choice for the HUs of a sigmoidal nonlinearity following a linear filter stage, and the same form for the OU, is perhaps more similar to what occurs in the brain, where dendritic currents might be thought of as combining linearly according to Kirchhoff’s laws as they converge on neurons that often have sigmoidal current-firing rate functions. However, we do not wish to overstate either the physiological realism of our model (which is very rudimentary compared to the known complexity of real neurons) or the conceptual difference with GQMs or multi-feature MIDs.
A summation of sigmoidal unit outputs may perhaps be better motivated physiologically than a quadratic function, but given the diversity of nonlinearity in the brain this is a debatable point.Another extension to GLMs, a generalized nonlinear model (GNM), does, however, employ input units with monotonically-increasing nonlinearities, and unlike multi-neuron GLMs or GQMs, GNMs have been applied to auditory neurons by Schinkel-Bielefeld and colleagues [24]. Their GNM comprises a very simple feedforward network based on the weighted sum of an excitatory and an inhibitory unit, along with a post-spike filter. The architecture of that model is thus not dissimilar from our NRFs, except that the number of HUs is fixed at two, and their inhibitory and excitatory influences are fixed in advance. It has been applied to mammalian (ferret) cortical neural responses, uncovering non-monotonic sound intensity tuning and onset/offset selectivity.For neurons in the avian auditory forebrain, although not for mammalian auditory cortex, GNMs have also been extended by McFarland and colleagues to include the sum of more than two input units with monotonically-increasing nonlinearities [53]. Of the previously described models, this cascaded LN-LN ‘Nonlinear Input Model (NIM)’ model bears perhaps the greatest similarity with our NRF model. Just like our NRF, it comprises a collection of nonlinear units feeding into a nonlinear unit. The main differences between their model and ours thus pertain not to model architecture, but to the methods of fitting the models and the extent to which the models have been characterized. 
The NIM has been applied to a single zebra finch auditory forebrain neuron, separating out its excitatory and inhibitory receptive fields in a manner similar to what we observe in the bi-feature neurons described above. One advantage of the NRF over the NIM is that the fitting algorithm automatically determines the number of features that parsimoniously explain each neuron's response, obviating the need to laboriously compare the cross-validated model performance for each possible number of hidden units. Another difference is that the NRF is simpler while still maintaining the capacity to capture complex nonlinear network properties of neural responses; for example, the NIM [53] had potentially large numbers of hyperparameters (four for each hidden unit or “feature”) that were manually tuned, something that would be very difficult to do if the model needed to be fitted to datasets comprising large numbers of neurons. In contrast, the NRF has only one hyperparameter for the entire network, which can easily be tuned in an automated parameter search with cross-validation. Consequently, we have been able to use the NRF to characterize a sizeable population of recorded neurons, but so far no systematic examination of the capacity of the NIM to explain the responses of many neurons has been performed. Another recent avian forebrain study [54] used a maximum noise entropy (MNE) approach to uncover multiple receptive fields sensitive to second-order aspects of the stimulus. Unlike the above two GNM [24,53] approaches, this model does not have hidden units with sigmoidal nonlinearities, but finds multiple quadratic features. The MNE predicted neural responses better than a linear model, although still poorly, with an average CCraw of 0.24, and it was not determined whether it could out-predict an LN model. Note, however, that the CCraw values reported in that study do not distinguish stimulus-driven response variability from neural “noise”.
Consequently, it is unclear whether the relatively modest CCraw values reported there might reflect shortcomings of the model or whether they are a consequence of differences in the species, brain regions and stimuli under study. Finally, perhaps the most relevant study in the avian forebrain used a time-delay feedforward neural network to predict responses of zebra finch nucleus ovoidalis neurons to birdsong [55]. These authors reported that the network predicted neural responses better than a linear model, but performed no quantitative comparisons to support this. Advances on the LN model have also been made in other brain regions, notably in studies of primary visual cortex, and of particular relevance are the few cases where neural networks have been used to predict neural responses. Visual cortical responses to certain artificial stimuli (randomly varying bar patterns and related stimuli) have been fitted using a single hidden layer neural network, resulting in improvements in prediction over linear models for complex but not simple cells in one study [56] and over LN-like models in another study [57]. However, the challenge we tackle here is to predict the responses to natural stimuli. In this respect we are aware of only one similar study, by Prenger and colleagues [58], which used a single hidden layer neural network to predict responses to series of still images of natural scenes. The network model in this study gave better predictions than an LN model with a simple rectifying nonlinearity. However, the improvements had limited consistency, predicting significantly better in only 16/34 neurons, and it did worse than an LN model applied to the power spectra of the images. Additionally, the CCraw of the model predictions with the neural data was somewhat small (0.24).
This appears to contrast with the seemingly better performance we obtained with our NRF model. These apparent differences in model performance may, however, not all be attributable to differences in model design or fitting. In addition to the fact, already noted, that low CCraw values might be diagnostic of very noisy neurons rather than shortcomings of the model, we also need to be cognizant of the differences in the types of data that are being modeled: we applied our model to responses of auditory cortical neurons to natural sound recordings, whereas Prenger and colleagues [58] applied theirs to visual cortical neuron responses to random sequences of photographs of natural scenes. Furthermore, the neural responses to our stimuli were averaged over several repeats, whereas the above study did not use repeated stimuli, which may limit how predictable their neural responses can be. However, there are also notable structural differences between their model and ours. For example, the activation function on the OU in the Prenger et al. study [58] was linear (as with [56] but not [57]), whereas the OU of our NRF has a nonlinear activation function, which enables our NRF to model observed neuronal thresholds explicitly. Furthermore, we used a notably powerful optimization algorithm, the sum-of-function optimizer [26], which has been shown to find substantially lower values of the neural network cost function than the forms of gradient descent used in the above neural network studies. Finally, the L1-norm regularization that we used has the advantage of finding a parsimonious network quickly and simply, as compared with the more laborious and often more complex methods of the above three studies: L2-norm-based regularization and hidden unit pruning [58], early stopping and post-fit pruning [56], or no regularization and comparing different numbers of hidden units [57]. | [
"3973762",
"6976799",
"18184787",
"19295144",
"9603734",
"12019330",
"14583754",
"16633939",
"18854580",
"10946994",
"12815016",
"11784767",
"3479811",
"11700557",
"10704507",
"14762127",
"15969914",
"21689603",
"18579084",
"18287509",
"21264310",
"24305812",
"22457454",
... | [
{
"pmid": "3973762",
"title": "Spatiotemporal energy models for the perception of motion.",
"abstract": "A motion sequence may be represented as a single pattern in x-y-t space; a velocity of motion corresponds to a three-dimensional orientation in this space. Motion information can be extracted by a s... |
Frontiers in Psychology | 27899905 | PMC5110545 | 10.3389/fpsyg.2016.01793 | Effects of Individual Differences in Working Memory on Plan Presentational Choices | This paper addresses research questions that are central to the area of visualization interfaces for decision support: (RQ1) whether individual user differences in working memory should be considered when choosing how to present visualizations; (RQ2) how to present the visualization to support effective decision making and processing; and (RQ3) how to evaluate the effectiveness of presentational choices. These questions are addressed in the context of presenting plans, or sequences of actions, to users. The experiments are conducted in several domains, and the findings are relevant to applications such as semi-autonomous systems in logistics. That is, scenarios that require the attention of humans who are likely to be interrupted, and require good performance but are not time critical. Following a literature review of different types of individual differences in users that have been found to affect the effectiveness of presentational choices, we consider specifically the influence of individuals' working memory (RQ1). The review also considers metrics used to evaluate presentational choices, and types of presentational choices considered. As for presentational choices (RQ2), we consider a number of variants including interactivity, aggregation, layout, and emphasis. Finally, to evaluate the effectiveness of plan presentational choices (RQ3) we adopt a layered-evaluation approach and measure performance in a dual task paradigm, involving both task interleaving and evaluation of situational awareness. This novel methodology for evaluating visualizations is employed in a series of experiments investigating presentational choices for a plan. 
A key finding is that emphasizing steps (by highlighting borders) can improve effectiveness on a primary task, but only when controlling for individual variation in working memory. | 2. Related workThis section discusses related work addressing how visual presentational choices have been applied and evaluated in the past.2.1. RQ1: whether individual user differences in working memory should be consideredAnecdotal evidence about individual differences has motivated research on presenting the same information in different visualization views (Wang Baldonado et al., 2000). While our work looks specifically at working memory, we contextualize our choice with findings of measurable variation between individuals in a number of factors, such as cognitive abilities (Velez et al., 2005; Toker et al., 2012) (including working memory), personality (Ziemkiewicz et al., 2011), and degree of expertise or knowledge (Lewandowsky and Spence, 1989; Kobsa, 2001). In addition, gender and culture may be factors to consider for visual presentational choices, given known gender differences in processing spatial information and cultural differences in spatial density of information (Hubona, 2004; Velez et al., 2005; Fraternali and Tisi, 2008).2.1.1. PersonalityStudies have evaluated whether personality traits affect individuals' abilities to interpret visualizations. In trait theory, a trait is defined as “an enduring personal characteristic that reveals itself in a particular pattern of behavior in different situations” (Carlson et al., 2004, p. 583). One study found an interaction of the personality trait of locus of control with the ability to understand nested visualizations (Ziemkiewicz et al., 2011). Another study evaluated the general effect of personality on completion times and number of insights, but did not study the interaction between information presentational choices and personality (Green and Fisher, 2010).
This study also looked at locus of control, and two of the “Big Five” personality traits: Extraversion and Neuroticism. Participants who had an internal locus of control, or scored higher on Extraversion and Neuroticism, were found to complete tasks faster. In contrast, participants who had an external locus of control, or scored lower on Extraversion and Neuroticism, gained more insights.2.1.2. ExpertiseAnother trait that has been considered is the level of expertise of the user. Küpper and Kobsa (2001) proposed adapting plan presentation to a model of a user's knowledge and capabilities with regard to plan concepts, e.g., knowledge of the steps and the relationships between them. Others have formally evaluated the effect of familiarity with the data presented and individuals' graphical literacy on abilities to make inferences (from both bar and line charts) (Shah and Freedman, 2011). Individual expertise in using each particular type of graph (radar graphs and bar graphs) also influenced the usability of each respective type of graph (Toker et al., 2012).2.1.3. Cognitive traitsIndividual cognitive traits have consistently been shown to influence the understandability of visualizations. Studies have considered a number of related cognitive abilities. Previous studies have found significant effects of cognitive factors such as perceptual speed, verbal working memory, and visual working memory on task performance (Toker et al., 2012; Conati et al., 2014). Other studies have also found an effect of individual perceptual speed, and visual working memory capacity, on which visualizations were most effective (Velez et al., 2005; Conati et al., 2014). For example, participants with low visual working memory were found to perform better with a horizontal layout (Conati et al., 2014). These findings suggest that cognitive traits are particularly promising factors to which presentational choices can be personalized in domains with high cognitive load.
We further motivate the trait we chose to study, working memory, in Section 3.2.2. RQ2: how to present the planThis section describes some of the choices that can be made about how to present a plan: modality, layout, degree of interactivity, aggregation, and emphasis.2.2.1. ModalityPlans can be presented in textual form (Mellish and Evans, 1989; Biundo et al., 2011; Bercher et al., 2014) and as visualizations (Küpper and Kobsa, 2003; Butler et al., 2007; Brown and Paik, 2009; McCurdy, 2009; Billman et al., 2011; de Leoni et al., 2012), with some variants somewhere on a continuum. Given that users vary in their verbal working memory (Toker et al., 2012; Conati et al., 2014), the choice of modality is a candidate design choice. Figure 1 shows a simple plan visualization where nodes describe actions, and edges are transitions to other actions.Figure 1Example of a simple plan visualization.2.2.2. LayoutThe way plans are laid out can reduce or add to their presentational complexity. For example, the visual layout can use a mapping most suitable for the domain. The mapping used has differed between planning domains, for example mapping to a location resource in the domain of logistics (de Leoni et al., 2012), or horizontal alignment according to time for tasks that are constrained by time. For example, Conati et al. (2014) found that users with low visual working memory answered more questions correctly with a horizontal layout than with a vertical layout for complex visualizations (ValueCharts). Other work has compared the same information presented as a bar chart vs. a radar graph (Toker et al., 2012).2.2.3. Degrees of interactivityAs plans get large it may be necessary to occlude parts of a plan to support an overview. The idea of fading (Hothi et al., 2000) and hiding parts (e.g., using stretchtext; Boyle and Encarnacion, 1994) of information presentation (primarily text) has previously been explored in the area of hypertext.
Research on stretchtext has investigated the effectiveness of choosing which information is shown (i.e., “stretched”) and which is not (but available, via selection, i.e., “shrunk”). In the area of graphs, Henry (1992) looked at filtering graphs by content, and Sarkar and Brown (1992) applied fish-eye views to grow or shrink parts of a graph. Other work has supported zooming to manage the visualization of larger plans (Billman et al., 2011; de Leoni et al., 2012).2.2.4. AggregationBy aggregation, we mean the gathering of several things together into one. For example, the making of dough can include several composite steps such as adding flour or mixing the ingredients, but it can also be aggregated into a single step of “making dough.” In other words, an alternative method for dealing with large plans is to support the cognitive mechanism of chunking, by representing several steps by higher-order concepts. For example, Eugenio et al. (2005) found that describing instructions in natural language using aggregated concepts (such as the concept of an engine rather than listing all of its composite parts) led to greater learning outcomes. This method can also be combined with interactivity, using methods such as stretchtext (mentioned above) to contract or expand relevant portions of a plan. Aggregation would also benefit from considering a user's expertise or experience with a particular task (Kobsa, 2001). Several of the surveyed planning systems support concepts similar to main tasks and sub-tasks, where a main task can consist of several sub-tasks, cf. Billman et al. (2011). In contrast, several levels of aggregation could make presentation more complex; e.g., Gruhn and Laue (2007) claim that a greater nesting depth in a model increases its complexity.
Users have also been found to differ in how well they perform tasks using visualizations with nesting or aggregation; users who scored low on the personality trait of internal locus of control performed worse with nested visualizations compared with users who scored highly on the trait (Ziemkiewicz et al., 2011).2.2.5. EmphasisBoth text and graphics can be visually annotated to indicate importance, for example by changing their size or color to indicate relevance (Brusilovsky et al., 1996). Conversely, dimming and hiding have been used to de-emphasize information (Brusilovsky et al., 1996). Color is a particularly good candidate for emphasis; work in visual processing has established that color is processed much more quickly by the visual system than other highly salient visual features such as shapes (Nowell et al., 2002). This fact has implicitly been taken into consideration in interactive learning environments (Freyne et al., 2007; Jae-Kyung et al., 2008). Color highlighting specifically is a recognized technique for adapting hypertext and hypermedia (Brusilovsky et al., 1996; Bra et al., 1999; Jae-Kyung et al., 2008; Knutlov et al., 2009), and is possibly the most commonly used type of emphasis (Butler et al., 2007; Brown and Paik, 2009; McCurdy, 2009; Billman et al., 2011; de Leoni et al., 2012); however, systems have also used other visual encodings to distinguish between different types of information, such as relative differences in sizes and shapes (cf. de Leoni et al., 2012).2.3. RQ3: how to evaluate the effectiveness of presentational choicesThe aims of visualization evaluations have varied (Lam et al., 2012). First of all, it is worth distinguishing between what one is evaluating (e.g., data abstraction design vs. presentational encoding), and how (e.g., lab studies, ethnographic studies) one is evaluating it (Munzner, 2009).
We also follow most closely the sort of evaluations that could be classed under Lam et al.'s header of “evaluating human performance”, which study the effects of an interactive or visual aspect of the tool on people in isolation. To enable this we supply an overview of previously applied evaluation criteria, broadening our view from visualizations to include earlier work on information presented as hypertext or hypermedia.2.3.1. EfficiencyBroadly speaking, efficiency can be defined as “Helping users to perform their tasks faster.” In previous studies efficiency has been measured as time to complete a single task, a set of tasks, or the number of tasks per hour (Campagnoni and Ehrlich, 1989; Egan et al., 1989; McDonald and Stevenson, 1996). Alternative measures, such as the number or types of interactions, have also been used (Kay and Lum, 2004). Efficiency can be affected by the choice of visualization, but also depends on the task and user characteristics (Toker et al., 2012). For example, previous work found that perceptual speed influenced task completion times using bar charts and radar charts. In addition, they found an interaction between visualization type and perceptual speed: the difference in time performance between bar and radar charts decreases as a user's perceptual speed increases. A similar study evaluating a more complex visualization called ValueCharts measured the interaction between task type and various cognitive traits (Conati et al., 2014). They found that level of user expertise, verbal working memory, visual working memory, and perceptual speed all interacted with task type (high level or low level). That is, the influence of individual traits on speed of performance also depended on the type of task being performed. Another study measured the effect of the personality trait of locus of control on the time spent on correct responses (Ziemkiewicz et al., 2011).
Overall, participants who scored low on locus of control were slower at answering questions correctly. There was also an interaction between the trait and question type: for some questions (search tasks), participants who scored low on the trait were as fast as participants who scored highly.2.3.2. EffectivenessIn the most general sense, a system can be said to be effective if it helps a user to produce a desired outcome, i.e., “Helps users to perform their tasks well.” The nature of the tasks naturally varies from system to system. For example, in a decision support context, such as recommender systems, effectiveness has been defined as “Help users make good decisions” with regard to whether to try or buy an item (Tintarev and Masthoff, 2015). For plans, this means successfully completing a task which requires several actions. This task may require completing the actions originally suggested by the system, or a revised sequence resulting from a user scrutinizing and modifying the suggested plan. The most common method for evaluating information visualizations' effectiveness has been to ask participants to answer questions based on the information presented (Campagnoni and Ehrlich, 1989), although this could be said to measure understandability rather than effectiveness (Section 2.3.7). For example, Conati et al. (2014) found that users with low visual working memory answered more questions correctly with a horizontal layout (compared to a vertical layout). The complexity of the questions has varied from simple tasks (e.g., searching for objects with a given property, specifying attributes of an object, or performing count tasks; Stasko et al., 2000; Conati et al., 2014), to more complex ones (e.g., questions covering three or more attributes; Kobsa, 2001; Verbert et al., 2013).
Consequently, studies of both visualization (Stasko et al., 2000; Kobsa, 2001; Indratmo and Gutwin, 2008), and hypertext (Campagnoni and Ehrlich, 1989; Chen and Rada, 1996), have been evaluated in terms of error rates or correct responses. Effectiveness has also been measured as how frequently a task was successful, for a task such as bookmarking at least one interesting item (Verbert et al., 2013). Other measures such as coverage (proportion of the material visited) have also been used (Egan et al., 1989; Instone et al., 1993).2.3.3. Perceived measures of effectiveness and efficiency vs. actual measuresOne way to supplement measures of effectiveness and efficiency is to ask participants to self-report. Self-reported measures have been found to be reliable and sensitive to small differences in cognitive load in some cases (Paas et al., 2003). That is, if participants perform better but take longer, a self-reported measure of high mental effort could confirm that cognitive load was high. There are justified questions about the reliability of self-reported measures as well, for example, people are known to be worse at correctly judging the time a task takes when under heavy cognitive load (Block et al., 2010). This suggests that self-reported measures may be a good supplement, but not a replacement for the actual measures. Visualizations have also been evaluated in terms of perceived effort when performing a task (Waldner and Vassileva, 2014). One commonly used measure for subjective workload is the NASA TLX, which uses six rating scales for: mental and physical demands, performance, temporal demand, effort and frustration (Hart, 2006).2.3.4. Task resumption lagOne criteria that is underrepresented in evaluations of visualizations is task resumption lag. This metric is particularly relevant in real world applications where interruptions are likely, and where the user is likely to be under heavy cognitive load. 
Task interleaving is a phenomena that happens in many multi-tasking applications, and is not limited to the evaluation of plans. This interleaving takes time, and poses a cognitive effort on users. One concrete way the cost of task interleaving has been evaluated is the time it takes to resume a previously interrupted task, or resumption lag (Iqbal and Bailey, 2006). Previous studies have identified a number of factors that influence resumption lag, including how frequent (or repeated) interruptions are, task representation complexity, whether the primary task is visible, whether interleaving happens at task boundaries, and how similar concurrent tasks are to each other (Salvucci et al., 2009).Some see task interleaving as a continuum from few interruptions, to concurrent multi-tasking (where joint attention is required) (Salvucci et al., 2009). A dual-task paradigm is a procedure in experimental psychology that requires an individual to perform two tasks simultaneously, to compare performance with single-task conditions (Knowles, 1963; Turner and Engle, 1989); (Damos, 1991, p. 221). When performance scores on one and/or both tasks are lower when they are done simultaneously compared to separately, these two tasks interfere with each other, and it is assumed that both tasks compete for the same class of information processing resources in the brain. Examples of where it may be important to measure task resumption lag include piloting, driving, and radar operation. For example, a pilot executing a procedure may be interrupted by the control center.A dual task methodology has been previously used to evaluate information visualization. Visual embellishments (e.g., icons and graphics) in terms of memorability (both short and long term memory), search efficiency, and concept grasping (Borgo et al., 2012).2.3.5. SatisfactionSatisfaction gives a measure of how much users like a system or its output (i.e., the presented plans). 
The most common approach used is a questionnaire evaluating subjective user perceptions on a numeric scale, such as how participants perceived a system (Bercher et al., 2014). It is also possible to focus on satisfaction with specific aspects of the interface such as how relevant users find different functionalities (Apted et al., 2003; Bakalov et al., 2010), and how well each functionality was implemented (Bakalov et al., 2010). A variant is to compare satisfaction with different variants or views of a system (Mabbott and Bull, 2004). Individual expertise in using particular types of graphs (radar graphs and bar graphs) has been found to influence preference for which type of graph people perceived as easier to use (Toker et al., 2012).2.3.6. MemorabilityMemorability is the extent to which somebody can remember a plan. This can be tested through recall (can the user reconstruct the plan) and through recognition (can the user recognize which plan is the one that they saw previously). Dixon et al (Dixon et al., 1988, 1993) showed that the memorability of plans can be affected by the representations used (in their case the sentence forms used). Kliegel et al. (2000) found that participants' working memory and plan complexity influence plan memorability (which they called plan retention) and plan execution. Recall of section headers (as a measure of incidental learning) have also been used (Hendry et al., 1990). In measuring memorability, it may be important to filter participants on general memory ability, e.g., excluding participants with exceptionally good or poor memory. In domains where users repeat a task it may also be valuable to measure performance after a certain training period as performance on memorability has been found to stabilize after training (Schmidt and Bjork, 1992). Measurements of memorability are likely to show improved performance with rehearsal. 
Previous evaluations of information visualizations have considered both short term and and long term memorability (Borgo et al., 2012).2.3.7. UnderstandabilityUnderstandability (also known as ComprehensibilityBateman et al. (2010)) is the extent to which the presented information is understood by participants. Understandability of information can be measured by asking people to summarize its contents (Bloom et al., 1956), answer questions about its contents (called Correctness of Understanding by Aranda et al., 2007), or by using a subjective self-reporting measure of how easy it is to understand (Hunter et al., 2012). For the latter, a distinction is sometimes made between confidence, the subjective confidence people display regarding their own understanding of the representation, and perceived difficulty, the subjective judgement of people regarding the ease to obtain information through the representation (Aranda et al., 2007). Biundo et al. (2011) and Bercher et al. (2014) evaluated a natural language presentation and explanation of a plan. The evaluation task was to connect multiple home appliances. The main evaluation criteria was primarily perceived certainty of correct completion (confidence aspect of understandability), but they also measured overall perceptions of the system (satisfaction). A study evaluating perceived ease-of-use found an effect of verbal working memory on ease-of-use for bar charts (Toker et al., 2012).As plan complexity impacts understandability, there is also research on measuring understandability by analysing this complexity, for example in business process models (Ghani et al., 2008) and workflows (Pfaff et al., 2014). Aranda et al. 
(2007)'s framework for the empirical evaluation of model comprehensibility highlights four variables that affect comprehensibility, namely language expertise (previous expertise with the notation/representation being studied), domain expertise (previous expertise with the domain being modeled), problem size (the size of the domain), and the type of task (for example, whether readers need to search for information, or integrate information in their mental model). One of the tasks mentioned by Aranda et al. (2007), namely information retention, is covered by our Memorability metric.2.3.8. Situational awarenessSituational awareness is the users' perception of environmental elements with respect to time and space, the comprehension of their meaning, and the projection of their status after some variable has changed (Endsley and Jones., 2012). It is often classified on three levels (Endsley and Jones., 2012): Level 1—the ability to correctly perceive information; Level 2—the ability to comprehend the situation, and Level 3—projecting the situation into the future. Abilities to make decisions in complex, dynamic areas are therefore concerned with errors in situational awareness. In particular, Level 3 expands the situational awareness beyond the regular scope of understandability (c.f., Section 2.3.7). Adagha et al. (2015) makes a case that standard usability metrics are inadequate for evaluating the effectiveness of visual analytics tools. In a systematic review of 470 papers on decision support in visual analytics they identify attributes of visual analytics tools, and how they were evaluated. Their findings imply a limited emphasis on the incorporation of Situational Awareness as a key attribute in the design of visual analytics decision support tools, in particular with regard to supporting future scenario projections. Situational awareness is strongly linked to what Borgo et al. 
(2012) call concept grasping and define as: “more complex cognitive processes of information gathering, concept understanding and semantic reasoning.”2.3.9. Trade-offs between metricsThe metrics mentioned above provide a useful way of thinking about ways of evaluating visualizations. However, it is unlikely that any choices about how to present a plan will improve performance on all these metrics. For example, effectiveness and efficiency do not always correlate. For example, high spatial ability has been found to be correlated with accuracy on three-dimensional visualization tasks, but not with time (Velez et al., 2005). One method that has been used is to record the time for successful trials only (Ziemkiewicz et al., 2011). Similarly, a meta-review of effectiveness and efficiency in hypertext found that the overall performance of hypertext users tended to be more effective than that of non-hypertext users, but that hypertext users were also less efficient than non-hypertext users (Chen and Rada, 1996). This is also reflected in the literature in psychology where a single combined measure of effectiveness and efficiency has been found to have very limited use (Bruyer and Brysbaert, 2011).Another useful distinction is between memorability at first exposure, and long term effectiveness. In some applications, it may be important for a user to quickly learn and remember the contents and plans. In others, the task may repeat many times and it is more important that effectiveness stabilizes at an acceptable level after a degree of training.Two other metrics that are known to conflict with regard to information presentation are effectiveness and satisfaction. For example, in one study, while participants subjectively preferred a visual representation (Satisfaction), they made better decisions (Effectiveness) using a textual representation of the same information (Law et al., 2005). | [
"26357185",
"18522052",
"8364535",
"23068882",
"19884961",
"11105530",
"14042898",
"22144529",
"16244840",
"15676313",
"13310704",
"19834155",
"25164403"
] | [
{
"pmid": "26357185",
"title": "An Empirical Study on Using Visual Embellishments in Visualization.",
"abstract": "In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of... |
BMC Medical Research Methodology | 27875988 | PMC5118882 | 10.1186/s12874-016-0259-3 | Common data elements for secondary use of electronic health record data for clinical trial execution and serious adverse event reporting | BackgroundData capture is one of the most expensive phases during the conduct of a clinical trial and the increasing use of electronic health records (EHR) offers significant savings to clinical research. To facilitate these secondary uses of routinely collected patient data, it is beneficial to know what data elements are captured in clinical trials. Therefore our aim here is to determine the most commonly used data elements in clinical trials and their availability in hospital EHR systems.MethodsCase report forms for 23 clinical trials in differing disease areas were analyzed. Through an iterative and consensus-based process of medical informatics professionals from academia and trial experts from the European pharmaceutical industry, data elements were compiled for all disease areas and with special focus on the reporting of adverse events. Afterwards, data elements were identified and statistics acquired from hospital sites providing data to the EHR4CR project.ResultsThe analysis identified 133 unique data elements. Fifty elements were congruent with a published data inventory for patient recruitment and 83 new elements were identified for clinical trial execution, including adverse event reporting. Demographic and laboratory elements lead the list of available elements in hospitals EHR systems. For the reporting of serious adverse events only very few elements could be identified in the patient records.ConclusionsCommon data elements in clinical trials have been identified and their availability in hospital systems elucidated. Several elements, often those related to reimbursement, are frequently available whereas more specialized elements are ranked at the bottom of the data inventory list. 
Hospitals that want to obtain the benefits of reusing data for research from their EHR are now able to prioritize their efforts based on this common data element list.Electronic supplementary materialThe online version of this article (doi:10.1186/s12874-016-0259-3) contains supplementary material, which is available to authorized users. | Related workIn the EHR4CR project, data inventories for ‘protocol feasibility’ [24] and ‘patient identification and recruitment’ [23] have been performed by Doods et al. There, 75 data elements were identified for feasibility assessment and 150 data elements for patient identification and recruitment. Despite the differing scenarios, a comparison with the current inventory for the execution and SAE reporting in clinical trials has shown that 50 data elements have already been identified and 83 are new data elements.CDISC, C-Path, NCI-EVS and CFAST had introduced an initiative on ‘Clinical Data Standards’ to create industry-wide common standards for data capture in clinical trials to support the exchange of clinical research and metadata [32]. This initiative defines common data elements for different therapeutic areas. Currently, traumatic brain injury, breast cancer, COPD, diabetes, tuberculosis, etc. are covered. In addition, the CDISC SDTM implementation guideline contains a set of standardized and structured data elements for each form domain. The aim of this initiative is similar to ours concerning the identification of most frequently used data elements for clinical trials. Nevertheless, the focus of our work is different and goes beyond this initiative in terms of determining the availability and quality of data within EHR systems.Köpcke et al. have analyzed eligibility criteria from 15 clinical trials and determined the presence and completeness within the partners EHR systems [33]. Botsis et al. examined the incompleteness rate of diagnoses in pathology reports resulting in 48.2% (1479 missing of 3068 patients) [25]. 
Both publications show that re-use of EHR data relies on the availability of (1) data fields and (2) captured patient values. | [
"20798168",
"20803669",
"19151888",
"25220487",
"21888989",
"23266063",
"23690362",
"22733976",
"19828572",
"25123746",
"21324182",
"19151883",
"25463966",
"17149500",
"26250061",
"22308239",
"25991199",
"24410735",
"21347133",
"23514203",
"27302260",
"16160223",
"2232680... | [
{
"pmid": "20798168",
"title": "A progress report on electronic health records in U.S. hospitals.",
"abstract": "Given the substantial federal financial incentives soon to be available to providers who make \"meaningful use\" of electronic health records, tracking the progress of this health care techno... |
Scientific Reports | 27876847 | PMC5120294 | 10.1038/srep37470 | A simplified computational memory model from information processing | This paper is intended to propose a computational model for memory from the view of information processing. The model, called simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network by abstracting memory function and simulating memory information processing. At first meta-memory is defined to express the neuron or brain cortices based on the biology and graph theories, and we develop an intra-modular network with the modeling algorithm by mapping the node and edge, and then the bi-modular network is delineated with intra-modular and inter-modular. At last a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with the memory phenomena from information processing view. 
| Related WorkIn traditional memory studies memory has been accepted as network10, and visual modeling has been used from psychological to neural, physiological, anatomical, computational etc.12345678910 around neuron, cortex, physical signal, chemical signal and information processing910111213; various memory networks were modeled from structural characters or functional characters such as analogy, dimensionality reduction12, classification14 and so on to study the associative16, free recall15, coding16, retrieval efficiency13 etc., and many different structures of the memory networks were achieved such as the modular6, the hierarchical57, the tree17 and the small world network101819; the results simulated the structure, the function or the behavior partly or the whole, and some of them were hybrid.Aimed at structural modeling, Renart and Rolls6 reported a multi-modular neural network model to simulate the associative memory properties from the neuron perspective in 1999; they used the tri-modular architecture to simulate multiple cortical modules and developed a bi-modular recurrent associative network from neuron and cortical levels for brain memory functions; in the network the local functional features were implemented by intra-modular connections, and the global modular information features were implemented by inter-modular connections. Ten years later, Rolls9 continued the computational theory of episodic memory formation based on the hippocampus and proposed the architecture of an auto-association or attractor neural network; in their results modular is a remarkable character. Meunier20 made a specialized study of modular for memory. Hierarchical5 is another remarkable character in memory network. Cartling7 had put forward a series of memory models and discussed the dynamics of the hierarchical associative process from the neuron; he emphasized the storage mode and information retrieval and mentioned the graph theory for memory network. 
Besides, Fiebig21 put forward a three-stage neural network model from autonomous reinstatement dynamics to discuss the memory consolidation.Aimed at functional modeling, Lo22 used a mathematical model to simulate biological neural networks and proposed a functional temporal hierarchical probabilistic associative memory network. Lo simulated Hebb’s rule by a recurrent multilayer network and used a dendritic tree structure to simulate the neuron information input, moreover, he discussed multiple and/or hierarchical architecture. Polyn15 discussed the free recall process and reported the retrieval maintenance. Bahland10 delineated an efficient associative memory model by local connections and got a small-world network structure with high retrieval performance.Of course, every structural or functional model is not independent, it is the integration of structural, functional and behavioral; the results fit for the computational modeling. Xie12 selected higher firing rate neuron to set up a low-dimensional network and discussed the functional connections by graph theory. Lu19 got a neuronal network of small-world from multi-electrode recordings related to the working memory tasks in the rat. Xu23 presented a simplified memory network aimed at pattern formation; they used two loops of coupled neuron to analyze the process from short-term to long-term memory theoretically.Especially, memory simulating has attracted close attention aimed at information representation from structure to function, such as retrieval and efficiency. Just like Mizraji1 said, “Cognitive functions rely on the extensive use of information stored in the brain, and the searching for the relevant information for solving some problem is a very complex task”; they proposed a multi-modular network, which processed the context-dependent memory as the basic model; this model was developed according to the information query from brain dynamics searching perspective. 
Miyamoto24 reviewed the memory encoding and retrieval from the neuroimaging in primates. Tsukada11 used the real neuron as the basic structure to set up an associative memory model, which could realize successive retrieval. Bednar13 combined cortical structure and self-organizing functions to discuss the computing efficiency of memory network. Sacramento17 used a hierarchical structure to connect the neuron to improve the retrieval efficiency, and set up a tree associative memory network. Anishchenko18 pointed that the metric structure of synaptic connections was vital for network capacity to retrieve memories. Rendeiro14 improved memory efficiency by classification, but the model was hierarchical tree structure. Snaider25 set up an extended sparse distributed memory network using large word vectors as data structure, and got a high efficient auto-associative storage method using tree as data structure.From the models above, we can find the memory networks have many remarkable structures and characters, such as modular, hierarchical, small world and so on2; with these characters the information processing can be high efficiency. It is ideal but difficult to model the memory thoroughly from the neuron level91124. Flavell4 ever proposed meta-memory in 1971, and assumed that meta-memory was a special type of memory and represented the memory of memory, i.e., the reorganization, evaluation and monitor processes of memory in humanity itself. We introduced a logic conception of meta-memory2 to avoid the restriction of micro-scale. Meta-memory is an abstract definition, it includes a memory unit in the memory task, and it reflects the function of a neuron but not a neuron. Meta-memory represents an independent memory task unit or integrated information storage in our definition. 
Meta-memory node can be defined by different scales, for example, a meta-memory can be the memory of a number, a letter or a picture, thus, a neuron or cortices can be used as the meta-memory node in the memory network.We ever put forward an initial memory model with small world characters based on the meta-memory2. In that model the cluster coefficient was discussed detailed but the algorithms were immature with ambiguous memory functions, in this paper we improve our retrieval algorithm and clarify the corresponding relation between the biological structure and information processing taking the word memory as example in order to refine our model in accordance with the memory functions precisely, such as the forgetting and association, which increases the map and understanding of the model. | [
"19496023",
"11305890",
"3676355",
"8573654",
"19190637",
"20307583",
"24427215",
"25150041",
"19159151",
"20970304",
"17320359",
"24662576",
"22132040",
"25192634",
"9623998"
] | [
{
"pmid": "19496023",
"title": "Dynamic searching in the brain.",
"abstract": "Cognitive functions rely on the extensive use of information stored in the brain, and the searching for the relevant information for solving some problem is a very complex task. Human cognition largely uses biological search ... |
Scientific Reports | 27905523 | PMC5131304 | 10.1038/srep38185 | Test of quantum thermalization in the two-dimensional transverse-field Ising model | We study the quantum relaxation of the two-dimensional transverse-field Ising model after global quenches with a real-time variational Monte Carlo method and address the question whether this non-integrable, two-dimensional system thermalizes or not. We consider both interaction quenches in the paramagnetic phase and field quenches in the ferromagnetic phase and compare the time-averaged probability distributions of non-conserved quantities like magnetization and correlation functions to the thermal distributions according to the canonical Gibbs ensemble obtained with quantum Monte Carlo simulations at temperatures defined by the excess energy in the system. We find that the occurrence of thermalization crucially depends on the quench parameters: While after the interaction quenches in the paramagnetic phase thermalization can be observed, our results for the field quenches in the ferromagnetic phase show clear deviations from the thermal system. These deviations increase with the quench strength and become especially clear comparing the shape of the thermal and the time-averaged distributions, the latter ones indicating that the system does not completely lose the memory of its initial state even for strong quenches. We discuss our results with respect to a recently formulated theorem on generalized thermalization in quantum systems. | Related workRecently an exact theorem on generalized thermalization in D-dimensional quantum systems in the thermodynamic limit has been formulated55. The theorem states that generalized thermalization can be observed if the state of the system is algebraically sizably clustering. It also holds for exponentially sizably clustering states. Then the stationary state of the system can be described by a GGE which has to take into account all local and quasilocal charges of the system. 
For non-integrable systems, for which the total energy is the only conserved quantity, the generalized thermalization reduces to thermalization with a stationary state according to the CGE. We now discuss the exact theorem with respect to the 2D-TFIM. Considering the 2D-TFIM one has to distinguish between the ferromagnetic and the paramagnetic phase. In the paramagnetic phase the 2D-TFIM is gapped and there is no symmetry breaking both for finite system sizes as well as in the thermodynamic limit. As the ground state of gapped quantum systems is sizably exponentially clustering8384 the exact theorem can be applied to the 2D-TFIM in the thermodynamic limit after the interaction quenches in the paramagnetic phase and predicts thermalization. In our numerical studies we have indeed observed a very good agreement both between the time-averaged observables and their thermal counterparts as well as between the distributions for small quenches, i.e. large ratios h/J, also for the finite system sizes that we can simulate. For larger quenches closer to the phase transition we have found deviations between the time averages and the thermal values, but the finite size scaling shows that they decrease with the system size. Our results for the interaction quenches in the paramagnetic phase are thus in agreement with the exact theorem. In the ferromagnetic phase on the other hand the Hamiltonian of the system is not gapped in the thermodynamic limit. The spin flip symmetry is spontaneously broken and long-range order exists, so that all spins of the system are correlated to each other and the correlations do not cluster. An analytic expression for the shape of the decay of the correlations has not been found yet, thus it cannot be decided whether the exact theorem can be applied in the ferromagnetic phase or not. | [
"17155348",
"17358832",
"18421349",
"19792288",
"17677755",
"17501552",
"21702628",
"18518309",
"18232957",
"19392341",
"19392319",
"20868062",
"11019309",
"21405280",
"23829756",
"23952371",
"26551800",
"19113246",
"18352169",
"21231563",
"18517843",
"19792519",
"2358135... | [
{
"pmid": "17155348",
"title": "Effect of suddenly turning on interactions in the Luttinger model.",
"abstract": "The evolution of correlations in the exactly solvable Luttinger model (a model of interacting fermions in one dimension) after a suddenly switched-on interaction is analytically studied. Whe... |
BioData Mining | 27980679 | PMC5139023 | 10.1186/s13040-016-0116-2 | MISSEL: a method to identify a large number of small species-specific genomic subsequences and its application to viruses classification | BackgroundContinuous improvements in next generation sequencing technologies led to ever-increasing collections of genomic sequences, which have not been easily characterized by biologists, and whose analysis requires huge computational effort. The classification of species emerged as one of the main applications of DNA analysis and has been addressed with several approaches, e.g., multiple alignments-, phylogenetic trees-, statistical- and character-based methods.ResultsWe propose a supervised method based on a genetic algorithm to identify small genomic subsequences that discriminate among different species. The method identifies multiple subsequences of bounded length with the same information power in a given genomic region. The algorithm has been successfully evaluated through its integration into a rule-based classification framework and applied to three different biological data sets: Influenza, Polyoma, and Rhino virus sequences.ConclusionsWe discover a large number of small subsequences that can be used to identify each virus type with high accuracy and low computational time, and moreover help to characterize different genomic regions. Bounding their length to 20, our method found 1164 characterizing subsequences for all the Influenza virus subtypes, 194 for all the Polyoma viruses, and 11 for Rhino viruses. 
The abundance of small separating subsequences extracted for each genomic region may be an important support for quick and robust virus identification.Finally, useful biological information can be derived by the relative location and abundance of such subsequences along the different regions.Electronic supplementary materialThe online version of this article (doi:10.1186/s13040-016-0116-2) contains supplementary material, which is available to authorized users. | Related workThe above problem contains several complex aspects. The main one is that searching for many subsequences with desirable properties is much more difficult than searching for a single optimal one. Additionally, the dimensions of the problem to be solved are typically very large (i.e., DNA sequences with thousands of bases). The complexity of the problem does not suggest a straightforward deployment of a mathematical optimization model, and therefore we consider a meta-heuristic approach that is much faster than enumeration, and sufficiently precise and time-effective.
Meta-heuristics are nature-inspired algorithms that can be suitably customized to solve complex and computationally hard problems, and can be inspired to different principles, such as Ant colony optimization [33], Genetic Algorithms [34], Simulated annealing [35], Tabu Search [36], Particle swarm optimization [37]. Several authors in the literature considered similar problems, although they cannot be reconducted to the framework of multiple solutions that we adopt here.Recent studies [38–40] focused on problems with multiple objective functions, often used as a tool to counterbalance the measurement bias affecting solutions based on a single objective functions, or to mitigate the effect of noise in the data. Deb et al. [41] also approached the issue of identifying gene subsets to achieve reliable classification on available disease samples by modeling it as a multi-objective optimization problem. Furthermore, they proposed a multimodal multi-objective evolutionary algorithm that finds multiple, multimodal, non-dominated solutions [42] in one single run. Those are defined as solutions that have identical objective values, but differ in their phenotypes. Other works [43, 44] pointed to multiple membership classification, dealing with the fitting of complex statistical models to large data sets. Again, Liu et al. [45] proposed a subset gene identification consisting of multiple objectives, but, differently from Deb et al. [41], they scalarize the objective vector into one objective that is solved by using a parallel genetic algorithm, in order to avoid expensive computing cost. Kohavi et al. [46] addressed the problem of searching for optimal gene subsets of the same size, emphasizing the use of wrapper methods for the features selection step. Rather than trying to maximize accuracy, they identified which features were relevant, and used only those features during learning. 
The goal of our work is again different: to extract information on interesting portions of the genomic sequences by taking equivalent subsequences into account. The rest of the paper is organized as follows: in Section “Materials and methods”, we provide a detailed description of the algorithm. In Section “Results and discussion”, we report and discuss the application of our algorithm to extract equivalent and multiple subsequences from three experimental data sets of virus sequences, described at the beginning of that section, and we describe the results of the classification analysis of the species of those samples. Finally, in Section “Conclusions”, we draw the conclusions of the work from both the algorithmic and the biological points of view, together with its future extensions. | [
"270744",
"3447015",
"18853361",
"9254694",
"17060194",
"15961439",
"25621011",
"19857251",
"23012628",
"22214262",
"23340592",
"24127561",
"12753540",
"21852336",
"23625169",
"17813860",
"14642662",
"17008640",
"3037780",
"17623054",
"19960211",
"19828751",
"20668080",
... | [
{
"pmid": "270744",
"title": "Phylogenetic structure of the prokaryotic domain: the primary kingdoms.",
"abstract": "A phylogenetic analysis based upon ribosomal RNA sequence characterization reveals that living systems represent one of three aboriginal lines of descent: (i) the eubacteria, comprising a... |
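The simulated-annealing flavour of meta-heuristic search named in this related-work entry can be sketched on a toy version of the separating-subsequence problem. This is an illustrative sketch only, not the authors' algorithm: the example sequences, the single-base mutation move, and the separation score are all assumptions made for the example.

```python
import math
import random

def separation_score(kmer, class_a, class_b):
    """A k-mer separates two sequence classes well when it occurs in a
    large fraction of one class and a small fraction of the other."""
    freq_a = sum(kmer in s for s in class_a) / len(class_a)
    freq_b = sum(kmer in s for s in class_b) / len(class_b)
    return abs(freq_a - freq_b)

def anneal_kmer(class_a, class_b, k=4, steps=2000, seed=0):
    """Simulated annealing over the space of k-mers: propose a single-base
    mutation, always accept improvements, and accept worsening moves with a
    probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    bases = "ACGT"
    current = "".join(rng.choice(bases) for _ in range(k))
    best = current
    for step in range(steps):
        temp = max(1e-3, 1.0 - step / steps)
        candidate = list(current)
        candidate[rng.randrange(k)] = rng.choice(bases)
        candidate = "".join(candidate)
        delta = (separation_score(candidate, class_a, class_b)
                 - separation_score(current, class_a, class_b))
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            current = candidate
        if (separation_score(current, class_a, class_b)
                > separation_score(best, class_a, class_b)):
            best = current
    return best
```

Running several restarts with different seeds and keeping every k-mer above a score threshold would yield multiple near-equivalent subsequences, in the spirit of the multiple-solutions framework the entry emphasizes, rather than a single optimum.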
Scientific Reports | 27929098 | PMC5144062 | 10.1038/srep38433 | EP-DNN: A Deep Neural Network-Based Global Enhancer Prediction Algorithm | We present EP-DNN, a protocol for predicting enhancers based on chromatin features, in different cell types. Specifically, we use a deep neural network (DNN)-based architecture to extract enhancer signatures in a representative human embryonic stem cell type (H1) and a differentiated lung cell type (IMR90). We train EP-DNN using p300 binding sites, as enhancers, and TSS and random non-DHS sites, as non-enhancers. We perform same-cell and cross-cell predictions to quantify the validation rate and compare against two state-of-the-art methods, DEEP-ENCODE and RFECS. We find that EP-DNN has superior accuracy with a validation rate of 91.6%, relative to 85.3% for DEEP-ENCODE and 85.5% for RFECS, for a given number of enhancer predictions and also scales better for a larger number of enhancer predictions. Moreover, our H1 → IMR90 predictions turn out to be more accurate than IMR90 → IMR90, potentially because H1 exhibits a richer signature set and our EP-DNN model is expressive enough to extract these subtleties. Our work shows how to leverage the full expressivity of deep learning models, using multiple hidden layers, while avoiding overfitting on the training data. We also lay the foundation for exploration of cross-cell enhancer predictions, potentially reducing the need for expensive experimentation. | Related Work: Several computational methods that use histone modification signatures to identify enhancer regions have been developed. Won et al. proposed the use of Hidden Markov Models (HMMs) to predict enhancers using three primary histone modifications [30]. Firpi et al. focused on the importance of recognizing the histone modification signals through data transformation and employed Time-Delayed Neural Networks (TDNNs) using a set of histone marks selected through simulated annealing [31]. Fernández et al.
used Support Vector Machines (SVMs) on an optimized set of histone modifications found through genetic algorithms [32]. RFECS (Random Forest based Enhancer identification from Chromatin States) addressed the limited number of training samples in previous approaches by using Random Forests (RFs) to determine the optimal set of histone modifications for predicting enhancers [33]. We provide a comparison of some of the recent methods of enhancer prediction in Table 1, comparing the following enhancer prediction protocols: RFECS [34], DEEP-ENCODE [35], ChromaGenSVM [32], CSI-ANN [31], and HMM [30]. In addition to histone modifications, recent work has also used other input features to classify regulatory sites in DNA. For example, [36] is a complementary line of work in which the authors further classify enhancers as strong or weak enhancers. For their input features, they use k-mers of DNA nucleotides, while we use histone modification patterns. The results are not directly comparable to ours because their ultimate classification task is also different. Further, at a finer level of detail, their classification ignores whether an enhancer is poised or active, and considers the simpler, two-way classification of strong versus weak enhancers. Another recent paper shows how to input biological sequences into machine learning algorithms [37]. The difficulty arises from the fact that ML algorithms need vectors as inputs, and a straightforward conversion of the biological sequence into a vector will lose important information, such as the ordering effect of the basic elements [38] (nucleotides for DNA, amino acids for protein). Prior work developed the idea of generating pseudo components from the sequences that can be fed into the ML algorithm. The above-mentioned paper unifies the different approaches for generating pseudo components from DNA sequences, RNA sequences, and protein sequences. This is a powerful and general-purpose method. In our work, however, we do not need this generality.
We feed in the 24 different histone modification markers and, by binning, we build features corresponding to adjacent genomic regions for each marker (20 bins per histone modification marker). We shift the window gradually, thus capturing the overlapping regions among contiguous windows, and the DNN extracts the relevant ordering information thanks to this overlap. Further, in repDNA [39], the authors consider DNA sequences alone. repDNA calculates a total of 15 features that can be fed into ML algorithms. The 15 features fall into 3 categories: nucleic acid composition; autocorrelation features describing the level of correlation between two oligonucleotides along a DNA sequence in terms of their specific physicochemical properties; and pseudo nucleotide composition features. | [
"20025863",
"18851828",
"6277502",
"12837695",
"21295696",
"21106903",
"22955616",
"25693562",
"21097893",
"23503198",
"11526400",
"24679523",
"22169023",
"21572438",
"24614317",
"19212405",
"22534400",
"19854636",
"15345045",
"11559745",
"9445475",
"17277777",
"19094206"... | [
{
"pmid": "20025863",
"title": "Enhancers: the abundance and function of regulatory sequences beyond promoters.",
"abstract": "Transcriptional control in mammals and Drosophila is often mediated by regulatory sequences located far from gene promoters. Different classes of such elements - particularly en... |
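The binning and sliding-window feature construction described for EP-DNN (24 histone marks, 20 bins around each genomic position, windows shifted gradually so that they overlap) can be sketched as follows. The array shapes and function names are illustrative assumptions, presuming each mark's signal has already been binned into fixed-width read-count bins.

```python
import numpy as np

def window_features(tracks, center_bin, bins_per_side=10):
    """One training example: for every chromatin-mark track, take the 20
    bins (10 on either side) around the window centre and concatenate
    them, giving n_marks * 20 features per genomic position."""
    lo, hi = center_bin - bins_per_side, center_bin + bins_per_side
    return np.concatenate([track[lo:hi] for track in tracks])

def sliding_examples(tracks, step=1, bins_per_side=10):
    """Shift the window gradually along the genome so that consecutive
    examples overlap; the overlap lets the model see ordering context."""
    n_bins = len(tracks[0])
    centers = range(bins_per_side, n_bins - bins_per_side, step)
    return np.stack([window_features(tracks, c, bins_per_side)
                     for c in centers])
```

With 24 marks and 20 bins per mark, each example is a 480-dimensional vector, matching the feature layout the entry describes.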
Frontiers in Psychology | 28018271 | PMC5149550 | 10.3389/fpsyg.2016.01936 | Comparing a Perceptual and an Automated Vision-Based Method for Lie Detection in Younger Children | The present study investigates how easily it can be detected whether a child is being truthful or not in a game situation, and it explores the cue validity of bodily movements for such type of classification. To achieve this, we introduce an innovative methodology – the combination of perception studies (in which eye-tracking technology is being used) and automated movement analysis. Film fragments from truthful and deceptive children were shown to human judges who were given the task to decide whether the recorded child was being truthful or not. Results reveal that judges are able to accurately distinguish truthful clips from lying clips in both perception studies. Even though the automated movement analysis for overall and specific body regions did not yield significant results between the experimental conditions, we did find a positive correlation between the amount of movement in a child and the perception of lies, i.e., the more movement the children exhibited during a clip, the higher the chance that the clip was perceived as a lie. The eye-tracking study revealed that, even when there is movement happening in different body regions, judges tend to focus their attention mainly on the face region. This is the first study that compares a perceptual and an automated method for the detection of deceptive behavior in children whose data have been elicited through an ecologically valid paradigm. | Related Work: Children’s Lying Behavior: Previous research suggests that children between 3 and 7 years old are quite good manipulators of their non-verbal behavior when lying, which makes the discrimination between truth-tellers and lie-tellers very difficult to accomplish (Lewis et al., 1989; Talwar and Lee, 2002a; Talwar et al., 2007).
Most studies report that the detection of children’s lies is around or slightly above chance level, comparable to what has been claimed for adults (Bond and Depaulo, 2006; Edelstein et al., 2006). Yet, the extent to which children display non-verbal cues could be related to the kind of lie and to the circumstances under which it is told. There is evidence that children start lying from a very young age, as early as 2 1/2 years old, and lie-tellers between 3 and 7 years old are almost indistinguishable from truth-tellers (Newton et al., 2000; Talwar and Lee, 2002a). Around 3 years old, children are already able to tell “white lies”; before that, they mainly lie for self-serving purposes, such as to avoid punishment or to win a prize (Talwar and Lee, 2002b). Nevertheless, some research suggests that lie-tellers tend to exhibit slightly more positive non-verbal behaviors, such as smiles, relaxed and confident facial expressions, and a positive tone of voice (Lewis et al., 1989). However, other research suggests that children have poor control of their non-verbal behavior, which points in directions opposite to and in conflict with what has been previously reported (Vrij et al., 2004; McCarthy and Lee, 2009). For instance, one study reported that children between 7 and 9 years old show less eye contact when lying than when telling the truth, while older children show longer eye contact, which is similar to what adults exhibit during a lying situation (McCarthy and Lee, 2009).
Another study suggests a decrease of movement during a lie-tell, particularly in the hands and fingers (Vrij et al., 2004). Furthermore, it has been reported that children tend to leak more cues to deception when they are more aware of their deceptive attempt: for example, children’s second attempts to lie (after having been told to repeat a previous lie) reveal more non-verbal cues in their facial expressions compared to their first attempts (Swerts, 2012; Swerts et al., 2013). These findings, according to the authors, might be explained by the ironic effect of lying, which states that lying becomes more difficult, and most likely less successful, if a person becomes more conscious of his or her behavior when trying to intentionally produce a deceiving message. Non-verbal Cues to Lying: Because people are often highly skilled deceivers, accurate lie detection is in general very difficult for human judges. This means that lie detection accuracy is usually around or slightly above chance level (Bond and Depaulo, 2006; Porter and Ten Brinke, 2008; ten Brinke et al., 2012; Serras Pereira et al., 2014). However, most researchers in this field share the idea that there are certain verbal and non-verbal cues that may uncover whether a person is lying or not, and that the accuracy of deception detection is higher if both non-verbal and verbal cues are taken into account (Vrij et al., 2004). One line of research has focused on finding these cues by manipulating levels of cognitive load during a lie-tell, which makes lying more difficult and probably facilitates the emergence of deception cues (Vrij et al., 2006, 2008). Other studies have focused on specific non-verbal cues of deception, which can disclose signals related to deception, such as stress and anxiety (DePaulo, 1988; Bond, 2012).
In addition, one can sometimes distinguish truth-tellers from liars on the basis of particular micro-expressions, such as minor cues in the mouth or eye region (Ekman, 2009; Swerts, 2012), like pressed lips and certain types and frequencies of smiles (DePaulo et al., 2003). However, by their specific nature, such micro-expressions are so subtle, and last only a few milliseconds, that they might escape a person’s attention, so that deception detection tends to be a very difficult task. Another study suggests that emotional leakage is stronger in masked high-intensity expressions than in low-intensity ones, in both the upper and lower face (Porter et al., 2012). Furthermore, the strongest emotional leakage occurs for fear, whereas happiness shows the smallest emotional leakage. Despite the effort put into finding deception cues in the face, results from many studies are frequently discrepant, and the supposed cues are often very subtle in nature (Feldman et al., 1979). Additionally, it has been argued that eye gaze can also be a cue for deception, although the results from different studies are contradictory (Mann et al., 2002, 2004, 2013). According to one study, liars deliberately showed more eye contact than truth-tellers, whereas gaze aversion did not differ between truth-tellers and lie-tellers (Mann et al., 2013). In another study, deception seems to be correlated with a decrease in blink rate, which appears to be associated with an increase in cognitive load (Mann et al., 2002). However, in a different study, the opposite result has been reported, emphasizing that blink rate rises while masking a genuine emotion in a deceptive expression (Porter and Ten Brinke, 2008). Body movement has also been suggested as a source for lie detection, but there are contradictory statements about the usefulness of this feature.
On the one hand, some literature states that when lying, people tend to constrain their movements, even though it is unclear whether these restrictions are related to strategic overcompensation (DePaulo, 1988) or to the avoidance of deception leakage cues (Burgoon, 2005). In a similar vein, another study measured the continuous body movement of people in spontaneous lying situations, and found that those who decided to lie showed significantly reduced bodily movement (Eapen et al., 2010). On the other hand, a study based on a dynamical systems perspective suggested the existence of continuous fluctuations of movement in the upper face, and moderately in the arms, during a deceptive circumstance, which can be discriminated by dynamical properties of lower stability but larger complexity (Duran et al., 2013). Although these distinctions are present in the upper face, this study failed to find a significant difference in the total amount of movement between the deceptive and truthful conditions. Moreover, when considering hand movements, another study found that lie-tellers tend to make more speech-prompting gestures, while truth-tellers make more rhythmic pulsing gestures (Hillman et al., 2012). In sum, despite the significant research on non-verbal cues for lie detection in recent years, results still seem to be very inconsistent and discrepant. Automated Methods for Deception Detection: In the past few years, several efforts have been made to develop efficient methods for deception detection. Even though there is no clear consensus on the importance of non-verbal cues (see previous section), there has been a specific interest in the human face as the main source of cues for deception detection (Ekman, 2009; ten Brinke et al., 2012; Swerts et al., 2013).
Many of these methods are based on the Facial Action Coding System (FACS) (Ekman and Friesen, 1976), usually taken as the reference method for detecting facial movement and expressions, which has thus also been applied for detecting facial cues to deception (ten Brinke et al., 2012). As a manual method, FACS is time-consuming and rather complex to apply, since it demands trained coders. More recently, automated measures are being used to help researchers understand and detect lies more efficiently and rapidly. An example is the Computer Expression Recognition Toolbox (CERT), a software tool that detects facial expressions in real time (Littlewort et al., 2011), based on FACS (Ekman and Friesen, 1976). It is able to identify the intensity of 19 different action units, as well as 6 basic emotions. This automated procedure for detecting facial movements and micro-expressions can facilitate research on non-verbal correlates of deception, but that obviously also depends on the accuracy with which these expressions can be detected and classified. One issue is that it is not immediately clear how well such tools would work on children’s faces. Additionally, more novel automated measures are being used to investigate deception from different angles. Automated movement analysis is starting to be used for this purpose (Eapen et al., 2010; Duran et al., 2013; Serras Pereira et al., 2014). Eye tracking has also been used in several different ways for deception detection. Some studies (Wang et al., 2010) use eye tracking to try to define gaze patterns of liars versus truth-tellers; another option is to study the eye-gaze patterns of experts in deception detection.
For instance, one study (Bond, 2008) reported that experts in deception detection, when deciding about a message’s veracity, are perceptually faster and more accurate, and tend to fixate their gaze on areas such as the face and/or body (arms, torso, and legs). Likewise, some other studies have focused on whether deception detection can be achieved by measuring physiological data, such as brain activity, galvanic skin conductance, and thermography (Kozel et al., 2005; Ding et al., 2013; Van’t Veer et al., 2014). However, these methods are quite intrusive, and not suitable for all contexts, especially when dealing with specific populations, such as children. Current Study: In sum, considerable work is currently being done on the development of efficient automated methods to detect deception, but there is still a tendency to discard the body as a source of possible non-verbal cues. In the future, such methods could be combined with what has been achieved via automated analysis of verbal cues (Benus et al., 2006) and gestures (Hillman et al., 2012) as potential sources for lie detection, since combining verbal and non-verbal cues has proven to be more accurate for lie detection (Vrij et al., 2004). Moreover, the inconsistency regarding the relevance and value of bodily cues for deception may partly be due to the use of different detection methods. This discrepancy is worth investigating in a more systematic way. Finally, most of the research with children focuses on developmental questions of lying. In this study, we are interested in exploring the non-verbal cues of such behavior, based on the assumption that children are less shaped by social rules, and that they tend to leak more cues to deception when they are more aware of their deceptive effort (Swerts, 2012). Based on the above, this study presents a new approach to the non-verbal cues of deception.
It investigates how easily it can be detected whether a child is being truthful or not in a game situation, in which the lies are more spontaneous and much closer to a normal social context. In addition, it explores the cue validity of bodily movements for this type of classification, using an original methodology – the combination of perception studies and automated movement analysis. | [
"16859438",
"17690956",
"12555795",
"23340482",
"16729205",
"22704035",
"16185668",
"12061624",
"14769126",
"18678376",
"21463058",
"21887961",
"18997880",
"16516533"
] | [
{
"pmid": "16859438",
"title": "Accuracy of deception judgments.",
"abstract": "We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or tr... |
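The "amount of movement" that correlated with perceived lying in this study comes from automated movement analysis of video. One common, minimal way to quantify such movement is inter-frame differencing; the sketch below is illustrative and not necessarily the authors' exact method, and the region coordinates are hypothetical.

```python
import numpy as np

def motion_energy(frames, region=None):
    """Mean absolute pixel change between consecutive frames, a simple
    proxy for the amount of movement in a clip. `region` optionally
    restricts the measure to a (y0, y1, x0, x1) crop, e.g. a bounding
    box around the face or the hands."""
    frames = np.asarray(frames, dtype=float)
    if region is not None:
        y0, y1, x0, x1 = region
        frames = frames[:, y0:y1, x0:x1]
    return float(np.abs(np.diff(frames, axis=0)).mean())
```

Per-region scores (face, trunk, arms) computed this way can then be correlated with judges' truth/lie ratings across clips.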
JMIR Medical Informatics | 27903489 | PMC5156821 | 10.2196/medinform.6373 | Finding Important Terms for Patients in Their Electronic Health Records: A Learning-to-Rank Approach Using Expert Annotations | Background: Many health organizations allow patients to access their own electronic health record (EHR) notes through online patient portals as a way to enhance patient-centered care. However, EHR notes are typically long and contain abundant medical jargon that can be difficult for patients to understand. In addition, many medical terms in patients’ notes are not directly related to their health care needs. One way to help patients better comprehend their own notes is to reduce information overload and help them focus on medical terms that matter most to them. Interventions can then be developed by giving them targeted education to improve their EHR comprehension and the quality of care. Objective: We aimed to develop a supervised natural language processing (NLP) system called Finding impOrtant medical Concepts most Useful to patientS (FOCUS) that automatically identifies and ranks medical terms in EHR notes based on their importance to the patients. Methods: First, we built an expert-annotated corpus. For each EHR note, 2 physicians independently identified medical terms important to the patient. Using the physicians’ agreement as the gold standard, we developed and evaluated FOCUS. FOCUS first identifies candidate terms from each EHR note using MetaMap and then ranks the terms using a support vector machine-based learn-to-rank algorithm. We explored rich learning features, including distributed word representation, Unified Medical Language System semantic type, topic features, and features derived from consumer health vocabulary. We compared FOCUS with 2 strong baseline NLP systems. Results: Physicians annotated 90 EHR notes and identified a mean of 9 (SD 5) important terms per note. The Cohen’s kappa annotation agreement was .51.
The 10-fold cross-validation results show that FOCUS achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.940 for ranking candidate terms from EHR notes to identify important terms. When including term identification, the performance of FOCUS for identifying important terms from EHR notes was 0.866 AUC-ROC. Both performance scores significantly exceeded the corresponding baseline system scores (P<.001). Rich learning features contributed substantially to FOCUS’s performance. Conclusions: FOCUS can automatically rank terms from EHR notes based on their importance to patients. It may help develop future interventions that improve quality of care. | Related Works: Natural Language Processing Systems Facilitating Concept-Level Electronic Health Record Comprehension: There has been active research on linking medical terms to lay terms [11,30,31], consumer-oriented definitions [12], and educational materials [32], and on showing improved comprehension with such interventions [11,12]. On the issue of determining which medical terms to simplify, previous work used frequency-based and/or context-based approaches to check whether a term is unfamiliar to the average patient or has simpler synonyms [11,30,31]. Such work focuses on identifying difficult medical terms and treats these terms as equally important. Our approach is different in 2 aspects: (1) we focus on finding important medical terms, which are not equivalent to difficult medical terms, as discussed in the Background and Significance subsection; and (2) our approach is patient centered and prioritizes important terms for each EHR note of individual patients. We developed several learning features, including term frequency, term position, term frequency-inverse document frequency (TF-IDF), and topic features, to serve this purpose. It is worth noting that our approach is complementary to previous work.
For example, in a real-world application, we could display lay definitions for all the difficult medical terms in a patient’s EHR note, and then highlight those terms that FOCUS predicts to be most important to this patient. Single-Document Keyphrase Extraction: Our work is inspired by, but different from, single-document keyphrase extraction (KE), which identifies terms or phrases representing important concepts and topics in a document. KE targets topics that the writers wanted to convey when writing the documents. Unlike KE, our work does not focus on topics important to physicians (i.e., the writers and the intended readers at the time the EHR notes were written), but rather on patients, the new readers of the notes. Both supervised and unsupervised methods have been developed for KE [33]. We use supervised methods, which in general perform better than unsupervised ones when training data are available. Most supervised methods formulate KE as a binary classification problem. The confidence scores output by the classification algorithms are used to rank candidate phrases. Various algorithms have been explored, such as naïve Bayes, decision trees, bagging, support vector machines (SVMs), multilayer perceptrons, and random forests (RFs) [34-43]. In our study, we implemented RF [43] as a strong baseline system. KE in the biomedical domain has mainly focused on literature articles and domain-specific methods and features [44-47]. For example, Li et al. [44] developed a software tool called the keyphrase identification program (KIP) to extract keyphrases from medical articles. KIP used Medical Subject Headings (MeSH) as the knowledge base to compute a score reflecting a phrase’s domain specificity.
It assigned each candidate phrase a rank score by multiplying its within-document term frequency and its domain-specificity score. Different from the aforementioned approaches, we treat KE as a ranking problem and use the ranking SVM (rankSVM) approach [48], as it has been shown to be effective for KE in scientific literature, news, and weblogs [42]. Common learning features used in previous work include frequency-based features (e.g., TF-IDF), term-related features (e.g., the term itself, its position in a document, and its length), document structure-based features (e.g., whether a term occurs in the title or abstract of a scientific paper), and syntactic features (e.g., part-of-speech [POS] tags). Features derived from external resources, such as Wikipedia and query logs, have also been used to represent term importance [39,40]. Unlike previous work, we explored rich semantic features specific to the medical domain. Medelyan and Witten [45] developed a system, called KEA++, that extends the widely used keyphrase extraction algorithm KEA [34] by using semantic information from domain-specific thesauri. KEA++ has been applied to the medical domain, where it used the MeSH vocabulary to extract candidate phrases from medical articles and used MeSH concept relations to compute its domain-specific feature. In this study, we adapted KEA++ to the EHR data and used the adapted KEA++ as a strong baseline system. | [
"19224738",
"24359554",
"26104044",
"20643992",
"23027317",
"23407012",
"23535584",
"17911889",
"21347002",
"23920650",
"9594918",
"18811992",
"25419896",
"25661679",
"14965405",
"18693866",
"12923796",
"11103725",
"1517087",
"20845203",
"23978618",
"26681155",
"20442139"... | [
{
"pmid": "24359554",
"title": "The Medicare Electronic Health Record Incentive Program: provider performance on core and menu measures.",
"abstract": "OBJECTIVE\nTo measure performance by eligible health care providers on CMS's meaningful use measures.\n\n\nDATA SOURCE\nMedicare Electronic Health Recor... |
Frontiers in Neuroscience | 28066170 | PMC5177654 | 10.3389/fnins.2016.00587 | An Improved Unscented Kalman Filter Based Decoder for Cortical Brain-Machine Interfaces | Brain-machine interfaces (BMIs) seek to connect brains with machines or computers directly, for application in areas such as prosthesis control. For this application, the accuracy of the decoding of movement intentions is crucial. We aim to improve accuracy by designing a better encoding model of primary motor cortical activity during hand movements and combining this with decoder engineering refinements, resulting in a new unscented Kalman filter based decoder, UKF2, which improves upon our previous unscented Kalman filter decoder, UKF1. The new encoding model includes novel acceleration magnitude, position-velocity interaction, and target-cursor-distance features (the decoder does not require target position as input, it is decoded). We add a novel probabilistic velocity threshold to better determine the user's intent to move. We combine these improvements with several other refinements suggested by others in the field. Data from two Rhesus monkeys indicate that the UKF2 generates offline reconstructions of hand movements (mean CC 0.851) significantly more accurately than the UKF1 (0.833) and the popular position-velocity Kalman filter (0.812). The encoding model of the UKF2 could predict the instantaneous firing rate of neurons (mean CC 0.210), given kinematic variables and past spiking, better than the encoding models of these two decoders (UKF1: 0.138, p-v Kalman: 0.098). In closed-loop experiments where each monkey controlled a computer cursor with each decoder in turn, the UKF2 facilitated faster task completion (mean 1.56 s vs. 2.05 s) and higher Fitts's Law bit rate (mean 0.738 bit/s vs. 0.584 bit/s) than the UKF1. These results suggest that the modeling and decoder engineering refinements of the UKF2 improve decoding performance. 
We believe they can be used to enhance other decoders as well. | Related work: Reviews of research in decoding for BMIs can be found elsewhere (Homer M. L. et al., 2013; Andersen et al., 2014; Baranauskas, 2014; Bensmaia and Miller, 2014; Kao et al., 2014; Li, 2014). Here we discuss the decoders compared in the present study. The improved unscented Kalman filter decoder proposed in this study is a development of our previous unscented Kalman filter decoder (Li et al., 2009). That filter, which we refer to here as UKF1, used an encoding model with non-linear dependence on the kinematic variables, which modeled tuning to the speed or velocity magnitude of movements. The UKF1 modeled tuning at multiple temporal offsets, using an n-th order hidden Markov model framework in which n taps of kinematics (n = 10 was tested) are held in the state space. Encoding studies by Paninski et al. (2004a,b), Hatsopoulos et al. (2007), Hatsopoulos and Amit (2012), and Saleh et al. (2010) found tuning to position and velocity trajectories, called movement fragments or pathlets. The n-th order framework makes the encoding model of the UKF1 flexible enough to capture such tuning. Even though including taps of position also indirectly includes velocity, explicitly including taps of velocity reduces the amount of non-linearity needed in the neural encoding model, which helps improve the approximation accuracy of the UKF. On the basis of UKF1, we expand the neural encoding model and add decoder engineering improvements developed by ourselves and other groups to make the UKF2. The ReFIT Kalman filter (Gilja et al., 2012) has demonstrated a high communication bit rate by using two advances in decoder engineering. In closed-loop experiments, we compared the UKF2 with the FIT Kalman filter (Fan et al., 2014), which is similar to the ReFIT Kalman filter in using position-as-feedback and intention estimation, but does not have the online re-training component.
The bin size in this study, 50 ms, was the same as in the Gilja et al. study. Our Fitts's law bit rate values for the FIT Kalman filter are lower than those reported by Gilja et al. for the ReFIT Kalman filter, likely due to a combination of factors. First, online re-training separates the FIT and ReFIT Kalman filters. In terms of experimental setup, Gilja et al. used video tracking of natural reaching movements, whereas we used a joystick during hand control of the cursor. The use of an unnatural joystick made our task more difficult: the mean movement time during hand control in our task was approximately double that reported by Gilja et al. We used a joystick due to the limitations of our experimental platform and to allow comparison with our previous work (Li et al., 2009). While using the same Fitts's law bit rate measure, our task used circular targets, which have a smaller acceptance area than the square targets of Gilja et al. at the same width. We used circular targets because they are more natural in terms of determining whether the cursor is within the target by using a distance criterion. We also spike sorted and did not include unsorted or “hash” threshold crossings, whereas Gilja et al. used threshold crossing counts. | [
"23536714",
"26336135",
"25247368",
"7703686",
"24808833",
"24739786",
"21765538",
"2073945",
"20943945",
"4966614",
"24654266",
"7760138",
"23160043",
"24717350",
"21939762",
"17494696",
"23862678",
"14624237",
"1510294",
"26220660",
"26079746",
"17943613",
"20359500",
... | [
{
"pmid": "23536714",
"title": "State-based decoding of hand and finger kinematics using neuronal ensemble and LFP activity during dexterous reach-to-grasp movements.",
"abstract": "The performance of brain-machine interfaces (BMIs) that continuously control upper limb neuroprostheses may benefit from d... |
BMC Medical Informatics and Decision Making | 28049465 | PMC5209873 | 10.1186/s12911-016-0389-x | Secure and scalable deduplication of horizontally partitioned health data for privacy-preserving distributed statistical computation | Background: Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. Methods: We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. Results: The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N − 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s.
The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. Conclusions: The proposed deduplication protocol is efficient and scalable for practical use while protecting the privacy of patients and data custodians. Electronic supplementary material: The online version of this article (doi:10.1186/s12911-016-0389-x) contains supplementary material, which is available to authorized users. | Related work. Several PPRL protocols have been developed based on either deterministic or probabilistic matching of a set of identifiers. Interested readers are referred to [22, 23] for an extensive review of PPRL protocols. The protocols can be broadly classified as protocols with or without a third party. In this section, we review privacy-preserving protocols for deterministic record linkage. These protocols are secure against the semi-honest adversarial model, which is the adversarial model considered in this paper. A record contains a set of identifiers consisting of direct identifiers, indirect identifiers (quasi-identifiers), and other health information. Direct identifiers are attributes that can uniquely identify an individual across data custodians, such as a national identification number (ID). In contrast, quasi-identifiers are attributes that in combination with other attributes can identify an individual, such as name, sex, date of birth, and address. In this paper, the terms identifier and quasi-identifier are used interchangeably. Weber [12] and Quantin et al. [24] proposed protocols that use keyed hash functions. These protocols require data custodians to send a hash of their records' identifiers to a third party that performs exact matching and returns the results. The data custodians use a keyed hash function with a common secret key to prevent dictionary attacks by the third party. These protocols are secure as long as the third party does not collude with a data custodian.
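A minimal sketch of the keyed-hash scheme just described; the field values, normalization, and shared key are invented for illustration, with HMAC-SHA-256 standing in for the keyed hash function:

```python
import hashlib
import hmac

def hash_record(identifiers, shared_key):
    """Keyed hash of a record's identifiers; only custodians holding
    shared_key can reproduce it, blocking dictionary attacks by the
    third party (illustrative sketch)."""
    msg = "|".join(identifiers).lower().encode("utf-8")
    return hmac.new(shared_key, msg, hashlib.sha256).hexdigest()

key = b"secret shared by the data custodians"  # invented for illustration
rec_a = hash_record(["Ola", "Nordmann", "1970-01-01", "M"], key)
rec_b = hash_record(["ola", "nordmann", "1970-01-01", "m"], key)

# Identical identifiers (after normalization) hash to the same value,
# so the third party can perform exact matching without seeing them.
print(rec_a == rec_b)  # True
```

Without the key, the third party cannot enumerate candidate identifiers and test them against the hashes, which is the dictionary attack the shared secret prevents.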
Quantin et al.'s protocol [24] performs phonetic encoding of the identifiers (i.e., last name, first name, date of birth, and sex) before hashing, in order to reduce the impact of typing errors in the identifiers on the quality of the linkage. Several protocols were proposed based on commutative encryption schemes [25–27]. In these protocols, each data custodian, in turn, encrypts the unique identifiers for all records across the data custodians using its private key, and consequently, each unique identifier is encrypted with the private keys of all the data custodians. Then, the encrypted unique identifiers are compared with each other, as the encrypted values of two unique identifiers match if the two unique identifiers match. The protocols proposed in [25, 26] are two-party computation protocols, whereas Adam et al.'s [27] protocol is a multi-party computation protocol. The protocols reviewed thus far require the exchange of a long list of hashed or encrypted identifiers, which can limit the scalability of the protocols as the number of data custodians and records increases. In addition, protocols based on commutative encryption require communication rounds quadratic in the number of data custodians. Multi-party private set intersection protocols were designed based on Bloom filters [28, 29]. In general, each data custodian encodes the unique identifier values of its records as a Bloom filter (see the description of a Bloom filter in the Methods section). The protocols use different privacy-preserving techniques, as discussed below, to intersect the Bloom filters and then create a Bloom filter that encodes the unique identifiers of the records that have exact matches at all data custodians.
Then, each data custodian queries the unique identifiers of its records against the intersection Bloom filter to identify the records that match. In Lai et al.'s [28] protocol, each data custodian splits its Bloom filter into multiple segments and distributes them to the other participating data custodians while keeping one segment for itself. Then, each data custodian locally intersects its share of the Bloom filter segments and distributes the result to the other data custodians. Finally, the data custodians combine the results of the intersection of the Bloom filter segments to create a Bloom filter that is the intersection of all the data custodians' Bloom filters. The protocol requires communication rounds quadratic in the number of data custodians, and it is susceptible to a dictionary attack on unique identifiers whose array positions all fall in the same segment of the Bloom filter. In Many et al.'s [29] protocol, each data custodian uses secret sharing schemes [30] to split each counter position of the data custodian's Bloom filter and then distributes the shares to three semi-trusted third parties. The third parties use secure multiplication and comparison protocols to intersect the data custodians' Bloom filters, which adds overhead to the protocol. Dong et al. [31] proposed a two-party protocol for private set intersection. The protocol introduced a new variant of a Bloom filter, called a garbled Bloom filter, based on a secret sharing scheme. The first data custodian encodes the unique identifiers of its records as a Bloom filter, whereas the second data custodian encodes the unique identifiers of its records as a garbled Bloom filter. Then, the data custodians intersect their Bloom filters using an oblivious transfer (OT) technique [32], which adds significant overhead to the overall performance of the protocol. Karapiperis et al.
[33] proposed multi-party protocols for secure intersection based on the Count-Min sketch. Each data custodian locally encodes the unique identifiers of its records as a Count-Min sketch, denoted the local synopsis, and then the data custodians jointly compute the intersection of the local synopses using a secure sum protocol. The authors proposed two protocols that use secure sum protocols based on additive homomorphic encryption [34] and on obscuring the secret value with a random number [19, 35]. These protocols protect only the data custodians' privacy, whereas our protocol protects both individuals' and data custodians' privacy. The additive homomorphic encryption adds computation and communication overhead as the number of records and data custodians increases. The results of the protocols in [28, 29, 31, 33] carry a false-positive probability. Although the protocols can choose a small false-positive probability, for some applications any false-positive probability may not be acceptable.
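To illustrate where the false-positive probability noted above comes from, here is a minimal, non-private Bloom-filter intersection; the privacy machinery of the reviewed protocols (secret sharing, OT, secure sums) is deliberately omitted, and the sizes are invented:

```python
import hashlib

M, K = 256, 3  # filter length in bits and number of hash functions (invented)

def positions(item):
    """K array positions for an item, derived from SHA-256."""
    return [int(hashlib.sha256(f"{i}|{item}".encode()).hexdigest(), 16) % M
            for i in range(K)]

def make_filter(ids):
    bf = [0] * M
    for item in ids:
        for p in positions(item):
            bf[p] = 1
    return bf

def intersect(filters):
    """Bitwise AND of the custodians' filters."""
    return [min(bits) for bits in zip(*filters)]

def maybe_member(bf, item):
    """True means only 'possibly present': unrelated items can set the
    same bits, which is the source of the false-positive probability."""
    return all(bf[p] for p in positions(item))

custodians = [{"id-1", "id-2", "id-3"}, {"id-2", "id-3", "id-4"}]
common = intersect([make_filter(ids) for ids in custodians])
print(maybe_member(common, "id-2"))  # True: present at both custodians
```

Larger M and a suitable K shrink the false-positive probability but never eliminate it, which is the limitation the last paragraph points out.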
"23268669",
"24169275",
"11861622",
"16984658",
"21486880",
"22390523",
"24682495",
"22195094",
"23304400",
"23349080",
"23221359",
"22768321",
"20442154",
"11208260",
"25123746",
"22262590",
"19567788",
"21658256",
"19504049",
"23739011",
"21696636",
"8336512",
"25957825... | [
{
"pmid": "24169275",
"title": "Health data use, stewardship, and governance: ongoing gaps and challenges: a report from AMIA's 2012 Health Policy Meeting.",
"abstract": "Large amounts of personal health data are being collected and made available through existing and emerging technological media and to... |
Frontiers in Neurorobotics | 28194106 | PMC5276858 | 10.3389/fnbot.2017.00003 | ReaCog, a Minimal Cognitive Controller Based on Recruitment of Reactive Systems | It has often been stated that for a neuronal system to become a cognitive one, it has to be large enough. In contrast, we argue that a basic property of a cognitive system, namely the ability to plan ahead, can already be fulfilled by small neuronal systems. As a proof of concept, we propose an artificial neural network, termed reaCog, that, first, is able to deal with a specific domain of behavior (six-legged walking). Second, we show how a minor expansion of this system enables it to plan ahead and deploy existing behavioral elements in novel contexts in order to solve current problems. To this end, the system invents new solutions that are not possible for the reactive network. Rather, these solutions result from new combinations of given memory elements. This faculty does not rely on a dedicated system that is more or less independent of the reactive basis, but results from exploitation of the reactive basis by recruiting the lower-level control structures in a way that makes motor planning possible as an internal simulation relying on internal representations grounded in embodied experiences. | Related work. In this section, we will compare reaCog as a system with related recent approaches in order to point out differences. While there are many approaches toward cognitive systems and many proposals concerning cognitive architectures, we will concentrate on models that, like reaCog, consider a whole-systems approach. First, we will deal with cognitive architectures in general.
Second, we will briefly present relevant literature concerning comparable approaches in robotics, because a crucial property of reaCog is that it uses an embodied control structure to run a robot. Models of cognitive systems. Models of cognitive systems generally address selected aspects of cognition and often focus on specific findings from cognitive experiments (e.g., with respect to memory, attention, or spatial imagery; for reviews see Langley et al. (2009) and Wintermute (2012)). Duch et al. (2008) introduced a distinction between different cognitive architectures. First, these authors identified symbolic approaches. As an example, the original SOAR (State, Operator, and Result; Laird, 2008) should be noted, a rule-based system in which knowledge is encoded in production rules that allow the system to state information or to derive new knowledge through application of the rules. Second, emergent approaches follow a general bottom-up approach and often start from a connectionist representation. As one example following a bottom-up approach, Verschure et al. (2003) introduced the DAC (Distributed Adaptive Control) series of robot architectures (Verschure et al., 2003; Verschure and Althaus, 2003). These authors initiated a sequence of experiments in simulation and in real implementations. Verschure started from a reflex-like system and introduced higher levels of control on top of the existing ones, which modulated the lower levels, operated over longer timespans (also introducing memory into the system), and integrated additional sensory information. The experiments showed that the robots became better adapted to their environment, exploiting visual cues for orienting, navigation, etc. (Verschure et al., 2003).
Many other approaches in emergent systems concentrate on perception, for example the Neurally Organized Mobile Adaptive Device (NOMAD), which is based on Edelman's (1993) Neural Darwinism approach and demonstrates pattern recognition on a mobile robot platform (Krichmar and Snook, 2002). Recently, this direction has gained broader support in the area of autonomous mental development (Weng et al., 2001) and has established the field of developmental robotics (Cangelosi and Schlesinger, 2015). The particular focus of such architectures on learning is currently not covered in reaCog. In general, as pointed out by Langley et al. (2009), these kinds of approaches have not yet demonstrated the broad functionality associated with cognitive architectures (and, as also noted by Duch et al. (2008), many such models are not implemented and are often not detailed enough to be implemented as a cognitive system). ReaCog realizes such an emergent system, but with a focus on a complex behaving system that, in particular, aims at higher cognitive abilities currently not reached by emergent systems. The third type concerns hybrid approaches, which try to bring together the advantages of the other two paradigms, for example ACT-R (Adaptive Components of Thought-Rational; Anderson, 2003). In our view, the most impressive and comprehensive model of such a cognitive system is the CLARION system (for reviews see Sun et al., 2005; Helie and Sun, 2010), which has been applied to creative problem solving. This system is detailed enough that it can be implemented computationally. Applying the so-called Explicit-Implicit Interaction (EII) theory and being implemented in the CLARION framework, this system can account for a large body of quantitative and qualitative human data, far more than can be simulated by our approach, as reaCog, in contrast, does not deal with symbolic/verbal information.
Apart from this aspect, the basic difference is that the EII/CLARION system is a hybrid system consisting of two modules, the explicit knowledge module and the implicit knowledge module. Whereas the latter contains knowledge that is not "consciously accessible" in principle, the explicit network contains knowledge that may be accessible. Information may be stored redundantly in both subsystems. Mutual coupling between the two modules allows for mutual support when looking for a solution to a problem. In our approach, instead of using representational differences for implicit and explicit knowledge to account for their different accessibility, we use only one type of representation that can, however, be activated differently, either in the reactive mode or in the "attended" mode. In our case, the localist information (motivational units) and the distributed information (procedural networks) are not separated into two modules, but form a common, decentralized structure. In this way, the reaCog system realizes the idea of recruitment, as the same clusters are used in motor tasks and in cognitive tasks. Whereas we need an explicit attention system, given by the spreading-activation and winner-take-all layer, in the CLARION model decisions result from the recurrent network finding an attractor state. Many models of cognition take, quite in contrast to our approach, the anatomy of the human brain as a starting point. A prominent example is the GNOSIS project (Taylor and Zwaan, 2009). It deals with comparatively fine-grained assumptions about the functional properties of brain modules, relying on imaging studies as well as on specific neurophysiological data. While GNOSIS concentrates mainly on perceptual, in particular visual, input, the motor aspect is somewhat underrepresented. GNOSIS shows the ability to find new solutions to a problem, including the introduction of intermediate goals.
Although an attention system is applied, it is used for controlling perception, not for supporting the search, as is the case in reaCog. Related to this, the search procedure in GNOSIS, termed non-linguistic reasoning, appears less open, as the corresponding network is tailored to the problem at hand to avoid too large a search space. In our approach, using the attention system, the complete memory can be used as the substrate for finding a solution. 4.2 Cognitive Robotic Approaches. The approaches introduced in the previous section are not embodied, and it appears difficult to envision how they could be embodied (Duch et al., 2008). Following the basic idea of embodied cognition (Brooks, 1989; Barsalou, 2008; Barsalou et al., 2012), embodiment is assumed to be necessary for any cognitive system. Our approach toward a minimal cognitive system is based on this core assumption. Robotic approaches have been proposed as ideal tools for research on cognition, as the focus cannot be narrowed down to a single cognitive phenomenon; instead, a unified system must be placed in the full context of different control processes and in interaction with the environment (Pezzulo et al., 2012). ReaCog as a system is clearly embodied. The procedures cannot by themselves instantiate the behavior, but require a body. The body is a constitutive part of the computational system, because the sensory feedback from the body is crucially required to activate the procedural memories in the appropriate way. The overall behavior emerges from the interaction between controller, body, and environment. In the following, we will review relevant embodied robotic approaches. Today, many robotic approaches deal with the task of learning behaviors. In particular, behaviors should be adaptive. This means a learned behavior should be transferable to similar movements and applicable in a broader context. Deep learning approaches have proven quite successful in such tasks, e.g., Lenz et al.
(2015), but many require large datasets for learning. Only recently, Levine et al. (2015) presented a powerful reinforcement-learning approach in this area. In this approach, the robot uses trial and error during online learning to explore possible behaviors. This allows the robot to quickly learn control policies for manipulation skills and has been shown to be effective for quite difficult manipulation tasks. When using deep learning methods, it is generally difficult to access the learned model. In contrast to reaCog, such internal models are therefore not well suited for recruitment in higher-level tasks and for planning ahead. In particular, there is no explicit internal body model that could be recruited. Rather, only implicit models are learned, and they have to be completely acquired anew for every single behavior. In the following, two exciting robotic examples tightly related to our approach will be addressed in more detail. The approach by Cully et al. (2015) aims at solving tasks similar to those of reaCog for a hexapod robot. It also applies trial-and-error learning as a general mechanism when the robot encounters a novel situation. In their case, these new situations are walking up a slope or losing a leg. There are some differences compared to reaCog. Most notably, the testing of novel behaviors is done on the real robot. This is possible because the trial-and-error method does not apply discrete behaviors. Instead, central to the approach by Cully et al. (2015) is the idea of a behavioral parametrization that allows the currently experienced situation to be characterized in a continuous, low-dimensional space. A complete mapping toward optimal behaviors is constructed offline in advance (Mouret and Clune, 2015). These pre-computed behaviors are exploited when a new situation or problem is encountered. As the behavioral space is continuous, the pre-computed behaviors can be adapted to find a new behavior.
Further, there is no explicit body model that is shared between different behaviors. Instead, the memory approximates an incomplete body model, as it contains only a limited range of those movements which are geometrically possible. In contrast, reaCog, using its internal body model, can exploit all geometrically possible solutions and is not constrained to searching a continuous space, as illustrated by our example case, in which a single leg is selected to perform completely out of context. While there is only a small number of robotic approaches dealing with explicit internal simulation, most of these use very simple robotic architectures with only a very small number of degrees of freedom [see, for example, Svensson et al. (2009) or Chersi et al. (2013)]. It should further be mentioned that predictive models are also used to anticipate the visual effects of the robot's movements (e.g., Hoffmann, 2007; Möller and Schenck, 2008). With respect to reaCog, the most similar approach has been pursued by Bongard et al. (2006). These authors use a four-legged, eight-DoF robot which, through motor babbling (i.e., randomly selected motor commands), learns the relation between motor output and its sensory consequences. This information is used to distinguish between a limited number of given hypotheses concerning the possible structure of the body. The best-fitting hypothesis is selected as the body model. After the body model has been learned, in a second step the robot learns to move. To this end, the body model was used only as a forward model to perform different simulated behaviors. Based on a reward given by an external supervisor and an optimization algorithm, the best controller (a sequence of movements of the eight joints) was then used to run the robot. Continuous learning allows the robot to register changes in body morphology and to update its body model correspondingly. As the most important difference, Bongard et al.
(2006) distinguish between the reactive system and the internal predictive body model. The central idea of their approach is that both are learned in distinct phases, one after the other. In reaCog, the body model is part of the reactive system and is required for the control of behavior. This allows different controllers to drive the same body part and to use the same body model for different functions (e.g., using a limb as a leg or as a gripper; Schilling et al., 2013a, Figure 10). In addition, different from our approach, Bongard et al. (2006) do not use artificial neural networks (ANNs) for the body model and the controller, but an explicit representation, because applying ANNs would make it "difficult to assess the correctness of the model" (Bongard et al., 2006, p. 1119). ReaCog deals with a much more complex structure, 18 DoFs instead of the eight DoFs used by Bongard et al. (2006), which makes an explicit representation even more problematic. Different from their approach, we do not consider how the body model and the basic controllers are learned, but take both as given (or "innate"). While the notion of innate body representations is controversial (de Vignemont, 2010), there is at least a general consensus that there is some form of innate body model (often referred to as the body schema) reflecting general structural and dynamic properties of the body (Carruthers, 2008), which is shaped and develops further during maturation. This aspect is captured by our body model, which encodes general structural relations of the body in the service of motor control but may adapt to developmental changes. While currently only kinematic properties are applied, dynamic influences can be integrated into the model, as has been shown in Schilling (2009). A further important difference concerns the structure of the memory.
Whereas in Bongard's approach one monolithic controller is learned to deal with eight DoFs and to produce one specific behavior, in reaCog the controller consists of modularized procedural memories. This memory architecture allows for selection between different states and therefore between different behaviors.
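The competition carried out by reaCog's spreading-activation and winner-take-all layer, mentioned above, can be illustrated with a generic WTA iteration; this is a textbook-style sketch with invented gains, not the network actually used in reaCog:

```python
def winner_take_all(activations, gain=0.1, inhibition=0.2, steps=50):
    """Each unit excites itself and inhibits all others; iterating this
    dynamics leaves only the strongest unit active (generic sketch)."""
    a = list(activations)
    for _ in range(steps):
        total = sum(a)
        # self-excitation minus inhibition from all other units,
        # clipped to the activation range [0, 1]
        a = [min(1.0, max(0.0, x + gain * x - inhibition * (total - x)))
             for x in a]
    return a

# Three competing motivational units; the strongest wins the selection.
a = winner_take_all([0.50, 0.45, 0.30])
print(a.index(max(a)))  # 0: the unit that started strongest
```

Selecting exactly one memory element this way is what lets the attention system pick a single behavioral module to test during internal simulation.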
"20964882",
"17998071",
"17705682",
"9639677",
"15010478",
"17110570",
"16099349",
"18359642",
"16530429",
"1688670",
"23785343",
"26017452",
"21521609",
"19786038",
"15939768",
"15939767",
"18089037",
"8094962",
"23608361",
"15068922",
"21038261",
"21601842",
"20658861",... | [
{
"pmid": "20964882",
"title": "Neural reuse: a fundamental organizational principle of the brain.",
"abstract": "An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. Ac... |
Scientific Reports | 28165495 | PMC5292966 | 10.1038/srep41831 | Multi-Instance Metric Transfer Learning for Genome-Wide Protein Function Prediction | Multi-Instance (MI) learning has been proven to be effective for genome-wide protein function prediction problems where each training example is associated with multiple instances. Many studies in this literature attempted to find an appropriate Multi-Instance Learning (MIL) method for genome-wide protein function prediction under a usual assumption: that the underlying distribution of the testing data (target domain, i.e., TD) is the same as that of the training data (source domain, i.e., SD). However, this assumption may be violated in real practice. To tackle this problem, in this paper, we propose a Multi-Instance Metric Transfer Learning (MIMTL) approach for genome-wide protein function prediction. In MIMTL, we first transfer the source domain distribution to the target domain distribution by utilizing the bag weights. Then, we construct a distance metric learning method with the reweighted bags. Finally, we develop an alternative optimization scheme for MIMTL. Comprehensive experimental evidence on seven real-world organisms verifies the effectiveness and efficiency of the proposed MIMTL approach over several state-of-the-art methods. | Related works. Previous studies related to our work can be classified into three categories: traditional MIL, metric-learning-based MIL, and transfer-learning-based MIL. Traditional MIL. Multi-Instance Multi-Label k-Nearest Neighbor (MIMLkNN)25 tries to utilize the popular k-nearest-neighbor technique in MIL. Motivated by the advantage of the citers used in the Citation-kNN approach26, MIMLkNN considers not only the test instance's neighboring examples in the training set, but also those training examples that regard the test instance as their own neighbor (i.e., the citers).
Different from MIMLkNN, the Multi-instance Multi-label Support Vector Machine (MIMLSVM)6 first degenerates the MIL task into a simplified single-instance learning (SIL) task by utilizing a clustering-based representation transformation6,27. After this transformation, each training bag is transformed into a single instance. In this way, MIMLSVM maps the MIL problem to a SIL problem. As another traditional MIL approach, the Multi-instance Multi-label Neural Network (MIMLNN)28 is obtained by using a two-layer neural network structure28 to replace the Multi-Label Support Vector Machine (MLSVM)6 used in MIMLSVM. Metric-based MIL. Different from MIMLNN, to encode more of the geometric information in the bag data, the metric-based Ensemble Multi-Instance Multi-Label network (EnMIMLNN)4 combines three different Hausdorff distances (average, maximal, and minimal) to define the distance between two bags, and proposes two voting-based models (EnMIMLNNvoting1 and EnMIMLNNvoting2). Recently, Xu Ye et al. proposed a metric-based multi-instance learning method (MIMEL)29 that minimizes the KL divergence between two multivariate Gaussians under the constraints of maximizing the distance between bags of different classes and minimizing the distance between bags within a class. Different from MIMEL, Jin Rong et al. proposed a metric-based learning method30 for the multi-instance multi-label problem. Recently, MIML-DML5 attempts to find a distance metric under the constraint that bag pairs from the same category should have a smaller distance than those from different categories. These metric-based MIL approaches are all designed for the traditional MIL setting, in which the bags in the SD and TD are drawn from the same distribution. Transfer-learning-based MIL. Recently, MICS31 proposed tackling the MI covariate-shift problem by considering the distribution change at both the bag level and the instance level. MICS attempts to utilize the weights of bags and instances to solve the covariate-shift problem.
Then, with the learned weights, the MI covariate-shift problem can be solved by traditional MIL methods. However, MICS does not provide a method for incorporating the learned weights into multi-instance metric learning.
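The Hausdorff-style bag distances combined by the metric-based methods above (EnMIMLNN's average, maximal, and minimal variants) can be sketched for two bags of feature vectors; a generic illustration with toy 2-D instances, not the authors' exact implementation:

```python
import math

def d(x, y):
    """Euclidean distance between two instances."""
    return math.dist(x, y)

def min_hausdorff(A, B):
    """Minimal Hausdorff: distance of the closest instance pair."""
    return min(d(a, b) for a in A for b in B)

def max_hausdorff(A, B):
    """Classical (maximal) Hausdorff distance between bags."""
    h = lambda X, Y: max(min(d(x, y) for y in Y) for x in X)
    return max(h(A, B), h(B, A))

def avg_hausdorff(A, B):
    """Average Hausdorff: mean nearest-neighbour distance."""
    s = sum(min(d(a, b) for b in B) for a in A)
    s += sum(min(d(b, a) for a in A) for b in B)
    return s / (len(A) + len(B))

bag1 = [(0.0, 0.0), (1.0, 0.0)]  # toy bags of 2-D instances
bag2 = [(0.0, 1.0), (5.0, 0.0)]
print(min_hausdorff(bag1, bag2))  # 1.0
print(max_hausdorff(bag1, bag2))  # 4.0
```

Each variant turns a bag-to-bag comparison into a single scalar, which is what lets bag-level metric learning and nearest-neighbour voting operate on multi-instance data.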
"10573421",
"23353650",
"25708164",
"26923212",
"16526484",
"1608464",
"691075",
"21920789"
] | [
{
"pmid": "10573421",
"title": "A combined algorithm for genome-wide prediction of protein function.",
"abstract": "The availability of over 20 fully sequenced genomes has driven the development of new methods to find protein function and interactions. Here we group proteins by correlated evolution, cor... |
BMC Psychology | 28196507 | PMC5307765 | 10.1186/s40359-017-0173-4 | VREX: an open-source toolbox for creating 3D virtual reality experiments | Background: We present VREX, a free open-source Unity toolbox for virtual reality research in the fields of experimental psychology and neuroscience. Results: Different study protocols about perception, attention, cognition and memory can be constructed using the toolbox. VREX provides procedural generation of (interconnected) rooms that can be automatically furnished with a click of a button. VREX includes a menu system for creating and storing experiments with different stages. Researchers can combine different rooms and environments to perform end-to-end experiments including different testing situations and data collection. For fine-tuned control, VREX also comes with an editor where all the objects in the virtual room can be manually placed and adjusted in the 3D world. Conclusions: VREX simplifies the generation and setup of complicated VR scenes and experiments for researchers. VREX can be downloaded and easily installed from vrex.mozello.com. Electronic supplementary material: The online version of this article (doi:10.1186/s40359-017-0173-4) contains supplementary material, which is available to authorized users. | Why VREX: Related work. There are a wide variety of Unity add-ons assisting the generation of interactive virtual worlds, such as Playmaker [10], Adventure Creator [11] and ProBuilder [12], to name a few. Yet these toolboxes are very general-purpose. There also exist some software applications similar to VREX in terms of simplifying the creation of VR experiments for psychological research, e.g. MazeSuite [13] and WorldViz Vizard [14]. The list of compared software is not comprehensive; here we briefly describe only two of them and their key differences from VREX. MazeSuite is a free toolbox that allows easy creation of connected 3D corridors.
It enables researchers to perform spatial and navigational behavior experiments within interactive and extendable 3D virtual environments [13]. Although the user can design mazes by hand and fill them with objects, it is difficult to achieve the look and feel of a regular apartment. This is where VREX differs, having been designed with indoor experiments in mind from the beginning. Another noticeable difference is that MazeSuite runs as a standalone program, while VREX is embedded inside the Unity game engine, allowing for more powerful features, higher visual quality and faster code iterations in our experience. WorldViz Vizard gives researchers the tools to create and conduct complex VR-based experiments. Researchers of any background can rapidly develop their own virtual environments and author complex interactions between environment, devices, and participants [14]. Although Vizard is visually advanced, this comes at the price of a licence fee to remove time restrictions and prominent watermarks. VREX matches the graphical quality of Vizard with the power of the Unity 5 game engine, while staying open source and free of charge (Unity license fees may apply for publishing). As any software matures, more features tend to be added by the developers. This in turn means more complex interfaces that might confuse the novice user. The advantage of VREX is its narrow focus on specific types of experiments, allowing for a clear design and simple workflow.
"16480691",
"11718793",
"18411560",
"15639436"
] | [
{
"pmid": "16480691",
"title": "Cognitive Ethology and exploring attention in real-world scenes.",
"abstract": "We sought to understand what types of information people use when they infer the attentional states of others. In our study, two groups of participants viewed pictures of social interactions. ... |
Frontiers in Neuroinformatics | 28261082 | PMC5311048 | 10.3389/fninf.2017.00009 | Automated Detection of Stereotypical Motor Movements in Autism Spectrum Disorder Using Recurrence Quantification Analysis | A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their positions on the body remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to a 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier. | 2. Related work: Existing approaches to automated monitoring of SMM are based either on webcams or accelerometers. In a series of publications (Gonçalves et al., 2012a,b,c), a group from the University of Minho created methods based on the Kinect webcam sensor from Microsoft. Although their approach shows promising results, the authors restricted themselves to detecting only one type of SMM, namely hand flapping. In addition, the Kinect sensor is limited to monitoring within a confined space and requires users to be in close proximity to the sensor. This limits the application of the approach, as it does not allow continuous recording across a range of contexts and activities. Alternative approaches to the Kinect are based on the use of wearable 3-axis accelerometers (see Figure 1).
Although the primary aim of previously published accelerometer-based studies is to detect SMM in individuals with ASD, some studies have been carried out with healthy volunteers mimicking SMM (Westeyn et al., 2005; Plötz et al., 2012), and therefore do not necessarily generalize to the ASD population. Figure 1: Accelerometer readings of one second in length from the "flapping" class. The accelerometer was mounted to the right wrist. Each line corresponds to one of the three acceleration axes. To date, there have been two different approaches to automatically detecting SMM in ASD using accelerometer data. One approach is to use a single accelerometer to detect one type of SMM, such as hand flapping when a sensor is worn on the wrist (Gonçalves et al., 2012a; Rodrigues et al., 2013). The second approach is to use multiple accelerometers to detect multiple SMM, such as hand flapping from sensors worn on the wrist, and body rocking with a sensor worn on the torso (Min et al., 2009; Min and Tewfik, 2010a,b, 2011; Min, 2014). Other studies have done the same, but included a detection class where hand flapping and body rocking occur simultaneously in time (i.e., "flap-rock," see Albinali et al., 2009, 2012; Goodwin et al., 2011, 2014). While more sensors appear to improve recognition accuracy in these studies, one practical drawback is that many individuals with ASD have sensory sensitivities that might make them less able or willing to tolerate wearing multiple devices.
To accommodate different sensory profiles in the ASD population, it would be ideal to limit the number of sensors to a minimum, while still optimizing accurate multiple-class SMM detection. Typical features used for acceleration analyses of SMM in prior studies have focused on: distances between mean values along accelerometer axes, variance along axes directions, correlation coefficients, entropy, Fast Fourier Transform (FFT) peaks, and frequencies (Albinali et al., 2009, 2012; Goodwin et al., 2011, 2014), the Stockwell transform (Goodwin et al., 2014), mean standard deviation, root mean square, number of peaks, and zero crossing values (Gonçalves et al., 2012a; Rodrigues et al., 2013), and skewness and kurtosis (Min, 2014; Min and Tewfik, 2011). These features are mainly aimed at characterizing oscillatory features of SMM as statistical characteristics of values distributed around mean values in each accelerometer axis, joint relation of changes in different axial directions, or frequency components of oscillatory moves. While useful in many regards, these features fail to capture potentially important dynamics of SMM that can change over time, namely, when they do not follow a consistent oscillatory pattern or when patterns differ in frequency, duration, speed, and amplitude (Goodwin et al., 2014). A final limitation of previous publications in this domain is that different sensor types have been used across studies. These may have different orientations, resulting in features with different values, despite representing the same SMM. To overcome this limitation, other sets of features are required that do not vary in their characteristics across different types of SMM and sensor orientations. | [
"15149482",
"25153585",
"17404130",
"15026089",
"20839042",
"17048092",
"12241313",
"14976271",
"17550872",
"8175612"
] | [
{
"pmid": "15149482",
"title": "The lifetime distribution of health care costs.",
"abstract": "OBJECTIVE\nTo estimate the magnitude and age distribution of lifetime health care expenditures.\n\n\nDATA SOURCES\nClaims data on 3.75 million Blue Cross Blue Shield of Michigan members, and data from the Medi... |
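The feature families listed in the accelerometer record above (distances between means, RMS, zero crossings, FFT peaks) are straightforward to make concrete. Below is a minimal single-axis sketch; the function name, the naive DFT peak scan, and the window/sampling choices are our own illustrative assumptions, not code from the cited studies.

```python
import math

def window_features(signal, fs):
    """Compute a few classic SMM accelerometer features for one axis of a
    short window: mean, RMS, zero crossings, and the dominant frequency
    found by a naive DFT magnitude scan."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    rms = math.sqrt(sum(x * x for x in centered) / n)
    zero_crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):  # scan DFT bins for the largest magnitude
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return {"mean": mean, "rms": rms,
            "zero_crossings": zero_crossings,
            "dominant_hz": best_k * fs / n}
```

An orientation-invariant variant, in the spirit of the record's recurrence-based proposal, would be computed on the acceleration magnitude rather than a single axis, so that sensor rotation does not change the feature values.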
JMIR Medical Informatics | 28153818 | PMC5314102 | 10.2196/medinform.6918 | Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE | Background: Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective: Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. Methods: We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results: Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary.
In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions: Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. | Related Work: Some researchers have recognized the value of using ontologies to better support search activities (eg, [13,45]). The central focus of this research is term extraction and mapping, which is done using text mining and natural language processing techniques. In this body of work, ontologies are used to improve search performance computationally without involving users. The fundamental difference compared with our work is that we use ontologies to help users develop knowledge and domain-specific vocabulary—that is, the focus is on the user rather than on algorithms and other computational processes. Our approach is important in contexts where users have valuable knowledge and context-specific goals that cannot be replaced by computation—in other words, users need to be kept "in the loop." Other researchers have focused on developing interfaces to MEDLINE as alternatives to PubMed. For example, Wei et al have developed PubTator, a PubMed replacement interface that uses multiple text mining algorithms to improve search results [46]. PubTator also offers some support for document triaging. Whereas PubTator appears interesting and useful, it relies on queries being input into the standard text box, and it presents results in a typical list-based fashion. Thus, it is not aimed at addressing either of the two problems we are attempting to address with OVERT-MED—that is, the vocabulary problem and the information overload problem.
Other alternative interfaces that offer interesting features but do not address either of the two problems include SLIM [47] and HubMed [48]. An alternative interface that potentially provides support in addressing the first problem is iPubMed [49], which provides fuzzy matches to search results. An alternative interface that may provide support in addressing the second problem is refMED [50], which provides minimal triaging support through relevance ranking. A for-profit private tool, Quertle, appears to use visualizations to mitigate the information overload problem, although very few details are publicly available. Lu [51] provides a detailed survey that includes many other alternative interfaces to MEDLINE, although none are aimed at solving either of the two problems that we are addressing here.In summary, no extant research explores the combination of (1) ontologies to help build domain-specific knowledge and vocabulary when users need to be kept “in the loop” and (2) triaging support using interactive visualizations to help mitigate the information overload problem. The following sections provide details about our approach to addressing these issues. | [
"18280516",
"20157491",
"23803299",
"11971889",
"24513593",
"27267955",
"15561792",
"17584211",
"11720966",
"15471753",
"11209803",
"16221948",
"25665127",
"21245076",
"18950739",
"23703206",
"16321145",
"16845111",
"20624778",
"21245076",
"22034350"
] | [
{
"pmid": "18280516",
"title": "How to perform a literature search.",
"abstract": "PURPOSE\nEvidence based clinical practice seeks to integrate the current best evidence from clinical research with physician clinical expertise and patient individual preferences. We outline a stepwise approach to an effe... |
Royal Society Open Science | 28280588 | PMC5319354 | 10.1098/rsos.160896 | Understanding human queuing behaviour at exits: an empirical study | The choice of the exit to egress from a facility plays a fundamental role in pedestrian modelling and simulation. Yet, empirical evidence for backing up simulations is scarce. In this contribution, we present three new groups of experiments that we conducted in different geometries. We varied parameters such as the width of the doors and the initial location and number of pedestrians, which in turn affected their perception of the environment. We extracted and analysed relevant indicators such as distance to the exits and density levels. The results demonstrate that pedestrians use time-dependent information to optimize their exit choice, and that, in congested states, a load balancing over the exits occurs. We propose a minimal modelling approach that covers those situations, especially the cases where the geometry does not show a symmetrical configuration. Most of the models try to achieve the load balancing by simulating the system and solving optimization problems. We show statistically and by simulation that a linear model based on the distance to the exits and the density levels around the exit can be an efficient dynamical alternative. | 2. Related works: Data gathering for the exit choice of pedestrians is performed in real-world settings [1–4], as well as in virtual environments [5–7]. Participants might behave differently in virtual environments, where the perception is different. However, we observe in both cases that pedestrians are able to dynamically optimize their travel time by choosing adequate exits. In the models, the choice of the exit corresponds to the tactical level of the pedestrian behaviour. Early works consider the shortest path as an adequate solution for uncongested situations [8]. For congested states, the closest exit, if it is congested, may not be the one minimizing the travel time.
Therefore, most of the models are based on the distance to the exit and the travel time (see e.g. [9–13]). Other factors are also used, such as route preference [4], the density level around the exits [1,2], socio-economic factors [7], the type of behaviour (egoistic/cooperative, see [3]), or, in the case of emergency, the presence of smoke, the visibility, the herding tendency, or the faster-is-slower effect [4,14–16]. Several types of modelling have been developed. Some use logit or probit statistical models [4,5,7,11,17]. Others are based on notions from game theory of pedestrian rationality and objective functions [9,10], while iterative methods such as the Metropolis algorithm or neural networks allow user or system optima to be reached by minimizing individual travel time or marginal cost [2,13]. The estimation of travel times in congested situations is a complex problem. Such a procedure is in general realized by simulating an operational pedestrian model. The coupling to simulation makes the use of the exit model a hard task in terms of computational effort. Yet, there exist strong correlations between the travel time and the density level. They are a consequence of the characteristic fundamental relationship between the flow and the density that is well established in the literature of traffic theory (see e.g. [18]). Some recent dynamical models are based, among other parameters, on the density levels in the vicinity of the exits (see [1,2,4]). In such models, the density substitutes for the travel time. Density levels are simple to measure and, in contrast to the travel time, do not require simulation of the system to be estimated. This makes density-based models easier to implement than equilibrium-based models. | [
"11028994",
"27605166"
] | [
{
"pmid": "11028994",
"title": "Simulating dynamical features of escape panic.",
"abstract": "One of the most disastrous forms of collective human behaviour is the kind of crowd stampede induced by panic, often leading to fatalities as people are crushed or trampled. Sometimes this behaviour is triggere... |
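The record above closes by arguing that a linear model in exit distance and local density can stand in for simulation-based travel-time estimation. A minimal sketch of that idea follows; the density weight `beta` is an assumed illustrative parameter, not a value calibrated in the study.

```python
def choose_exit(distances, densities, beta=2.0):
    """Pick the exit with the lowest linear cost: walking distance plus a
    penalty proportional to the pedestrian density observed near that
    exit. Returns the index of the chosen exit."""
    costs = [d + beta * rho for d, rho in zip(distances, densities)]
    return min(range(len(costs)), key=costs.__getitem__)
```

With a congested near exit (say distances [5, 8] m and densities [3, 0] persons per square metre), the model sends the pedestrian to the farther, empty exit, which is the load-balancing tendency the experiments report.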
Scientific Reports | 28230161 | PMC5322330 | 10.1038/srep43167 | Robust High-dimensional Bioinformatics Data Streams Mining by ODR-ioVFDT | Outlier detection in bioinformatics data stream mining has received significant attention from research communities in recent years. The problems of how to distinguish noise from an exception, and whether to discard it or to devise an extra decision path to accommodate it, pose a dilemma. In this paper, we propose a novel algorithm called ODR with incrementally Optimized Very Fast Decision Tree (ODR-ioVFDT) for taking care of outliers during continuous data learning. By using an adaptive interquartile-range-based identification method, a tolerance threshold is set. It is then used to judge whether a data point of exceptional value should be included for training or not. This differs from traditional approaches, in which outlier detection and removal are separate steps in processing the data. The proposed algorithm is tested on datasets from five bioinformatics scenarios, comparing the performance of our model with that of models without ODR. The results show that ODR-ioVFDT has better performance in classification accuracy, kappa statistics, and time consumption. Applying ODR-ioVFDT to bioinformatics data streams, for detecting and quantifying information about life phenomena, states, characters, variables and components of the organism, can help to diagnose and treat disease more effectively. | Related Work: There are many ways to categorize outlier detection approaches. To illustrate by class objective, a one-class classification outlier detection approach was proposed by Tax17. Artificial outliers are generated from the normal instances used to train a one-class classifier. A combination of one-class classification and support vector data description algorithms is then used to obtain a decision boundary between normal and outlier samples.
A drawback of one-class classification, however, is that it cannot handle multi-class datasets. The genetic programming approach to one-class classification proposed by Loveard and Ciesielski18 therefore aims to apply diverse formalisms in its evolutionary processing. Since multi-class datasets make up most types of data, outlier detection approaches for multi-class problems are in wide demand19. The filtering method of Michael R. Smith et al.20 identifies instances that should be misclassified (ISMs), which exhibit a high level of class overlap with similar instances. The approach is based on two heuristics: k-Disagreeing Neighbors (kDN), which measures the local class overlap around an instance, and Disjunct Size (DS), which partitions instances according to the size of the disjunct that covers them. Although this method performs well at outlier reduction, its high time cost is its biggest drawback. Pattern-learning outlier detection models are usually categorized as clustering-based, distance-based, density-based, probabilistic, or information-theoretic. Wu and Wang21 use an information-theoretic model that shares an interesting relationship with the other models. The concept of holoentropy, which takes both entropy and total correlation into consideration, serves as the outlier factor of an object; it is determined solely by the object itself and can be updated efficiently. This method constrains the maximum deviation allowed from the normal model, and an object with a large deviation is reported as an outlier. Zeng, Zhang, and Zou22 used biological interaction networks to uncover relations between genes, proteins, miRNAs and disease phenotypes and to predict potential disease-related miRNAs based on these networks. Ando23 gave a scalable minimization algorithm based on the information bottleneck formalization that exploits the localized form of the cost function over individual clusters.
Bay and Schwabacher24 presented a distance-based model that uses a simple nested-loop algorithm and gives near-linear time performance in practice. Knorr25 uses the k-nearest-neighbor distribution of a data point to determine whether it is an outlier. Xuan et al.26 gave a prediction method, HDMP, based on the weighted k most similar neighbors, to measure the similarity between diseases and phenotypes27. Yousri28 presented an approach that treats clustering as a problem complementary to outlier analysis. A universal set of clusters is proposed that combines the clusters obtained from clustering with a virtual cluster for outliers, optimizing the clustering model to detect outliers purposely. Breunig et al.29 used a density-based model to define an outlier score, the local outlier factor, whose degree depends on how isolated an object is relative to its surrounding neighborhood. Cao et al.30 also presented density-based micro-clusters to summarize clusters of arbitrary shape, which guarantees the precision of the micro-cluster weights under limited memory. Yuen and Mu31 gave a probabilistic method for robust parametric identification and outlier detection in linear regression. Brink et al.32 derived a Gaussian noise model for outlier removal. Probabilistic approaches are closely analogous to clustering algorithms, in that fitted values are used to quantify the outlier scores of data points. Our outlier identification method, the incrementally optimized very fast decision tree with outlier detection and removal (ODR-ioVFDT), is a nonparametric optimized decision tree classifier based on probability density that first performs outlier detection and removal and meanwhile sends the clean data flow to ioVFDT3334.
This algorithm aims to reduce the running time of classification and increase the accuracy of prediction through quick time-series preprocessing of the dataset. | [
"25134094",
"26059461",
"23950912",
"26134276"
] | [
{
"pmid": "25134094",
"title": "Incremental Support Vector Learning for Ordinal Regression.",
"abstract": "Support vector ordinal regression (SVOR) is a popular method to tackle ordinal regression problems. However, until now there were no effective algorithms proposed to address incremental SVOR learni... |
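The ODR step in the record above sets a tolerance threshold with an interquartile-range rule before data reach the incremental learner. The batch sketch below is illustrative only: the paper's threshold is adaptive, whereas the fence factor k = 1.5 here is the fixed textbook default.

```python
def iqr_bounds(values, k=1.5):
    """Return the (low, high) interquartile-range fences for a batch of
    numeric values, using linear interpolation for the quartiles."""
    xs = sorted(values)

    def quantile(q):
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (pos - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    spread = q3 - q1
    return q1 - k * spread, q3 + k * spread

def split_stream(values, k=1.5):
    """Split one batch of the stream into inliers (to be forwarded to the
    incremental decision-tree learner) and flagged outliers."""
    lo, hi = iqr_bounds(values, k)
    inliers = [v for v in values if lo <= v <= hi]
    outliers = [v for v in values if v < lo or v > hi]
    return inliers, outliers
```

The point of doing detection and training in one pass, as the record stresses, is that each incoming batch is filtered and forwarded immediately rather than cleaned in a separate preprocessing stage.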
Frontiers in Psychology | 28293202 | PMC5329031 | 10.3389/fpsyg.2017.00260 | Automating Individualized Formative Feedback in Large Classes Based on a Directed Concept Graph | Student learning outcomes within courses form the basis for course completion and time-to-graduation statistics, which are of great importance in education, particularly higher education. Budget pressures have led to large classes in which student-to-instructor interaction is very limited. Most of the current efforts to improve student progress in large classes, such as "learning analytics" (LA), focus on the aspects of student behavior that are found in the logs of Learning Management Systems (LMS), for example, frequency of signing in, time spent on each page, and grades. These are important, but are distant from providing help to the student making insufficient progress in a course. We describe a computer analytical methodology which includes a dissection of the concepts in the course, expressed as a directed graph, that are applied to test questions, and uses performance on these questions to provide formative feedback to each student in any course format: face-to-face, blended, flipped, or online. Each student receives individualized assistance in a scalable and affordable manner. It works with any class delivery technology, textbook, and learning management system. | Related works/state of the art: An overview of four categories of approaches to analytical activities that are currently being used on data from educational settings is provided by Piety et al. (2014). Their work provides a conceptual framework for considering these different approaches and provides an overview of the state of the art in each of the four categories.
Our work falls primarily into their second category, "Learning Analytics/Educational Data Mining." Their work identifies the areas of overlap between their four different categories, and a noticeable gap is left by the current approaches in the educational context of individual students in postsecondary education. This gap is the focal area for our current work, and what follows is a description of the state of the art in the Learning Analytics category as it relates to our work. Log based approaches: Much attention has been paid to using information from Learning Management System (LMS) logs and other logs of student activity. These logs are used to flag students who are likely to do poorly in a course and/or fail to make satisfactory progress toward graduation. A survey article in the Chronicle of Higher Education (Blumenstyk, 2014) describes this as "personalized education" but considers the term to be "rather fuzzy." This area is also often referred to as "learning analytics" (LA). Many tools have been developed to help colleges and universities spot students who are more likely to fail (Blumenstyk, 2014; Rogers et al., 2014). Companies with offerings in this area include Blackboard1, Ellucian2, Starfish Retention Solutions3, and GradesFirst4. The details of what data these companies use are not clear from their web sites, but their services generally appear to utilize LMS logs, gradebooks, number and time of meetings with tutors and other behavioral information, as well as student grades in previous courses. Dell has partnered with a number of higher education institutions to apply this type of analytics to increase student engagement and retention, such as at Harper College (Dell Inc, 2014). Their model emphasizes pre-enrollment information, such as high school GPA and current employment status.
These efforts often produce insight into the progress of the student body as a whole, and into individual students' progress over the semesters, but do not go deeper into an individual student's learning progress within a course. Approaches based on student decisions: Civitas Learning5 takes a different approach. It emphasizes the need to inform the student regarding the decisions to be made in choosing the school, the major, the career goals, the courses within the school, etc. These are very important decisions, and certainly can be informed by a "predictive analytics platform," but they are outside an individual course. Ellucian6 describes their "student success" software in much the same way, but in less detail. Starfish Retention Solutions7 also describes its software in much the same way and gathers data from a variety of campus data sources, including the student information system and the learning management system. The orientation, as described, is at the macroscopic level, outside of individual courses. An example given is that when a student fails to choose a major on time, an intervention should be scheduled to assist in student retention. GradesFirst8 describes its analytics capabilities in terms of tracking individual student course attendance, scheduling tutoring appointments, as well as other time and behavior management functions. Course concept based approaches: Products and services from another group of companies promote the achievement of student learning outcomes within courses by adapting the presentation of material in the subject matter to the progress and behavior of individual students. This is sometimes referred to as "adaptive education" or "adaptive learning." One company, Acrobatiq9, distinguishes between the usual learning analytics and their own approach (Hampson, 2014) and does it in the domain of an online course specifically developed to provide immediate feedback to students.
This is an interesting and promising method, but its application appears to be limited by the need to develop a new course, rather than being directly applicable to existing courses. Smart Sparrow10 describes its function as "adaptive learning," looking at problem areas encountered by each student and personalizing the instructional content for each individual student. The company describes this in terms of having the instructor develop content using their authoring tool, which then allows presentation of the next "page" to be based on previous student responses. This appears to be a modern instantiation of Programmed Instruction (Radcliffe, 2007). WebAssign11 is a popular tool used in math and the sciences for administering quizzes, homework, practice exercises, and other assessment instruments. Their new Class Insights product appears to provide instructors with the ability to identify questions and topic areas that are challenging to individual students as well as the class collectively (Benien, 2015). It also provides feedback to students to help them identify ways to redirect their efforts if they are struggling to generate correct answers to questions and problems. Aplia12 provides automated grading services for instructors, with feedback intended to help students increase their level of engagement. They create study plans for students based on how they performed on their quizzes, which are created using a scaffolded learning path moving students from lower-order thinking skills to higher-order thinking skills. These plans are not shared with the instructors and are for students only. Textbook publishers have been developing technology solutions to enhance their product offerings. CengageNow13 has pre- and post-assessments for chapters that create a personalized study plan for students, linked to videos and chapters within the book. Other textbook publishers have a similar approach in their technologies.
In contrast, the Cengage MindTap14 platform has an engagement tracker that flags students who are not performing well in the class on quizzes and interaction. This is more focused on providing the instructor with information to intervene. A dozen or so student behaviors and interactions are used to calculate an engagement score for each student in MindTap, including student-generated materials within the content. McGraw Hill also offers adaptive learning technology called LearnSmart15, which focuses on determining students' knowledge and strength areas and adapts content to help students focus their learning efforts on material they do not already know. It provides reports for both instructors and students to keep them updated on a student's progress in a course. This adaptive learning approach, along with methods to select the path the student should take from one course content segment to the next, is used by many implementations of Adaptive Educational Systems. An example is the Mobile Integrated and Individualized Course (MIIC) system (Brinton et al., 2015), a full presentation platform which includes text, videos, quizzes, and its own social learning network. It is based on a back-end web server and client-device-side software installed on the student's tablet computer. The tests of MIIC used a textbook written by the implementers and so avoided permission concerns. Another service, WileyPLUS with ORION Wiley16, is currently available with two psychology textbooks published by the Wiley textbook company. It appears to use logs and quizzes, along with online access to the textbooks, in following student progress and difficulties. It seems to be the LMS for a single Wiley course/textbook.
In this case, no development by the instructor is needed, but one is limited to the textbooks and approach of this publisher. Shortcomings/limitations of current approaches: What the varied approaches in the first two categories (Log and Student Decision Based Approaches) apparently do not do constitutes a significant omission; the approaches do not provide assistance to students with learning the content within each course. While informing students can improve their decisions, the approaches described in Student Decision Based Approaches impact a macro level of student decision making; the project described here relates to student decision making at a micro level. Providing individual face-to-face support within a course is time-consuming, which makes doing so expensive. The increasing number of large courses is financially driven, so any solution to improve student learning must be cost effective. Cost is a major limitation of the approaches described in Course Concept Based Approaches. With those approaches, existing instructional content must be adapted to the system, or new instructional content must be developed, essentially constructing a new textbook for the course. That is not a viable option for most individual instructors, forcing them to rely upon the content developed by someone else, such as a textbook publisher. Often, instructors find some aspects of their textbook unsatisfying, and it may be difficult to make modifications when a textbook is integrated within a publisher's software system. The tool proposed in this paper avoids that problem. | [] | []
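The directed concept graph in the record above can be made concrete with a toy prerequisite map: formative feedback targets the failed concepts whose prerequisites the student has already mastered, since those are the natural places to start reviewing. The data structures and selection rule are our illustration, not the authors' implementation.

```python
def feedback_concepts(prereqs, failed):
    """Given a map from each concept to its prerequisite concepts and the
    set of concepts whose linked test questions a student missed, return
    (sorted) the failed concepts none of whose prerequisites were also
    failed -- the suggested starting points for review."""
    return sorted(
        c for c in failed
        if not any(p in failed for p in prereqs.get(c, ()))
    )
```

For a chain of prerequisites limits, then derivatives, then integration, a student who misses both derivative and integration questions is pointed at derivatives first, because its prerequisite is intact while integration's is not.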
PLoS Computational Biology | 28282375 | PMC5345757 | 10.1371/journal.pcbi.1005248 | A human judgment approach to epidemiological forecasting | Infectious diseases impose considerable burden on society, despite significant advances in technology and medicine over the past century. Advanced warning can be helpful in mitigating and preparing for an impending or ongoing epidemic. Historically, such a capability has lagged for many reasons, including in particular the uncertainty in the current state of the system and in the understanding of the processes that drive epidemic trajectories. Presently we have access to data, models, and computational resources that enable the development of epidemiological forecasting systems. Indeed, several recent challenges hosted by the U.S. government have fostered an open and collaborative environment for the development of these technologies. The primary focus of these challenges has been to develop statistical and computational methods for epidemiological forecasting, but here we consider a serious alternative based on collective human judgment. We created the web-based “Epicast” forecasting system which collects and aggregates epidemic predictions made in real-time by human participants, and with these forecasts we ask two questions: how accurate is human judgment, and how do these forecasts compare to their more computational, data-driven alternatives? To address the former, we assess by a variety of metrics how accurately humans are able to predict influenza and chikungunya trajectories. As for the latter, we show that real-time, combined human predictions of the 2014–2015 and 2015–2016 U.S. flu seasons are often more accurate than the same predictions made by several statistical systems, especially for short-term targets. We conclude that there is valuable predictive power in collective human judgment, and we discuss the benefits and drawbacks of this approach. 
| Related work As exemplified by the fields of meteorology and econometrics, statistical and computational models are frequently used to understand, describe, and forecast the evolution of complex dynamical systems [12, 13]. The situation in epidemiological forecasting is no different; data-driven forecasting frameworks have been developed for a variety of tasks [14–16]. To assess accuracy, forecasts are typically compared to pre-defined baselines and to other, often competing, forecasts. The focus has traditionally been on comparisons between data-driven methods, but there has been less work toward understanding the utility of alternative approaches, including those based on human judgment. In addition to developing and applying one such approach, we also provide an intuitive point of reference by contrasting the performance of data-driven and human judgment methods for epidemiological forecasting. Methods based on collective judgment take advantage of the interesting observation that group judgment is generally superior to individual judgment—a phenomenon commonly known as “The Wisdom of Crowds”. This was illustrated over a century ago when Francis Galton showed that a group of common people was collectively able to estimate the weight of an ox to within one percent of its actual weight [17]. Since then, collective judgment has been used to predict outcomes in a number of diverse settings, including for example finance, economics, politics, sports, and meteorology [18–20]. A more specific type of collective judgment arises when the participants (whether human or otherwise) are experts—a committee of experts. This approach is common in a variety of settings, for example in artificial intelligence and machine learning in the form of committee machines [21] and ensemble classifiers [22]. More relevant examples of incorporating human judgment in influenza research include prediction markets [23, 24] and other crowd-sourcing methods like Flu Near You [25, 26]. | [
"16731270",
"9892452",
"8604170",
"10997211",
"27449080",
"24714027",
"24373466",
"22629476",
"17173231",
"26270299",
"26317693",
"25401381",
"10752360"
] | [
{
"pmid": "16731270",
"title": "Global and regional burden of disease and risk factors, 2001: systematic analysis of population health data.",
"abstract": "BACKGROUND\nOur aim was to calculate the global burden of disease and risk factors for 2001, to examine regional trends from 1990 to 2001, and to pr... |
JMIR mHealth and uHealth | 28246070 | PMC5350460 | 10.2196/mhealth.6395 | Accuracy and Adoption of Wearable Technology Used by Active Citizens: A Marathon Event Field Study | Background Today, runners use wearable technology such as global positioning system (GPS)–enabled sport watches to track and optimize their training activities, for example, when participating in a road race event. For this purpose, an increasing number of low-priced, consumer-oriented wearable devices are available. However, the variety of such devices is overwhelming. It is unclear which devices are used by active, healthy citizens and whether they can provide accurate tracking results in a diverse study population. No published literature has yet assessed the dissemination of wearable technology in such a cohort and related influencing factors. Objective The aim of this study was 2-fold: (1) to determine the adoption of wearable technology by runners, especially “smart” devices and (2) to investigate the accuracy of tracked distances as recorded by such devices. Methods A pre-race survey was applied to assess which wearable technology was predominantly used by runners of different age, sex, and fitness level. A post-race survey was conducted to determine the accuracy of the devices that tracked the running course. Logistic regression analysis was used to investigate whether age, sex, fitness level, or track distance were influencing factors. Recorded distances of different device categories were tested with a 2-sample t test against each other. Results A total of 898 pre-race and 262 post-race surveys were completed. Most of the participants (approximately 75%) used wearable technology for training optimization and distance recording. Females (P=.02) and runners in higher age groups (50-59 years: P=.03; 60-69 years: P<.001; 70-79 years: P=.004) were less likely to use wearables. 
The mean track distances recorded by mobile phones with a combined app (mean absolute error, MAE=0.35 km) and by GPS-enabled sport watches (MAE=0.12 km) differed significantly (P=.002) for the half-marathon event. Conclusions A great variety of vendors (n=36) and devices (n=156) were identified. Under real-world conditions, GPS-enabled devices, especially sport watches and mobile phones, were found to be accurate in terms of recorded course distances. | Related Work According to Düking et al [4], wearables “are lightweight, sensor-based devices that are worn close to or on the surface of the skin, where they detect, analyze, and transmit information concerning several internal and external variables to an external device (...),” (p. 2). In particular, GPS-enabled devices can be considered reliable tracking devices, which holds true even for inexpensive systems. As a study conducted by Pugliese et al suggests, the increasing use of wearables among consumers has implications for public health. Monitoring an individual’s personal activity level, for example, steps taken in one day, can result in an increased overall physical activity [5]. A moderate level of physical activity can prevent widespread diseases such as diabetes or hypertension [6-8] and thus result in decreasing costs for public health care systems in the long term [9,10]. Yet, in the context of the quantified-self movement, a high accuracy of these consumer-centric devices is desirable. In theory, the measurements obtained by different vendors and device categories (ie, GPS-enabled system vs accelerometer-based) should be comparable with each other [11]. Noah et al studied the reliability and validity of 2 Fitbit (Fitbit, San Francisco, CA) activity trackers with 23 participants. There seems to be evidence that these particular devices produce results “valid for activity monitoring” [12]. A study by Ferguson et al evaluated several consumer-level activity monitors [13]. 
The findings suggested the validity of fitness trackers with respect to measurement of steps; however, their study population was limited to 21 young adults. At present, and to the best of our knowledge, no study exists that examines the adoption of consumer-level devices in a broad and diverse population. This is supported by the meta-analysis by Evenson et al: “Exploring the measurement properties of the trackers in a wide variety of populations would also be important in both laboratory and field settings.” We conclude that “more field-based studies are needed” (p. 20) [3]. In particular, this should include all age groups, different fitness levels, and a great variety of related devices. | [
"25592201",
"26858649",
"2144946",
"3555525",
"10593542",
"12900704",
"24777201",
"24007317",
"22969321",
"17720623",
"10993413",
"27068022",
"25789630",
"24268570",
"26950687",
"24497157",
"25973205",
"9152686",
"19887012",
"23812857",
"14719979"
] | [
{
"pmid": "25592201",
"title": "Waste the waist: a pilot randomised controlled trial of a primary care based intervention to support lifestyle change in people with high cardiovascular risk.",
"abstract": "BACKGROUND\nIn the UK, thousands of people with high cardiovascular risk are being identified by a... |
International Journal of Biomedical Imaging | 28367213 | PMC5358478 | 10.1155/2017/9749108 | Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM | The segmentation, detection, and extraction of the infected tumor area from magnetic resonance (MR) images are a primary concern but a tedious and time-consuming task performed by radiologists or clinical experts, and its accuracy depends solely on their experience. The use of computer-aided technology therefore becomes necessary to overcome these limitations. In this study, to improve the performance and reduce the complexity involved in the medical image segmentation process, we have investigated Berkeley wavelet transformation (BWT) based brain tumor segmentation. Furthermore, to improve the accuracy and quality rate of the support vector machine (SVM) based classifier, relevant features are extracted from each segmented tissue. The experimental results of the proposed technique have been evaluated and validated for performance and quality analysis on magnetic resonance brain images, based on accuracy, sensitivity, specificity, and dice similarity index coefficient. The experimental results achieved 96.51% accuracy, 94.2% specificity, and 97.72% sensitivity, demonstrating the effectiveness of the proposed technique for identifying normal and abnormal tissues from brain MR images. The experimental results also yielded an average dice similarity index coefficient of 0.82, which indicates good overlap between the automatically extracted tumor region and the tumor region manually extracted by radiologists. The simulation results prove the significance in terms of quality parameters and accuracy in comparison to state-of-the-art techniques. | 2. Related Works Medical image segmentation for the detection of brain tumors from magnetic resonance (MR) images or from other medical imaging modalities is a very important process for deciding on the right therapy at the right time. 
Many techniques have been proposed for the classification of brain tumors in MR images, most notably fuzzy c-means (FCM), support vector machines (SVM), artificial neural networks (ANN), knowledge-based techniques, and the expectation-maximization (EM) algorithm, which are among the popular techniques used for region-based segmentation and for extracting important information from medical imaging modalities. An overview and findings of some of the recent and prominent studies are presented here. Damodharan and Raghavan [10] have presented a neural network based technique for brain tumor detection and classification. In this method, the quality rate is produced separately for segmentation of WM, GM, CSF, and the tumor region, and an accuracy of 83% is claimed using a neural network based classifier. Alfonse and Salem [11] have presented a technique for automatic classification of brain tumors from MR images using an SVM-based classifier. To improve the accuracy of the classifier, features are extracted using the fast Fourier transform (FFT) and reduction of features is performed using the Minimal-Redundancy-Maximal-Relevance (MRMR) technique. This technique has obtained an accuracy of 98.9%. The extraction of the brain tumor requires the separation of the brain MR images into two regions [12]. One region contains the tumor cells of the brain and the second contains the normal brain cells [13]. Zanaty [14] proposed a methodology for brain tumor segmentation based on a hybrid approach, combining FCM, seed region growing, and the Jaccard similarity coefficient algorithm to measure segmented gray matter and white matter tissues from MR images. This method obtained an average segmentation score S of 90% at noise levels of 3% and 9%, respectively. Kong et al. [7] investigated automatic segmentation of brain tissues from MR images using a discriminative clustering and feature selection approach. Demirhan et al. 
[5] presented a new tissue segmentation algorithm using wavelets and neural networks, which claims effective segmentation of brain MR images into the tumor, WM, GM, edema, and CSF. Torheim et al. [15], Guo et al. [1], and Yao et al. [16] presented techniques which employed texture features, wavelet transforms, and SVM algorithms for effective classification of dynamic contrast-enhanced MR images, to handle the nonlinearity of real data and to address different image protocols effectively. Torheim et al. [15] also claim that their proposed technique gives better predictions and improved clinical factors, tumor volume, and tumor stage in comparison with first-order statistical features. Kumar and Vijayakumar [17] introduced brain tumor segmentation and classification based on principal component analysis (PCA) and a radial basis function (RBF) kernel based SVM, and claim a similarity index of 96.20%, an overlap fraction of 95%, and an extra fraction of 0.025%. This method identifies the tumor type with a classification accuracy of 94% and a total error rate of 7.5%. Sharma et al. [18] have presented a highly efficient technique which claims an accuracy of 100% in the classification of brain tumors from MR images. This method utilizes texture-primitive features with an artificial neural network (ANN) as the segmentation and classification tool. Cui et al. [19] applied localized fuzzy clustering with spatial information to formulate an objective function for medical image segmentation and bias field estimation for brain MR images. In this method, the authors use the Jaccard similarity index as a measurement of segmentation accuracy and claim 83% to 95% accuracy in segmenting white matter, gray matter, and cerebrospinal fluid. Wang et al. [20] have presented a medical image segmentation technique based on an active contour model to deal with the problem of intensity inhomogeneities in image segmentation. 
Chaddad [21] has proposed a technique for automatic feature extraction for brain tumor detection based on a Gaussian mixture model (GMM) using MR images. In this method, the performance of the GMM feature extraction is enhanced using principal component analysis (PCA) and wavelet based features. An accuracy of 97.05% for the T1-weighted and T2-weighted and 94.11% for FLAIR-weighted MR images is obtained. Deepa and Arunadevi [22] have proposed an extreme learning machine technique for the classification of brain tumors from 3D MR images. This method obtained an accuracy of 93.2%, a sensitivity of 91.6%, and a specificity of 97.8%. Sachdeva et al. [23] have presented multiclass brain tumor classification, segmentation, and feature extraction performed using a dataset of 428 MR images. In this method, the authors used ANN and then PCA-ANN and observed an increase in classification accuracy from 77% to 91%. The above literature survey has revealed that some techniques address segmentation only, some feature extraction only, and some classification only. No published study combines feature extraction and reduction of feature vectors for effective segmentation of WM, GM, CSF, and the infected tumor region with an analysis of the combined approach. Moreover, only a few features are extracted in these studies, and therefore very low accuracy in tumor detection has been obtained. Also, none of the above studies reports the calculation of overlap, that is, the dice similarity index, which is one of the important parameters for judging the accuracy of any brain tumor segmentation algorithm. In this study, we perform a combination of the biologically inspired Berkeley wavelet transformation (BWT) and SVM as a classifier tool to improve diagnostic accuracy. 
The aim of this study is to extract information from the segmented tumor region and classify healthy and infected tumor tissues for a large database of medical images. Our results lead us to conclude that the proposed method is suitable for integration into clinical decision support systems for primary screening and diagnosis by radiologists or clinical experts. | [
"23790354",
"25265636",
"24240724",
"24802069",
"19893702",
"23645344",
"18194102"
] | [
{
"pmid": "23790354",
"title": "State of the art survey on MRI brain tumor segmentation.",
"abstract": "Brain tumor segmentation consists of separating the different tumor tissues (solid or active tumor, edema, and necrosis) from normal brain tissues: gray matter (GM), white matter (WM), and cerebrospin... |
Frontiers in Neurorobotics | 28381998 | PMC5360715 | 10.3389/fnbot.2017.00013 | Real-Time Biologically Inspired Action Recognition from Key Poses Using a Neuromorphic Architecture | Intelligent agents, such as robots, have to serve a multitude of autonomous functions. Examples are, e.g., collision avoidance, navigation and route planning, active sensing of their environment, or the interaction and non-verbal communication with people in the extended reach space. Here, we focus on the recognition of the action of a human agent based on a biologically inspired visual architecture of analyzing articulated movements. The proposed processing architecture builds upon coarsely segregated streams of sensory processing along different pathways which separately process form and motion information (Layher et al., 2014). Action recognition is performed in an event-based scheme by identifying representations of characteristic pose configurations (key poses) in an image sequence. In line with perceptual studies, key poses are selected in an unsupervised manner utilizing a feature-driven criterion which combines extrema in the motion energy with the horizontal and the vertical extendedness of a body shape. Per class representations of key pose frames are learned using a deep convolutional neural network consisting of 15 convolutional layers. The network is trained using the energy-efficient deep neuromorphic networks (Eedn) framework (Esser et al., 2016), which realizes the mapping of the trained synaptic weights onto the IBM Neurosynaptic System platform (Merolla et al., 2014). After the mapping, the trained network achieves real-time capabilities for processing input streams and classifies input images at about 1,000 frames per second while the computational stages only consume about 70 mW of energy (without spike transduction). Particularly regarding mobile robotic systems, a low energy profile might be crucial in a variety of application scenarios. 
Cross-validation results are reported for two different datasets and compared to state-of-the-art action recognition approaches. The results demonstrate that (I) the presented approach is on par with other key pose based methods described in the literature, which select key pose frames by optimizing classification accuracy, (II) compared to training on the full set of frames, representations trained on key pose frames result in a higher confidence in class assignments, and (III) key pose representations show promising generalization capabilities in a cross-dataset evaluation. | 2. Related work The proposed key pose based action recognition approach is motivated and inspired by recent evidence about the learning mechanisms and representations involved in the processing of articulated motion sequences, as well as hardware and software developments from various fields of visual sciences. For instance, empirical studies indicate that special kinds of events within a motion sequence facilitate the recognition of an action. Additional evidence from psychophysics as well as neurophysiology suggests that both form and motion information contribute to the representation of an action. Modeling efforts propose functional mechanisms for the processing of biological motion and show how such processing principles can be transferred to technical domains. Deep convolutional networks make it possible to learn hierarchical object representations, which show an impressive recognition performance and enable the implementation of fast and energy efficient classification architectures, particularly in combination with neuromorphic hardware platforms. In the following sections, we will briefly introduce related work and results from different scientific fields, all contributing to a better understanding of action representation and the development of efficient action recognition approaches. 2.1. 
Articulated and biological motion Starting with the pioneering work of Johansson (1973), perceptual sciences gained more and more insights about how biological motion might be represented in the human brain and what the characteristic properties of an articulated motion sequence are. In psychophysical experiments, humans show a remarkable performance in recognizing biological motions, even when the presented motion is reduced to a set of points moving coherently with body joints (point light stimuli; PLS). In a detection task, subjects were capable of recognizing a walking motion within about 200 ms (Johansson, 1976). These stimuli, however, are not free of – at least configurational – form information and the discussion about the contributions of form and motion in biological motion representation is still ongoing (Garcia and Grossman, 2008). Some studies indicate a stronger importance of motion cues (Mather and Murdoch, 1994), others emphasize the role of configurational form information (Lange and Lappe, 2006). Even less is known about the specific nature and characteristic of the visual cues which facilitate the recognition of a biological motion sequence. In Casile and Giese (2005), a statistical analysis as well as the results of psychophysical experiments indicate that local opponent motion in horizontal direction is one of the critical features for the recognition of PLS. Thurman and Grossman (2008) conclude that there are specific moments in an action performance which are “more perceptually salient” compared to others. Their results emphasize the importance of dynamic cues in moments when the distance between opposing limbs is the lowest (corresponding to local opponent motion; maxima in the motion energy). On the contrary, more recent findings by Thirkettle et al. 
(2009) indicate that moments of a large horizontal body extension (co-occurring with minima in the motion energy) facilitate the recognition of a biological motion in a PLS. In neurophysiology, functional imaging studies (Grossman et al., 2000), as well as single-cell recordings (Oram and Perrett, 1994) indicate the existence of specialized mechanisms for the processing of biological motion in the superior temporal sulcus (STS). STS has been suggested to be a point of convergence of the separate dorsal “where” and the ventral “what” pathways (Boussaoud et al., 1990; Felleman and Van Essen, 1991), containing cells which integrate form and motion information of biological objects (Oram and Perrett, 1996) and selectively respond to, e.g., object manipulation, face, limb and whole body motion (Puce and Perrett, 2003). Besides the evidence that both form and motion information contribute to the registration of biological motion, action specific cells in STS are reported to respond to static images of articulated bodies which in parallel evoke activities in the medio temporal (MT) and medial superior temporal (MST) areas of the dorsal stream (implied motion), although there is no motion present in the input signal (Kourtzi and Kanwisher, 2000; Jellema and Perrett, 2003). In line with the psychophysical studies, these results indicate that poses with a specific feature characteristic (here, articulation) facilitate the recognition of a human motion sequence. Complementary modeling efforts in the field of computational neuroscience suggest potential mechanisms which might explain the underlying neural processing and learning principles. In Giese and Poggio (2003) a model for the recognition of biological movements is proposed, which processes visual input along two separate form and motion pathways and temporally integrates the responses of prototypical motion and form pattern (snapshot) cells via asymmetric connections in both pathways. Layher et al. 
(2014) extended this model by incorporating an interaction between the two pathways, realizing the automatic and unsupervised learning of key poses by modulating the learning of the form prototypes using a motion energy based signal derived in the motion pathway. In addition, a feedback mechanism is proposed in this extended model architecture which (I) realizes sequence selectivity by temporal association learning and (II) gives a potential explanation for the activities in MT/MST observed for static images of articulated poses in neurophysiological studies. 2.2. Action recognition in image sequences In computer vision, the term vision-based action recognition summarizes approaches to assign an action label to each frame or a collection of frames of an image sequence. Over the last decades, numerous vision-based action recognition approaches have been developed and different taxonomies have been proposed to classify them by different aspects of their processing principles. In Poppe (2010), action recognition methods are separated by the nature of the image representation they rely on, as well as the kind of the employed classification scheme. Image representations are divided into global representations, which use a holistic representation of the body in the region of interest (ROI; most often the bounding box around a body silhouette in the image space), and local representations, which describe image and motion characteristics in a spatial or spatio-temporal local neighborhood. Prominent examples for the use of whole body representations are motion history images (MHI) (Bobick and Davis, 2001), or the application of histograms of oriented gradients (HOG) (Dalal and Triggs, 2005; Thurau and Hlavác, 2008). Local representations are, e.g., employed in Dollar et al. (2005), where motion and form based descriptors are derived in the local neighborhood (cuboids) of spatio-temporal interest points. 
Classification approaches are separated into direct classification, which disregard temporal relationships (e.g., using histograms of prototype descriptors, Dollar et al., 2005) and temporal state-space models, which explicitly model temporal transitions between observations (e.g., by employing Hidden Markov models (HMMs) Yamato et al., 1992, or dynamic time warping (DTW) Chaaraoui et al., 2013). For further taxonomies and an exhaustive overview of computer vision action recognition approaches we refer to the excellent reviews in Gavrila (1999); Aggarwal and Ryoo (2011); Weinland et al. (2011). The proposed approach uses motion and form based feature properties to extract key pose frames. The identified key pose frames are used to learn class specific key pose representations using a deep convolutional neural network (DCNN). Classification is either performed framewise or by temporal integration through majority voting. Thus, following the taxonomy of Poppe (2010), the approach can be classified as using global representations together with a direct classification scheme. Key pose frames are considered as temporal events within an action sequence. This kind of action representation and classification is inherently invariant against variations in (recording and execution) speed. We do not argue that modeling temporal relationships between such events is not necessary in general. The very simple temporal integration scheme was chosen to focus on an analysis of the importance of key poses in the context of action representation and recognition. Because of the relevance to the presented approach, we will briefly compare specifically key pose based action recognition approaches in the following. 2.3. Key pose based action recognition Key pose based action recognition approaches differ in their understanding of the concept of key poses. 
Some take a phenomenological perspective and define key poses as events which possess a specific feature characteristic giving rise to their peculiarity. There is no a priori knowledge available about whether, when and how often such feature-driven events occur within an observed action sequence, neither during the establishment of the key pose representations during training, nor while trying to recognize an action sequence. Others regard key pose selection as the result of a statistical analysis, favoring poses which are easy to separate among different classes or maximally capture the characteristics of an action sequence. The majority of approaches rely on such statistical properties and either consider the intra- or the inter-class distribution of image-based pose descriptors to identify key poses in action sequences. Intra-class based approaches Approaches which evaluate intra-class properties of the feature distributions regard key poses as the most representative poses of an action and measures of centrality are exploited on agglomerations in pose feature spaces to identify the poses which are most common to an action sequence. In Chaaraoui et al. (2013), a contour based descriptor following (Dedeoğlu et al., 2006) is used. Key poses are selected by repetitive k-means clustering of the pose descriptors and evaluating the resulting clusters using a compactness metric. A sequence of nearest neighbor key poses is derived for each test sequence and dynamic time warping (DTW) is applied to account for different temporal scales. The class of the closest matching temporal sequence of key poses from the training set is used as the final recognition result. Based on histograms of oriented gradients (HOG) and histograms of weighted optical flow (HOWOF) descriptors, Cao et al. (2012) adapt a local linear embedding (LLE) strategy to establish a manifold model which reduces descriptor dimensionality, while preserving the local relationship between the descriptors. 
Key poses are identified by interpreting the data points (i.e., descriptors/poses) on the manifold as an adjacency graph and applying a PageRank (Brin and Page, 1998) based procedure to determine the vertices of the graph with the highest centrality, or relevance. In all, key pose selection based on an intra-class analysis of the feature distribution has the advantage of capturing the characteristics of one action in isolation, independent of other classes in a dataset. Thus, key poses are not dataset specific and – in principle – can also be shared among different actions. However, most intra-class distribution based approaches build upon measures of centrality (i.e., as a part of cluster algorithms) and thus key poses are dominated by frequent poses of an action. Because they are part of transitions between others, frequent poses tend to occur in different classes and thus do not help in separating them. Infrequent poses, on the other hand, are not captured very well, but are intuitively more likely to be discriminative. The authors are not aware of an intra-class distribution based method which tries to identify key poses based on their infrequency or abnormality (e.g., by evaluating cluster sizes and distances). Inter-class based approaches Approaches based on inter-class distribution, on the other hand, consider highly discriminative poses as key poses to separate different action appearances. Discriminability is here defined as resulting in either the best classification performance or in maximum dissimilarities between the extracted pose descriptors of different classes. To maximize the classification performance, Weinland and Boyer (2008) propose a method of identifying a vocabulary of highly discriminative pose exemplars. 
In each iteration of the forward selection of key poses, one exemplar at a time is added to the set of key poses by independently evaluating the classification performance of the currently selected set of poses in union with one of the remaining exemplars in the training set. The pose exemplar which increases classification performance the most is then added to the final key pose set. The procedure is repeated until a predefined number of key poses is reached. Classification is performed based on a distance metric obtained by either silhouette-to-silhouette or silhouette-to-edge matching. Liu et al. (2013) combine the output of the early stages of an HMAX-inspired processing architecture (Riesenhuber and Poggio, 1999) with a center-surround feature map, obtained by subtracting several layers of a Gaussian pyramid, and a wavelet Laplacian pyramid feature map into framewise pose descriptors. The linearized feature descriptors are projected into a low-dimensional subspace derived by principal component analysis (PCA). Key poses are selected by employing an adaptive boosting technique (AdaBoost; Freund and Schapire, 1995) to select the most discriminative feature descriptors (i.e., poses). A test action sequence is matched to the thus reduced number of exemplars per action by applying an adapted local naive Bayes nearest neighbor classification scheme (LNBNN; McCann and Lowe, 2012). Each descriptor of a test sequence is assigned to its k nearest neighbors and a classwise voting is updated by the distance of a descriptor to the respective neighbor, weighted by the relative number of classes per descriptor. In Baysal et al. (2010), noise-reduced edges of an image are chained into a contour segmented network (CSN) using orientation and closeness properties and transformed into a 2-adjacent segment descriptor (k-AS; Ferrari et al., 2008).
The most characteristic descriptors are determined by identifying k candidate key poses per class using the k-medoids clustering algorithm and selecting the most distinctive ones among the set of all classes using a similarity measure on the 2-AS descriptors. Classification is performed by assigning each frame to the class of the key pose with the highest similarity and sequence-wide majority voting. Cheema et al. (2011) follow the same key pose extraction scheme, but instead of selecting only the most distinctive ones, key pose candidates are weighted by the number of false and correct assignments to an action class. A weighted voting scheme is then used to classify a given test sequence. Thus, although key poses with large weights have an increased influence on the final class assignment, all key poses take part in the classification process. Zhao and Elgammal (2008) use an information theoretic approach to select key frames within action sequences. They propose to describe the local neighborhood of spatiotemporal interest points using an intensity gradient based descriptor (Dollar et al., 2005). The extracted descriptors are then clustered, resulting in a codebook of prototypical descriptors (visual words). The pose prototypes are used to estimate the discriminatory power of a frame by calculating a measure based on the conditional entropy given the visual words detected in a frame. The frames with the highest discriminatory power are marked as key frames. 
Chi-square distances of histogram-based spatiotemporal representations are used to compare key frames from the test and training datasets, and majority voting is used to assign an action class to a test sequence.
For a given pose descriptor and/or classification architecture, inter-class based key pose selection methods in principle minimize the recognition error, either for the recognition of the key poses (e.g., Baysal et al., 2010; Liu et al., 2013) or for the action classification (e.g., Weinland and Boyer, 2008). On the other hand, key poses obtained by inter-class analysis inherently do not cover the most characteristic poses of an action, but the ones which are the most distinctive within a specific set of actions. Applying this class of algorithms to two different sets of actions sharing one common action might result in a different selection of key poses for the same action. Thus, once extracted, key pose representations do not necessarily generalize over different datasets/domains and, in addition, sharing of key poses between different classes is not intended.
Feature-driven approaches
Feature-driven key pose selection methods do not rely on the distribution of features or descriptors at all and define a key pose as a pose which co-occurs with a specific characteristic of an image or feature. Commonly employed features, such as extrema in a motion-energy-based signal, are often correlated with pose properties such as the degree of articulation or the extendedness. Compared to statistical methods, this is a more pose-centered perspective, since parameters of the pose itself are used to select a key pose instead of parameters describing the relationship or differences between poses.
Lv and Nevatia (2007) select key poses in sequences of 3D joint positions by automatically locating extrema of the motion energy within temporal windows.
Motion energy in their approach is determined by calculating the sum over the L2 norm of the motion vectors of the joints between two temporally adjacent timesteps. 3D motion capturing data is used to render 2D projections of the key poses from different view angles. Single frames of an action sequence are matched to the silhouettes of the resulting 2D key pose representations using an extension of the Pyramid Match Kernel algorithm (PMK; Grauman and Darrell, 2005). Transitions between key poses are modeled using action graph models. Given an action sequence, the most likely action model is determined using the Viterbi Algorithm. In Gong et al. (2010), a key pose selection mechanism for 3D human action representations is proposed. Per action sequence, feature vectors (three angles for twelve joints) are projected onto the subspace spanned by the first three eigenvectors obtained by PCA. Several instances of an action are synchronized to derive the mean performance (in terms of execution) of an action. Motion energy is then defined by calculating the Euclidean distance between two adjacent poses in the mean performance. The local extrema of the motion energy are used to select the key poses, which after their reconstruction in the original space are used as the vocabulary in a bag of words approach. During recognition, each pose within a sequence is assigned to the key pose with the minimum Euclidean distance resulting in a histogram of key pose occurrences per sequence. These histograms serve as input to a support vector machine (SVM) classifier. In Ogale et al. (2007), candidate key poses are extracted by localizing the extrema of the mean motion magnitude in the estimated optical flow. Redundant poses are sorted out pairwise by considering the ratio between the intersection and the union of two registered silhouettes. The final set of unique key poses is used to construct a probabilistic context-free grammar (PCFG). 
This method uses an inter-class metric to reject preselected key pose candidates and thus is not purely feature-driven.
Feature-driven key pose selection methods are independent of the number of different actions within a dataset. Thus, retraining is not necessary if, e.g., a new action is added to a dataset, and the sharing of key poses among different actions is in principle possible. Naturally, there is no guarantee that the selected poses maximize the separability of pose or action classes. | [
"2358548",
"15929657",
"27651489",
"1822724",
"18000323",
"18346774",
"12612631",
"17934233",
"11054914",
"26157000",
"9377276",
"14527537",
"22392705",
"1005623",
"26147887",
"10769305",
"16540566",
"23757577",
"25104385",
"23962364",
"8836213",
"12689371",
"10526343",
... | [
{
"pmid": "2358548",
"title": "Pathways for motion analysis: cortical connections of the medial superior temporal and fundus of the superior temporal visual areas in the macaque.",
"abstract": "To identify the cortical connections of the medial superior temporal (MST) and fundus of the superior temporal... |
Transactions of the Association for Computational Linguistics | 28344978 | PMC5361062 | null | Large-scale Analysis of Counseling Conversations: An Application of Natural Language Processing to Mental Health | Mental illness is one of the most pressing public health issues of our time. While counseling and psychotherapy can be effective treatments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with labeled outcomes of the conversations. In this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. We develop a set of novel computational discourse analysis methods to measure how various linguistic aspects of conversations are correlated with conversation outcomes. Applying techniques such as sequence-based conversation models, language model comparisons, message clustering, and psycholinguistics-inspired word frequency analyses, we discover actionable conversation strategies that are associated with better conversation outcomes. | 2 Related Work
Our work relates to two lines of research:
Therapeutic Discourse Analysis & Psycholinguistics
The field of conversation analysis was born in the 1960s out of a suicide prevention center (Sacks and Jefferson, 1995; Van Dijk, 1997). Since then, conversation analysis has been applied to various clinical settings including psychotherapy (Labov and Fanshel, 1977). Work in psycholinguistics has demonstrated that the words people use can reveal important aspects of their social and psychological worlds (Pennebaker et al., 2003). Previous work also found that there are linguistic cues associated with depression (Ramirez-Esparza et al., 2008; Campbell and Pennebaker, 2003) as well as with suicide (Pestian et al., 2012).
These findings are consistent with Beck’s cognitive model of depression (1967; cognitive symptoms of depression precede the affective and mood symptoms) and with Pyszczynski and Greenberg’s self-focus model of depression (1987; depressed persons engage in higher levels of self-focus than non-depressed persons).
In this work, we propose an operationalized psycholinguistic model of perspective change and further provide empirical evidence for these theoretical models of depression.
Large-scale Computational Linguistics Applied to Conversations
Large-scale studies have revealed subtle dynamics in conversations such as coordination or style matching effects (Niederhoffer and Pennebaker, 2002; Danescu-Niculescu-Mizil, 2012) as well as expressions of social power and status (Bramsen et al., 2011; Danescu-Niculescu-Mizil et al., 2012). Other studies have connected writing to measures of success in the context of requests (Althoff et al., 2014), user retention (Althoff and Leskovec, 2015), novels (Ashok et al., 2013), and scientific abstracts (Guerini et al., 2012). Prior work has modeled dialogue acts in conversational speech based on linguistic cues and discourse coherence (Stolcke et al., 2000). Unsupervised machine learning models have also been used to model conversations and segment them into speech acts, topical clusters, or stages. Most approaches employ Hidden Markov Model-like models (Barzilay and Lee, 2004; Ritter et al., 2010; Paul, 2012; Yang et al., 2014) which are also used in this work to model progression through conversation stages.
Very recently, technology-mediated counseling has allowed the collection of large datasets on counseling. Howes et al. (2014) find that symptom severity can be predicted from transcript data with comparable accuracy to face-to-face data but suggest that insights into style and dialogue structure are needed to predict measures of patient progress.
Counseling datasets have also been used to predict the conversation outcome (Huang, 2015) but without modeling the within-conversation dynamics that are studied in this work. Other work has explored how novel interfaces based on topic models can support counselors during conversations (Dinakar et al., 2014a; 2014b; 2015; Chen, 2014).
Our work joins these two lines of research by developing computational discourse analysis methods applicable to large datasets that are grounded in therapeutic discourse analysis and psycholinguistics. | [] | []
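The Hidden-Markov-style stage models cited above (e.g., Barzilay and Lee, 2004; Ritter et al., 2010) recover a left-to-right progression of latent conversation stages from per-message scores. The decoding step can be sketched with a plain Viterbi pass; the toy transition and emission numbers below are illustrative, not the paper's actual model:

```python
import numpy as np

def viterbi(log_start, log_trans, log_emit):
    """Most likely hidden stage sequence given per-message log-emission
    scores. log_emit has shape (T, K): T messages, K stages."""
    T, K = log_emit.shape
    score = log_start + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans      # (K, K) candidate scores
        back[t] = cand.argmax(axis=0)          # best predecessor per stage
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):              # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# left-to-right stage structure: a conversation stays in a stage or advances
log_start = np.log([1.0, 1e-9, 1e-9])
log_trans = np.log(np.array([[0.7, 0.3, 1e-9],
                             [1e-9, 0.7, 0.3],
                             [1e-9, 1e-9, 1.0]]))
# toy per-message scores favouring stages 0, 0, 1, 1, 2, 2
log_emit = np.log(np.array([
    [0.8, 0.1, 0.1], [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1], [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8], [0.1, 0.1, 0.8]]))
stages = viterbi(log_start, log_trans, log_emit)  # -> [0, 0, 1, 1, 2, 2]
```

In the unsupervised setting described above, the transition and emission parameters would themselves be learned (e.g., by EM) rather than fixed by hand.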
Frontiers in Neuroinformatics | 28381997 | PMC5361107 | 10.3389/fninf.2017.00021 | Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System | OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. 
All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI). | 1.3. Related work
1.3.1. Generic workflow engines
Like OpenMOLE, other initiatives made the choice not to target a specific community. Kepler (Altintas et al., 2004) was one of the first general-purpose scientific workflow systems, recognizing the need for transparent and simplified access to high performance computing platforms more than a decade ago. Pegasus (Deelman et al., 2005) is a system that initially gained popularity for mapping complex workflows to resources in distributed environments without requiring input from the user.
PSOM (Pipeline System for Octave and Matlab) (Bellec et al., 2012) is a workflow system centered around Matlab/Octave. Although this is certainly a good asset for this userbase, it revolves around Matlab, a proprietary system. By definition, this hinders sharing workflows with the wider community and reduces the reproducibility of experiments.
1.3.2. Community-tailored workflow engines
On the other hand, some communities have seen the emergence of tailored workflow managers. For example, the bioinformatics community has developed Taverna (Oinn et al., 2004) and Galaxy (Goecks et al., 2010) for the needs of their community.
In the specific case of the neuroimaging field, two main solutions emerge: NiPype (Gorgolewski et al., 2011) and LONI (Rex et al., 2003). NiPype is organized around three layers.
The most promising one is the top-level common interface that provides a Python abstraction of the main neuroimaging toolkits (FSL, SPM, …). It is extremely useful to compare equivalent methods across multiple packages. NiPype also offers pipelining possibilities and a basic workload delegation layer only targeting the cluster environments SGE and PBS. Workflows are delegated to these environments as a whole, without the possibility of exploiting finer-grained parallelism among the different tasks.
The LONI Pipeline provides a graphical interface for choosing processing blocks from a predefined library to form the pipeline. It supports workload delegation to clusters preconfigured to understand the DRMAA API (Tröger et al., 2012).
However, the LONI Pipeline displays limitations at three levels. First, the format used to define new nodes is XML (eXtensible Markup Language), and it assumes the packaged tools offer a well-formed command line and set of input parameters. On this aspect, the Python interfaces forming NiPype's top layer are far superior to the LONI Pipeline's approach. Second, to the best of our knowledge, workflows cannot be scripted.
The third and main drawback of the LONI Pipeline is, in our opinion, its restrictive licensing, which prevents external users from easily modifying and redistributing the modified software. Previous works in the literature have shown the importance of developing and releasing scientific software under Free and Open Source licenses (Stodden, 2009; Peng, 2011). This is of tremendous importance to enable reproducibility and thorough peer-reviewing of scientific results.
Finally, we have recently noted another effort developed in Python: FastR3 (Achterberg et al., 2015). It is designed around a plugin system that enables connecting to different data sources or execution environments.
At the moment, execution environments can only be addressed through the DRMAA (Distributed Resource Management Application API), but more environments should be provided in the future.
1.3.3. Level of support of HPC environments
Table 1 lists the support for various HPC environments in the workflow managers studied in this section. It also sums up the features and domains of application for each tool.

Table 1. Summary of the features, HPC environments supported, and domains of application of various workflow managers.

| Workflow engine | Local multi-processing | HPC support | Grid support | Cloud support |
| --- | --- | --- | --- | --- |
| Galaxy⁴ | Yes | DRMAA clusters | No | No (manual cluster deployment) |
| Taverna⁵ | Yes | No | No | No |
| FastR | Yes | DRMAA clusters | No | No |
| LONI⁶ | No | DRMAA clusters | No | No (manual cluster deployment) |
| NiPype | Yes | PBS/Torque, SGE | No | No |
| Kepler⁷ | Yes | PBS, Condor, LoadLeveler | Globus | No |
| Pegasus⁸ | No (needs local Condor) | Condor, PBS | No | No (manual cluster deployment) |
| PSOM | Yes | No | No | No |
| OpenMOLE | Yes | Condor, Slurm, PBS, SGE, OAR | Ad hoc grids, gLite/EMI, Dirac, EGI | EC2 (fully automated)⁹ |

| Workflow engine | Scripting support | GUI | Generic/Community | License |
| --- | --- | --- | --- | --- |
| Galaxy | No | Yes | BioInformatics | AFL 3.0 |
| Taverna | No | Yes | BioInformatics | Apache 2.0 |
| FastR | Python | No | Neuroimaging | BSD |
| LONI | No | Yes | Neuroimaging | Proprietary (LONI) |
| NiPype | Python | No | Neuroimaging | BSD |
| Kepler | Partly with R | Yes | Generic | BSD |
| Pegasus | Python, Java, Perl | No | Generic | Apache 2.0 |
| PSOM | Matlab | No | Generic | MIT |
| OpenMOLE | Domain Specific Language, Scala | Yes | Generic | AGPL 3 |

Information was drawn from the web pages in footnotes when present, or from the reference paper cited in the section otherwise.
We are not aware of any workflow engine that targets as many environments as OpenMOLE, but more importantly that introduces an advanced service layer to distribute the workload. When it comes to very large scale infrastructures such as grids and clouds, sophisticated submission strategies, taking into account the state of the resources as well as implementing a level of fault tolerance, must be available.
Most of the other workflow engines offer service delegation layers that simply send jobs to a local cluster. OpenMOLE implements expert submission strategies (job grouping, over-submission, …), harnesses efficient middleware such as Dirac, and automatically manages end-to-end data transfer even across heterogeneous computing environments.
Compared to other workflow processing engines, OpenMOLE promotes a zero-deployment approach by accessing the computing environments from bare metal, and copies on the fly any software component required for a successful remote execution. OpenMOLE also encourages the use of software components developed in heterogeneous programming languages and enables users to easily replace the elements involved in the workflow. | [
"24600388",
"17070705",
"22493575",
"26368917",
"20738864",
"21897815",
"11577229",
"23658616",
"22334356",
"18519166",
"24816548",
"23758125",
"15201187",
"22144613",
"12880830"
] | [
{
"pmid": "24600388",
"title": "Machine learning for neuroimaging with scikit-learn.",
"abstract": "Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation ... |
BMC Medical Informatics and Decision Making | 28330491 | PMC5363029 | 10.1186/s12911-017-0424-6 | Orchestrating differential data access for translational research: a pilot implementation | Background
Translational researchers need robust IT solutions to access a range of data types, varying from public data sets to pseudonymised patient information with restricted access, provided on a case-by-case basis. The reason for this complication is that managing access policies to sensitive human data must consider issues of data confidentiality, identifiability, extent of consent, and data usage agreements. All these ethical, social and legal aspects must be incorporated into a differential management of restricted access to sensitive data.
Methods
In this paper we present a pilot system that uses several common open source software components in a novel combination to coordinate access to heterogeneous biomedical data repositories containing open data (open access) as well as sensitive data (restricted access) in the domain of biobanking and biosample research. Our approach is based on a digital identity federation and software to manage resource access entitlements.
Results
Open source software components were assembled and configured in such a way that they allow for different ways of restricted access according to the protection needs of the data. We have tested the resulting pilot infrastructure and assessed its performance, feasibility and reproducibility.
Conclusions
Common open source software components are sufficient to allow for the creation of a secure system for differential access to sensitive data. The implementation of this system is exemplary for researchers facing similar requirements for restricted access data. Here we report experience and lessons learnt from our pilot implementation, which may be useful for similar use cases. Furthermore, we discuss possible extensions for more complex scenarios.
| Related work
Many different approaches and systems are used for tackling the aims and issues we have addressed in the pilot. Biological material repositories similar to BioSD exist, varying in scope [82, 83], geographical reference area [84] and scale [85, 86]. BioSD is mainly a European reference resource for public biosample data and metadata. A similar variety exists in the arena of clinical data resources [87]. In this field, the LPC Catalogue is among the most prominent biobank catalogues in Europe, while a wide range of biobanks with different scales and scopes exist [88]. Several technologies and approaches are available to manage identities and application access rights [27–32]. For instance, commercial systems like OpenID [89] tend to prefer technical simplicity over advanced features (e.g., identity federation is not a standard feature within OpenID). We have chosen Shibboleth for multiple reasons: it is reliable software based on the SAML standard, it is well-known among research organisations, and the organisations involved in the pilot were already using Shibboleth when we started our work. Permission and access management is an issue wider than technology: it encompasses IT solutions as well as policies such as access audits, vetting of new personnel, and regulatory compliance [16, 90]. The access control used in REMS can be seen as a variant of a lists-based access control approach (ACL [91]). Compared to similar products [92–94], REMS is focused on granting resource access based on the commitment to a data access agreement, and the final approval from personnel with the data access control role. Moreover, REMS allows for the definition of workflows to obtain and finalise the access approval procedures, and it logs the actions during the execution of these workflows. Finally, both REMS and the other components we have used are modular and can be composed into a larger system (e.g., with respect to the distribution of identities).
While one might prefer simpler options on a smaller scale [95], our approach gives the flexibility to implement larger infrastructures with existing common technologies. The approach used in the pilot does not address the further data protection that is often ensured by establishing different data access levels (e.g., original patient records, de-identified/obfuscated data, aggregated data, disclosure of only summary statistics, computed at the source of data [96]) and by classifying users based on user trustworthiness [97, 98]. The pilot approach is agnostic with respect to the resource that is controlled and the specific protection mechanism that it has in place, which is made possible by the fact that both Shibboleth and REMS essentially see a resource as a reference, such as a URL to a web application or a web link to a file download. For instance, one might adopt our approach for mediating access to resources providing data summaries in ways similar to the BBMRI [98, 99], as well as in the case of resources that grant access to web services [100] and local computations [96].
The pilot in the context of data access frameworks
In the life sciences, medical data increasingly have to be accessed and linked effectively. This expanding volume of human data is stored in various databases, repositories, and patient registries, while data privacy and the legitimate interests of patients and data subjects must be protected. To ensure the protection of human data while enabling data sharing, several approaches have been suggested, ranging from the creation of political frameworks in the form of resolutions or treaties to operational guidelines for data sharing [101].
Such frameworks include concepts like legitimate public health purpose, minimum information necessary, privacy and security standards, data use agreements [102], ethical codes like the IMIA (International Medical Informatics Association) Code of Ethics for Health Information Professionals [103] and AMIA’s (American Medical Informatics Association) Code of Professional and Ethical Conduct, guidance for genomic data, and potential privacy risks [104]. More concrete approaches are a human rights-based system for an international code of conduct for genomic and clinical data sharing [105], recommendations about clinical databases and privacy protections [106], and healthcare privacy protection based on differential privacy-preserving methods (iDASH, integrating Data for Analysis, Anonymization, and Sharing) [107, 108].
Genetic sequence databases are an important part of many biomedical research efforts and are contained in many data repositories and biosample databases. However, human genetic data should only be made available if it can be protected so that the privacy of data subjects is preserved. The problem is that individual genomic sequence data (e.g. SNPs) are potentially “identifiable” using common identifiers [106, 109, 110]. In biobanking, many new population biobanks and cohort studies were created to produce information about possible associations between genotype and phenotype, an association that is important for understanding the causes of diseases.
Together with BBMRI, different initiatives exist that address the protection of data privacy and further the standardization and harmonization of the management of genomic data and the sharing of data and biosamples, for example: the Public Population Project in Genomics (P3G [111]), the International Society for Biological and Environmental Repositories (ISBER [112]), the Biobank Standardisation and Harmonisation for Research Excellence projects [113] and the Electronic Medical Records and Genomics (eMERGE) Network [11, 114].
The constraints arising from limitations defined by the informed consent of the data subject have to be reflected in data access agreements and data transfer agreements. In general, the rule applies that data can only be made available to the extent allowed under the local legal requirements relevant for the data provider, including ethics votes, the vote of the data access committee and the consent of the data subject. Data sharing should be an important part of an overall data management plan, which is a key element to support data access and sustainability. A data sharing agreement should supplement and not supplant the data management plan, because the sharing agreement is about relationship building and trust building. It supports long-term planning and helps find ways to maximize the use of data.
Anonymisation is becoming increasingly difficult to achieve due to the increase in health data, such as genomic data, that is potentially identifying. As mentioned above, although anonymisation protects the privacy needs of the data subjects, it is an imperfect solution and must be supplemented by additional measures that build trust and prevent researchers from trying to identify study subjects. In the end, what is necessary for research is a culture of responsibility and data governance when dealing with human data.
Building blocks that support and strengthen such a culture are data sharing agreements, strict authentication and authorisation methods, and the monitoring and tracking of data usage. The pilot presented here fits into such efforts because, by combining several common open source components, it creates an efficient authentication and authorisation framework for access to sensitive data that can support trust building. The pilot must also be seen in connection with the creation of a European Open Science Cloud, a federated environment for scientific data sharing and reuse, based on existing and emerging elements [115]. The complexity of current data sharing practices requires new mechanisms that are more flexible and adjustable and that employ proven components, like the open source authentication components of the pilot. | [
"18182604",
"24008998",
"21734173",
"22392843",
"24144394",
"22112900",
"17077452",
"21632745",
"14624258",
"23305810",
"12039510",
"21890450",
"17130065",
"24513229",
"12904522",
"12424005",
"22505850",
"23413435",
"18560089",
"22096232",
"24265224",
"25361974",
"2652772... | [
{
"pmid": "24008998",
"title": "Some experiences and opportunities for big data in translational research.",
"abstract": "Health care has become increasingly information intensive. The advent of genomic data, integrated into patient care, significantly accelerates the complexity and amount of clinical d... |
JMIR mHealth and uHealth | 28279948 | PMC5364324 | 10.2196/mhealth.6552 | Remote Monitoring of Hypertension Diseases in Pregnancy: A Pilot Study | Background
Although remote monitoring (RM) has proven its added value in various health care domains, little is known about the remote follow-up of pregnant women diagnosed with a gestational hypertensive disorder (GHD).
Objective
The aim of this study was to evaluate the added value of a remote follow-up program for pregnant women diagnosed with GHD.
Methods
A 1-year retrospective study was performed in the outpatient clinic of a 2nd-level prenatal center where pregnant women with GHD received RM or conventional care (CC). Primary study endpoints include number of prenatal visits and admissions to the prenatal observation ward. Secondary outcomes include gestational outcome, mode of delivery, neonatal outcome, and admission to neonatal intensive care (NIC). Differences in continuous and categorical variables in maternal demographics and characteristics were tested using the unpaired two-sample Student's t test or Mann-Whitney U test and the chi-square test. Both a univariate and a multivariate analysis were performed for analyzing prenatal follow-up and gestational outcomes. All statistical analyses were done at nominal level, alpha=.05.
Results
Of the 166 patients diagnosed with GHD, 53 received RM and 113 CC. After excluding 5 patients in the RM group and 15 in the CC group because of missing data, 48 patients in the RM group and 98 in the CC group were taken into final analysis. The RM group had more women diagnosed with gestational hypertension, but fewer with preeclampsia, when compared with CC (81.25% vs 42.86% and 14.58% vs 43.87%). Compared with CC, univariate analysis in RM showed less induction, more spontaneous labors, and fewer maternal and neonatal hospitalizations (48.98% vs 25.00%; 31.63% vs 60.42%; 74.49% vs 56.25%; and 27.55% vs 10.42%).
This was also true in multivariate analysis, except for hospitalizations.ConclusionsAn RM follow-up of women with GHD is a promising tool in prenatal care. It opens the perspective of reversing the current evolution toward ever more antenatal interventions and increasingly medicalized antenatal care. | Related WorkRM has already shown benefits in Cardiology and Pneumology [7,8]. In prenatal care, RM has also shown added value in improving maternal and neonatal outcomes. Various studies reported a reduction in unscheduled patient visits, low neonatal birth weight, and admissions to neonatal intensive care (NIC) for pregnant women who received RM compared with pregnant women who did not receive these devices. Additionally, RM can contribute to significant reductions in health care costs. RM was also demonstrated to prolong gestational age and to improve feelings of self-efficacy, maternal satisfaction, and gestational age at delivery when compared with a control group that did not receive RM [9-16]. Unfortunately, some of the previously mentioned studies date back to 1995, and no more recent work is available. This contrasts with the rapid technological advancements of the last decade. Further, no studies have been published on the added value of RM in pregnant women with GHD. To our knowledge, this is the first publication about a prenatal follow-up program for pregnant women with GHD to date. | [
"26104418",
"27010734",
"26969198",
"24529402",
"23903374",
"22720184",
"20846317",
"8942501",
"20628517",
"20044162",
"21822394",
"10203647",
"7485304",
"22512287",
"24735917",
"27222631",
"26158653",
"24504933",
"26606702",
"26455020",
"25864288",
"26048352",
"25450475"... | [
{
"pmid": "26104418",
"title": "Diagnosis, evaluation, and management of the hypertensive disorders of pregnancy.",
"abstract": "OBJECTIVE\nThis guideline summarizes the quality of the evidence to date and provides a reasonable approach to the diagnosis, evaluation and treatment of the hypertensive diso... |
Scientific Reports | 28361913 | PMC5374503 | 10.1038/srep45639 | Multi-scale radiomic analysis of sub-cortical regions in MRI related to autism, gender and age | We propose using multi-scale image textures to investigate links between neuroanatomical regions and clinical variables in MRI. Texture features are derived at multiple scales of resolution based on the Laplacian-of-Gaussian (LoG) filter. Three quantifier functions (Average, Standard Deviation and Entropy) are used to summarize texture statistics within standard, automatically segmented neuroanatomical regions. Significance tests are performed to identify regional texture differences between ASD vs. TDC and male vs. female groups, as well as correlations with age (corrected p < 0.05). The open-access brain imaging data exchange (ABIDE) brain MRI dataset is used to evaluate texture features derived from 31 brain regions from 1112 subjects including 573 typically developing control (TDC, 99 females, 474 males) and 539 Autism spectrum disorder (ASD, 65 female and 474 male) subjects. Statistically significant texture differences between ASD vs. TDC groups are identified asymmetrically in the right hippocampus, left choroid-plexus and corpus callosum (CC), and symmetrically in the cerebellar white matter. Sex-related texture differences in TDC subjects are found primarily in the left amygdala, left cerebellar white matter, and brain stem. Correlations between age and texture in TDC subjects are found in the thalamus-proper, caudate and pallidum, most exhibiting bilateral symmetry. | Related WorkOur review of relevant work focuses on studies using imaging techniques to identify brain regions/characteristics related to autism, gender and age. Starting with work related to ASD, various studies using MR imaging have shown that young children with autism had a significantly larger brain volume compared to normally developing peers31,32. 
Studies on autism have also identified volume differences in specific brain structures including the cerebellum6,31, amygdala31, corpus callosum33, caudate nucleus34, hippocampus31,35 and putamen5,36. For example, it was shown that caudate nucleus and pallidus volumes were related to the level of ASD-like symptoms of participants with attention-deficit/hyperactivity disorder, and that the interaction of these two structures was a significant predictor of ASD scores36. In some cases, contradictory results have been reported. While ref. 8 found that autistic children had an increase in hippocampal volume which persisted during adolescence, another study including autistic adolescents and young adults reported a decrease in hippocampal volume35. Other investigations, such as ref. 37, have shown no significant differences in hippocampal volume between ASD and control subjects. Recent studies on autism have focused on finding abnormalities related to brain development. In ref. 38, it was found that pre-adolescent children with ASD exhibited a lack of normative age-related cortical thinning, volumetric reduction, and an abnormal age-related increase in cortical gyrification. It has been hypothesized that the abnormal trajectories in brain growth, observed in children with autism, alter patterns of functional and structural connectivity during development39. Morphological differences between male and female brains have been explored; such differences are of interest since the prevalence and symptoms of various disorders are linked with gender40. For instance, autism is diagnosed four to five times more frequently in boys than girls41,42, and multiple sclerosis is four times more frequent in females than males43. Likewise, anxiety, depression and eating disorders have a markedly higher incidence in females than males, especially during adolescence44,45. The incidence of schizophrenia between males and females also differs across the lifespan46. 
In terms of brain development, the growth trajectories of several brain regions have been shown to be linked to the sex of a subject, with some regions developing faster in boys and others in girls47,48. Various studies have also investigated sexual differences associated with autism. A recent study showed that the cortical volume and ventromedial/orbitofrontal prefrontal gyrification of females is greater than that of males, in both the ASD and healthy subject groups49. In another study, the severity of repetitive/restricted behaviors, often observed in autism, was found to be associated with sexual differences in the gray matter morphometry of motor regions50. A number of studies have focused on morphometric brain changes associated with aging4. In refs 51, 52, 53, cross-sectional and longitudinal analyses of brain region volumes revealed that the shrinkage of the hippocampus, the entorhinal cortices, the inferior temporal cortex and the prefrontal white matter increased with age. These studies have also highlighted trends towards age-related atrophy in the amygdala and cingulate gyrus of elderly individuals. Conversely, other investigations found no significant volumetric changes of the temporolimbic and cingulate cortical regions during the aging process54,55,56. A recent study applied voxel-based morphometry to compare the white matter, grey matter and cerebral spinal fluid volumes of ASD males to control male subjects57. The results of this analysis have demonstrated highly age-dependent atypical brain morphometry in ASD subjects. Other investigations have reported that neuroanatomical abnormalities in ASD are highly age-dependent58,59. The vast majority of morphometric analyses in this review have focused on voxel-wise or volumetric measurements derived from brain MRI data. Texture features provide a complementary basis for analysis by summarizing distributions of localized image measurements, e.g. filter responses, within meaningful image regions. 
Several studies have begun to investigate texture in brain MRI, for example to identify differences between Alzheimer’s and control groups22, to discriminate between ASD and TDC subjects60, and to evaluate the survival time of GBM patients17,61. Texture features can be computed at multiple scales within regions of interest; for example, multi-scale textures based on the LoG filter have been proposed for grading cerebral gliomas16. Among the works most closely related to this paper is the approach of Kovalev et al.62, where texture features were used to measure the effects of gender and age on structural brain asymmetry. In this work, 3D texture features based on extended multi-sort co-occurrence matrices were extracted from rectangular or spherical regions in T1-weighted MRI volumes, and compared across left and right hemispheres. This analysis revealed a greater asymmetry in male brains, most pronounced in the superior temporal gyrus, Heschl’s gyrus, thalamus, and posterior cingulate. Asymmetry was also found to increase with age in various areas of the brain, such as the inferior frontal gyrus, anterior insula, and anterior cingulate. While this work also investigated the link between MRI texture, gender and age, it was limited to lateral asymmetry and did not consider textural differences across gender and age groups. Moreover, texture features were obtained from arbitrarily defined sub-volumes that do not correspond to known neuroanatomical regions. In contrast, our work links texture observations to standard neuroanatomical regions obtained from a parcellation atlas, which provide a more physiologically meaningful basis for analysis in comparison to arbitrarily defined regions. To our knowledge, no work has yet investigated MRI texture analysis within neuroanatomical regions, obtained from a parcellation atlas, as a basis for studying differences related to autism, sex and age. | [
"3064974",
"10714055",
"23717269",
"26594141",
"17056234",
"19726029",
"11061265",
"26901134",
"26971430",
"20395383",
"11403201",
"24892406",
"17695498",
"18362333",
"21451997",
"22898692",
"24029645",
"22609452",
"12136055",
"11468308",
"7639631",
"12077008",
"10599796"... | [
{
"pmid": "10714055",
"title": "Subgroups of children with autism by cluster analysis: a longitudinal examination.",
"abstract": "OBJECTIVES\nA hierarchical cluster analysis was conducted using a sample of 138 school-age children with autism. The objective was to examine (1) the characteristics of resul... |
International Journal of Biomedical Imaging | 28408921 | PMC5376475 | 10.1155/2017/1985796 | Phase Segmentation Methods for an Automatic Surgical Workflow Analysis | In this paper, we present robust methods for automatically segmenting phases in a specified surgical workflow by using latent Dirichlet allocation (LDA) and hidden Markov model (HMM) approaches. More specifically, our goal is to output an appropriate phase label for each given time point of a surgical workflow in an operating room. The fundamental idea behind our work lies in constructing an HMM based on observed values obtained via an LDA topic model covering optical flow motion features of general working contexts, including medical staff, equipment, and materials. We gain awareness of such working contexts by using multiple synchronized cameras to capture the surgical workflow. Further, we validate the robustness of our methods by conducting experiments involving up to 12 phases of surgical workflows with the average length of each surgical workflow being 12.8 minutes. The maximum average accuracy achieved after applying leave-one-out cross-validation was 84.4%, which we found to be a very promising result. | 2. Related WorkNumerous methods have been developed for identifying intraoperative activities, segmenting common phases in a surgical workflow, and combining all gained knowledge into a model of the given workflows [4–7]. In work on segmenting surgical phases, various types of data have been used, such as manual annotations by observers [8], sensor data obtained by surgical tracking tools based on frames of recorded videos [9, 10], intraoperative localization systems [4], and surgical robots [11]. In [4], Agarwal et al. incorporated patient monitoring systems used to acquire vital signals of patients during surgery. In [5], Stauder et al. 
proposed a method to utilize random decision forests to segment surgical workflow phases based on instrument usage data and other easily obtainable measurements. Recently, decision forests have become a very versatile and popular tool in the field of medical image analysis. In [6], Suzuki et al. developed the Intelligent Operating Theater, which has a multichannel video recorder and is able to detect intraoperative incidents. This system is installed in the operating room and analyzes video files that capture surgical staff motions in the operating room. Intraoperative information is then transmitted to another room in real time to provide support for the surgical workflow via a supervisor. In [7], Padoy et al. used three-dimensional motion features to estimate human activities in environments including the operating room and production lines. They defined workflows as ordered groups of activities with different durations and temporal patterns. Three-dimensional motion data are obtained in real time using videos from multiple cameras. A recent methodological review of the literature is available in [12]. In the medical domain, HMMs have been used successfully in several research studies to model surgical activities for skill evaluation [13–15]. In [13], Leong et al. recorded six degrees-of-freedom (DOF) data from a laparoscopic simulator and then used them to train a four-state HMM to classify subjects according to their skill level. In [14], Rosen et al. constructed an HMM using data from two endoscopic tools, including such data as position, orientation, force, and torque. Here the HMM was able to identify differences in the skill levels of subjects with different levels of training. In [15], Bhatia et al. segmented four phases, namely a patient entering the room, a patient leaving the room, and the beginning and end of a surgical workflow, by using a combination of support vector machines (SVMs) and HMMs on video images. | [
"21632485",
"20526819",
"21195015",
"17127647",
"24014322",
"16532766"
] | [
{
"pmid": "20526819",
"title": "Analysis of surgical intervention populations using generic surgical process models.",
"abstract": "PURPOSE\nAccording to differences in patient characteristics, surgical performance, or used surgical technological resources, surgical interventions have high variability. ... |
JMIR Medical Informatics | 28347973 | PMC5387113 | 10.2196/medinform.6693 | A Software Framework for Remote Patient Monitoring by Using Multi-Agent Systems Support | BackgroundAlthough there have been significant advances in network, hardware, and software technologies, the health care environment has not taken advantage of these developments to solve many of its inherent problems. Research activities in these 3 areas make it possible to apply advanced technologies to address many of these issues such as real-time monitoring of a large number of patients, particularly where a timely response is critical.ObjectiveThe objective of this research was to design and develop innovative technological solutions to offer a more proactive and reliable medical care environment. The short-term and primary goal was to construct IoT4Health, a flexible software framework to generate a range of Internet of things (IoT) applications, containing components such as multi-agent systems that are designed to perform Remote Patient Monitoring (RPM) activities autonomously. An investigation into its full potential to conduct such patient monitoring activities in a more proactive way is an expected future step.MethodsA framework methodology was selected to evaluate whether the RPM domain had the potential to generate customized applications that could achieve the stated goal of being responsive and flexible within the RPM domain. As a proof of concept of the software framework’s flexibility, 3 applications were developed with different implementations for each framework hot spot to demonstrate potential. Agents4Health was selected to illustrate the instantiation process and IoT4Health’s operation. 
To develop more concrete indicators of the responsiveness of the simulated care environment, an experiment was conducted while Agents4Health was operating, to measure the number of delays incurred in monitoring the tasks performed by agents.ResultsIoT4Health’s construction can be highlighted as our contribution to the development of eHealth solutions. As a software framework, IoT4Health offers extensibility points for the generation of applications. Applications can extend the framework in the following ways: identification, collection, storage, recovery, visualization, monitoring, anomalies detection, resource notification, and dynamic reconfiguration. Based on other outcomes involving observation of the resulting applications, it was noted that its design contributed toward more proactive patient monitoring. Through these experimental systems, anomalies were detected in real time, with agents sending notifications instantly to the health providers.ConclusionsWe conclude that the cost-benefit of the construction of a more generic and complex system instead of a custom-made software system demonstrated the worth of the approach, making it possible to generate applications in this domain in a more timely fashion. | Related WorkOur proposal takes a similar approach to that in [19]. This paper shows the implementation of a distributed information infrastructure that uses the intelligent agent paradigm for: (1) automatically notifying the patient’s medical team regarding the abnormalities in his or her health status; (2) offering medical advice from a distance; and (3) enabling continuous monitoring of a patient’s health status. In addition, the authors have promoted the adoption of ubiquitous computing systems [20] and apps that allow immediate analysis of a patient’s physiological data such as a personalized feedback of their condition in real time, by using an alarm-and-remember mechanism. 
In this solution, patients can be evaluated, diagnosed, and cared for through a mode that is both remote and ubiquitous. In the case of rapid deterioration of a patient’s condition, the system automatically notifies the medical team through voice calls or SMS messages, providing a first-level medical response. This proposal differs from ours, in that the resulting application is closed, as opposed to our broader eHealth application generator.The approach in [21] focuses on design and development of a distributed information system based on mobile agents to allow automatic and real-time fetal monitoring. Devices such as a PDA, mobile phone, laptop, and personal computer are used to capture and display the monitored data.In [22], mobile health apps are proposed as solutions for (1) overcoming personalized health service barriers; (2) providing opportune access to critical information on a patient’s health status; (3) avoiding duplication of exams, delays and errors in patient treatment. | [
"24452256",
"24783916"
] | [
{
"pmid": "24452256",
"title": "A mobile multi-agent information system for ubiquitous fetal monitoring.",
"abstract": "Electronic fetal monitoring (EFM) systems integrate many previously separate clinical activities related to fetal monitoring. Promoting the use of ubiquitous fetal monitoring services ... |
Journal of Cheminformatics | 29086119 | PMC5395521 | 10.1186/s13321-017-0209-z | SimBoost: a read-across approach for predicting drug–target binding affinities using gradient boosting machines | Computational prediction of the interaction between drugs and targets is a standing challenge in the field of drug discovery. A number of rather accurate predictions were reported for various binary drug–target benchmark datasets. However, a notable drawback of a binary representation of interaction data is that missing endpoints for non-interacting drug–target pairs are not differentiated from inactive cases, and that predicted levels of activity depend on pre-defined binarization thresholds. In this paper, we present a method called SimBoost that predicts continuous (non-binary) values of binding affinities of compounds and proteins and thus incorporates the whole interaction spectrum from true negative to true positive interactions. Additionally, we propose a version of the method called SimBoostQuant which computes a prediction interval in order to assess the confidence of the predicted affinity, thus defining the Applicability Domain metrics explicitly. We evaluate SimBoost and SimBoostQuant on two established drug–target interaction benchmark datasets and one new dataset that we propose to use as a benchmark for read-across cheminformatics applications. We demonstrate that our methods outperform the previously reported models across the studied datasets. | Related workTraditional methods for drug target interaction prediction typically focus on one particular target of interest. These approaches can again be divided into two types which are target-based approaches [12–14] and ligand-based approaches [15–18]. In target-based approaches the molecular docking of a candidate compound with the protein target is simulated, based on the 3D structure of the target (and the compound). 
This approach is widely utilized to virtually screen compounds against target proteins; however, this approach is not applicable when the 3D structure of a target protein is not available, which is often the case, especially for G-protein coupled receptors and ion channels. The intuition in ligand-based methods is to model the common characteristics of a target, based on its known interacting ligands (compounds). One interesting example of this approach is the study [4], which utilizes similarities in the side-effects of known drugs to predict new drug–target interactions. However, the ligand-based approach may not work well if the number of known interacting ligands of a protein target is small. To allow more efficient predictions on a larger scale, i.e. for many targets simultaneously, and to overcome the limitations of the traditional methods, machine learning based approaches have attracted much attention recently. In the chemical and biological sciences, machine learning-based approaches have been known as (multi-target) Quantitative structure–activity relationship (QSAR) methods, which relate a set of predictor variables, describing the physico-chemical properties of a drug–target pair, to the response variable, representing the existence or the strength of an interaction. Current machine learning methods can be classified into two types: feature-based and similarity-based approaches. In feature-based methods, known drug–target interactions are represented by feature vectors generated by combining chemical descriptors of drugs with descriptors for targets [19–23]. With these feature vectors as input, standard machine learning methods such as Support Vector Machines (SVM), Naïve Bayes (NB) or Neural Networks (NN) can be used to predict the interaction of new drug–target pairs. Vina et al. 
[24] proposes a method taking into consideration only the sequence of the target and the chemical connectivity of the drug, but without relying on geometry optimization or drug–drug and target–target similarities. Cheng et al. [25] introduces a multi-target QSAR method that integrates chemical substructures and protein sequence descriptors to predict interactions for G-protein coupled receptors and kinases based on two comprehensive data sets derived from the ChEMBL database. Merget et al. [26] evaluates different machine learning methods and data balancing schemes and reports that random forests yielded the best activity prediction and allowed accurate inference of compound selectivity.In similarity-based methods [3, 27–32], similarity matrices for both the drug–drug pairs and the target–target pairs are generated. Different types of similarity metrics can be used to generate these matrices [33]; typically, chemical structure fingerprints are used to compute the similarity among drugs and a protein sequence alignment score is used for targets. One of the simplest ways of using the similarities is a Nearest Neighbor classifier [28], which predicts new interactions from the weighted (by the similarity) sum of the interaction profiles of the most similar drugs/targets. The Kernel method proposed in [27] computes a similarity for all drug–target pairs (a pairwise-kernel) using the drug–drug and target–target similarities and then uses this kernel of drug–target pairs with known labels to train an SVM-classifier. The approaches presented in [28–30] represent drug–target interactions by a bipartite graph and label drug–target pairs as +1 if the edge exists or −1, otherwise. For each drug and for each target, a separate SVM (local model) is trained, which predicts interactions of that drug (target) with all targets (drugs). 
The similarity matrices are used as kernels for those SVMs, and the final prediction for a pair is obtained by averaging the scores for the respective drug SVM and target SVM. All of the above machine-learning based methods for drug–target interaction prediction formulate the task as a binary classification problem, with the goal to classify a given drug–target pair as binding or non-binding. As pointed out in [4], drawbacks of the binary problem formulation are that true-negative interactions and untested drug–target pairs are not differentiated, and that the whole interaction spectrum, including both true-positive and true-negative interactions, is not covered well. Pahikkala et al. [4] introduces the method KronRLS which predicts continuous drug–target binding affinity values. To the best of our knowledge, KronRLS is the only method in the literature which predicts continuous binding affinities, and we give a detailed introduction to KronRLS below, since we use it as baseline in our experiments. Below, we also introduce Matrix Factorization as it was used in the literature for binary drug–target interaction prediction and as it plays an important role in our proposed method.

KronRLS

Regularized Least Squares Models (RLS) have previously been shown to be able to predict binary drug–target interaction with high accuracy [31]. KronRLS as introduced in [4] can be seen as a generalization of these models for the prediction of continuous binding values. Given a set $\{d_i\}$ of drugs and a set $\{t_j\}$ of targets, the training data consists of a set $X = \{x_1, \ldots, x_m\}$ of drug–target pairs ($X$ is a subset of $\{d_i \times t_j\}$) and an associated vector $y = y_1, \ldots, y_m$ of continuous binding affinities. The goal is to learn a prediction function $f(x)$ for all possible drug–target pairs $x \in \{d_i \times t_j\}$, i.e. a function that minimizes the objective:

$$J(f) = \sum_{i=1}^{m} (y_i - f(x_i))^2 + \lambda \lVert f \rVert_k^2$$

In the objective function, $\lVert f \rVert_k^2$ is the norm of $f$, which is associated to a kernel function $k$ (described below), and $\lambda > 0$ is a user-defined regularization parameter. A minimizer of the above objective can be expressed as

$$f(x) = \sum_{i=1}^{m} a_i \, k(x, x_i)$$

The kernel function $k$ is a symmetric similarity measure between two of the $m$ drug–target pairs, which can be represented by an $m \times m$ matrix $K$. For two individual similarity matrices $K_d$ and $K_t$ for the drugs and targets respectively, a similarity matrix for each drug–target pair can be computed as $K_d \otimes K_t$, where $\otimes$ stands for the Kronecker product. If the training set $X$ contains every possible pair of drugs and targets, $K$ can be computed as
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$K = K_{d} \otimes K_{t}$$\end{document}K=Kd⊗Kt and the parameter vector \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$a$$\end{document}a can be learnt by solving the following system of linear equations:\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$(K + \lambda I)a = y$$\end{document}(K+λI)a=ywhere \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$I$$\end{document}I is the \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$d_{i} \times t_{j}$$\end{document}di×tj identity matrix. If only a subset of \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$\{ d_{i} \times t_{j} \}$$\end{document}{di×tj} is given as training data, the vector \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$y$$\end{document}y has missing values. To learn the parameter \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$a$$\end{document}a, [4] suggests to use conjugate gradient with Kronecker algebraic optimization to solve the system of linear equations.Matrix factorizationThe Matrix Factorization (MF) technique has been demonstrated to be effective especially for personalized recommendation tasks [34], and it has been previously applied for drug–target interaction prediction [5–7]. In MF, a matrix \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$M \in R^{d \times t}$$\end{document}M∈Rd×t (for the drug–target prediction task, \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$M$$\end{document}M represents a matrix of binding affinities of \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$d$$\end{document}d drugs and \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$t$$\end{document}t targets) is approximated by the product of two latent factor matrices \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$P \in R^{k \times d}$$\end{document}P∈Rk×d and \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$Q \in R^{k \times t}$$\end{document}Q∈Rk×t.The factor matrices \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$P$$\end{document}P and \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$Q$$\end{document}Q are learned by minimizing the regularized squared error on the set of observed affinities \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$\kappa$$\end{document}κ:\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$\mathop {min}\limits_{Q,P} \mathop \sum \limits_{{(d_{i} ,t_{j} ) \in \kappa }}^{{}} (m_{i,j} - q_{i}^{T} p_{j} )^{2} + \lambda (||p||^{2} + ||q||^{2} )$$\end{document}minQ,P∑(di,tj)∈κ(mi,j-qiTpj)2+λ(||p||2+||q||2)
The term \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$(m_{i,j} - q_{i}^{T} p_{j} )^{2}$$\end{document}(mi,j-qiTpj)2 represents the fit of the learned parameters to the observed binding affinities. The term \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$\lambda (||p||^{2} + ||q||^{2} )$$\end{document}λ(||p||2+||q||2) penalizes the magnitudes of the learned parameters to prevent overfitting, and the constant \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$\uplambda$$\end{document}λ controls the weight of the two terms. With learned matrices \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$P$$\end{document}P and \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$Q$$\end{document}Q, a matrix \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$M^{{\prime }}$$\end{document}M′ with predictions for all drug–target pairs can be computed as:\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$M^{{\prime }} = P^{T} Q$$\end{document}M′=PTQ
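The factorization and prediction steps above can be sketched as a small stochastic-gradient loop. This is a minimal NumPy illustration under invented sizes and hyperparameters (toy affinity values, `k=2`, learning rate, epoch count), not the implementation used in the cited papers:

```python
import numpy as np

def mf_fit(M, mask, k=2, lam=0.01, lr=0.05, epochs=2000, seed=0):
    """Learn P (k x d) and Q (k x t) so that P^T Q approximates the
    observed entries of M, via SGD on the regularized squared error."""
    rng = np.random.default_rng(seed)
    d, t = M.shape
    P = 0.1 * rng.standard_normal((k, d))  # latent drug factors
    Q = 0.1 * rng.standard_normal((k, t))  # latent target factors
    rows, cols = np.nonzero(mask)          # observed (drug, target) pairs
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = M[i, j] - P[:, i] @ Q[:, j]
            p_i = P[:, i].copy()
            # gradient steps on (m_ij - p_i^T q_j)^2 + lam * (||p||^2 + ||q||^2)
            P[:, i] += lr * (err * Q[:, j] - lam * P[:, i])
            Q[:, j] += lr * (err * p_i - lam * Q[:, j])
    return P, Q

# toy affinity matrix: 2 drugs x 2 targets, one unobserved entry
M = np.array([[5.0, 3.0],
              [4.0, 1.0]])
mask = np.array([[True, True],
                 [True, False]])
P, Q = mf_fit(M, mask)
M_pred = P.T @ Q  # M' = P^T Q: predictions for all drug-target pairs
```

The unobserved entry `M_pred[1, 1]` is filled in purely from the learned latent factors, which is exactly the completion property that makes MF usable for interaction prediction.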
In SimBoost, the columns of the factor matrices \(P\) and \(Q\) are utilized as parts of the feature vectors for the drugs and targets respectively, and thus Matrix Factorization is used as a feature extraction step.
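For comparison, the fully observed case of the Kronecker regularized least-squares system \((K + \lambda I)a = y\) described above admits a direct solve on small problems. A toy NumPy sketch with invented similarity matrices and affinities (real applications rely on the conjugate-gradient and Kronecker shortcuts cited in [4]):

```python
import numpy as np

# assumed toy similarity matrices for 2 drugs and 3 targets
K_d = np.array([[1.0, 0.5],
                [0.5, 1.0]])
K_t = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.0, 0.3],
                [0.1, 0.3, 1.0]])

# pairwise kernel over all 6 drug-target pairs; rows/columns are ordered
# (d1,t1), (d1,t2), (d1,t3), (d2,t1), ... by the Kronecker product
K = np.kron(K_d, K_t)

y = np.array([7.1, 5.2, 4.8, 6.0, 5.5, 4.9])  # invented binding affinities
lam = 0.1

# closed-form KronRLS solve: (K + lambda I) a = y
a = np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

# predictions for the training pairs: f(x_i) = sum_j a_j k(x_i, x_j)
y_hat = K @ a
```

The solution satisfies \(Ka + \lambda a = y\) exactly, so `y_hat` shrinks toward the observations as \(\lambda \to 0\).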
"23933754",
"24723570",
"26872142",
"26352634",
"19399780",
"17211405",
"8780787",
"18621671",
"19578428",
"21909252",
"24244130",
"21364574",
"17510168",
"19503826",
"23259810",
"12377017",
"19281186",
"22751809",
"18676415",
"19605421",
"17646345",
"18689844",
"21893517... | [
{
"pmid": "23933754",
"title": "Similarity-based machine learning methods for predicting drug-target interactions: a brief review.",
"abstract": "Computationally predicting drug-target interactions is useful to select possible drug (or target) candidates for further biochemical verification. We focus on... |
BMC Medical Informatics and Decision Making | 28427384 | PMC5399417 | 10.1186/s12911-017-0443-3 | Imbalanced target prediction with pattern discovery on clinical data repositories | BackgroundClinical data repositories (CDR) have great potential to improve outcome prediction and risk modeling. However, most clinical studies require careful study design, dedicated data collection efforts, and sophisticated modeling techniques before a hypothesis can be tested. We aim to bridge this gap, so that clinical domain users can perform first-hand prediction on existing repository data without complicated handling, and obtain insightful patterns of imbalanced targets for a formal study before it is conducted. We specifically target for interpretability for domain users where the model can be conveniently explained and applied in clinical practice.MethodsWe propose an interpretable pattern model which is noise (missing) tolerant for practice data. To address the challenge of imbalanced targets of interest in clinical research, e.g., deaths less than a few percent, the geometric mean of sensitivity and specificity (G-mean) optimization criterion is employed, with which a simple but effective heuristic algorithm is developed.ResultsWe compared pattern discovery to clinically interpretable methods on two retrospective clinical datasets. They contain 14.9% deaths in 1 year in the thoracic dataset and 9.1% deaths in the cardiac dataset, respectively. In spite of the imbalance challenge shown on other methods, pattern discovery consistently shows competitive cross-validated prediction performance. Compared to logistic regression, Naïve Bayes, and decision tree, pattern discovery achieves statistically significant (p-values < 0.01, Wilcoxon signed rank test) favorable averaged testing G-means and F1-scores (harmonic mean of precision and sensitivity). 
Without requiring sophisticated technical processing of data and tweaking, the prediction performance of pattern discovery is consistently comparable to the best achievable performance.ConclusionsPattern discovery has been demonstrated to be robust and valuable for target prediction on existing clinical data repositories with imbalance and noise. The prediction results and interpretable patterns can provide insights in an agile and inexpensive way for the potential formal studies.Electronic supplementary materialThe online version of this article (doi:10.1186/s12911-017-0443-3) contains supplementary material, which is available to authorized users. | Problem definition and related worksIn this section, we define the problem we address and review the key related works. Data mining has been extensively applied in the healthcare domain, where it is believed to be able to uncover new biomedical and healthcare knowledge for clinical and administrative decision making as well as generate scientific hypotheses [3]. We focus on the prediction problem of classification, where for a given (training) dataset D, we would like to utilize the known (labelled) values of a target T to establish (train) a model and method (a classifier) to predict a target of interest (T = t), i.e. positive cases, for future (testing) data where T is not known. Specifically, the dataset

$$\mathrm{D} = \begin{bmatrix} D_{1} \\ \vdots \\ D_{n} \end{bmatrix} = \begin{bmatrix} d_{11} & \cdots & d_{1m},\, t_{1} \\ \vdots & \ddots & \vdots \\ d_{n1} & \cdots & d_{nm},\, t_{n} \end{bmatrix} = [R_{1}, R_{2}, \ldots, R_{m}, \mathrm{T}]$$

has n samples and m + 1 attributes (columns), where for simplicity the first m attributes R = [R1, R2, …, Rm] represent the predictor variables and the last attribute T represents the target to predict (the response). dij is the value in D of attribute Rj for sample i, for i = 1, 2, …, n and j = 1, 2, …, m. T is a nominal attribute, and one is specifically interested in cases of T = t, compared to cases of any other values. Therefore, we model the problem as binary classification, where we would like to distinguish T = t (positive) from T ≠ t (negative, which can comprise multiple values in the data). We assume there are no missing values of T in training, but R can have missing values, reflecting the reality of healthcare data in practice. Furthermore, most targets of clinical interest (T = t) are minorities in real data, e.g. Cardiac death = Yes and Death in 1 year = Yes. In such a case, the prevalence, defined as #(T = t)/n, is considerably smaller than 1/2 (50%), and we interchangeably denote the dataset and prediction problem as imbalanced.We have listed the existing interpretable classifiers included for comparison: logistic regression, Naïve Bayes, and decision tree (C4.5). They were not designed for imbalanced datasets. Naïve Bayes would be less influenced, as the target proportion could be used as the prior in training.
But a moderately high imbalance ratio would outweigh the prior and impact the prediction performance, as will be shown in the experimental results and recent work [13]. Both logistic regression and decision tree optimize towards overall accuracy, so the prediction performance on a minority target can be significantly affected.

The other, non-interpretable methods, such as k-nearest-neighbor [19], support vector machines [20] and artificial neural nets [3], are beyond our scope of comparison, as they do not directly provide explicit human-readable “patterns” for domain users to follow up on.

The proposed pattern discovery in this work has some resemblance to association rule mining [21], motif discovery from biological sequences [22] and feature selection for data mining [23]. Association rule mining finds only frequent itemsets and does not model prediction (classification). One critical limitation of association rule based methods is that the target has to be frequent, which is not the case for clinical outcomes of interest [6]. Further extensions of classification after association rule mining suffer from poor scalability, because non-trivial rules (over 3 attributes) can take intractable time to compute [24]. Furthermore, association rule mining works only with exact occurrences, which cannot tolerate noise in healthcare data. These two limitations also apply to rule extraction based prediction methods [25]. Motif discovery works on sequential, contiguous patterns, which is not the case when mining healthcare data (attributes are disjoint, without an order, and are not contiguous) [22, 26]. Nonetheless, the approximate matching used to model biological motifs [27] inspires us to introduce a control to tolerate noise and increase the flexibility of the pattern model.
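The noise-tolerant matching idea borrowed from motif discovery can be illustrated as a simple rule: a sample matches a pattern of attribute–value pairs if at most a budgeted number of them disagree. A toy sketch (the attribute names and the mismatch budget below are invented for illustration, not taken from the paper):

```python
def matches(sample, pattern, max_mismatch=1):
    """True if the sample disagrees with (or is missing) at most
    max_mismatch of the pattern's attribute-value pairs."""
    mismatches = sum(1 for attr, val in pattern.items()
                     if sample.get(attr) != val)
    return mismatches <= max_mismatch

# hypothetical clinical pattern predicting a positive target
pattern = {"smoker": "yes", "age_band": "70+", "ef_low": "yes"}

s1 = {"smoker": "yes", "age_band": "70+", "ef_low": "no"}   # 1 mismatch
s2 = {"smoker": "no", "age_band": "60-69", "ef_low": "no"}  # 3 mismatches
```

With `max_mismatch=0` this degenerates to the exact matching of association rules; raising the budget is the "control to tolerate noise" described above.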
Feature selection usually works as an auxiliary method in combination with formal data mining methods for target prediction [23], but it works only at the attribute level (not the attribute–value level) and does not explicitly generate a prediction model for direct interpretation. On the other hand, the wide spectrum of feature selection methods provides many choices for selecting attributes for pattern discovery, such as Chi-squared test based feature selection [28].

Motivated by these observations, this work presents a pattern discovery classifier featuring a highly interpretable predictive pattern model for domain users, built on the noisy, imbalanced healthcare data seen in practice.
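The G-mean criterion this paper optimizes (per the abstract, the geometric mean of sensitivity and specificity) and the F1-score are straightforward to compute from a confusion matrix. A minimal sketch (the counts below are invented to mimic the ~9.1% death prevalence of the cardiac dataset; the paper's heuristic search itself is not reproduced):

```python
import math

def g_mean(tp, fp, tn, fn):
    """Geometric mean of sensitivity and specificity."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return math.sqrt(sensitivity * specificity)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and sensitivity (recall)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# imbalanced example: 91 positives out of 1000 samples;
# a classifier finds 70 of the 91 deaths while flagging 90 survivors
gm = g_mean(tp=70, fp=90, tn=819, fn=21)  # ≈ 0.83
```

Unlike overall accuracy, G-mean collapses to zero if either class is ignored entirely, which is why it suits minority-target prediction.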
"9812073",
"11923031",
"21537851",
"24050858",
"24971157",
"26286871",
"16792098",
"21801360",
"28065840",
"15637633",
"20529874",
"21193520"
] | [
{
"pmid": "9812073",
"title": "Implementation of a computerized cardiovascular information system in a private hospital setting.",
"abstract": "BACKGROUND\nThe use of clinical databases improves quality of care, reduces operating costs, helps secure managed care contracts, and assists in clinical resear... |
BioData Mining | 28465724 | PMC5408444 | 10.1186/s13040-017-0133-9 | Feature analysis for classification of trace fluorescent labeled protein crystallization images | BackgroundA large number of features is extracted from protein crystallization trial images to improve the accuracy of classifiers for predicting the presence of crystals or phases of the crystallization process. The excessive number of features and the computationally intensive image processing methods used to extract them make automated classification tools inconvenient to use on stand-alone computing systems, due to the time required to complete the classification tasks. Combinations of image feature sets, feature reduction and classification techniques for crystallization images benefiting from trace fluorescence labeling are investigated.ResultsFeatures are categorized into intensity, graph, histogram, texture, shape adaptive, and region features (using binarized images generated by Otsu’s, green percentile, and morphological thresholding). The effects of normalization, feature reduction with principal component analysis (PCA), and feature selection using a random forest classifier are also analyzed. The time required to extract feature categories is computed, and an estimated extraction time is provided for feature category combinations. We have conducted around 8624 experiments (different combinations of feature categories, binarization methods, feature reduction/selection, normalization, and crystal categories). The best experimental results are obtained using combinations of intensity features, region features using Otsu’s thresholding, region features using green percentile G90 thresholding, region features using green percentile G99 thresholding, graph features, and histogram features. Using this feature set combination, 96% accuracy (without misclassifying crystals as non-crystals) was achieved for the first level of classification, which determines the presence of crystals. Since missing a crystal is not desired, our algorithm is adjusted to achieve a high sensitivity rate. In the second-level classification, 74.2% accuracy was achieved for (5-class) crystal sub-category classification. The best classification rates were achieved using the random forest classifier.ContributionsThe feature extraction and classification could be completed in about 2 s per image on a stand-alone computing system, which is suitable for real-time analysis. These results enable research groups to select features according to their hardware setups for real-time analysis.
Table 1Summary of related workResearch paperImage categoriesFeature extractionClassification methodClassification accuracyZuk and Ward (1991) [7]NAEdge featuresDetection of lines using Hough transform and line trackingNot providedWalker et al. (2007) [22]7Radial and angular descriptors from Fourier TransformLearning vector quantization14 - 97% for different categoriesXu et al. (2006) [23]2Features from multiscale Laplacian pyramid filtersNeural network95% accuracyWilson (2002) [24]3Intensity and geometric featuresNaive BayesRecall 86% for crystals, 77% for unfavourable objectsHung et al. (2014) [26]3Shape context, Gabor filters and Fourier transformsCascade classifier on naive Bayes and random forest74% accuracySpraggon et al. (2002) [17]6Geometric and texture featuresSelf-organizing neural networks47 to 82% for different categoriesCumba et al. (2003) [8]2Radon transform line features and texture featuresLinear discriminant analysis85% accuracy with roc 0.84Saitoh et al. (2004) [20]5Geometric and texture featuresLinear discriminant analysis80 - 98% for different categoriesBern et al. (2004) [15]5Gradient and geometric featuresDecision tree with hand crafted thresholds12% FN and 14% FPCumba et al. (2005) [9]2Texture features, line measures and energy measuresAssociation rule mining85% accuracy with ROC 0.87Zhu et al. (2004) [10]2Geometric and texture featuresDecision tree with boosting14.6% FP and 9.6% FNBerry et al. (2006) [11]2NALearning vector quantization, self organizing maps and bayesian algorithmNAPan et al. (2006) [12]2Intensity stats, texture features, Gabor wavelet decompositionSupport vector machine2.94% FN and 37.68% FPYang et al. (2006) [14]3Hough transform, DFT, GLCM featuresHand tuned thresholds85% accuracySaitoh et al. 
(2006) [16]5Texture features, differential image featuresDecision tree and SVM90% for 3-class problemPo and Laine (2008) [13]2Multiscale Laplacian pyramid filters and histogram analysisGenetic algorithm and neural networkAccuracy: 93.5% with 88% TP and 99% TNLiu et al. (2008) [21]Crystal likelihoodFeatures from Gabor filters, integral histograms, and gradient imagesDecision tree with boostingROC 0.92Cumba et al. (2010) [18]3 and 6Basic stats, energy, Euler numbers, Radon-Laplacian, Sobel-edge, GLCMMultiple random forest with bagging and feature subsamplingRecall 80% crystals, 89% precipitate, 98% clear dropsSigdel et al. (2013) [28]3Intensity and blob featuresMultilayer perception neural network1.2% crystal misses with 88% accuracySigdel et al. (2014) [25]3Intensity and blob featuresSemi-supervised75% - 85% overall accuracyDinc et al. (2014) [27]3 and 2Intensity and blob features5 classifiers, feature reduction using PCA96% on non-crystals, 95% on likely-leadsYann et al. (2016) [19]10Deep learnining on grayscale imageDeep CNN with 13 layers90.8% accuracy
The Number of Categories. A significant amount of previous work (for example, Zuk and Ward (1991) [7], Cumba et al. (2003) [8], Cumba et al. (2005) [9], Zhu et al. (2004) [10], Berry et al. (2006) [11], Pan et al. (2006) [12], Po and Laine (2008) [13]) classified crystallization trials into non-crystal or crystal categories. Yang et al. (2006) [14] classified the trials into three categories (clear, precipitate, and crystal). Bern et al. (2004) [15] classified the images into five categories (empty, clear, precipitate, microcrystal hit, and crystal). Likewise, Saitoh et al. (2006) [16] classified into five categories (clear drop, creamy precipitate, granulated precipitate, amorphous state precipitate, and crystal). Spraggon et al. (2002) [17] proposed classification of the crystallization images into six categories (experimental mistake, clear drop, homogeneous precipitant, inhomogeneous precipitant, micro-crystals, and crystals). Cumba et al. (2010) [18] developed a system that classifies the images into three or six categories (phase separation, precipitate, skin effect, crystal, junk, and unsure). Yann et al. (2016) [19] classified into 10 categories (clear, precipitate, crystal, phase, precipitate and crystal, precipitate and skin, phase and crystal, phase and precipitate, skin, and junk). It should be noted that there is no standard for categorizing the images, and different research studies proposed different categories in their own way. Hampton’s scheme specifies 9 possible outcomes of crystallization trials. We intend to classify the crystallization trials according to Hampton’s scale.
Features for Classification. For feature extraction, a variety of image processing techniques have been proposed. Zuk and Ward (1991) [7] used the Hough transform to identify straight edges of crystals. Bern et al. (2004) [15] extracted gradient and geometry-related features from the selected drop. Pan et al. (2006) [12] used intensity statistics, blob texture features, and results from Gabor wavelet decomposition to obtain the image features. Research studies by Cumba et al. (2003) [8], Saitoh et al. (2004) [20], Spraggon et al. (2002) [17], and Zhu et al. (2004) [10] used a combination of geometric and texture features as the input to their classifiers. Saitoh et al. (2006) [16] used global texture features as well as features from local parts in the image and features from differential images. Yang et al. (2006) [14] derived the features from the gray-level co-occurrence matrix, Hough transform and discrete Fourier transform (DFT). Liu et al. (2008) [21] extracted features from Gabor filters, integral histograms, and gradient images to obtain a 466-dimensional feature vector. Po and Laine (2008) [13] applied multiscale Laplacian pyramid filters and histogram analysis techniques for feature extraction. Similarly, other extracted image features included Hough transform features [13], discrete Fourier transform features [22], features from multiscale Laplacian pyramid filters [23], histogram analysis features [9], Sobel-edge features [24], etc. Cumba et al. (2010) [18] presented the most sophisticated feature extraction techniques for the classification of crystallization trial images. Features such as basic statistics, energy, Euler numbers, Radon-Laplacian features, Sobel-edge features, microcrystal features, and gray-level co-occurrence matrix features were extracted to obtain a 14,908-dimensional feature vector.
They utilized a web-based distributed system and extracted as many features as possible, hoping that the huge feature set could improve the accuracy of the classification [18].
Time Analysis of Classification. Because of the high-throughput rate of image collection, the speed of processing an image becomes an important factor. The system by Pan et al. (2006) [12] required 30 s per image for feature extraction. Po and Laine mentioned that it took 12.5 s per image for the feature extraction in their system [13]. Because of the high computational requirements, they considered implementing their approach on the Google computing grid. The feature extraction described by Cumba et al. (2010) [18] is the most sophisticated and could take 5 h per image on a normal system. To speed up the process, they executed the feature extraction using a web-based distributed computing system. Yann et al. (2016) [19] utilized a deep convolutional neural network (CNN), where training took 1.5 days for 150,000 weights and around 300 passes, and classification took 86 ms for a 128x128 image on their GPU-based system.
Classifiers for Protein Crystallization. To obtain the decision model for classification, various classification techniques have been used. Zhu et al. (2004) [10] and Liu et al. (2008) [21] applied a decision tree with boosting. Bern et al. (2004) [15] used a decision tree classifier with hand-crafted thresholds. Pan et al. (2006) [12] applied a support vector machine (SVM) learning algorithm. Saitoh et al. (2006) [16] applied a combination of decision tree and SVM classifiers. Spraggon et al. (2002) [17] applied self-organizing neural networks. Po and Laine (2008) [13] combined genetic algorithms and neural networks to obtain a decision model. Berry et al. (2006) [11] determined scores for each object within a drop using self-organizing maps, learning vector quantization, and Bayesian algorithms. The overall score for the drop was calculated by aggregating the classification scores of individual objects. Cumba et al. (2003) [8] and Saitoh et al. (2004) [20] applied linear discriminant analysis. Yang et al. (2006) [14] applied hand-tuned rule-based classification followed by linear discriminant analysis. Cumba et al. (2005) [9] used association rule mining, while Cumba et al. (2010) [18] used multiple random forest classifiers generated via bagging and feature subsampling. In [25], classification performance using semi-supervised approaches was investigated. The recent study by Hung et al. (2014) [26] proposed protein crystallization image classification using elastic net. In our previous work [27], we evaluated the classification performance using 5 different classifiers, and feature reduction using principal components analysis (PCA) and normalization methods for the non-crystal and likely-lead datasets. Yann et al.
(2016) [19] utilized deep convolutional neural networks (CNN) with 13 layers: 0) 128x128 image, 1) contrast normalization, 2) horizontal mirroring, 3) transformation, 4) convolution (5x5 filter), 5) max pooling (2x2 filter), 6) convolution (5x5 filter), 7) max pooling (2x2 filter), 8) convolution (5x5 filter), 9) max pooling (2x2 filter), 10) convolution (3x3 filter), 11) 2048 node fully connected layer, 12) 2048 fully connected layer for rectified linear activation, and 13) output layer using softmax.
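The PCA feature reduction used before classification in [27] can be sketched with a plain SVD. This is a generic illustration of the technique, not that paper's exact pipeline; the toy feature matrix and the number of retained components are invented:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project feature vectors onto the top principal components."""
    Xc = X - X.mean(axis=0)            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in the reduced space

# toy feature matrix: 5 images x 4 extracted features (nearly collinear)
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.1, 6.0, 8.2],
              [3.0, 6.0, 9.1, 12.0],
              [1.5, 3.0, 4.6, 6.1],
              [2.5, 5.1, 7.5, 10.0]])
Z = pca_reduce(X, n_components=2)
```

By construction, the first column of `Z` carries at least as much variance as the second, so correlated image features collapse into a few informative components.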
Accuracy of Classification. With regard to the correctness of a classification, the best reported accuracy for binary classification (i.e., classification into two categories) is 96.56% (83.6% true positive rate and 99.4% true negative rate) using a deep CNN [19]. Despite this high accuracy, around 16% of crystals are missed. Using genetic algorithms and neural networks [13], an accuracy of 93.5% average true performance (88% true positive and 99% true negative rates) is achieved for binary classification. Saitoh et al. achieved accuracy in the range of 80−98% for different image categories [20]. Likewise, the automated system by Cumba et al. (2010) [18] accurately detected 80% of crystal-bearing images, 89% of precipitate images, and 98% of clear drops. The accuracy also depends on the number of categories: as the number of categories increases, the accuracy goes down since more misclassifications are possible. For 10-way classification using a deep CNN, Yann et al. [19] achieved 91% accuracy, with around a 76.85% true positive rate for crystals and 8% of crystals categorized into classes not related to crystals. While overall accuracy is important, the true positive rate (recall or sensitivity) for crystals may carry more value. If crystallographers are to trust these automated classification systems, it is not desirable for successful crystalline cases to be missed by these systems.
In this study, we will investigate whether it is possible to achieve high accuracy with a small feature set using a proper classifier, considering as many as 10 categories, for real-time analysis. We provide an exhaustive set of experiments using all feature combinations and representative classifiers to achieve real-time analysis.
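The reported figures also let one back out the implied class balance. If the overall accuracy is a prevalence-weighted average of the true positive and true negative rates (an assumption about how the figure was computed, not stated in the text), then acc = p*TPR + (1-p)*TNR can be solved for the crystal fraction p:

```python
def implied_prevalence(acc, tpr, tnr):
    # acc = p*tpr + (1 - p)*tnr  =>  p = (tnr - acc) / (tnr - tpr)
    return (tnr - acc) / (tnr - tpr)

# Figures reported for the deep CNN of [19]
p = implied_prevalence(0.9656, 0.836, 0.994)
print(p)  # ~0.18
```

Under this assumption, crystals make up only about 18% of the test images, which is why an accuracy near 97% can coexist with roughly 16% of crystals being missed.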
"24419610",
"26955046",
"12925793",
"17001091",
"16510974",
"12393922",
"19018095",
"12393921",
"24532991",
"20360022",
"15652250"
] | [
{
"pmid": "24419610",
"title": "Introduction to protein crystallization.",
"abstract": "Protein crystallization was discovered by chance about 150 years ago and was developed in the late 19th century as a powerful purification tool and as a demonstration of chemical purity. The crystallization of protei... |
Frontiers in Computational Neuroscience | 28522969 | PMC5415673 | 10.3389/fncom.2017.00024 | Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation | We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point or stationary distribution) toward a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged toward their target in the second phase, and the perturbation introduced at the output layer propagates backward in the hidden layers. We show that the signal “back-propagated” during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function, when the synaptic update corresponds to a standard form of spike-timing dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not. 
We also show experimentally that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task. | 4. Related work
In Section 2.3, we have discussed the relationship between Equilibrium Propagation and Backpropagation. In the weakly clamped phase, the change of the influence parameter β creates a perturbation at the output layer which propagates backwards in the hidden layers. The error derivatives and the gradient of the objective function are encoded by this perturbation.
In this section, we discuss the connection between our work and other algorithms, starting with Contrastive Hebbian Learning. Equilibrium Propagation offers a new perspective on the relationship between Backpropagation in feedforward nets and Contrastive Hebbian Learning in Hopfield nets and Boltzmann machines (Table 1).

Table 1. Correspondence of the phases for different learning algorithms: Backpropagation, Equilibrium Propagation (our algorithm), Contrastive Hebbian Learning (and Boltzmann Machine Learning), and Almeida-Pineda's Recurrent Back-Propagation.

             | Backprop      | Equilibrium Prop     | Contrastive Hebbian Learning      | Almeida-Pineda
First Phase  | Forward Pass  | Free Phase           | Free Phase (or Negative Phase)    | Free Phase
Second Phase | Backward Pass | Weakly Clamped Phase | Clamped Phase (or Positive Phase) | Recurrent Backprop

4.1. Link to contrastive hebbian learning
Despite the similarity between our learning rule and the Contrastive Hebbian Learning rule (CHL) for the continuous Hopfield model, there are important differences.
First, recall that our learning rule is:
(27) ΔW_ij ∝ lim_{β→0} (1/β) ( ρ(u_i^β) ρ(u_j^β) − ρ(u_i^0) ρ(u_j^0) ),
where u^0 is the free fixed point and u^β is the weakly clamped fixed point. The Contrastive Hebbian Learning rule is:
(28) ΔW_ij ∝ ρ(u_i^∞) ρ(u_j^∞) − ρ(u_i^0) ρ(u_j^0),
where u^∞ is the fully clamped fixed point (i.e., the fixed point with fully clamped outputs). We choose the notation u^∞ for the fully clamped fixed point because it corresponds to β → +∞ with the notations of our model. Indeed, Equation (9) shows that in the limit β → +∞, the output unit y_i moves infinitely fast toward d_i, so y_i is immediately clamped to d_i and is no longer sensitive to the "internal force" (Equation 8). Another way to see it is by considering Equation (3): as β → +∞, the only value of y that gives finite energy is d.
The objective functions that these two algorithms optimize also differ. Recalling the form of the Hopfield energy (Equation 1) and the cost function (Equation 2), Equilibrium Propagation computes the gradient of:
(29) J = (1/2) ‖y^0 − d‖²,
where y^0 is the output state at the free phase fixed point u^0, while CHL computes the gradient of:
(30) J^CHL = E(u^∞) − E(u^0).
The objective function for CHL has theoretical problems: it may take negative values if the clamped phase and free phase stabilize in different modes of the energy function, in which case the weight update is inconsistent and learning usually deteriorates, as pointed out by Movellan (1990). Our objective function does not suffer from this problem, because it is defined in terms of local perturbations, and the implicit function theorem guarantees that the weakly clamped fixed point will be close to the free fixed point (thus in the same mode of the energy function).We can also reformulate the learning rules and objective functions of these algorithms using the notations of the general setting (Section 3). For Equilibrium Propagation we have:
Δθ ∝ −lim_{β→0} (1/β) ( ∂F/∂θ(θ, v, β, s_{θ,v}^β) − ∂F/∂θ(θ, v, 0, s_{θ,v}^0) )
and
(31) J(θ, v) = ∂F/∂β(θ, v, 0, s_{θ,v}^0).
As for Contrastive Hebbian Learning, one has
Δθ ∝ −( ∂F/∂θ(θ, v, ∞, s_{θ,v}^∞) − ∂F/∂θ(θ, v, 0, s_{θ,v}^0) )
and
(32) J^CHL(θ, v) = F(θ, v, ∞, s_{θ,v}^∞) − F(θ, v, 0, s_{θ,v}^0),
where β = 0 and β = ∞ are the values of β corresponding to free and (fully) clamped outputs, respectively.
Our learning algorithm is also more flexible because we are free to choose the cost function C (as well as the energy function E), whereas the contrastive function that CHL optimizes is fully determined by the energy function E.
4.2. Link to boltzmann machine learning
Again, the log-likelihood that the Boltzmann machine optimizes is determined by the Hopfield energy E, whereas we have the freedom to choose the cost function in the framework for Equilibrium Propagation.
As discussed in Section 2.3, the second phase of Equilibrium Propagation (going from the free fixed point to the weakly clamped fixed point) can be seen as a brief "backpropagation phase" with weakly clamped target outputs. By contrast, in the positive phase of the Boltzmann machine, the target is fully clamped, so the (correct version of the) Boltzmann machine learning rule requires two separate and independent phases (Markov chains), making an analogy with backprop less obvious.
Our algorithm is also similar in spirit to the CD algorithm (Contrastive Divergence) for Boltzmann machines. In our model, we start from a free fixed point (which requires a long relaxation in the free phase) and then we run a short weakly clamped phase. In the CD algorithm, one starts from a positive equilibrium sample with the visible units clamped (which requires a long positive phase Markov chain in the case of a general Boltzmann machine) and then one runs a short negative phase. But there is an important difference: our algorithm computes the correct gradient of our objective function (in the limit β → 0), whereas the CD algorithm computes a biased estimator of the gradient of the log-likelihood.
The CD1 update rule is provably not the gradient of any objective function and may cycle indefinitely in some pathological cases (Sutskever and Tieleman, 2010).
Finally, in the supervised setting presented in Section 2, a more subtle difference with the Boltzmann machine is that the "output" state y in our model is best thought of as being part of the latent state variable s. If we were to make an analogy with the Boltzmann machine, the visible units of the Boltzmann machine would be v = {x, d}, while the hidden units would be s = {h, y}. In the Boltzmann machine, the state of the external world is inferred directly on the visible units (because it is a probabilistic generative model that maximizes the log-likelihood of the data), whereas in our model we make the choice to integrate in s special latent variables y that aim to match the target d.
4.3. Link to recurrent back-propagation
Directly connected to our model is the work by Pineda (1987) and Almeida (1987) on recurrent back-propagation. They consider the same objective function as ours, but formulate the problem as a constrained optimization problem. In Appendix B, we derive another proof for the learning rule (Theorem 1) with the Lagrangian formalism for constrained optimization problems. The beginning of this proof is in essence the same as the one proposed by Pineda (1987); Almeida (1987), but there is a major difference when it comes to solving Equation (75) for the costate variable λ*. The method proposed by Pineda (1987) and Almeida (1987) is to use Equation (75) to compute λ* by a fixed point iteration in a linearized form of the recurrent network. The computation of λ* corresponds to their second phase, which they call recurrent back-propagation. However, this second phase does not follow the same kind of dynamics as the first phase (the free phase) because it uses a linearization of the neural activation rather than the fully non-linear activation.
From a biological plausibility point of view, having to use a different kind of hardware and computation for the two phases is not satisfying. By contrast, like the continuous Hopfield net and the Boltzmann machine, our model involves only one kind of neural computation for both phases.
4.4. The model by Xie and Seung
Previous work on the back-propagation interpretation of contrastive Hebbian learning was done by Xie and Seung (2003). The model by Xie and Seung (2003) is a modified version of the Hopfield model. They consider the case of a layered MLP-like network, but their model can be extended to a more general connectivity, as shown here. In essence, using the notations of our model (Section 2), the energy function that they consider is:
(33) E^{X&S}(u) := (1/2) Σ_i γ^i u_i² − Σ_{i<j} γ^j W_ij ρ(u_i) ρ(u_j) − Σ_i γ^i b_i ρ(u_i).
The difference with Equation (1) is that they introduce a parameter γ, assumed to be small, that scales the strength of the connections. Their update rule is the contrastive Hebbian learning rule which, for this particular energy function, takes the form:
(34) ΔW_ij ∝ −( ∂E^{X&S}/∂W_ij(u^∞) − ∂E^{X&S}/∂W_ij(u^0) ) = γ^j ( ρ(u_i^∞) ρ(u_j^∞) − ρ(u_i^0) ρ(u_j^0) )
for every pair of indices (i, j) such that i < j. Here, u^∞ and u^0 are the (fully) clamped fixed point and free fixed point, respectively. Xie and Seung (2003) show that in the regime γ → 0 this contrastive Hebbian learning rule is equivalent to back-propagation. At the free fixed point u^0, one has ∂E^{X&S}/∂s_i(u^0) = 0 for every unit s_i, which yields, after dividing by γ^i and rearranging the terms:
(35) s_i^0 = ρ′(s_i^0) ( Σ_{j<i} W_ij ρ(u_j^0) + Σ_{j>i} γ^{j−i} W_ij ρ(u_j^0) + b_i ).
In the limit γ → 0, one gets s_i^0 ≈ ρ′(s_i^0) ( Σ_{j<i} W_ij ρ(u_j^0) + b_i ), so that the network almost behaves like a feedforward net in this regime. As a comparison, recall that in our model (Section 2) the energy function is:
(36) E(u) := (1/2) Σ_i u_i² − Σ_{i<j} W_ij ρ(u_i) ρ(u_j) − Σ_i b_i ρ(u_i),
the learning rule is:
(37) ΔW_ij ∝ −lim_{β→0} (1/β) ( ∂E/∂W_ij(u^β) − ∂E/∂W_ij(u^0) ) = lim_{β→0} (1/β) ( ρ(u_i^β) ρ(u_j^β) − ρ(u_i^0) ρ(u_j^0) ),
and at the free fixed point, we have ∂E/∂s_i(u^0) = 0 for every unit s_i, which gives:
(38) s_i^0 = ρ′(s_i^0) ( Σ_{j≠i} W_ij ρ(u_j^0) + b_i ).
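The two-phase update of Equations (36)–(37) can be checked numerically on a toy network. The sketch below is our own illustration (not code from the paper): a three-unit chain x → h → y with made-up weights, input, and target. It relaxes to the free and weakly clamped fixed points by gradient descent on the total energy F = E + βC, forms the update of Equation (37) for the h–y weight, and compares it with a finite-difference estimate of −∂J/∂W, where J = (1/2)(y^0 − d)²:

```python
from math import tanh

rho = tanh                                # illustrative choice of activation
def drho(u): return 1.0 - tanh(u) ** 2

x, d = 0.5, 0.8                           # clamped input and target (made up)

def settle(W1, W2, beta, steps=20000, eps=0.01):
    """Relax (h, y) by gradient descent on F = E + beta*C for the chain x-h-y."""
    h = y = 0.0
    for _ in range(steps):
        gh = h - drho(h) * (W1 * rho(x) + W2 * rho(y))     # dF/dh
        gy = y - drho(y) * (W2 * rho(h)) + beta * (y - d)  # dF/dy
        h, y = h - eps * gh, y - eps * gy
    return h, y

W1, W2, beta = 0.3, 0.7, 1e-4
h0, y0 = settle(W1, W2, 0.0)              # free phase
hb, yb = settle(W1, W2, beta)             # weakly clamped phase

# Equation (37): EP estimate of -dJ/dW2 (up to the learning rate)
ep_grad = (rho(hb) * rho(yb) - rho(h0) * rho(y0)) / beta

# Finite-difference check of -dJ/dW2, with J = 0.5 * (y0 - d)**2
def J(w2):
    _, y = settle(W1, w2, 0.0)
    return 0.5 * (y - d) ** 2
fd_grad = -(J(W2 + 1e-5) - J(W2 - 1e-5)) / 2e-5
```

For small β the two estimates agree to within O(β), which is the content of the theorem: the weakly clamped phase implicitly propagates the error derivative.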
Here are the main differences between our model and theirs. In our model, the feedforward and feedback connections are both strong. In their model, the feedback weights are tiny compared to the feedforward weights, which makes the (recurrent) computations look almost feedforward. In our second phase, the outputs are weakly clamped; in their second phase, they are fully clamped. The theory of our model requires a unique learning rate for the weights, while in their model the update rule for W_ij (with i < j) is scaled by a factor γ^j (see Equation 34). Since γ is small, the learning rates for the weights vary over many orders of magnitude in their model. Intuitively, these multiple learning rates are required to compensate for the small feedback weights.
"27870614",
"21212356",
"11283308",
"7054394",
"22920249",
"19325932",
"11919633",
"22080608",
"18255734",
"12180402",
"6587342",
"22807913",
"12590814"
] | [
{
"pmid": "27870614",
"title": "Active Inference: A Process Theory.",
"abstract": "This article describes a process theory based on active inference and belief propagation. Starting from the premise that all neuronal processing (and action selection) can be explained by maximizing Bayesian model evidenc... |
Journal of Cheminformatics | 29086162 | PMC5425364 | 10.1186/s13321-017-0213-3 | Scaffold Hunter: a comprehensive visual analytics framework for drug discovery | The era of big data is influencing the way how rational drug discovery and the development of bioactive molecules is performed and versatile tools are needed to assist in molecular design workflows. Scaffold Hunter is a flexible visual analytics framework for the analysis of chemical compound data and combines techniques from several fields such as data mining and information visualization. The framework allows analyzing high-dimensional chemical compound data in an interactive fashion, combining intuitive visualizations with automated analysis methods including versatile clustering methods. Originally designed to analyze the scaffold tree, Scaffold Hunter is continuously revised and extended. We describe recent extensions that significantly increase the applicability for a variety of tasks. | Related workSeveral other software tools exist that address the challenges regarding the organization and analysis of chemical and biological data. Early tools such as Spotfire [8] were not originally developed to analyze these kinds of data, but are often applied to compound datasets. Simultaneously, workflow environments such as the Konstanz Information Miner (KNIME) [9], Pipeline Pilot [10] or Taverna [11] were developed. The basic idea was to enable scientists in the life sciences to perform tasks which are traditionally in the domain of data analytics specialists. KNIME additionally integrates specific cheminformatics extensions [12]. Some of them focus on the integration of chemical toolkits (e.g., RDKit [13], CDK [14], and Indigo [15]) and some others on analytical aspects (e.g., CheS-Mapper [16, 17]). CDK is likewise available in Taverna [18, 19] and Pipeline Pilot can integrate ChemAxon components [20]. 
Thus, these tools assist the scientists in their decision making process, e.g., which compounds should undergo further investigation. While these workflow systems facilitate data-orientated tasks such as filtering or property calculations, they lack an intuitive visualization of the chemical space. Hence, it is challenging to evaluate the results and to plan subsequent steps or to draw conclusions from a performed screen. Recently, tools tailored to the specific needs of life scientists in the chemical biology, medicinal chemistry and pharmaceutical domain were developed. These include MONA 2 [21], Screening Assistant 2 [22], DataWarrior [23], the Chemical Space Mapper (CheS-Mapper) [16, 17] and the High-Throughput Screening Exploration Environment (HiTSEE) [24, 25]. The last two tools complement the workflow environment KNIME with a visualization node. To the best of the authors’ knowledge, HiTSEE is not publicly available at present. Screening Assistant 2, CheS-Mapper and DataWarrior are open-source tools based on Java, which leads to a platform independent applicability. MONA 2 focuses on set operations and is particularly useful for the comparison of datasets and the tracking of changes. DataWarrior has a wider range of features and is beyond the scope of a pure analysis software. For example, it is capable of generating combinatorial libraries. Screening Assistant 2 was originally developed to manage screening libraries, and is able to deal with several million substances [22]. Furthermore, during import, datasets are scanned for problematic molecules or features of molecules like Pan Assay Interference Compounds (PAINS), which may disturb the assay setup or unspecifically bind to diverse proteins [26]. CheS-Mapper focuses on the assessment of quantitative structure activity relationship (QSAR) studies. 
Hence, it facilitates the discovery of changes in molecular structure to explain (Q)SAR models by visualizing the results (either predicted or experimentally determined). CheS-Mapper utilizes the software R [27] and the WEKA toolkit [28] to visually embed analyzed molecules in 3D space. In summary, DataWarrior and CheS-Mapper, as well as Scaffold Hunter, are able to assist the discovery and analysis of SAR by utilizing different visualization techniques; see Table 1. All three tools use dimension reduction techniques and clustering methods. DataWarrior and Scaffold Hunter support a set of different visualizations in order to cope with more diverse issues and aspects regarding the raw data. Both are able to visualize bioactivity data related to chemical structures smoothly. DataWarrior utilizes self-organizing maps, principal component analysis and 2D rubber band scaling to reduce data dimensionality. In contrast, Scaffold Hunter employs the scaffold concept, which provides the basis for the scaffold tree view, the innovative molecule cloud view and the heat map view, which enables the user to analyze multiple properties such as bioactivities referring to different selectivity within a protein family. Altogether, Scaffold Hunter provides a unique collection of data visualizations to solve the most frequent molecular design and drug discovery tasks.

Table 1. Comparison of visualization techniques of cheminformatics software supporting visualization

Technique        | DataWarrior        | CheS-Mapper               | Scaffold Hunter
Plot             | Yes                | –                         | Yes
Dim. reduction   | PCA, 2D-RBS^a, SOM | PCA, MDS                  | MDS
Spreadsheet      | Yes                | –                         | Yes
Clustering       | Hierarchical       | WEKA/R methods            | Hierarchical
Special features | 2D-RBS^a           | 3D space, web application | Scaffold concept, collaborative features, fast heuristic clustering [29]

^a 2D rubber band scaling
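As an illustration of the hierarchical clustering both DataWarrior and Scaffold Hunter offer, the sketch below is a generic single-linkage agglomeration of ours (not Scaffold Hunter's actual fast heuristic from [29]) over toy 2-D "descriptor" vectors:

```python
def single_linkage(points, k):
    """Agglomerative single-linkage clustering down to k clusters (O(n^3) sketch)."""
    clusters = [[i] for i in range(len(points))]

    def d(i, j):  # Euclidean distance between two descriptor vectors
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5

    def linkage(a, b):  # single linkage: distance of the closest cross-cluster pair
        return min(d(i, j) for i in a for j in b)

    while len(clusters) > k:
        # merge the two closest clusters
        ai, bi = min(((a, b) for a in range(len(clusters))
                      for b in range(a + 1, len(clusters))),
                     key=lambda ab: linkage(clusters[ab[0]], clusters[ab[1]]))
        clusters[ai] += clusters.pop(bi)
    return [sorted(c) for c in clusters]

# Two well-separated groups of toy 2-D "descriptors" (made-up values)
pts = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(single_linkage(pts, 2))  # [[0, 1, 2], [3, 4]]
```

Production tools replace this cubic sketch with faster linkage algorithms and chemical similarity measures (e.g., fingerprint-based distances) instead of raw Euclidean distance.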
"27779378",
"19561620",
"17238248",
"22769057",
"22424447",
"24397863",
"22166170",
"22644661",
"26389652",
"23327565",
"25558886",
"16180923",
"16859312",
"27485979",
"20426451",
"22537178",
"19561619",
"20481515",
"20394088",
"17125213",
"21615076"
] | [
{
"pmid": "27779378",
"title": "What Can We Learn from Bioactivity Data? Chemoinformatics Tools and Applications in Chemical Biology Research.",
"abstract": "The ever increasing bioactivity data that are produced nowadays allow exhaustive data mining and knowledge discovery approaches that change chemic... |
Frontiers in Psychology | 28588533 | PMC5439009 | 10.3389/fpsyg.2017.00824 | A Probabilistic Model of Meter Perception: Simulating Enculturation | Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms. | 1.2. Related workOur approach in some respects resembles other recent probabilistic models, in particular a generative model presented by Temperley (2007). Temperley (2007, ch. 2) models meter perception as probabilistic inference on a generative model whose parameters are estimated using a training corpus. Meter is represented as a multi-leveled hierarchical framework, which the model generates level by level. The probability of onsets depends only on the metrical status of the corresponding onset time. 
Temperley (2009) generalizes this model to polyphonic musical structure, and introduces a metrical model that conditions onset probability on whether onsets occur on surrounding metrically stronger beats. This approach introduces some sensitivity to rhythmic context into the model. In later work, Temperley (2010) evaluates this model, the hierarchical position model, and compares its performance to other metrical models with varying degrees of complexity. One model, called the first-order metrical position model, was found to perform slightly better than the hierarchical position model, but this increase in performance comes at the cost of a higher number of parameters. Temperley concludes that the hierarchical position model provides the best trade-off between model complexity and performance.
In a different approach, Holzapfel (2015) employs Bayesian model selection to investigate the relation between usul (a type of rhythmic mode, similar in some ways to meter) and rhythm in Turkish makam music. The representation of metrical structure does not assume hierarchical organization, allowing for arbitrary onset distributions to be learned. Like the models compared by Temperley (2010), this model is not presented explicitly as a meter-finding model, but is used to investigate the statistical properties of a corpus of rhythms.
The approach presented here diverges from these models in that it employs a general-purpose probabilistic model of sequential temporal expectation based on statistical learning (Pearce, 2005) combined with an integrated process of metrical inference such that expectations are generated given an inferred meter. The sequential model is a variable-order metrical position model. Taking into account preceding context widens the range of statistical properties of rhythmic organization that can be learned by the model.
In particular, the model is capable of representing not only the frequency of onsets at various metrical positions, but also the probability of onsets at metrical positions conditioned on the preceding rhythmic sequence. The vastly increased number of parameters of this model introduces a risk of over-fitting; models with many parameters may start to fit to noise in their training data, which harms generalization performance. However, we employ sophisticated smoothing techniques that avoid over-fitting (Pearce and Wiggins, 2004). Furthermore, we safeguard against over-fitting to some extent by evaluating our model using cross-validation.
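The core idea of metrical-position modeling can be sketched in a few lines. The toy below is our drastic simplification (a zeroth-order position model, not the variable-order model described above, and with made-up rhythms): it estimates P(onset | position mod period) from a corpus with add-one smoothing and then infers the metrical period of a new rhythm as the one under which its onsets are best predicted:

```python
import math
from collections import defaultdict

def fit_position_model(rhythms, period):
    """P(onset | position mod period) with add-one smoothing."""
    on, tot = defaultdict(int), defaultdict(int)
    for r in rhythms:
        for i, x in enumerate(r):
            on[i % period] += x
            tot[i % period] += 1
    return {p: (on[p] + 1) / (tot[p] + 2) for p in range(period)}

def log_likelihood(r, model, period):
    return sum(math.log(model[i % period] if x else 1.0 - model[i % period])
               for i, x in enumerate(r))

# Toy corpus of duple-meter rhythms quantized to 8 positions per bar (made up)
corpus = [[1, 0, 1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0, 1, 0]] * 5
test_rhythm = [1, 0, 1, 0, 1, 0, 1, 0]

# Infer the metrical period as the candidate that best predicts the onsets
scores = {p: log_likelihood(test_rhythm, fit_position_model(corpus, p), p)
          for p in (3, 4)}
best = max(scores, key=scores.get)
print(best)  # 4: the duple interpretation predicts the onsets better
```

This illustrates the predictive-coding intuition of the paper: the "right" meter is the one that makes future onsets least surprising, and what counts as right depends on the corpus of exposure.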
"23663408",
"21553992",
"22659582",
"23605956",
"15937014",
"26594881",
"15462633",
"22352419",
"15660851",
"16105946",
"25295018",
"27383617",
"7155765",
"8637596",
"22414591",
"23707539",
"2148588",
"21180358",
"22847872",
"10195184",
"26124105",
"16495999",
"25324813",... | [
{
"pmid": "23663408",
"title": "Whatever next? Predictive brains, situated agents, and the future of cognitive science.",
"abstract": "Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match ... |
JMIR Medical Informatics | 28487265 | PMC5442348 | 10.2196/medinform.7235 | Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies | BackgroundExtracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedbacks for improving the extraction algorithm in real time.ObjectiveOur goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results.MethodsA clinical information extraction system IDEAL-X has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedbacks to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction.ResultsThree datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports—each combines history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%.ConclusionsIDEAL-X adopts a unique online machine learning–based approach combined with controlled vocabularies to support data extraction for clinical reports. 
The system can quickly learn and improve; thus, it is highly adaptable.
Active learning usually requires comprehending the entire corpus in order to pick the most useful data point. However, in a clinical environment, data arrive in a streaming fashion over time, which limits our ability to choose data points. Hence, an online learning approach is more suitable.
IDEAL-X adopts the Hidden Markov Model for its compatibility with online learning, and for its efficiency and scalability. We will also describe a broader set of contextual information used by the learning algorithm to facilitate extraction of values of all types.
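A minimal sketch of the idea (our illustration with made-up labels and add-one smoothing, not IDEAL-X's actual feature set or implementation): an HMM whose transition and emission counts are folded in one user-corrected document at a time, then used to Viterbi-decode the next document.

```python
import math
from collections import defaultdict

class OnlineHMM:
    """Tiny HMM updated one corrected document at a time (illustrative sketch)."""
    def __init__(self):
        self.init = defaultdict(int)
        self.trans = defaultdict(lambda: defaultdict(int))
        self.emit = defaultdict(lambda: defaultdict(int))

    def update(self, tokens, labels):
        """Fold one user-corrected labeling into the counts (the feedback step)."""
        self.init[labels[0]] += 1
        for tok, lab in zip(tokens, labels):
            self.emit[lab][tok] += 1
        for a, b in zip(labels, labels[1:]):
            self.trans[a][b] += 1

    @staticmethod
    def _logp(counts, key):
        total = sum(counts.values())
        return math.log((counts.get(key, 0) + 1) / (total + len(counts) + 1))

    def decode(self, tokens):
        """Viterbi decoding under the current (smoothed) counts."""
        states = list(self.emit)
        v = {s: self._logp(self.init, s) + self._logp(self.emit[s], tokens[0])
             for s in states}
        back = []
        for tok in tokens[1:]:
            nv, ptr = {}, {}
            for s in states:
                prev = max(states, key=lambda p: v[p] + self._logp(self.trans[p], s))
                nv[s] = v[prev] + self._logp(self.trans[prev], s) + self._logp(self.emit[s], tok)
                ptr[s] = prev
            v = nv
            back.append(ptr)
        path = [max(states, key=v.get)]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return path[::-1]

hmm = OnlineHMM()
hmm.update(["ef", "55", "%"], ["FIELD", "VALUE", "UNIT"])  # user-corrected doc 1
hmm.update(["ef", "60", "%"], ["FIELD", "VALUE", "UNIT"])  # user-corrected doc 2
print(hmm.decode(["ef", "70", "%"]))  # ['FIELD', 'VALUE', 'UNIT'] despite unseen "70"
```

After a few corrected documents the transition structure (FIELD → VALUE → UNIT) already carries the decoding even for unseen tokens, which is why accuracy can reach a user-acceptable threshold quickly and the remaining documents can be batch processed.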
"20442142",
"20064797",
"15187068",
"20819853",
"11825149",
"16872495",
"24431336",
"23364851",
"23851443",
"23665099",
"17329723",
"21508414"
] | [
{
"pmid": "20442142",
"title": "caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.",
"abstract": "The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application t... |
Materials | null | PMC5449064 | 10.3390/ma5122465 | A Novel Fractional Order Model for the Dynamic Hysteresis of Piezoelectrically Actuated Fast Tool Servo | The main contribution of this paper is the development of a linearized model for describing the dynamic hysteresis behaviors of piezoelectrically actuated fast tool servo (FTS). A linearized hysteresis force model is proposed and mathematically described by a fractional order differential equation. Combining the dynamic modeling of the FTS mechanism, a linearized fractional order dynamic hysteresis (LFDH) model for the piezoelectrically actuated FTS is established. The unique features of the LFDH model could be summarized as follows: (a) It could well describe the rate-dependent hysteresis due to its intrinsic characteristics of frequency-dependent nonlinear phase shifts and amplitude modulations; (b) The linearization scheme of the LFDH model would make it easier to implement the inverse dynamic control on piezoelectrically actuated micro-systems. To verify the effectiveness of the proposed model, a series of experiments are conducted. The toolpaths of the FTS for creating two typical micro-functional surfaces involving various harmonic components with different frequencies and amplitudes are scaled and employed as command signals for the piezoelectric actuator. The modeling errors in the steady state are less than ±2.5% within the full span range which is much smaller than certain state-of-the-art modeling methods, demonstrating the efficiency and superiority of the proposed model for modeling dynamic hysteresis effects. Moreover, it indicates that the piezoelectrically actuated micro systems would be more suitably described as a fractional order dynamic system. | 2. A Brief Review of Related WorkMrad and Hu and Hu et al. 
extended the classical Preisach model to describe the rate-dependent behaviors of hysteresis by use of an explicit weighting function with respect to the average change rate of the input signal [21,22,23]. To enable the Preisach model to represent the dynamic behaviors of a controlled PEA, Yu et al. modified the weighting function to depend on the variation rate of the input signal; to avoid the ill behavior caused by large variations of the input signal, an adjustment function with respect to the variation rate of the input signal was introduced, which had to be fitted to experimental data [24]. Recently, various rate-dependent Prandtl–Ishlinskii (PI) elementary operators have been introduced to model dynamic hysteresis effects. Ang et al. proposed a modified dynamic PI model in which the rate-dependent hysteresis was modeled by rate-dependent weighting values derived from the linear slope model of the hysteresis curve [25,26]. Janaideh et al. introduced a dynamic threshold as a function of the input variation rate; the relationship between the threshold and the variation rate of the input signal takes a logarithmic form to describe the essential characteristics of the hysteresis [27,28,29]. In both the generalized Preisach model and the PI model, the hysteresis loops were modeled by the sum of a number of elementary operators, and the rate-dependent behaviors were further described by modified dynamic weighting values, which were often functions of the derivative of the input signal. The main disadvantage of these modeling methods is that they have a large number of parameters to identify, which may limit their applications in real-time control. Besides the well-known Preisach and PI models, neural network (NN) based methods have also been extensively employed to model dynamic hysteresis effects. Dong et al.
employed a feedforward NN to model the hysteresis of the PEA, using the variation rate to construct the expanded input space [30]. Zhang and Tan proposed a parallel hybrid model for the rate-dependent hysteresis: a neural submodel was established to simulate the static hysteresis loop, while a submodel based on first-order differential operators with time delays was employed to describe the dynamics of the hysteresis [31]. However, NN based modeling has inherent defects, which can be summarized as follows: (a) there are no universal rules for optimally determining the structure of the NN; (b) NNs suffer from overfitting and sinking into local optima [32]; (c) the capacities of fitting and prediction cannot be well balanced. Some other novel mathematical models for dynamic hysteresis have been proposed. For instance, by transforming the multi-valued mapping of hysteresis into a one-to-one mapping, Deng and Tan proposed a nonlinear auto-regressive moving average with exogenous inputs (NARMAX) model to describe the dynamic hysteresis [33]. Similarly, Wong et al. formulated the modeling as a regression process and proposed the online updating least square support vector machines (LS-SVM) model and the relevance vector machine (RVM) model to capture the dynamic hysteretic behaviors [32]. Nevertheless, a compromise must be made between modeling accuracy and updating time, which makes these models challenging to apply under high-frequency working conditions. Rakotondrabe et al. modeled the dynamic hysteresis as a combination of the static Bouc-Wen model and a second-order linear dynamic part [34]. In [35] and [36], Gu and Zhu proposed an ellipse-based hysteresis model in which the frequency and amplitude of the input signal were modeled by adjusting the major and minor axes and the orientation of the ellipse.
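The geometric intuition behind an ellipse-based hysteresis model can be sketched in a few lines: for a sinusoidal input, rate-dependent hysteresis appears as a frequency-dependent phase lag between input and output, so the input-output trajectory traces an elliptical loop that widens with the lag. The function names and the idea of passing the lag in directly are illustrative assumptions for this sketch, not Gu and Zhu's actual fitted parametrization.

```python
import math

def ellipse_hysteresis(amplitude, freq_hz, t, phase_lag_rad):
    """Input u(t) and output y(t) of a pure phase-lag 'ellipse' hysteresis.

    For u(t) = A*sin(w t), y(t) = A*sin(w t - phi) traces an ellipse in the
    (u, y) plane; the lag phi (supplied directly here, as an assumption)
    grows with frequency, widening the loop.
    """
    w = 2.0 * math.pi * freq_hz
    u = amplitude * math.sin(w * t)
    y = amplitude * math.sin(w * t - phase_lag_rad)
    return u, y

def loop_width_at_zero_input(amplitude, phase_lag_rad):
    # At u = 0 (w t = 0), y = A*sin(-phi): the half-width of the loop.
    return abs(amplitude * math.sin(phase_lag_rad))

# A faster drive with a larger assumed lag gives a wider hysteresis loop.
narrow = loop_width_at_zero_input(1.0, 0.05)  # slow drive, small lag
wide = loop_width_at_zero_input(1.0, 0.30)    # fast drive, larger lag
```

In the cited work the axes and orientation of the ellipse are fitted per frequency and amplitude; this sketch only shows why a single loop is elliptical under a phase-lag view.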
However, the model parameters were difficult to determine in a way that would describe and predict the dynamic hysteresis characteristics well, and the ability to describe responses to input signals with multiple frequencies would be limited. Fractional order calculus (FOC) theory, a generalization of conventional calculus, has found rapidly increasing applications in various fields [37,38,39]. It is widely believed that FOC can describe a real process more accurately and more flexibly than classical methods [38,39,40]. A typical application of FOC is the description of the dynamic properties of visco-elastic materials [41,42]. Motivated by the fractional order models for visco-elastic materials, Sunny et al. proposed two models to describe the resistance-strain hysteresis of a conductive polymer sample by combining a series of fractional/integer order functions [43]. However, both models contained too many parameters to identify, and the hysteresis phenomenon involved was different from that of PEAs. Guyomar et al. described the ferroelectric hysteresis dynamics based on fractional order derivatives covering a wide frequency bandwidth [44,45]. In this method, the fractional order derivative term was employed to represent the viscous-like energy loss, and the derivative order was fixed at 0.5. Although a fixed order still exhibits the unique characteristics of fractional calculus, it significantly decreases the flexibility of the model and hinders its application. As with the work presented by Sunny et al., the hysteresis between the electrical polarization and the mechanical strain was also much different from that of the PEA. Nevertheless, all these results have demonstrated the potential of fractional order models for describing both static and dynamic hysteresis behaviors, and provided fresh ideas on this topic. | [
"20815625"
] | [
{
"pmid": "20815625",
"title": "High-speed tracking control of piezoelectric actuators using an ellipse-based hysteresis model.",
"abstract": "In this paper, an ellipse-based mathematic model is developed to characterize the rate-dependent hysteresis in piezoelectric actuators. Based on the proposed mod... |
Frontiers in Neuroscience | 28701911 | PMC5487436 | 10.3389/fnins.2017.00350 | An Event-Driven Classifier for Spiking Neural Networks Fed with Synthetic or Dynamic Vision Sensor Data | This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) System capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of spikes immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings and real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets to date with a spiking convolutional network. Moreover, by using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. In addition, we also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. 
In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips. | 3.7. Summary of results and comparison with related work
Table 3 and Figure 7 summarize the results of this work for all data sets. More specifically, Figure 7A shows how the classification accuracy of an SNN improves as a function of the percentage of input events. This seems to be consistent across all data sets, both synthetically generated (Latency and Poisson encoding) and from a DVS sensor, and is in accordance with previously published studies (Neil and Liu, 2014; Diehl et al., 2015; Stromatias et al., 2015b). With the Fast-Poker-DVS data set, there is a decrease in performance over the last 20% of the input events due to the deformation of the card symbol as it disappears. Figure 7B presents the classification accuracy as a function of the absolute number of input events, in log scale, for the different data sets. This information is useful because in neuromorphic systems the overall energy consumption depends on the total number of events to be processed (Stromatias et al., 2013; Merolla et al., 2014; Neil and Liu, 2014), and this might have an impact on deciding which data set to use based on energy and latency constraints. Figure 9 presents the network latency for each data set. We define network latency as the time elapsed from the first input spike to the first output spike.
Figure 9. The mean and standard deviation of the classification latency of the SNNs for each data set.
Table 5 presents a comparison of the current work with results in the literature on the MNIST data set and SNNs. The current state-of-the-art results come from a spiking CNN with 7 layers and max-pooling achieving a score of 99.44% and from a 4-layer spiking FC network achieving a score of 98.64% (Diehl et al., 2015).
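The classifier compared in these tables builds histograms of spiking activity and trains on them with stochastic gradient descent, per the abstract above. A minimal stdlib sketch of that pipeline follows; the toy event streams, layer sizes, learning rate, and epoch count are assumptions for illustration, not the paper's actual configuration.

```python
import math

def spike_histogram(events, n_neurons):
    """Count spikes per neuron: the feature vector fed to the classifier."""
    h = [0.0] * n_neurons
    for neuron_id in events:
        h[neuron_id] += 1.0
    return h

def train_sgd(samples, labels, n_feat, n_cls, lr=0.1, epochs=200):
    """Plain softmax regression on histograms via stochastic gradient descent."""
    W = [[0.0] * n_feat for _ in range(n_cls)]
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            logits = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in W]
            m = max(logits)
            exps = [math.exp(l - m) for l in logits]
            Z = sum(exps)
            probs = [e / Z for e in exps]
            for c in range(n_cls):
                grad = probs[c] - (1.0 if c == y else 0.0)
                for j in range(n_feat):
                    W[c][j] -= lr * grad * x[j]
    return W

def predict(W, x):
    logits = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in W]
    return logits.index(max(logits))

# Toy event streams (lists of spiking neuron ids) for two classes.
streams = [[0, 0, 1], [0, 0, 0, 2], [3, 3, 2], [2, 3, 3, 3]]
labels = [0, 0, 1, 1]
X = [spike_histogram(s, 4) for s in streams]
W = train_sgd(X, labels, n_feat=4, n_cls=2)
```

The real system builds these histograms from the spiking activity of prior SNN layers (including leaky neurons, so recent spikes dominate); here the histograms are taken directly from raw event streams to keep the sketch short.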
Both approaches were trained offline using frames and backpropagation, and the network parameters were then mapped to an SNN. However, even though this approach works very well with Poisson spike-trains or direct use of pixels, performance drops significantly with real DVS data. In addition, a direct comparison is not fair because the focus of this paper was to develop a classifier that works with both synthetic and DVS data, not to train a complete neural network with multiple layers.
Table 5. Comparison of classification accuracies (CA) of SNNs on the MNIST data set (Architecture / Neural coding / Learning-type / Learning-rule / CA %):
Spiking RBM (Neftci et al., 2015) / Poisson / Unsupervised / Event-Based CD / 91.9
FC (2 layer network) (Querlioz et al., 2013) / Poisson / Unsupervised / STDP / 93.5
FC (4 layer network) (O'Connor et al., 2013) / Poisson / Unsupervised / CD / 94.1
FC (2 layer network) (Diehl and Cook, 2015) / Poisson / Unsupervised / STDP / 95.0
Synaptic Sampling Machine (3 layer network) (Neftci et al., 2016) / Poisson / Unsupervised / Event-Based CD / 95.6
FC (4 layer network) (this work (O'Connor et al., 2013)) / Poisson / Supervised / Stochastic GD / 97.25
FC (4 layer network) (O'Connor and Welling, 2016) / – / Supervised / Fractional SGD / 97.8
FC (4 layer network) (Hunsberger and Eliasmith, 2015) / Not reported / Supervised / Backprop soft LIF neurons / 98.37
FC (4 layer network) (Diehl et al., 2015) / Poisson / Supervised / Stochastic GD / 98.64
CNN (Kheradpisheh et al., 2016) / Latency / Unsupervised / STDP / 98.4
CNN (Diehl et al., 2015) / Poisson / Supervised / Stochastic GD / 99.14
Sparsely Connected Network (×64) (Esser et al., 2015) / Poisson / Supervised / Backprop / 99.42
CNN (Rueckauer et al., 2016) / Poisson / Supervised / Stochastic GD / 99.44
CNN (this work) / Latency / Supervised / Stochastic GD / 98.42
CNN (this work) / Poisson / Supervised / Stochastic GD / 98.20
Table 6 gathers the results in the literature using the N-MNIST data set and SNNs. The best classification accuracy reported is 98.66%, using an FC 3-layer network (Lee et al., 2016). Using a CNN, this work reports the best classification accuracy to date, 97.77%.
Again, the focus of this paper is not to beat the state-of-the-art classification accuracy (no optimization was done to improve performance), but to provide a valid SNN classifier training method whose Classifier Loss is insignificant compared with the frame-based classification accuracy.
Table 6. Comparison of classification accuracies (CA) of SNNs on the N-MNIST data set (Architecture / Preprocessing / Learning-type / Learning-rule / CA %):
CNN (Orchard et al., 2015b) / None / Unsupervised / HFirst / 71.15
FC (2 layer network) (Cohen et al., 2016) / None / Supervised / OPIUM (van Schaik and Tapson, 2015) / 92.87
CNN (Neil and Liu, 2016) / Centering / Supervised / – / 95.72
FC (3 layer network) (Lee et al., 2016) / None / Supervised / Backpropagation / 98.66
CNN (this work) / None / Supervised / SGD / 97.77
Finally, Table 7 shows the literature results for the 40-card Fast-Poker-DVS data set. With this work, we demonstrate that 100% classification accuracy is obtained using the LOOCV method.
Table 7. Comparison of classification accuracies (CA) of SNNs on the 40 cards Fast-Poker-DVS data set (Architecture / Learning-type / Learning-rule / CA %):
CNN (Pérez-Carrasco et al., 2013) / Supervised / Backprop / 90.1 − 91.6
CNN (Orchard et al., 2015b) / Unsupervised / HFirst / 97.5 ± 3.5
CNN (Lagorce et al., 2016) / Supervised / HOTS / 100
CNN (this work) / Supervised / Stochastic GD / 100 | [
"22386501",
"27199646",
"1317971",
"26941637",
"23197532",
"16764513",
"16873662",
"18292226",
"26017442",
"27877107",
"27853419",
"17305422",
"25104385",
"24574952",
"25873857",
"27445650",
"24115919",
"26635513",
"26353184",
"24051730",
"25462637",
"26733794",
"26217169... | [
{
"pmid": "22386501",
"title": "Extraction of temporally correlated features from dynamic vision sensors with spike-timing-dependent plasticity.",
"abstract": "A biologically inspired approach to learning temporally correlated patterns from a spiking silicon retina is presented. Spikes are generated fro... |
JMIR Public Health and Surveillance | 28630032 | PMC5495967 | 10.2196/publichealth.7157 | What Are People Tweeting About Zika? An Exploratory Study Concerning Its Symptoms, Treatment, Transmission, and Prevention | BackgroundIn order to harness what people are tweeting about Zika, there needs to be a computational framework that leverages machine learning techniques to recognize relevant Zika tweets and, further, categorize these into disease-specific categories to address specific societal concerns related to the prevention, transmission, symptoms, and treatment of Zika virus.ObjectiveThe purpose of this study was to determine the relevancy of the tweets and what people were tweeting about the 4 disease characteristics of Zika: symptoms, transmission, prevention, and treatment.MethodsA combination of natural language processing and machine learning techniques was used to determine what people were tweeting about Zika. Specifically, a two-stage classifier system was built to find relevant tweets about Zika, and then the tweets were categorized into 4 disease categories. Tweets in each disease category were then examined using latent Dirichlet allocation (LDA) to determine the 5 main tweet topics for each disease characteristic.ResultsOver 4 months, 1,234,605 tweets were collected. The number of tweets by males and females was similar (28.47% [351,453/1,234,605] and 23.02% [284,207/1,234,605], respectively). The classifier performed well on the training and test data for relevancy (F1 score=0.87 and 0.99, respectively) and disease characteristics (F1 score=0.79 and 0.90, respectively). Five topics for each category were found and discussed, with a focus on the symptoms category.ConclusionsWe demonstrate how categories of discussion on Twitter about an epidemic can be discovered so that public health officials can understand specific societal concerns within the disease-specific categories. 
Our two-stage classifier was able to identify relevant tweets to enable more specific analysis, including the specific aspects of Zika that were being discussed as well as misinformation being expressed. Future studies can capture sentiments and opinions on epidemic outbreaks like Zika virus in real time, which will likely inform efforts to educate the public at large. | Related WorksA study by Oyeyemi et al [10] concerning misinformation about Ebola on Twitter found that 44.0% (248/564) of the tweets about Ebola were retweeted at least once, with 38.3% (95/248) of those tweets being scientifically accurate, whereas 58.9% (146/248) were inaccurate. Furthermore, most of the tweets containing misinformation were never corrected. Another study about Ebola by Tran and Lee [4] found that the first reported incident of the doctor with Ebola had more impact and received more attention than any other incident, showing that people pay more attention and react more strongly to a new issue.Majumder et al attempted to estimate the basic R0 and Robs for Zika using HealthMap and Google Trends [11]. R0 is known as the basic reproduction number and is the number of expected new infections per first infected individual in a disease-free population. Robs is the observed number of secondary cases per infected individual. Their results indicate that the ranges for Robs were comparable between the traditional method and the novel method. However, traditional methods had higher R0 estimates than the HealthMap and Google Trend data. This indicates that digital surveillance methods can estimate transmission parameters in real time in the absence of traditional methods.Another study collected tweets on Zika for 3 months [12]. They found that citizens were more concerned with the long-term issues than the short-term issues such as fever and rash. 
Using hierarchical clustering and word co-occurrence analysis, they found underlying themes related to immediate effects such as the spread of Zika. Long-term effects had themes such as pregnancy. One issue with this paper was that they never employed experts to check the relevance of the tweets with respect to these topics, which is a common problem in mining social media data. A study by Glowacki et al [13] collected tweets during an hour-long live CDC Twitter chat. They only included words used in more than 4 messages to do a topic analysis and found that the 10-topic solution best explained the themes. Some of the themes were virology of Zika, spread, consequences for infants and pregnant women, sexual transmission, and symptoms. This was a curated study where only tweets to and from the CDC were explored, whereas the aim of our larger study was to determine what the general public was discussing about Zika. A study by Fu et al [14] analyzed tweets from May 1, 2015 to April 2, 2016 and found 5 themes using topic modeling: (1) government, private and public sector, and general public response to the outbreak; (2) transmission routes; (3) societal impacts of the outbreak; (4) case reports; and (5) pregnancy and microcephaly. This study did not check for noise within the social media data. Moreover, the computational analysis was limited to 3 days of data, which may not reflect the themes in the larger dataset. In many of these studies, the need to check system performance, as well as to conduct a post hoc error analysis of the method's generalizability, is overlooked.
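The performance checks discussed here reduce to precision, recall, and the F1 score computed against annotated ground truth. A minimal sketch from confusion counts follows; the counts themselves are made up for illustration, not taken from any of the cited studies.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard definitions: F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical confusion counts for a "relevant to Zika" relevance classifier.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=30)
```

Reporting all three values (rather than accuracy alone) matters for the class-imbalanced relevance filtering these studies perform, since most collected posts are irrelevant.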
We address this in our study by employing machine learning techniques on an annotated data set, as well as a post hoc error analysis on a test dataset, to ensure the generalizability of our system.Figure 1Block diagram of the pragmatic function-oriented content retrieval using a hierarchical supervised classification technique, followed by deeper analysis for characteristics of disease content.In this study, an exploratory analysis focused on finding important subcategories of discussion topics from Zika-related tweets was performed. Specifically, we addressed 4 key characteristics of Zika: symptoms, transmission, treatment, and prevention. Using the system described in Figure 1, the following research questions were addressed:R1. Dataset Distribution Analysis: What proportion of male and female users tweeted about Zika, what were the polarities of the tweets by male and female users, and what were the proportions of tweets that discussed topics related to the different disease characteristics—symptoms, transmission, treatment, and prevention?R2. Classification Performance Analysis: What was the agreement among annotators’ labels that were used as the ground truth in this study, what was the classification performance to detect the tweets relevant to Zika, and how well were the classifiers able to distinguish between tweets on the different disease characteristics?R3. Topical Analysis: What were the main discussion topics in each of these categories, and what were the most persistent concerns or misconceptions regarding the Zika virus? | [
"26965962",
"25315514",
"27251981",
"27544795",
"27566874",
"23092060"
] | [
{
"pmid": "26965962",
"title": "Zika virus infection-the next wave after dengue?",
"abstract": "Zika virus was initially discovered in east Africa about 70 years ago and remained a neglected arboviral disease in Africa and Southeast Asia. The virus first came into the limelight in 2007 when it caused an... |
BMC Medical Informatics and Decision Making | 28673289 | PMC5496182 | 10.1186/s12911-017-0498-1 | Semantic relatedness and similarity of biomedical terms: examining the effects of recency, size, and section of biomedical publications on the performance of word2vec | BackgroundUnderstanding semantic relatedness and similarity between biomedical terms has a great impact on a variety of applications such as biomedical information retrieval, information extraction, and recommender systems. The objective of this study is to examine word2vec’s ability in deriving semantic relatedness and similarity between biomedical terms from large publication data. Specifically, we focus on the effects of recency, size, and section of biomedical publication data on the performance of word2vec.MethodsWe download abstracts of 18,777,129 articles from PubMed and 766,326 full-text articles from PubMed Central (PMC). The datasets are preprocessed and grouped into subsets by recency, size, and section. Word2vec models are trained on these subtests. Cosine similarities between biomedical terms obtained from the word2vec models are compared against reference standards. Performance of models trained on different subsets are compared to examine recency, size, and section effects.ResultsModels trained on recent datasets did not boost the performance. Models trained on larger datasets identified more pairs of biomedical terms than models trained on smaller datasets in relatedness task (from 368 at the 10% level to 494 at the 100% level) and similarity task (from 374 at the 10% level to 491 at the 100% level). The model trained on abstracts produced results that have higher correlations with the reference standards than the one trained on article bodies (i.e., 0.65 vs. 0.62 in the similarity task and 0.66 vs. 0.59 in the relatedness task). However, the latter identified more pairs of biomedical terms than the former (i.e., 344 vs. 498 in the similarity task and 339 vs. 
503 in the relatedness task).ConclusionsIncreasing the size of dataset does not always enhance the performance. Increasing the size of datasets can result in the identification of more relations of biomedical terms even though it does not guarantee better precision. As summaries of research articles, compared with article bodies, abstracts excel in accuracy but lose in coverage of identifiable relations. | Related workIn this section, we first briefly introduce word2vec and then survey the related work that used word2vec on biomedical publications. These studies primarily focused on the effects of architectures and parameter settings on experimental results. A few empirical studies were identified on how to configure the method to get better performance. | [
"16875881",
"19649320",
"27195695",
"25160253",
"27531100"
] | [
{
"pmid": "16875881",
"title": "Measures of semantic similarity and relatedness in the biomedical domain.",
"abstract": "Measures of semantic similarity between concepts are widely used in Natural Language Processing. In this article, we show how six existing domain-independent measures can be adapted t... |
JMIR Public Health and Surveillance | 28642212 | PMC5500778 | 10.2196/publichealth.6577 | Filtering Entities to Optimize Identification of Adverse Drug Reaction From Social Media: How Can the Number of Words Between Entities in the Messages Help? | BackgroundWith the increasing popularity of Web 2.0 applications, social media has made it possible for individuals to post messages on adverse drug reactions. In such online conversations, patients discuss their symptoms, medical history, and diseases. These disorders may correspond to adverse drug reactions (ADRs) or any other medical condition. Therefore, methods must be developed to distinguish between false positives and true ADR declarations.ObjectiveThe aim of this study was to investigate a method for filtering out disorder terms that did not correspond to adverse events by using the distance (as number of words) between the drug term and the disorder or symptom term in the post. We hypothesized that the shorter the distance between the disorder name and the drug, the higher the probability to be an ADR.MethodsWe analyzed a corpus of 648 messages corresponding to a total of 1654 (drug and disorder) pairs from 5 French forums using Gaussian mixture models and an expectation-maximization (EM) algorithm .ResultsThe distribution of the distances between the drug term and the disorder term enabled the filtering of 50.03% (733/1465) of the disorders that were not ADRs. Our filtering strategy achieved a precision of 95.8% and a recall of 50.0%.ConclusionsThis study suggests that such distance between terms can be used for identifying false positives, thereby improving ADR detection in social media. | Related WorkThe current technological challenges include the difficulty for text mining algorithms to interpret patient lay vocabulary [23].After the review of multiple approaches, Sarker et al [9] concluded that following data collection, filtering was a real challenge. 
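The distance hypothesis stated in the abstract above (the fewer words between the drug term and the disorder term, the likelier the pair is a true ADR mention) can be sketched as a simple token count. The threshold of 5 words below is an illustrative assumption, not the cut-off the paper derives from its mixture model.

```python
def word_distance(tokens, drug, disorder):
    """Number of words between the first occurrences of two terms in a post."""
    i, j = tokens.index(drug), tokens.index(disorder)
    return abs(i - j) - 1

def keep_as_candidate_adr(tokens, drug, disorder, max_gap=5):
    # Pairs separated by more than max_gap words are filtered out.
    return word_distance(tokens, drug, disorder) <= max_gap

post = "after taking ibuprofen I immediately felt severe nausea and dizziness".split()
near = keep_as_candidate_adr(post, "ibuprofen", "nausea")
```

The paper replaces this fixed threshold with a data-driven one, fitting the distribution of such distances with a Gaussian mixture via EM; the token-gap computation itself is the shared starting point.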
Filtering methods are likely to aid in the ADR detection process by removing most irrelevant information. Based on our review of prior research, two types of filtering methods can be used: semantic approaches and statistical approaches. Semantic filtering relies on semantic information, for example, negation rules and vocabularies, to identify messages not corresponding to an ADR declaration. Liu et al [24] developed negation rules and incorporated linguistic and medical knowledge bases in their algorithms to filter out negated ADRs and then remove drug indications and non- and unreported cases from the FAERS (FDA’s Adverse Event Reporting System) database. In their use case of 1822 discussions about beta blockers, 71% of the related medical events were adverse drug events, 20% were drug indications, and 9% were negated adverse drug events. Powell et al [25] developed “Social Media Listening,” a tool to augment postmarketing safety. This tool consisted of the removal of questionable Internet pharmacy advertisements (named “Junk”), posts in which a drug was discussed (named “mention”), posts in which a potential event was discussed (called “Proto-AE”), and any type of medical interaction description (called “Health System Interaction”). Their study revealed that only 26% of the considered posts contained relevant information. The distribution of post classifications by social media source varied considerably among drugs. Between 11% (7/63) and 50.5% (100/198) of the posts contained Proto-AEs (between 3.2% (4/123) and 33.64% (726/2158) for over-the-counter products). The final step was a manual evaluation. The second type of filtering was based on statistical approaches using the topic models method [26]. Yang et al [27] used latent Dirichlet allocation probabilistic modeling [28] to filter topics and thereby reduce the dataset to a cluster of posts likely to evoke an ADR declaration.
This method was evaluated by comparison against 4 benchmark methods (example adaptation for text categorization [EAT], positive examples and negative examples labeling heuristics [PNLH], active semisupervised clustering based two-stage text classification [ACTC], and Laplacian SVM) and the calculation of F scores (the harmonic mean of precision and recall) on ADR posts. All 4 methods were improved by the use of this approach, with F score gains fluctuating between 1.94% and 6.14%. Sarker and Gonzalez [16] improved their ADR detection method by using different features for filtering. These features were selected by the use of leave-one-out classification scores and were evaluated with accuracy and F scores. They were based on n-grams (accuracy 82.6%, F score 0.654), computing the Tf-idf values for the semantic types (accuracy 82.6%, F score 0.652), polarity of sentences (accuracy 84.0%, F score 0.669), the positive or negative outcome (accuracy 83.9%, F score 0.665), ADR lexicon match (accuracy 83.5%, F score 0.659), sentiment analysis in posts (accuracy 82.0%), and filtering by topics (accuracy 83.7%, F score 0.670) for filtering posts without mention of ADRs. The use of all features for the filtering process provided an accuracy of 83.6% and an F score of 0.678. Bian et al [29] utilized SVM to filter the noise in tweets. Their motivation for classifying tweets arose from the fact that most posts were not associated with ADRs; thus, filtering out nonrelevant posts was crucial. Wei et al [30] performed automatic chemical-disease relation extraction on a corpus of PubMed articles. Their process was divided into two subtasks. The first was a disease named entity recognition (DNER) subtask based on 1500 PubMed titles and abstracts. The second was a chemical-induced disease (CID) relation extraction subtask (on the same corpus as the DNER subtask).
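Returning to the distance-based filtering described in the abstract above: the paper fits the distribution of drug-disorder word distances with a Gaussian mixture model via expectation-maximization. A minimal 1-D, two-component EM sketch is shown below; the synthetic distances, the fixed initialization, and the iteration count are arbitrary choices for illustration, not the paper's fitted configuration.

```python
import math

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture (crude fixed init)."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-3)  # guard against variance collapse
    return w, mu, var

# Synthetic word-distance samples: short gaps (likely ADRs) vs. long gaps.
distances = [1.0, 2.0, 2.0, 3.0, 1.0, 2.0, 14.0, 15.0, 16.0, 15.0, 13.0, 14.0]
w, mu, var = em_gmm_1d(distances)
```

Once the two components are fitted, a pair can be kept when the short-gap component's responsibility for its distance dominates, which is the mixture-model analogue of the fixed threshold used for illustration earlier.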
Chemicals and diseases were described using the medical subject headings (MeSH) controlled vocabulary. They evaluated several approaches and obtained an average precision, recall, and standard F score of 78.99%, 74.81%, and 76.03%, respectively, for the DNER step, and an average F score of 43.37% for the CID step. The best result for the CID step was obtained by combining two SVM approaches. | [
"7791255",
"9002492",
"22549283",
"25005606",
"20658130",
"12608885",
"25895907",
"25720841",
"26271492",
"26163365",
"21820083",
"25451103",
"24559132",
"24304185",
"25151493",
"25755127",
"26147850",
"26518315",
"26798054",
"25688695",
"26994911",
"20679242"
] | [
{
"pmid": "7791255",
"title": "Incidence of adverse drug events and potential adverse drug events. Implications for prevention. ADE Prevention Study Group.",
"abstract": "OBJECTIVES\nTo assess incidence and preventability of adverse drug events (ADEs) and potential ADEs. To analyze preventable events to... |
BMC Medical Informatics and Decision Making | 28699564 | PMC5506580 | 10.1186/s12911-017-0464-y | Detecting clinically relevant new information in clinical notes across specialties and settings | BackgroundAutomated methods for identifying clinically relevant new versus redundant information in electronic health record (EHR) clinical notes is useful for clinicians and researchers involved in patient care and clinical research, respectively. We evaluated methods to automatically identify clinically relevant new information in clinical notes, and compared the quantity of redundant information across specialties and clinical settings.MethodsStatistical language models augmented with semantic similarity measures were evaluated as a means to detect and quantify clinically relevant new and redundant information over longitudinal clinical notes for a given patient. A corpus of 591 progress notes over 40 inpatient admissions was annotated for new information longitudinally by physicians to generate a reference standard. Note redundancy between various specialties was evaluated on 71,021 outpatient notes and 64,695 inpatient notes from 500 solid organ transplant patients (April 2015 through August 2015).ResultsOur best method achieved at best performance of 0.87 recall, 0.62 precision, and 0.72 F-measure. Addition of semantic similarity metrics compared to baseline improved recall but otherwise resulted in similar performance. While outpatient and inpatient notes had relatively similar levels of high redundancy (61% and 68%, respectively), redundancy differed by author specialty with mean redundancy of 75%, 66%, 57%, and 55% observed in pediatric, internal medicine, psychiatry and surgical notes, respectively.ConclusionsAutomated techniques with statistical language models for detecting redundant versus clinically relevant new information in clinical notes do not improve with the addition of semantic similarity measures. 
While levels of redundancy seem relatively similar in the inpatient and ambulatory settings at Fairview Health Services, clinical note redundancy appears to vary significantly with different medical specialties. | Related work: A number of approaches have previously been reported around quantifying redundancy in clinical notes. For example, Weir et al. manually reviewed 1,891 notes in the Salt Lake City Veterans Affairs (VA) health care system and found that approximately 20% of notes contained copied text [11]. With respect to automated methods, 167,076 progress notes for 1,479 patients from the VA Computerized Patient Record System (CPRS) were examined using pair-wise comparison of all patient documents to identify matches of at least 40 consecutive word sequences in two documents. They found 9% of progress notes contained copied text [12]. Wrenn et al. used global alignment to quantify the percentage of redundant information in a collection of 1,670 inpatient notes (including sign-out notes, progress notes, admission notes and discharge notes) and found an average of 78% and 54% redundant information in sign-out and progress notes, respectively [13]. More recently, Cohen et al. used the Smith-Waterman text alignment algorithm to quantify redundancy both in terms of word and semantic concept repetition [8]. They found that corpus redundancy had a negative impact on the quality of text-mining and topic modeling, and suggested that redundancy of the corpus must be accounted for in applying subsequent text-mining techniques for many secondary clinical applications [8]. Other work has looked at techniques using a modification of classic global alignment with a sliding window and lexical normalization [14]. This work demonstrated a cyclic pattern in the quantity of redundant information longitudinally in ambulatory clinical notes for a given patient, and that the overall amount of redundant information increases over time. 
Subsequently, statistical language models were used to identify and visualize relevant new information by highlighting text [15]. Work quantifying information redundancy between consecutive notes also demonstrates that, in most cases, clinicians tend to copy information exclusively from the most recent note. The new information proportion (i.e., the complement of the redundant information percentage) also appears to be a helpful metric for directing clinicians or researchers to notes, or to information within notes, that is clinically significant [16]. Moreover, categorizing clinically relevant new information based on semantic type group [17] (e.g., medication, problem/disease, laboratory) can potentially improve information navigation for specific event types [10]. | [
"16720812",
"20399309",
"23304398",
"21292706",
"25882031",
"25717418",
"12695797",
"20064801",
"22195227",
"23920658",
"11604736",
"15120654",
"12835272",
"16875881",
"20351894",
"22195148",
"16221939",
"25954591",
"25954438",
"20442139",
"25954438"
] | [
{
"pmid": "23304398",
"title": "A qualitative analysis of EHR clinical document synthesis by clinicians.",
"abstract": "Clinicians utilize electronic health record (EHR) systems during time-constrained patient encounters where large amounts of clinical text must be synthesized at the point of care. Qual... |
Frontiers in Neuroscience | 28769745 | PMC5513987 | 10.3389/fnins.2017.00406 | Sums of Spike Waveform Features for Motor Decoding | Traditionally, the key step before decoding motor intentions from cortical recordings is spike sorting, the process of identifying which neuron was responsible for an action potential. Recently, researchers have started investigating approaches to decoding which omit the spike sorting step, by directly using information about action potentials' waveform shapes in the decoder, though this approach is not yet widespread. Particularly, one recent approach involves computing the moments of waveform features and using these moment values as inputs to decoders. This computationally inexpensive approach was shown to be comparable in accuracy to traditional spike sorting. In this study, we use offline data recorded from two Rhesus monkeys to further validate this approach. We also modify this approach by using sums of exponentiated features of spikes, rather than moments. Our results show that using waveform feature sums facilitates significantly higher hand movement reconstruction accuracy than using waveform feature moments, though the magnitudes of differences are small. We find that using the sums of one simple feature, the spike amplitude, allows better offline decoding accuracy than traditional spike sorting by expert (correlation of 0.767, 0.785 vs. 0.744, 0.738, respectively, for two monkeys, average 16% reduction in mean-squared-error), as well as unsorted threshold crossings (0.746, 0.776; average 9% reduction in mean-squared-error). Our results suggest that the sums-of-features framework has potential as an alternative to both spike sorting and using unsorted threshold crossings, if developed further. Also, we present data comparing sorted vs. unsorted spike counts in terms of offline decoding accuracy. 
Traditional sorted spike counts do not include waveforms that do not match any template (“hash”), but threshold crossing counts do include this hash. On our data and in previous work, hash contributes to decoding accuracy. Thus, using the comparison between sorted spike counts and threshold crossing counts to evaluate the benefit of sorting is confounded by the presence of hash. We find that when the comparison is controlled for hash, performing sorting is better than not. These results offer a new perspective on the question of to sort or not to sort. | Related workBesides the work of Ventura and Todorova (2015), on which this work is based, there has been other work using waveform features for decoding. Chen et al. (2012) and Kloosterman et al. (2014) used spike waveform features to decode from hippocampal recordings. Their approach is based on the spatial-temporal Poisson process, where a Poisson process describes the spike arrival time and a random vector describes the waveform features of the spike. Later, Deng et al. (2015) presented a marked point-process decoder that uses waveform features as the marks of the point-process and tested it on hippocampal recordings. This approach is similar in spirit to the spatial-temporal Poisson process. The primary difference between these approaches and ours is in the way time is segmented. The spatial-temporal Poisson process and marked point-process operate on single spikes, which requires a high refresh rate and somewhat more sophisticated Bayesian inference. Our approach works on time bins, which allow lower refresh rates and compatibility with the relatively simple Kalman and Wiener filters. However, operating on time bins requires some way to summarize the waveform shape information of all the spikes which occurred during the bin, hence Ventura and Todorova's moments and our sums. These statistics entail their own assumptions (linearity of tuning, stationarity of waveform shape, etc.) 
and approximations (using a finite number of moments or sums). Todorova et al. (2014) decoded motor intent from threshold crossing counts and the spike amplitude waveform feature. Their model was non-parametric and was fitted using the expectation-maximization scheme of Ventura (2009). Due to the non-parametric model, decoding required a computationally expensive particle filter. This drawback led to the search for a more computationally efficient method, the result of which is the waveform feature moments framework of Ventura and Todorova (2015). Also related is earlier work by Ventura on spike sorting using motor tuning information (Ventura, 2009) and on sorting entire spike trains to take advantage of history information (Ventura and Gerkin, 2012). These ideas are similar to waveform feature decoding in that they also combine spike shape information and neural tuning, but differ in that the goal is spike sorting. | [
"24739786",
"14624244",
"21775782",
"25504690",
"19349143",
"25380335",
"19721186",
"24089403",
"10221571",
"28066170",
"21919788",
"12728268",
"27097901",
"26133797",
"23742213",
"17670985",
"25082508",
"18085990",
"19548802",
"22529350",
"25380335",
"7173942",
"16354382... | [
{
"pmid": "24739786",
"title": "Restoring sensorimotor function through intracortical interfaces: progress and looming challenges.",
"abstract": "The loss of a limb or paralysis resulting from spinal cord injury has devastating consequences on quality of life. One approach to restoring lost sensory and ... |
Scientific Reports | 28720794 | PMC5515977 | 10.1038/s41598-017-05988-5 | Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks | The node buffer size has a large influence on the performance of Mobile Opportunistic Networks (MONs). This is mainly because each node should temporarily cache packets to deal with the intermittently connected links. In this paper, we study fundamental bounds on the node buffer size below which the network system cannot achieve the expected performance, such as the transmission delay and packet delivery ratio. Given the condition that each link has the same probability p to be active in the next time slot when the link is inactive and q to be inactive when the link is active, there exists a critical value p_c from a percolation perspective. If p > p_c, the network is in the supercritical case, where we found that there is an achievable upper bound on the buffer size of nodes, independent of the inactive probability q. When p < p_c, the network is in the subcritical case, and there exists a closed-form solution for buffer occupation, which is independent of the size of the network. | Related Works: MONs are a natural evolution of traditional mobile ad hoc networks. In MONs, the links are intermittently connected due to node mobility and power on/off; mobile nodes communicate with each other opportunistically and route packets in a store-carry-and-forward style. In the past several years, much effort has been expended to improve the performance of opportunistic routing algorithms in terms of reducing the forwarding delay or increasing the packet delivery ratio. Some valuable results have been achieved that provide theoretical guidance for performance optimization. We introduce them in detail. Cache-aware Opportunistic Routing Algorithms: Considering the limited buffer size of portable devices, cache-aware solutions become very important in MONs. A. Balasubramanian et al. treat MONs routing as a resource allocation problem and turn the forwarding metric into per-packet utilities that incorporate two factors: one is the expected contribution of a packet if it were replicated, and the other is the packet size. The utility, then, is the ratio of the former factor over the latter, which determines how a packet should be replicated in the system17. To deal with short contacts and fragmented bundles, M. J. Pitkanen and J. Ott18 integrated application-level erasure coding on top of existing protocols. They used Reed-Solomon codes to divide single bundles into multiple blocks and observed that the block redundancy increased the cache hit rate and reduced the response latency. S. Kaveevivitchai and H. Esaki proposed a message deletion strategy for a multi-copy routing scheme19. They employed extra nodes deployed at the system’s hot regions to relay the acknowledgement (ACK) messages, and copies matching the ID of ACK messages are dropped from the buffer. A. T. Prodhan et al.
proposed TBR20, which ranks messages by their TTL, hop count and number of copies. A node will delete the copy of a message if it receives a higher-priority message and its buffer is full. Recently, D. Pan et al.21 developed a comprehensive cache scheduling algorithm that integrates different aspects of storage management, including queue strategy, buffer replacement and redundancy deletion. Performance Analysis of Cache-aware Opportunistic Routing Algorithms: Some analytical results mainly focus on metrics such as the flooding time13, 22, 23, network diameter14 and delay-capacity tradeoff15, in which the buffer size of nodes is usually assumed to be unlimited. Several works discuss the congestion issue with the epidemic algorithm. For example, A. Krifa et al.24 proposed an efficient buffer management policy by modeling the relationship between the number of copies and the mean delivery delay/rate. When a new packet copy arrives at a node and the node finds its buffer full, it drops the packets with the minimal marginal utility value. G. Zhang and Y. Liu employed revenue management and dynamic programming to study the congestion management strategy of MONs25. Given a class of utility functions, they showed that an arriving packet should be accepted if and only if the value of the benefit function is greater than that of the cost function, and that this solution is optimal. The authors of26 evaluated the impact of buffer size on the efficiency of four kinds of routing algorithms. They observed that these protocols reacted differently to the increase of buffer size in mobile vehicle networks. Generally speaking, both Epidemic and MaxProp benefit from the increased buffer size on all nodes (i.e., the mobile and terminal nodes). PROPHET and SprayWait instead show no significant improvement when only the buffer size of terminal nodes increases. X. Zhuo et al.27 explored the influence of contact duration on data forwarding performance. 
To maximize the delivery rate, they modeled the data replication problem with mixed integer programming technology, subject to the storage constraint. | [] | [] |
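The link model in the percolation entry above is a two-state Markov chain: an inactive link activates with probability p, and an active link deactivates with probability q, giving a stationary active fraction of p/(p+q). A small simulation sketch (the parameter values are illustrative only; the critical value p_c itself depends on the network topology and is not computed here):

```python
import random

def simulate_link(p, q, steps, seed=0):
    """Two-state Markov link: inactive -> active w.p. p, active -> inactive w.p. q.

    Returns the empirical fraction of time slots in which the link was
    active; the theoretical stationary value is p / (p + q).
    """
    rng = random.Random(seed)
    active = False
    active_slots = 0
    for _ in range(steps):
        if active:
            if rng.random() < q:
                active = False
        else:
            if rng.random() < p:
                active = True
        if active:
            active_slots += 1
    return active_slots / steps

p, q = 0.3, 0.1                      # illustrative activation/deactivation rates
stationary = p / (p + q)             # theoretical stationary active fraction
empirical = simulate_link(p, q, steps=200_000)
```

The empirical fraction converges to p/(p+q) regardless of the starting state, which is why the subcritical buffer-occupation result can be stated independently of network size.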
Scientific Reports | 28729710 | PMC5519723 | 10.1038/s41598-017-05778-z | Learning a Health Knowledge Graph from Electronic Medical Records | Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google’s manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01). | Related workIn recent work, Finlayson et al. quantify the relatedness of 1 million concepts by computing their co-occurrence in free-text notes in the EMR, releasing a “graph of medicine”22. Sondhi et al. measure the distance between mentions of two concepts within a clinical note for determination of edge-strength in the resulting graph23. Goodwin et al. 
use natural language processing to incorporate the belief state of the physician for assertions in the medical record, which is complementary to and could be used together with our approach24. Importantly, whereas the aforementioned works consider purely associative relations between medical concepts, our methodology models more complex relationships, and our evaluation focuses on whether the proposed algorithms can derive known causal relations between diseases and symptoms. | [
"17098763",
"3295316",
"1762578",
"2695783",
"7719792",
"4552594",
"19159006",
"25977789",
"18171485",
"12123149",
"27441408"
] | [
{
"pmid": "17098763",
"title": "Googling for a diagnosis--use of Google as a diagnostic aid: internet based study.",
"abstract": "OBJECTIVE\nTo determine how often searching with Google (the most popular search engine on the world wide web) leads doctors to the correct diagnosis.\n\n\nDESIGN\nInternet b... |
GigaScience | 28327936 | PMC5530313 | 10.1093/gigascience/giw013 | GUIdock-VNC: using a graphical desktop sharing system to provide a browser-based interface for containerized software | Abstract
Background: Software container technology such as Docker can be used to package and distribute bioinformatics workflows consisting of multiple software implementations and dependencies. However, Docker is a command line–based tool, and many bioinformatics pipelines consist of components that require a graphical user interface. Results: We present a container tool called GUIdock-VNC that uses a graphical desktop sharing system to provide a browser-based interface for containerized software. GUIdock-VNC uses the Virtual Network Computing protocol to render the graphics within most commonly used browsers. We also present a minimal image builder that can add our proposed graphical desktop sharing system to any Docker packages, with the end result that any Docker packages can be run using a graphical desktop within a browser. In addition, GUIdock-VNC uses the Oauth2 authentication protocols when deployed on the cloud. Conclusions: As a proof-of-concept, we demonstrated the utility of GUIdock-noVNC in gene network inference. We benchmarked our container implementation on various operating systems and showed that our solution creates minimal overhead. | Related workSoftware containers and DockerA software container packages an application with everything it needs to run, including supporting libraries and system resources. Containers differ from traditional virtual machines (VMs) in that the resources of the operating system (OS), and not the hardware, are virtualized. In addition, multiple containers share a single OS kernel, thus saving considerable resources over multiple VMs.Linux has supported OS-level virtualization for several years. Docker (http://www.docker.com/) is an open source project that provides tools to setup and deploy Linux software containers. While Docker can run natively on Linux hosts, a small Linux VM is necessary to provide the virtualization services on Mac OS and Windows systems. 
On non-Linux systems, a single Docker container consists of a mini-VM, the Docker software layer, and the software container. However, multiple Docker containers can share the same mini-VM, saving considerable resources over using multiple individual VMs. Recently, support for OS-level virtualization has been added to Windows and the Macintosh operating system (Mac OS). Beta versions of Docker for both Windows and Mac OS are now available that allow Docker to run natively. These beta versions also allow native Windows and Mac OS software to be containerized and deployed in a similar manner [7]. Docker containers therefore provide a convenient and lightweight method for deploying open source workflows on multiple platforms. GUIdock-X11: Although Docker provides a container with the original software environment, the host system, where the container software is executed, is responsible for rendering graphics. Our previous work, GUIdock-X11 [3], is one solution for bridging graphical information between the user and Docker containers by using the X11 common graphic interface. GUIdock-X11 passes the container's X11 commands to a host X11 client, which renders the GUI. Security is handled by encrypting the commands through secure shell (ssh) tunneling. We demonstrated the use of GUIdock-X11 [3] for systems biology applications, including Bioconductor packages written in R, C++, and Fortran, as well as Cytoscape, a standalone Java-based application with a graphical user interface. Neither Windows nor Mac OS uses X11 natively to render its graphics. Additional software such as MobaXterm [8] or socat [9] is needed to emulate X11 and locally render the graphics commands exported by the Docker container. 
However, a major advantage of the X11 method is that the commands to render the graphics, and not the graphics themselves, are transmitted, potentially reducing the total bandwidth required. Table 1 summarizes the differences between GUIdock-VNC and our previous work, GUIdock-X11. Table 1: Comparison between GUIdock-X11 and GUIdock-VNC, listed as feature (GUIdock-X11 / GUIdock-VNC): can be deployed on phones/tablets? (No / Yes); security (ssh-tunnel / OAuth2); bandwidth (Low / Low to medium); cloud integration difficulty (Medium / Simple); Dockerfile setup (Manual editing / Automatic conversion of base Docker images). Case study: inference of gene networks: The inference of gene networks is a fundamental challenge in systems biology. We use gene network inference as a case study to demonstrate that GUIdock-X11 and GUIdock-VNC can be used to yield reproducible results from bioinformatics workflows. We have previously developed inference methods using a regression-based framework, in which we searched for candidate regulators (i.e., parent nodes) for each target gene [10–12]. Our methods are implemented in R, C++, and Fortran, and the implementation is available as a Bioconductor package called networkBMA (http://bioconductor.org/packages/release/bioc/html/networkBMA.html) [13]. In order to visualize the resulting gene networks, we previously developed a Cytoscape app called CyNetworkBMA (http://apps.cytoscape.org/apps/cynetworkbma) [14]. Cytoscape is a Java-based stand-alone application with a GUI to analyze and visualize graphs and networks [15–17]. Our app, CyNetworkBMA [14], integrates our networkBMA Bioconductor package into Cytoscape, allowing the user to directly visualize the resulting gene networks inferred from networkBMA using the Cytoscape utilities. The integration of multiple pieces of software, each with its own software dependencies, makes CyNetworkBMA an ideal proof-of-concept application for the illustration of the utility of GUIdock-VNC. | [
"26913191",
"22084118",
"22898396",
"24742092",
"14597658",
"17947979",
"25485619",
"20308593"
] | [
{
"pmid": "26913191",
"title": "BioShaDock: a community driven bioinformatics shared Docker-based tools registry.",
"abstract": "Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of ... |
Journal of the Association for Information Science and Technology | 28758138 | PMC5530597 | 10.1002/asi.23063 | Author Name Disambiguation for PubMed | Log analysis shows that PubMed users frequently use author names in queries for retrieving scientific literature. However, author name ambiguity may lead to irrelevant retrieval results. To improve the PubMed user experience with author name queries, we designed an author name disambiguation system consisting of similarity estimation and agglomerative clustering. A machine-learning method was employed to score the features for disambiguating a pair of papers with ambiguous names. These features enable the computation of pairwise similarity scores to estimate the probability of a pair of papers belonging to the same author, which drives an agglomerative clustering algorithm regulated by 2 factors: name compatibility and probability level. With transitivity violation correction, high precision author clustering is achieved by focusing on minimizing false-positive pairing. Disambiguation performance is evaluated with manual verification of random samples of pairs from clustering results. When compared with a state-of-the-art system, our evaluation shows that among all the pairs the lumping error rate drops from 10.1% to 2.2% for our system, while the splitting error rises from 1.8% to 7.7%. This results in an overall error rate of 9.9%, compared with 11.9% for the state-of-the-art method. Other evaluations based on gold standard data also show the increase in accuracy of our clustering. We attribute the performance improvement to the machine-learning method driven by a large-scale training set and the clustering algorithm regulated by a name compatibility scheme preferring precision. With integration of the author name disambiguation system into the PubMed search engine, the overall click-through-rate of PubMed users on author name query results improved from 34.9% to 36.9%. 
| Related WorkDue to the limitations of manual authorship management, numerous recent name disambiguation studies focus on automatic techniques for large-scale literature systems. To process large-scale data efficiently, it is necessary to define the scope (block) of author name disambiguation appropriately to minimize the computation cost without losing significant clustering opportunities. A block is a set of name variants that are considered candidates for possible identity. Several blocking methods handling name variants are discussed (On, Lee, Kang, & Mitra, 2005) to find the appropriate block size. Collective entity resolution (Bhattacharya & Getoor, 2007) shows that clustering quality can be improved at the cost of additional computational complexity by examining clustering beyond block division. In our work, a block (namespace) consists of citations sharing a common last name and first name initial.Normally disambiguation methods estimate authorship relationships within the same block by clustering similar citations with high intercitation similarity while separating citations with low similarity. The intercitation similarities are estimated based on different features. Some systems use coauthor information alone (On et al., 2005; Kang, Na, Lee, Jung, Kim, et al., 2009). Some systems combine various citation features (Han, Zha, & Giles, 2005; Torvik, Weeber, Swanson, & Smalheiser, 2005; Soler, 2007; Yin et al., 2007), and some combine citation features with disambiguating heuristics based on predefined patterns (Cota, Ferreira, Nascimento, Gonçalves, & Laender, 2010). Besides conventional citation information, some works (Song et al., 2007; Yang, Peng, Jiang, Lee, & Ho, 2008; Bernardi & Le, 2011) also exploit topic models to obtain features. McRae-Spencer and Shadbolt (2006) include self-citation information as features and Levin et al. (2012) expand features to the citation information between articles. 
Similar to some previous works, the features in our machine-learning method are also created from PubMed metadata which are strongly indicative of author identity. However, instead of using the name to be disambiguated as part of the similarity profile, we use author name compatibility in restricting the clustering process.Numerous methods have been developed to convert authorship features to intercitation distance. In unsupervised learning (Mann & Yarowsky, 2003; Ferreira, Veloso, Gonçalves, & Laender, 2010), extraction patterns based on biographic data and recurring coauthorship patterns are employed to create positive training data. Supervised machine-learning approaches require training data sets to learn the feature weighting (Han, Giles, Zha, Li, & Tsioutsiouliklis, 2004; On et al., 2005; Torvik et al., 2005; Treeratpituk & Giles, 2009). Some supervised methods (Han et al., 2004; On et al., 2005; Huang et al., 2006) train machine-learning kernels such as SVM with training data to find the optimal separating hyperplane and perform feature selection as well. In Treeratpituk and Giles (2009), a Random Forest approach is shown to benefit from variable importance in feature selection, which helps to outperform other techniques. We use a supervised machine-learning method to characterize the same-author relationship with large training data sets. Our positive training data are assembled based on the assumption that a rare author name generally represents the same author throughout, similar to the construction of matching data in Torvik and Smalheiser (2009). 
However, our training data sets are much larger and the positive data are filtered with a name compatibility check and a publication year check to minimize false-positive pairs.Typical methods for computing authorship probability include computing similarity based on term frequency (On et al., 2005; Treeratpituk & Giles, 2009), and estimating the prior probability from random samples (Soler, 2007; Torvik & Smalheiser, 2009). In our work, the similarity profile for a citation pair is converted to a pairwise probability with the PAV algorithm. In this process, both name frequency and a prior estimation of the proportion of positive pairs play an important role.Name disambiguation is implemented by assigning citations to a named author based on the relationship among citations, usually during a clustering process. Various clustering algorithms include agglomerative clustering with intercitation distance (Mann & Yarowsky, 2003; Culotta, Kanani, Hall, Wick, & McCallum, 2007; Song et al., 2007; Kang et al., 2009; Torvik & Smalheiser, 2009), K-way spectral clustering (Han et al., 2005), boosted-tree (Wang, Berzins, Hicks, Melkers, Xiao, et al., 2012), affinity propagation (Fan, Wang, Pu, Zhou, & Lv, 2011), quasi-cliques (On, Elmacioglu, Lee, Kang, & Pei, 2006), latent topic model (Shu, Long, & Meng, 2009), and density-based spatial clustering of applications with noise (DBSCAN) (Huang et al., 2006). To minimize the impact of data noise on clustering quality, transitivity violations of pairwise relationships are corrected in DBSCAN (Huang et al., 2006) and with a triplet correction scheme in (Torvik & Smalheiser, 2009). To correct transitivity violations, we apply a correction scheme similar to Torvik and Smalheiser (2009) with a stronger boosting for false-negative pairs. As previously (Mann & Yarowsky, 2003; Torvik & Smalheiser, 2009), an unsupervised agglomerative clustering algorithm disambiguates citations within a namespace. 
As pointed out before (Mann & Yarowsky, 2003; Tang, 2012), clustering the most similar citations first can help create a high-quality clustering result. In addition to ordering clustering by similarity level, our clustering process is also regulated by another ordering scheme based on name compatibility, which schedules merging clusters with closer name information at earlier stages. | [
"17971238"
] | [
{
"pmid": "17971238",
"title": "PubMed related articles: a probabilistic topic-based model for content similarity.",
"abstract": "BACKGROUND\nWe present a probabilistic topic-based model for content similarity called pmra that underlies the related article search feature in PubMed. Whether or not a docu... |
Nutrients | 28653995 | PMC5537777 | 10.3390/nu9070657 | NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment | Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients. | 1.1. Related WorkWhile there have not been any dedicated drink image recognition systems, there have been multiple approaches to food image recognition in the past, and we will briefly mention the most important ones here. 
In 2009, an extensive food image and video dataset was built to encourage further research in the field: the Pittsburgh Fast-Food Image Dataset (PFID), containing 4545 still images, 606 stereo image pairs, 303 360° food videos and 27 eating videos of 101 different food items, such as “chicken nuggets” and “cheese pizza” [4]. Unfortunately, this dataset focuses only on fast-food items, not on foods in general. The authors provided the results of two baseline recognition methods tested on the PFID dataset, both using an SVM (Support Vector Machine) classifier to differentiate between the learned features; they achieved a classification accuracy of 11% with the color histogram method and 24% with the bag-of-SIFT-features method. The latter method counts the occurrences of local image features described by the popular SIFT (Scale-Invariant Feature Transform) descriptor [5]. These two methods were chosen based on their popularity in computer vision applications, but the low classification accuracy showed that food image recognition is a challenging computer vision task, requiring a more complex feature representation.In the same year, a food image recognition system that uses the multiple kernel learning method was introduced, which tested different feature extractors, and their combination, on a self-acquired dataset [6]. This proved to be a step in the right direction, as the authors achieved an accuracy of 26% to 38% for the individual features they used and an accuracy of 61.34% when these features were combined; the features include color, texture and SIFT information. Upon conducting a real-world test on 166 food images taken with mobile phones, the authors reported a lower classification accuracy of 37.35%, which was due to factors like occlusion, noise and additional items being present in the real-world images. 
The fact that the combination of features performed better than the individual features further hinted at the need for a more in-depth representation of the food images. Next year, the pairwise local features method, which applies the specifics of food images to their recognition, was presented [7]. This method analyzes the ingredient relations in the food image, such as the relations between bread and meat in a sandwich, by computing pairwise statistics between the local features. The authors performed an evaluation of their algorithm on the PFID dataset and achieved an accuracy of 19% to 28%, depending on which measure they employed in the pairwise local features method. However, they also noted that the dataset had narrowly-defined food classes, and after joining them into 7 classes, they reported an accuracy of 69% to 78%. This further confirmed the limitations of food image recognition approaches of that time: if a food image recognition algorithm achieved a high classification accuracy, it was only because the food classes were very general (e.g., “chicken”).In 2014, another approach was presented that uses an optimized bag-of-features model for food image recognition [8]. The authors tested 14 different color and texture descriptors for this model and found that the HSV-SIFT descriptor provided the best result. This descriptor describes the local textures in all three color channels of the HSV color space. The model was tested on a food image dataset that was built for the aims of the project Type 1 Diabetes Self-Management and Carbohydrate Counting: A Computer Vision Based Approach (GoCARB) [9], in the scope of which they constructed a food recognition system for diabetes patients. 
The authors achieved an accuracy of 77.80%, which was considerably higher than previous approaches.All of the previously-described solutions are based on manually-defined feature extractors that rely on specific features, such as color or texture, to recognize the entire range of food images. Furthermore, the images used in the recognition systems presented in these solutions were taken under strict conditions, containing only one food dish per image and often perfectly cropped. The images that contained multiple items were manually segmented and annotated, so the final inputs for these hand-crafted recognition systems were always ideally-prepared images. The results from these research works are therefore not indicative of general real-world performance due to the same problems with real-world images as listed above.These issues show that hand-crafted approaches are not ideal for a task as complex as food image recognition, where it seems the best approach is to use a complex combination of a large number of features, which is why deep convolutional neural networks, a method that automatically learns appropriate image features, achieved the best results in the field. Deep neural networks can also learn to disregard surrounding noise with sufficient training data, eliminating the need for perfect image cropping. Another approach for the image segmentation task is to train a neural network that performs semantic segmentation, which directly assigns class labels to each region of the input image [10,11]. Furthermore, deep neural networks can be trained in such a way that they perform both object detection and recognition in the same network [12,13].In 2014, Kawano et al. used deep convolutional neural networks to complement hand-crafted image features [14] and achieved a 72.26% accuracy on the University of Electro-Communications Food 100 (UEC-FOOD100) dataset that was made publicly available in 2012 [15]; this was the highest accuracy on the dataset at that time. 
Also in 2014, a larger version of the UEC-FOOD100 dataset was introduced, the University of Electro-Communications Food 256 (UEC-FOOD256), which contains 256 as opposed to 100 food classes [16]; while UEC-FOOD100 is composed of mostly Japanese food dishes, UEC-FOOD256 expands on this dataset with some international dishes. At that time, another food image dataset was made publicly available: the Food-101 dataset. This dataset contains 101,000 images of 101 different food items, and the authors used the popular random forest method for the recognition task, with which they achieved an accuracy of 50.76% [17]. They reported that while this result outperformed other hand-crafted efforts, it could not match the accuracy that deep learning approaches provided. This was further confirmed by the subsequently published research works, such as by Kagaya et al., who tested both food detection and food recognition using deep convolutional neural networks on a self-acquired dataset and achieved encouraging results: a classification accuracy of 73.70% for the recognition and 93.80% for the detection task [18]. In 2015, Yanai et al. improved on the best UEC-FOOD100 result, again with deep convolutional neural networks, only this time, with pre-training on the ImageNet dataset [19]. The accuracy they achieved was 78.77% [20]. A few months later, Christodoulidis et al. presented their own food recognition system that uses deep convolutional neural networks, and with it, they achieved an accuracy of 84.90% on a self-acquired and manually-annotated dataset [21].In 2016, Singla et al. used the famous GoogLeNet deep learning architecture [22], which is described in Section 2.2, on two datasets of food images, collected using cameras and combined with images from existing image datasets and social media. With a pre-trained model, they reported a recognition accuracy of 83.60% and a detection accuracy of 99.20% [23]. Also in 2016, Liu et al. 
achieved similarly encouraging results on the UEC-FOOD100, UEC-FOOD256 and Food-101 datasets by using an optimized convolution technique in their neural network architecture [24], which allowed them to reach an accuracy of 76.30%, 54.70% and 77.40%, respectively. Furthermore, Tanno et al. introduced DeepFoodCam, which is a smartphone food image recognition application that uses deep convolutional neural networks with a focus on recognition speed [25]. Another food image dataset was made publicly available in that year: the University of Milano-Bicocca 2016 (UNIMIB2016) dataset [26]. This dataset is composed of images of 1027 food trays from an Italian canteen, containing a total of 3616 food instances, divided into 73 food classes. The authors tested a combined segmentation and recognition deep convolutional neural network model on this dataset and achieved an accuracy of 78.30%. Finally, in 2016, Hassannejad et al. achieved the current best classification accuracy values of 81.45% on the UEC-FOOD100 dataset, 76.17% on the UEC-FOOD256 dataset and 88.28% on the Food-101 dataset [27]. All three results were obtained by using a deep neural network model based on the Google architecture Inception; this architecture is the basis for the previously-mentioned GoogLeNet.It seems that deep learning is a very promising approach in the field of food image recognition. Previous deep learning research reported high classification accuracy values, thus confirming the viability of the approach, but they focused on smaller food image datasets, often limited to 100 different food items or less. Moreover, none of these solutions recognize drink images. In this paper, we will present our solution that addresses these issues. We developed a new deep convolutional neural network architecture called NutriNet and trained it on images acquired from web searches for individual food and drink items. 
With this architecture, we achieved a higher classification accuracy than most of the results presented above and found that, on our recognition dataset, it performs better than AlexNet, which is the deep learning architecture it is based on; the results are described in-depth in Section 3. Additionally, we developed an online training component that automatically fine-tunes the deep learning image recognition model upon receiving new images from users, thus increasing the number of recognizable items and the classification accuracy over time. The online training is described in Section 2.3.By trying to solve the computer vision problem of recognizing food and drink items from images, we are hoping to alleviate the issue of dietary assessment, which is why our recognition system is integrated into the PD Nutrition application for the dietary assessment of Parkinson’s disease patients [28], which is being developed in the scope of the project mHealth Platform for Parkinson’s Disease Management (PD_manager) [29]. In practice, the system works in the following way: Parkinson’s disease patients take an image of food or drink items using a smartphone camera, and our system performs recognition using deep convolutional neural networks on this image. The result is a food or drink label, which is then matched against a database of nutritional information, thus providing the patients with an automatic solution for food logging and dietary assessment. | [
"26017442",
"14449617",
"25014934",
"28114043"
] | [
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognitio... |
Frontiers in Psychology | 28824478 | PMC5541010 | 10.3389/fpsyg.2017.01255 | Toward Studying Music Cognition with Information Retrieval Techniques: Lessons Learned from the OpenMIIR Initiative | As an emerging sub-field of music information retrieval (MIR), music imagery information retrieval (MIIR) aims to retrieve information from brain activity recorded during music cognition–such as listening to or imagining music pieces. This is a highly inter-disciplinary endeavor that requires expertise in MIR as well as cognitive neuroscience and psychology. The OpenMIIR initiative strives to foster collaborations between these fields to advance the state of the art in MIIR. As a first step, electroencephalography (EEG) recordings of music perception and imagination have been made publicly available, enabling MIR researchers to easily test and adapt their existing approaches for music analysis like fingerprinting, beat tracking or tempo estimation on this new kind of data. This paper reports on first results of MIIR experiments using these OpenMIIR datasets and points out how these findings could drive new research in cognitive neuroscience. | 3. Related workRetrieval based on brain wave recordings is still a very young and largely unexplored domain. EEG signals have been used to recognize emotions induced by music perception (Lin et al., 2009; Cabredo et al., 2012) and to distinguish perceived rhythmic stimuli (Stober et al., 2014). It has been shown that oscillatory neural activity in the gamma frequency band (20–60 Hz) is sensitive to accented tones in a rhythmic sequence (Snyder and Large, 2005) and that oscillations in the beta band (20–30 Hz) increase in anticipation of strong tones in a non-isochronous sequence (Fujioka et al., 2009, 2012; Iversen et al., 2009). 
While listening to rhythmic sequences, the magnitude of steady state evoked potentials (SSEPs), i.e., reflecting neural oscillations entrained to the stimulus, changes for frequencies related to the metrical structure of the rhythm as a sign of entrainment to beat and meter (Nozaradan et al., 2011, 2012).EEG studies by Geiser et al. (2009) have further shown that perturbations of the rhythmic pattern lead to distinguishable electrophysiological responses–commonly referred to as Event-Related Potentials (ERPs). This effect appears to be independent of the listener's level of musical proficiency. Furthermore, Vlek et al. (2011) showed that imagined auditory accents imposed on top of a steady metronome click can be recognized from ERPs. However, as is usual for ERP analysis, dealing with noise in the EEG signal and reducing the impact of unrelated brain activity requires averaging the brain responses recorded for many events. In contrast, retrieval scenarios usually only consider single trials. Nevertheless, findings from ERP studies can guide the design of single-trial approaches as demonstrated in subsection 4.1.EEG has also been successfully used to distinguish perceived melodies. In a study conducted by Schaefer et al. (2011), 10 participants listened to 7 short melody clips with a length between 3.26 and 4.36 s. For single-trial classification, each stimulus was presented for a total of 140 trials in randomized back-to-back sequences of all stimuli. Using a quadratically regularized linear logistic-regression classifier with 10-fold cross-validation, they were able to successfully classify the ERPs of single trials. Within subjects, the accuracy varied between 25 and 70%. Applying the same classification scheme across participants, they obtained between 35 and 53% accuracy. In a further analysis, they combined all trials from all subjects and stimuli into a grand average ERP. 
Using singular-value decomposition, they obtained a fronto-central component that explained 23% of the total signal variance. The related time courses showed significant differences between stimuli that were strong enough for cross-participant classification. Furthermore, a correlation with the stimulus envelopes of up to .48 was observed with the highest value over all stimuli at a time lag of 70–100 ms.Results from fMRI studies by Herholz et al. (2012) and Halpern et al. (2004) provide strong evidence that perception and imagination of music share common processes in the brain, which is beneficial for training MIIR systems. As Hubbard (2010) concludes in his review of the literature on auditory imagery, “auditory imagery preserves many structural and temporal properties of auditory stimuli” and “involves many of the same brain areas as auditory perception”. This is also underlined by Schaefer (2011, p. 142) whose “most important conclusion is that there is a substantial amount of overlap between the two tasks [music perception and imagination], and that ‘internally’ creating a perceptual experience uses functionalities of ‘normal’ perception.” Thus, brain signals recorded while listening to a music piece could serve as reference data for a retrieval system in order to detect salient elements in the signal that could be expected during imagination as well.A recent meta-analysis of Schaefer et al. (2013) summarized evidence that EEG is capable of detecting brain activity during the imagination of music. Most notably, encouraging preliminary results for recognizing purely imagined music fragments from EEG recordings were reported in Schaefer et al. (2009) where 4 out of 8 participants produced imagery that was classifiable (in a binary comparison) with an accuracy between 70 and 90% after 11 trials.Another closely related field of research is the reconstruction of auditory stimuli from EEG recordings. Deng et al. 
(2013) observed that EEG recorded during listening to natural speech contains traces of the speech amplitude envelope. They used ICA and a source localization technique to enhance the strength of this signal and successfully identify heard sentences. Applying their technique to imagined speech, they reported statistically significant single-sentence classification performance for 2 of 8 subjects with performance increasing when several sentences were combined for a longer trial duration.More recently, O'Sullivan et al. (2015) proposed a method for decoding attentional selection in a cocktail party environment from single-trial EEG recordings of approximately one minute length. In their experiment, 40 subjects were presented with 2 classic works of fiction at the same time—each one to a different ear—for 30 trials. In order to determine which of the 2 stimuli a subject attended to, they reconstructed both stimuli envelopes from the recorded EEG. To this end, they trained two different decoders per trial using a linear regression approach—one to reconstruct the attended stimulus and the other to reconstruct the unattended one. This resulted in 60 decoders per subject. These decoders were then averaged in a leave-one-out cross-validation scheme. During testing, each decoder would predict the stimulus with the best reconstruction from the EEG using the Pearson correlation of the envelopes as a measure of quality. Using subject-specific decoders averaged from 29 training trials, the prediction of the attended stimulus decoder was correct for 89% of the trials whereas the mean accuracy of the unattended stimulus decoder was 78.9%. Alternatively, using a grand-average decoding method that combined the decoders from every other subject and every other trial, they obtained a mean accuracy of 82% and 75%, respectively. | [
"23787338",
"18254699",
"24650597",
"23297754",
"19673759",
"22302818",
"19100973",
"24431986",
"15178179",
"24239590",
"22360595",
"23558170",
"20192565",
"19673755",
"9950738",
"19081384",
"21945275",
"21753000",
"23223281",
"24429136",
"23298753",
"20541612",
"15922164... | [
{
"pmid": "23787338",
"title": "Representation learning: a review and new perspectives.",
"abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different e... |
BMC Medical Informatics and Decision Making | 28789686 | PMC5549299 | 10.1186/s12911-017-0512-7 | Developing a cardiovascular disease risk factor annotated corpus of Chinese electronic medical records | BackgroundCardiovascular disease (CVD) has become the leading cause of death in China, and most of the cases can be prevented by controlling risk factors. The goal of this study was to build a corpus of CVD risk factor annotations based on Chinese electronic medical records (CEMRs). This corpus is intended to be used to develop a risk factor information extraction system that, in turn, can be applied as a foundation for the further study of the progress of risk factors and CVD.ResultsWe designed a light annotation task to capture CVD risk factors with indicators, temporal attributes and assertions that were explicitly or implicitly displayed in the records. The task included: 1) preparing data; 2) creating guidelines for capturing annotations (these were created with the help of clinicians); 3) proposing an annotation method including building the guidelines draft, training the annotators and updating the guidelines, and corpus construction. Meanwhile, we proposed some novel annotation guidelines: (1) the under-threshold medical examination values were annotated for our purpose of studying the progress of risk factors and CVD; (2) possible and negative risk factors were considered for the same reason, and we created assertions for annotations; (3) we added four temporal attributes to CVD risk factors in CEMRs for constructing long term variations. Then, a risk factor annotated corpus based on de-identified discharge summaries and progress notes from 600 patients was developed. 
Built with the help of clinicians, this corpus has an inter-annotator agreement (IAA) F1-measure of 0.968, indicating high reliability.ConclusionTo the best of our knowledge, this is the first annotated corpus concerning CVD risk factors in CEMRs and the guidelines for capturing CVD risk factor annotations from CEMRs were proposed. The obtained document-level annotations can be applied in future studies to monitor risk factors and CVD over the long term. | Related works based on CEMRsWang et al. [16] focused on recognizing and normalizing the names of symptoms in traditional Chinese medicine EMRs. To perform judgements, this system used a set of manually annotated clinical symptom names. Jiang et al. [14] proposed a complete annotation scheme for building a corpus of word segmentation and part-of-speech (POS) from CEMRs. Yang et al. [11] focused on designing an annotation scheme and constructing a corpus of named entities and entity relationships from CEMRs; they formulated an annotation specification and built a corpus based on 992 medical discharge summaries and progress notes. Lei [17] and Lei et al. [18] focused on recognizing named entities in Chinese medical discharge summaries. They classified the entities into four categories: clinical problems, procedures, labs, and medications. Finally, they annotated an entities corpus based on CEMRs. Xu et al. [19] studied a joint model that performed segmentation and named entity recognition in Chinese discharge summaries and built a set of 336 annotated Chinese discharge summaries. Wang et al. [20] researched the extraction of tumor-related information from Chinese-language operation notes of patients with hepatic carcinomas, and annotated a corpus containing 961 entities. He et al. [21] proposed a comprehensive corpus of syntactic and semantic annotations from Chinese clinical texts.Despite the similar intent of these works, the extraction of CVD risk factors from CEMRs has not yet been studied. 
Meanwhile, for IE tasks in the biomedical field, the number of accessible corpora is far smaller than for more general extraction tasks. However, corpora are essential for building IE systems. Thus, constructing a CVD risk factor annotated corpus is both a necessary and fundamental task. Moreover, unlike annotation tasks for texts that require less specialized knowledge, linguists require the assistance of medical experts to perform annotations in the biomedical field. | [
"24070769",
"20089162",
"24347408",
"23934949",
"24486562",
"17947624",
"19390096",
"20819854",
"20819855",
"21685143",
"23564629",
"23872518",
"26210362",
"26004790",
"25147248",
"15684123"
] | [
{
"pmid": "24070769",
"title": "Supervised methods for symptom name recognition in free-text clinical records of traditional Chinese medicine: an empirical study.",
"abstract": "Clinical records of traditional Chinese medicine (TCM) are documented by TCM doctors during their routine diagnostic work. The... |
Frontiers in Genetics | 28848600 | PMC5552671 | 10.3389/fgene.2017.00104 | A Novel Efficient Graph Model for the Multiple Longest Common Subsequences (MLCS) Problem | Searching for the Multiple Longest Common Subsequences (MLCS) of multiple sequences is a classical NP-hard problem, which has been used in many applications. One of the most effective exact approaches for the MLCS problem is based on dominant point graph, which is a kind of directed acyclic graph (DAG). However, the time and space efficiency of the leading dominant point graph based approaches is still unsatisfactory: constructing the dominated point graph used by these approaches requires a huge amount of time and space, which hinders the applications of these approaches to large-scale and long sequences. To address this issue, in this paper, we propose a new time and space efficient graph model called the Leveled-DAG for the MLCS problem. The Leveled-DAG can timely eliminate all the nodes in the graph that cannot contribute to the construction of MLCS during constructing. At any moment, only the current level and some previously generated nodes in the graph need to be kept in memory, which can greatly reduce the memory consumption. Also, the final graph contains only one node in which all of the wanted MLCS are saved, thus, no additional operations for searching the MLCS are needed. The experiments are conducted on real biological sequences with different numbers and lengths respectively, and the proposed algorithm is compared with three state-of-the-art algorithms. The experimental results show that the time and space needed for the Leveled-DAG approach are smaller than those for the compared algorithms especially on large-scale and long sequences. | 2.1. Preliminaries and related workFirst of all, let Σ denote the alphabet of the sequences, i.e., a finite set of symbols. For example, the alphabet of the DNA sequences is Σ = {A, C, G, T}.Definition 1. 
Let Σ denote the alphabet and s = c1c2…cn be a sequence of length n with each symbol ci ∈ Σ, for i = 1, 2, ⋯ , n. The i-th symbol of s is denoted by s[i] = ci. If a sequence s′ is obtained by deleting zero or more symbols (not necessarily consecutive) from s, i.e., s′=ci1ci2…cik satisfying 1 ≤ i1 < i2 < ⋯ < ik ≤ n, then s′ is called a length k subsequence of s.Definition 2. Given d sequences s1, s2, …, sd on Σ, if a sequence s′=ci1ci2…cik satisfies: (1) It is a subsequence of each of these d sequences. (2) It is the longest subsequence of these d sequences. Then s′ is called a Longest Common Subsequence (LCS) of these d sequences.Generally, the LCS of multiple sequences is not unique. For example, given three DNA sequences ACTAGTGC, TGCTAGCA and CATGCGAT, there exist two LCSs of length 4, which are CAGC and CTGC, respectively. The multiple longest common subsequences (MLCS) problem is to find all the longest common subsequences of three or more sequences.Many algorithms have been proposed for the MLCS problem in the past decades. According to the models on which the algorithms are based, the existing MLCS algorithms can be classified into two categories: the dynamic programming based approaches and the dominant point based approaches. Next, we will give a brief introduction to each of the two approaches.2.1.1. Dynamic programming based approachesThe classical approaches for the MLCS problem are based on dynamic programming (Sankoff, 1972; Smith and Waterman, 1981). Given d sequences s1, s2, …, sd of length n1, n2, …, nd, respectively, these approaches recursively construct a score table T having n1 × n2 × … × nd cells, in which the cell T[i1, i2, …, id] records the length of an MLCS of the prefixes s1[1…i1], s2[1…i2], …, sd[1…id]. Specifically, T[i1, i2, …, id] can be computed recursively by the following formula: (1) T[i1, i2, …, id] = 0, if ∃j (1 ≤ j ≤ d) with ij = 0; T[i1−1, i2−1, …, id−1] + 1, if s1[i1] = s2[i2] = … = sd[id]; max(T̄), otherwise; where T̄ = {T[i1−1, i2, …, id], T[i1, i2−1, …, id], …, T[i1, i2, …, id−1]}.
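As a minimal illustration (a sketch, not the paper's code), Formula (1) can be implemented for the two-sequence case, reproducing the score table of Figure 1:

```python
# Dynamic programming recursion of Formula (1), specialized to d = 2.
def lcs_score_table(s1, s2):
    n1, n2 = len(s1), len(s2)
    # T[i][j] = length of an LCS of the prefixes s1[1..i] and s2[1..j];
    # row/column 0 corresponds to the empty prefix (first case of Formula (1)).
    T = [[0] * (n2 + 1) for _ in range(n1 + 1)]
    for i in range(1, n1 + 1):
        for j in range(1, n2 + 1):
            if s1[i - 1] == s2[j - 1]:          # matching symbols
                T[i][j] = T[i - 1][j - 1] + 1
            else:                                # max over T-bar
                T[i][j] = max(T[i - 1][j], T[i][j - 1])
    return T

# The two example sequences from Figure 1:
T = lcs_score_table("ACTAGCTA", "TCAGGTAT")
print(T[8][8])  # 5, the length of the LCSs TAGTA and CAGTA
```

Tracing back from T[8][8] to T[0][0], as described next, recovers the actual subsequences; the table alone gives only their length.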
Once the score table T is constructed, the MLCS can be collected by tracing back from the last cell T[n1, n2, …, nd] to the first cell T[0, 0, …, 0]. Figure 1A shows the score table T of two sequences s1 = ACTAGCTA and s2 = TCAGGTAT. The MLCS of these two sequences, which are TAGTA and CAGTA, can be found by tracing back from T[8, 8] to T[0, 0], as shown in Figure 1B.Figure 1(A) The score table of two DNA sequences ACTAGCTA and TCAGGTAT. (B) Constructing the MLCS from the score table, where the shaded cells correspond to dominant points.Obviously, both time and space complexity of dynamic programming based approaches for an MLCS problem with d sequences of length n are O(n^d) (Hsu and Du, 1984). Many methods have been proposed to improve the efficiency, e.g., Hirschberg (1977), Apostolico et al. (1992), Masek and Paterson (1980), and Rick et al. (1994). However, with the increase of d and n, all these approaches are still inefficient for practical use.2.1.2. Dominant point based approachesIn order to reduce the time and space complexity of the dynamic programming based approaches, many other methods have been proposed, among which the dominant point based approaches are the most efficient ones until now. Before discussing the dominant point based approaches, some related definitions are introduced first:Definition 3. Given d sequences s1, s2, …, sd on Σ, a vector p = (p1, p2, …, pd) is called a match point of the d sequences, if s1[p1] = s2[p2] = … = sd[pd] = δ, i.e., δ is a common symbol appearing at the position pi of sequence si for i = 1, 2, ⋯ , d. The corresponding symbol δ of match point p is denoted by C(p).Definition 4. Given two match points p = (p1, p2, …, pd) and q = (q1, q2, …, qd) of d sequences, we call: (1) p = q if and only if pi = qi (1 ≤ i ≤ d). (2) p dominates q (or q is dominated by p), which is denoted by p ≼ q, if pi ≤ qi for each i (1 ≤ i ≤ d), and pj < qj for some j (1 ≤ j ≤ d). 
(3) p strongly dominates q (or q is strongly dominated by p), which is denoted by p ≺ q, if pi < qi for each i (1 ≤ i ≤ d). (4) q is a successor of p (or p is a precursor of q), if p ≺ q and there is no match point r such that p ≺ r ≺ q and C(q) = C(r).Note that, one match point can have at most |Σ| successors with each successor corresponding to one symbol in Σ.Definition 5. The level of a match point p = (p1, p2, …, pd) is defined to be L(p) = T[p1, p2, …, pd], where T is the score table computed by Formula (1). A match point p is called a k-dominant point (k-dominant for short) if and only if: (1) L(p) = k. (2) There is no other match point q such that: L(q) = k and q ≼ p. All the k-dominants form a set Dk.The motivation of the dominant point based approaches is to reduce the time and space complexity of the basic dynamic programming based methods. The key idea is based on the observation that only the dominant points can contribute to the construction of the MLCS (as shown in Figure 1B, the shaded cells correspond to the dominant points). Since the number of dominant points can be much smaller than the number of all cells in the score table T, a dominant point approach that only identifies the dominant points, without filling the whole score table, can greatly reduce the time and space complexity.The search space of the dominant point based approaches can be organized into a directed acyclic graph (DAG): a node in the DAG represents a match point, while an edge 〈p, q〉 in the DAG represents that q is a successor of p, i.e., p ≺ q and L(q) = L(p) + 1. Initially, the DAG contains only a source node (0, 0, …, 0) with no incoming edges as well as an end node (∞, ∞, …, ∞) with no outgoing edges. Next, the DAG is constructed level by level as follows: at first, let the level k = 0, and D0 = {(0, 0, …, 0)}, and then, with a forward iteration procedure, the (k + 1)-dominants Dk + 1 are computed based on the k-dominants Dk, and this procedure is denoted by Dk → Dk + 1. 
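The pruning at the heart of the Dk → Dk + 1 step can be sketched as follows (an illustrative reconstruction using the dominance relation of Definition 4, not the authors' implementation): drop duplicated match points, then keep only the points not dominated by any other point of the level.

```python
# Hedged sketch of the "Minima" pruning used when building D_{k+1}.
def dominates(p, q):
    # p dominates q: p <= q componentwise and p != q (Definition 4).
    return p != q and all(pi <= qi for pi, qi in zip(p, q))

def minima(points):
    unique = set(points)  # remove duplicated nodes first
    return sorted(q for q in unique
                  if not any(dominates(p, q) for p in unique))

# Level-1 match points of ACTAGCTA and TCAGGTAT (symbols A, C, G, T):
print(minima([(1, 3), (2, 2), (5, 4), (3, 1)]))
# G(5, 4) is dominated, e.g. by C(2, 2), so three dominants remain
```

Note that this naive pruning performs O(m^2) comparisons of d-dimensional vectors for m candidate points per level, which is exactly the cost that motivates the Leveled-DAG model proposed in the paper.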
Specifically, each node in Dk is expanded by generating all its |Σ| successors, then a pruning operation called Minima is performed to remove the dominated successors, and only the dominants are reserved to Dk + 1. Once all the nodes in the graph have been expanded, the whole DAG is constructed, in which a longest path from the source node to the end node corresponds to an LCS; thus, the MLCS problem becomes finding all the longest paths from the source node to the end node. In the following, we will use a simple example to illustrate the above procedure.Example 1. Finding the MLCS of sequences ACTAGCTA and TCAGGTAT based on the dominant point based approaches, as shown in Figure 2.Step 0. Set source node (0, 0) and end node (∞, ∞).Step 1. Construct nodes in level 1. For symbol A, the components of match point (1, 3) are the first positions of A in the two input sequences from the beginning. Thus, node A(1, 3) is a successor of the source node corresponding to symbol A in level 1. Similarly, nodes C(2, 2), G(5, 4), and T(3, 1) are also the successors of the source node corresponding to symbols C, G and T, respectively. Among these four nodes in level 1, find and delete the dominated node G(5, 4) (using the Minima operation), as shown in gray in Figure 2. The remaining three dominant nodes form D1 = {A(1, 3), C(2, 2), T(3, 1)}.Step 2. Construct nodes in level 2. For each node in D1, e.g., for A(1, 3)∈D1, symbol A with match point (4, 7) is the first common symbol A in the two sequences after symbol A with match point (1, 3) (i.e., after node A(1, 3)∈D1). Thus, node A(4, 7) is a successor of A(1, 3) corresponding to symbol A in level 2. Similarly, nodes T(3, 6) and G(5, 4) are also successors of A(1, 3) corresponding to symbols T and G, respectively, in level 2. 
In the same way, node C(2, 2) in level 1 can generate three successors A(4, 3), G(5, 4) and T(3, 6) in level 2, and node T(3, 1) in level 1 can generate four successors A(4, 3), C(6, 2), G(5, 4), and T(7, 6) in level 2. Note that some nodes appear more than once. Among these ten nodes in level 2, find and delete the duplicated nodes (using Minima) A(4, 3), G(5, 4) (deleted twice), and T(3, 6), as shown in black in level 2. Also, find and delete all dominated nodes (using Minima) (4, 7), (5, 4), and (7, 6), as shown in gray in level 2. The remaining dominant points form the set D2 = {T(3, 6), A(4, 3), C(6, 2)}, which constitutes the final level 2 of the graph. Note that, if a node has no successors, let the end node be its only successor. Step 3. Repeat the above construction process level by level until the whole DAG is constructed. Figure 2: The DAG of the two sequences ACTAGCTA and TCAGGTAT constructed by the general dominant point based algorithms, in which the black and gray nodes will be eliminated by the Minima operation. It can be seen from the above example that the dominant point based approaches have the following main drawbacks:
(1) There are a huge number of duplicated and dominated nodes in each level, which consumes a lot of memory. (2) All these duplicated and dominated nodes must be deleted, and finding them in each level requires a large number of pairwise comparisons of d-dimensional vectors (where each pairwise comparison of two d-dimensional vectors needs d comparisons of two integers). Thus, the deletion of duplicated and dominated nodes across all levels is very time consuming. Hunt and Szymanski (1977) proposed the first dominant point based algorithm for two sequences, with time complexity O((r + n)logn), where r is the number of nodes in the DAG and n is the length of the two sequences. Afterwards, to further improve the efficiency, a variety of dominant point based LCS/MLCS algorithms have been presented. Korkin (2001) proposed the first parallel MLCS algorithm, with time complexity O(|Σ||D|), where |D| is the number of dominants in the graph. Chen et al. (2006) presented an efficient MLCS algorithm for DNA sequences, FAST-LCS; it introduced a novel data structure called the successor table to obtain the successors of nodes in constant time and used a pruning operation to eliminate the non-dominant nodes in each level. Wang et al. (2011) proposed the algorithm Quick-DP to improve on FAST-LCS; it uses a divide-and-conquer strategy to eliminate the non-dominant nodes, which is very suitable for parallelization, and its parallelized version Quick-DPPAR was reported to gain a near-linear speedup over the serial version. Li et al. (2012) and Yang et al. (2010) developed efficient parallel algorithms on GPUs for the LCS problem and on a cloud platform for the MLCS problem, respectively. Unfortunately, Yang et al. (2010) is not suitable for the MLCS problem with many sequences due to the large synchronization costs. Recently, Li et al.
(2016b,a) proposed two dominant point based algorithms, PTop-MLCS and RLP-MLCS. These algorithms use a novel graph model called the Non-redundant Common Subsequence Graph (NCSG), which greatly reduces the number of redundant nodes during processing, and adopt a two-pass topological sorting procedure to find the MLCS. The authors claimed that the time and space complexity of their algorithms is linear in the number of nodes in the NCSG. In practice, for MLCS problems with a large number of sequences, the traditional algorithms usually need a long time and a large amount of space to find the optimal solution (the complete MLCS). To address this issue, approximate algorithms have been investigated that quickly produce a suboptimal solution (a partial MLCS) and gradually improve it when given more time, until an optimal one is found. Yang et al. (2013) proposed an approximate algorithm, Pro-MLCS, as well as its efficient parallelization based on the dominant point model. Pro-MLCS can find an approximate solution quickly, taking only around 3% of the entire running time, and then progressively generates better solutions until it obtains the optimal one. Recently, Yang et al. (2014) proposed two further approximate algorithms, SA-MLCS and SLA-MLCS. SA-MLCS uses an iterative beam-widening search strategy to reduce space usage during the iterative process of finding better solutions. Based on SA-MLCS, a space-bounded algorithm, SLA-MLCS, was developed to keep space usage from exceeding the available memory.
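The level-by-level expansion with Minima pruning described above is compact enough to sketch directly. The following Python sketch is ours, not code from any of the cited papers: it builds a per-sequence successor table (in the spirit of FAST-LCS), expands the dominant set level by level, prunes duplicated and weakly dominated match points, and keeps one representative subsequence per dominant point, so it returns one MLCS rather than all of them.

```python
def mlcs(seqs):
    """One maximal common subsequence of several sequences, found via
    dominant points with Minima pruning (illustrative, unoptimized)."""
    alphabet = sorted(set(seqs[0]).intersection(*(set(s) for s in seqs[1:])))
    # succ[i][ch][j] = smallest 1-based position k > j with seqs[i][k-1] == ch, else None
    succ = []
    for s in seqs:
        tab = {}
        for ch in alphabet:
            nxt, col = None, [None] * (len(s) + 1)
            for j in range(len(s), -1, -1):
                col[j] = nxt
                if j >= 1 and s[j - 1] == ch:
                    nxt = j
            tab[ch] = col
        succ.append(tab)

    frontier = {tuple(0 for _ in seqs): ""}   # source node (0, 0, ..., 0)
    best = ""
    while frontier:
        candidates = {}
        for p, sub in frontier.items():       # expand the |alphabet| successors of p
            for ch in alphabet:
                q = tuple(succ[i][ch][p[i]] for i in range(len(seqs)))
                if None not in q and q not in candidates:   # drop duplicated nodes
                    candidates[q] = sub + ch
        # Minima: keep only match points not weakly dominated by another point
        frontier = {p: sub for p, sub in candidates.items()
                    if not any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                               for q in candidates)}
        if frontier:
            best = next(iter(frontier.values()))
    return best
```

For the two sequences of Example 1 this finds a common subsequence of length 5, matching the depth of the DAG in Figure 2.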
"28187279",
"17217522",
"4500555",
"7265238",
"25400485"
] | [
{
"pmid": "28187279",
"title": "Next-Generation Sequencing of Circulating Tumor DNA for Early Cancer Detection.",
"abstract": "Curative therapies are most successful when cancer is diagnosed and treated at an early stage. We advocate that technological advances in next-generation sequencing of circulati... |
JMIR Medical Informatics | 28760726 | PMC5556254 | 10.2196/medinform.7140 | Triaging Patient Complaints: Monte Carlo Cross-Validation of Six Machine Learning Classifiers | BackgroundUnsolicited patient complaints can be a useful service recovery tool for health care organizations. Some patient complaints contain information that may necessitate further action on the part of the health care organization and/or the health care professional. Current approaches depend on the manual processing of patient complaints, which can be costly, slow, and challenging in terms of scalability.ObjectiveThe aim of this study was to evaluate automatic patient triage, which can potentially improve response time and provide much-needed scale, thereby enhancing opportunities to encourage physicians to self-regulate.MethodsWe implemented a comparison of several well-known machine learning classifiers to detect whether a complaint was associated with a physician or his/her medical practice. We compared these classifiers using a real-life dataset containing 14,335 patient complaints associated with 768 physicians that was extracted from patient complaints collected by the Patient Advocacy Reporting System developed at Vanderbilt University and associated institutions. We conducted a 10-splits Monte Carlo cross-validation to validate our results.ResultsWe achieved an accuracy of 82% and F-score of 81% in correctly classifying patient complaints with sensitivity and specificity of 0.76 and 0.87, respectively.ConclusionsWe demonstrate that natural language processing methods based on modeling patient complaint text can be effective in identifying those patient complaints requiring physician action. | Related WorkThe bulk of the textual artifacts in health care can be found in two main sources: clinical and nonclinical. Clinical textual artifacts are largely entries in the medical chart, comments on the case, or physician notes. 
Medical chart notes tend to be consciously made and well structured, whereas case comments and physician notes focus on treatment (including diagnoses) of the patient. Nonclinical textual artifacts include unsolicited patient feedback and often revolve around complaints. The text is variable, may contain abbreviations, and may extend beyond the actual treatment or diagnosis. Previous research has focused on clinical textual artifacts [8]. Recent research demonstrates the possibility of applying natural language processing (NLP) to electronic medical records to identify postoperative complications [9]. Bejan and Denny [10] showed how to identify treatment relationships in clinical text using a supervised learning system that is able to predict whether or not a treatment relation exists between any two medical concepts mentioned in the clinical notes. Cui et al [11] explored a large number of consumer health questions. For each question, they selected a smaller set of the most relevant concepts by adopting the idea of the term frequency-inverse document frequency (TF-IDF) metric. Instead of computing the TF-IDF based on the terms, they used concept unique identifiers. Their results indicate that we can infer more information from patient comments than commonly thought. However, questions are short and limited, whereas patient complaints are rich and elaborate. Sakai et al [12] concluded that how risk assessment and classification is configured is often a decisive intervention in the reorganization of the work process in emergency services.
They demonstrated that the textual analysis of feedback provided by nurses can expose the sentiment and feelings of the emergency workers and help improve the outcomes. Temporal information in discharge summaries has been successfully used [13] to classify encounters, enabling the placement of data within the structure to provide a foundational representation on which further reasoning, including the addition of domain knowledge, can be accomplished. Additional research [14] extended the clinical Text Analysis and Knowledge Extraction System (cTAKES) with simplified feature extraction and the development of both rule-based and machine learning-based document classifiers. The resulting system, the Yale cTAKES Extensions (YTEX), can help classify radiology reports containing findings suggestive of hepatic decompensation. A recent systematic literature review of 85 articles focusing on the secondary use of structured patient records showed that electronic health record data structuring methods are often described ambiguously and may lack clear definition as such [15].
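The concept-level TF-IDF variant used by Cui et al — weighting concept unique identifiers rather than surface terms — can be sketched as follows. The identifiers and the exact weighting formula below are illustrative and not taken from their paper.

```python
import math
from collections import Counter

def concept_tfidf(docs):
    """docs: one list of concept unique identifiers (CUIs) per question.
    Returns one {concept: tf-idf weight} dict per document."""
    n = len(docs)
    df = Counter()                      # document frequency of each concept
    for d in docs:
        df.update(set(d))
    weights = []
    for d in docs:
        tf = Counter(d)                 # raw concept counts in this document
        weights.append({c: (tf[c] / len(d)) * math.log(n / df[c]) for c in tf})
    return weights
```

Concepts that occur in every document get weight 0 (log 1 = 0), so selecting the top-weighted concepts per question keeps only the discriminative ones.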
"21226384",
"17971689",
"25916627",
"11367775",
"24195197",
"1635463",
"21862746",
"16169282",
"21622934",
"25991152",
"12668687",
"20808728",
"16119262"
] | [
{
"pmid": "21226384",
"title": "Best practices for basic and advanced skills in health care service recovery: a case study of a re-admitted patient.",
"abstract": "BACKGROUND\nService recovery refers to an organizations entire process for facilitating resolution of dissatisfactions, whether or not visib... |
JMIR mHealth and uHealth | 28778851 | PMC5562934 | 10.2196/mhealth.7521 | Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation | BackgroundInertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problems do not possess the necessary technical background for this feature-set development.ObjectiveThe study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning. This can be done by using a pretrained convolutional neural network (CNN) developed for machine vision purposes for exercise classification effort. The new method should simply require researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network.MethodsWe applied a CNN, an established machine vision technique, to the task of ED. Tensorflow, a high-level framework for machine learning, was used to facilitate infrastructure needs. 
Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was collected. The ability of the proposed method to automatically classify the exercise being completed was assessed using this dataset. For comparative purposes, classification using the same dataset was also performed using the more conventional approach of feature-extraction and classification using random forest classifiers.ResultsWith the collected dataset and the proposed method, the different exercises could be recognized with a 95.89% (3827/3991) accuracy, which is competitive with current state-of-the-art techniques in ED.ConclusionsThe high level of accuracy attained with the proposed approach indicates that the waveform morphologies in the time-series plots for each of the exercises is sufficiently distinct among the participants to allow the use of machine vision approaches. The use of high-level machine learning frameworks, coupled with the novel use of machine vision techniques instead of complex manually crafted features, may facilitate access to research in the HAR field for individuals without extensive digital signal processing or machine learning backgrounds. 
| Related WorkThe three main topics in this section are as follows: (1) a brief overview of the current human activity recognition (HAR) and exercise detection (ED) literature, (2) an account of some of the newer advances in the field that are using neural networks for certain parts of the feature discovery and reduction process, and (3) an introduction to transfer learning, highlighting its benefits in terms of time and resource savings, and working with smaller datasets.Activity Classification for Inertial Sensor DataOver the past 15 years, inertial sensors have become increasingly ubiquitous due to their presence in mobile phones and wearable activity trackers [2]. This has enabled countless applications in the monitoring of human activity and performance spanning applications in general HAR, gait analysis, the military field, the medical field, and exercise recognition and analysis [3-6]. Across all these application spaces, there are common challenges and steps which must be overcome and implemented to successfully create functional motion classification systems.Human activity recognition with wearable sensors usually pertains to the detection of gross motor movements such as walking, jogging, cycling, swimming, and sleeping [5,7]. In this field of motion tracking with inertial sensors, the key challenges are often considered to be (1) the selection of the attributes to be measured; (2) the construction of a portable, unobtrusive, and inexpensive data acquisition system; (3) the design of feature extraction and inference methods; (4) the collection of data under realistic conditions; (5) the flexibility to support new users without the need for retraining the system; and (6) the implementation in mobile devices meeting energy and processing requirements [3,7]. 
With the ever-increasing computational power and battery life of mobile devices, many of these challenges are becoming easier to overcome.Whereas system functionality is dependent on hardware constraints, the accuracy, sensitivity, and specificity of HAR systems are most reliant on building large, balanced, labeled datasets; the identification of strong features for classification; and the selection of the best machine learning method for each application [3,8-10]. Investigating the best features and machine learning methods for each HAR application requires an individual or team appropriately skilled in signal processing and machine learning and a large amount of time. They must understand how to compute time-domain, frequency-domain, and time-frequency domain features from inertial sensor data and train and evaluate multiple machine learning methods (eg, random forests [11], support vector machines [12], k-nearest neighbors [13], and logistical regression [14]) with such features [3-5]. This means that those who may be most interested in the output of inertial sensor based activity recognition systems (eg, medical professionals, exercise professionals, and biomechanists) are unable to design and create the systems without significant engagement with machine learning experts [4].The above challenges in system design and implementation are replicated in activity recognition pertaining to more specific or acute movements. In the past decade, there has been a vast amount of work in the detection and quantification of specific rehabilitation and strength and conditioning exercises [15-17]. Such work has also endeavored to detect aberrant exercise technique and specific mistakes that system users make while exercising, which can increase their chance of injury or decrease their body’s beneficial adaptation due to the stimulus of exercise [17,18]. 
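The hand-crafted time-domain features discussed above — the step the proposed feature-free method aims to remove — are easy to illustrate for a single accelerometer axis. The particular feature set below is generic, not the one used in any cited study.

```python
import math

def time_domain_features(window):
    """A few common time-domain features for one axis of an accelerometer
    window (a list of samples, e.g. in g or m/s^2)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n          # population variance
    rms = math.sqrt(sum(x * x for x in window) / n)         # signal magnitude
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)  # zero crossings
    return {"mean": mean, "std": math.sqrt(var), "rms": rms, "zc": zc}
```

In a conventional pipeline, such features are computed per sliding window and per axis, concatenated, and passed to a classifier such as a random forest.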
The key steps in the development of such systems have been recently outlined as (1) inertial sensor data collection, (2) data preprocessing, (3) feature extraction, and (4) classification (Figure 1) [4]. Whereas the first step can generally be completed by exercise professionals (eg, physiotherapists and strength and conditioning coaches), the remaining steps require skills outside those included in the training of such experts. Similarly, when analyzing gait with wearable sensors, feature extraction and classification have been highlighted as essential in the development of each application [19,20]. This again limits the type of professional who can create such systems and the rate at which hypotheses for new systems can be tested. Figure 1: Steps involved in the development of an inertial measurement unit (IMU)-based exercise classification system. Neural Networks and Activity Recognition. In the past few years, CNNs have been applied in a variety of manners to HAR, in both the fields of ambient and wearable sensing. Mo et al applied a novel approach utilizing machine vision methods to recognize twelve daily living tasks with the Microsoft Kinect. Rather than extract features from the Kinect data streams, they developed 144×48 images using 48 successive frames from skeleton data and 15×3 joint position coordinates and 11×3×3 joint rotation matrices. These images were then used as input to a multilayer CNN which automatically extracted features from the images that were fed into a multilayer perceptron for classification [21]. Stefic and Patras utilized CNNs to extract areas of gaze fixation in raw image training data as participants watched videos of multiple activities [22]. This produced strong results in identifying salient regions of images that were then used for action recognition.
Ma et al also combined a variety of CNNs to complete tasks, such as segmenting hands and objects from first-person camera images and then using these segmented images and motion images to train an action-based and motion-based CNN [23]. This novel use of CNNs allowed an increase in activity recognition rates of 6.6%, on average. These research efforts demonstrated the power of utilizing CNNs in multiple ways for HAR.Research utilizing CNNs for HAR with wearable inertial sensors has also been published recently. Zeng et al implemented a method based on CNNs which captures the local dependency and scale invariance of an inertial sensor signal [24]. This allows features for activity recognition to be identified automatically. The motivation for developing this method was the difficulties in identifying strong features for HAR. Yang et al also highlighted the challenge and importance of identifying strong features for HAR [25]. They also employed CNNs for feature learning from raw inertial sensor signals. The strength of CNNs in HAR was again demonstrated here as its use in this circumstance outperformed other HAR algorithms, on multiple datasets, which utilized heuristic hand-crafting of features or shallow learning architectures for feature learning. Radu et al also recently demonstrated that the use of CNNs to identify discriminative features for HAR when using multiple sensor inputs from various mobile phones and smartwatches, which have different sampling rates, data generation models, and sensitivities, outperforms classic methods of identifying such features [26]. The implementation of such feature learning techniques with CNNs is clearly beneficial but is complex and may not be suitable for HAR system developers without strong experience in machine learning and DSP. From a CNN perspective, these results are interesting and suggest significant scope for further exploration for machine learning researchers. 
However, for the purposes of this paper, their inclusion is to both succinctly acknowledge that CNNs have been applied to HAR previously and to distinguish the present approach, which seeks to use well-developed CNN platforms tailored for machine vision tasks in a transfer learning context for HAR, using basic time series plots as the only user-created features. Transfer Learning in Machine Vision. Deep learning-based machine vision techniques are used in many disciplines, from speech, video, and audio processing [27], through to HAR [21] and cancer research [28]. Training deep neural networks is a time consuming and resource intensive task, not only needing specialized hardware (graphics processing unit [GPU]) but also large datasets of labeled data. Unlike other machine learning techniques, once the training work is completed, querying the resulting models to predict results on new data is fast. In addition, trained networks can be repurposed for other specific uses which are not required to be known in advance of the initial training [29]. This arises from the generalized vision capabilities that can emerge with suitable training. More precisely, each layer of the network learns a number of features from the input data and that knowledge is refined through iterations. In fact, the learning that happens at different layers seems to be nonspecific to the dataset, including the identification of simple edges in the first few layers, the subsequent identification of boundaries and shapes, and growing toward object identification in the last few layers. These learned visual operators are applicable to other sets of data [30]. Transfer learning then is the generic name given to a classification effort in which a pretrained network is reused for a task for which it was not specifically trained.
Deep learning frameworks such as Caffe [31] and TensorFlow can make use of pretrained networks, many of which have been made available by researchers in repositories such as the Caffe Model Zoo, available in its GitHub repository. Retraining not only requires a fraction of the time that a full training session would need (minutes or hours instead of weeks) but, more importantly in many cases, allows for the use of much smaller datasets. An example of this is the Inception model provided by Google, whose engineers reportedly spent several weeks training on ImageNet [32] (a dataset of over 14 million images in over 2 thousand categories), using multiple GPUs and the TensorFlow framework. In their example [33], they use on the order of 3500 pictures of flowers in 5 different categories to retrain the generic model, producing a model with a fair accuracy rating on new data. In fact, during the retraining stage, the network is left almost intact. The final classifier is the only part that is fully replaced, and "bottlenecks" (the layer before the final one) are calculated to integrate the new training data into the already "cognizant" network. After that, the last layer is trained to work with the new classification categories. This happens in image batches of a size that can be adapted to the needs of the new dataset (alongside other hyperparameters such as learning rate and training steps). Each step of the training process outputs values for training accuracy, validation accuracy, and cross entropy. A large difference between training and validation accuracy can indicate potential "overfitting" of the data, which can be a problem especially with small datasets, whereas the cross entropy is a loss function that provides an indication of how the training is progressing (decreasing values are expected).
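The "retrain only the last layer" idea described above can be illustrated without a deep learning framework: treat the pretrained network as a frozen feature extractor that maps each image to a fixed bottleneck vector, then fit only a new softmax output layer on those vectors while tracking the cross-entropy loss. Everything below (the toy data, dimensions, learning rate) is invented for illustration; a real pipeline would obtain the bottleneck vectors from Inception via TensorFlow's retraining example.

```python
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def retrain_last_layer(bottlenecks, labels, n_classes, lr=0.5, steps=200):
    """Full-batch gradient descent on a new softmax layer over frozen
    bottleneck features. Returns (weights, biases, final mean cross-entropy)."""
    dim, n = len(bottlenecks[0]), len(bottlenecks)
    w = [[0.0] * dim for _ in range(n_classes)]
    b = [0.0] * n_classes
    loss = 0.0
    for _ in range(steps):
        loss = 0.0
        gw = [[0.0] * dim for _ in range(n_classes)]
        gb = [0.0] * n_classes
        for x, y in zip(bottlenecks, labels):
            p = softmax([sum(wc * xc for wc, xc in zip(w[c], x)) + b[c]
                         for c in range(n_classes)])
            loss -= math.log(p[y]) / n            # mean cross-entropy
            for c in range(n_classes):
                err = p[c] - (1.0 if c == y else 0.0)   # dCE/dlogit for softmax
                gb[c] += err / n
                for i, xc in enumerate(x):
                    gw[c][i] += err * xc / n
        for c in range(n_classes):
            b[c] -= lr * gb[c]
            for i in range(dim):
                w[c][i] -= lr * gw[c][i]
    return w, b, loss

def predict(w, b, x):
    scores = [sum(wc * xc for wc, xc in zip(w[c], x)) + b[c] for c in range(len(w))]
    return scores.index(max(scores))
```

The cross-entropy returned by each pass plays the role of the decreasing loss readout mentioned above, while the frozen bottlenecks make each retraining step cheap.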
"27782290",
"19342767",
"25004153",
"21258659",
"25246403",
"24807526",
"21816105",
"21889629",
"22520559",
"16212968",
"22438763",
"26017442",
"24579166",
"22498149",
"19936042"
] | [
{
"pmid": "27782290",
"title": "Technology in Rehabilitation: Evaluating the Single Leg Squat Exercise with Wearable Inertial Measurement Units.",
"abstract": "BACKGROUND\nThe single leg squat (SLS) is a common lower limb rehabilitation exercise. It is also frequently used as an evaluative exercise to s... |
Frontiers in Neurorobotics | 28883790 | PMC5573722 | 10.3389/fnbot.2017.00043 | Impedance Control for Robotic Rehabilitation: A Robust Markovian Approach | The human-robot interaction has played an important role in rehabilitation robotics and impedance control has been used in the regulation of interaction forces between the robot actuator and human limbs. Series elastic actuators (SEAs) have been an efficient solution in the design of this kind of robotic application. Standard implementations of impedance control with SEAs require an internal force control loop for guaranteeing the desired impedance output. However, nonlinearities and uncertainties hamper such a guarantee of an accurate force level in this human-robot interaction. This paper addresses the dependence of the impedance control performance on the force control and proposes a control approach that improves the force control robustness. A unified model of the human-robot system that considers the ankle impedance by a second-order dynamics subject to uncertainties in the stiffness, damping, and inertia parameters has been developed. Fixed, resistive, and passive operation modes of the robotics system were defined, where transition probabilities among the modes were modeled through a Markov chain. A robust regulator for Markovian jump linear systems was used in the design of the force control. Experimental results show the approach improves the impedance control performance. For comparison purposes, a standard H∞ force controller based on the fixed operation mode has also been designed. The Markovian control approach outperformed the H∞ control when all operation modes were taken into account. | 5.1. Related workThe impedance control configuration used is based on Hogan (1985) and it is aimed at the regulation of the dynamical behavior in the interaction port by variables that do not depend on the environment. 
The actuator together with the controller is modeled as an impedance, Zr, with velocity inputs (angular) and force outputs (torque). The environment is considered an admittance, Ye, in the interaction port. Colgate and Hogan (1988, 1989) presented sufficient conditions for the determination of stability of two coupled systems and explained how two physically coupled systems with Zr and Ye with passive port functions can guarantee stability. These concepts have been useful for the implementation of interaction controls for almost three decades. The stability of two coupled systems is given by the Zr and Ye eigenvalues, and the performance is evaluated through the impedance Zr. Buerger and Hogan (2007) described a methodology in which an interaction control is designed for a robot module used for rehabilitation purposes. They considered an environment with restricted uncertain characteristics; therefore, the admittance is rewritten as Ye(s) = Yn(s) + W(s)Δ(s). The authors also used a second-order dynamics to model the stiffness, damping, and inertia of the human parameters. Complementary stability for interacting systems was defined, where stability is determined by an environment subject to uncertainties. Therefore, a coupled stability problem is considered a robust stability problem. Regarding the human modeling, the dynamic properties of the lower limbs and muscular activities vary considerably among subjects. This is relevant since SRPAR has been designed for users who suffer from diseases that affect the human motor control system, e.g., stroke and other conditions that cause hemiplegia. Typically, such diseases change stiffness and damping in the ankle and knee joints, hence producing spasticity or hypertonia (Lin et al., 2006; Chow et al., 2012). Therefore, the development of a control strategy that guarantees a safe interaction between patient and platform, mainly in view of the uncertainties related to the human being, is fundamental. Li et al. (2017) and Pan et al.
(2017) proposed adaptive control schemes for SEA-driven robots. They considered two operation modes in the adaptation process, namely robot-in-charge and human-in-charge, which are closely related to the passive and resistive operation modes, respectively, proposed in this paper. However, the control adaptation is based on changes in the desired position input of the SEA controller and estimation of coordinate accelerations through nonlinear filtering. In human-robot interaction control systems, the efficiency of the force actuator operation deserves special attention. Although SEAs are characterized by a low output impedance, an important requirement for improving such efficiency is the achievement of a precise and proportional output torque with respect to the desired input. Pratt (2002), Au et al. (2006), Kong et al. (2009), Mehling and O'Malley (2014), and dos Santos et al. (2015) developed force controllers for ankle actuators using SEA. In this paper, we proposed a force control methodology that can deal with system uncertainties and guarantee robust mean square stability. Similar performance was obtained across the different tests performed. Accuracies of 98.14% for resistive mode and 92.47% for passive mode were obtained in the pure stiffness configuration. In the stiffness-damping configuration, with Kv = 15 and Bv = 5, the accuracy obtained in the resistive case was 97.47% for stiffness and 97.2% for damping, and in the passive case was 93.94% for stiffness and 94% for damping. In contrast, using a fixed-gain control approach based on H∞ synthesis, the performance was not similar among operation modes. We showed that this strategy can guarantee coupled stability; nevertheless, force control performance decreases when the system is in the passive operation mode. This is reflected in the impedance control accuracy for the pure stiffness configuration, falling from 90.4% in the resistive mode to 46.67% in the passive mode.
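The human-side model referenced above can be written compactly. The following is a sketch consistent with the text's description — a second-order ankle dynamics with uncertain inertia I, damping B, and stiffness K, and the uncertain-admittance decomposition attributed to Buerger and Hogan; the symbols τ and θ̇ for ankle torque and angular velocity, and the interval bounds, are our notation, not taken from the paper.

```latex
% Ankle impedance at the interaction port (angular velocity in, torque out)
Z_e(s) = \frac{\tau(s)}{\dot{\theta}(s)} = I\,s + B + \frac{K}{s},
\qquad I \in [\underline{I}, \overline{I}],\;
        B \in [\underline{B}, \overline{B}],\;
        K \in [\underline{K}, \overline{K}]

% Environment admittance with restricted uncertainty (Buerger and Hogan, 2007)
Y_e(s) = Z_e(s)^{-1} = Y_n(s) + W(s)\,\Delta(s), \qquad \|\Delta\|_\infty \le 1
```

Under this decomposition, coupled stability with the uncertain human admittance becomes a robust stability condition on the pair (Zr, Ye), which is what the Markovian and H∞ designs above must satisfy.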
"12048669",
"24396811",
"22325644",
"1088404",
"19004697",
"27766187",
"16571398",
"12098155",
"21708707"
] | [
{
"pmid": "12048669",
"title": "Risks of falls in subjects with multiple sclerosis.",
"abstract": "OBJECTIVES\nTo quantify fall risk among patients with multiple sclerosis (MS) and to report the importance of variables associated with falls.\n\n\nDESIGN\nRetrospective case-control study design with a 2-... |
Frontiers in Neurorobotics | 28900394 | PMC5581837 | 10.3389/fnbot.2017.00046 | Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision | We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. | Related workOur proposal involves the use of visual graphs, in which each node stores images associated to landmarks, and the arcs represent the paths that the UAV must follow to reach the next node. Therefore, these graphs can be used to generate the best path for an UAV to reach a specific destination, as it has been suggested in other works. 
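The image-entropy cue described in the abstract above is simple to compute from an intensity histogram. The sketch below is our own simplification; the paper's exact entropy formulation and thresholds may differ.

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of a grayscale intensity histogram.

    A low value suggests a single dominant object or uniform background
    (a landmark candidate); a high value suggests a cluttered scene with
    several objects, which the navigation logic can treat as an obstacle."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())
```

A perfectly flat image scores 0 bits and a histogram spread uniformly over all 256 gray levels scores 8 bits; a threshold between these extremes separates landmark-candidate frames from cluttered ones.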
Practically every traditional method used in ground robots for trajectory planning has also been considered for aerial ones (Goerzen et al., 2010). Some of those methods use graph-like models, and they generally rely on algorithms such as improved versions of the classic A* (MacAllister et al., 2013; Zhan et al., 2014) and Rapidly-exploring Random Tree Star (RRT) (Noreen et al., 2016) or reinforcement learning (RL) (Sharma and Taylor, 2012) for planning. RL is even used by methods that consider the path-planning task in cooperative multi-vehicle systems (Wang and Phillips, 2014), in which coordinated maneuvers are required (Lopez-Guede and Graña, 2015). On the other hand, the obstacle avoidance task is also addressed here; it is particularly important in the UAV domain because a collision in flight almost certainly implies danger and the partial or total destruction of the vehicle. Thus, the Collision Avoidance System (CAS) (Albaker and Rahim, 2009; Pham et al., 2015) is a fundamental part of control systems. Its goal is to allow UAVs to operate safely within the non-segregated civil and military airspace on a routine basis. Basically, the CAS must detect and predict traffic conflicts in order to perform a maneuver that prevents a possible collision. Specific approaches are usually defined for outdoor or indoor vehicles. Predefined collision avoidance schemes based on sets of rules and protocols are mainly used outdoors (Bilimoria et al., 1996), although classic methods such as artificial potential fields are also employed (Gudmundsson, 2016). These and other techniques well known from wheeled and legged robots are also considered for use in UAVs (Bhavesh, 2015). Since most current UAVs have monocular onboard cameras as their main source of information, several computer vision techniques are used. A combination of the Canny edge detector and the Hough transform is used to identify corridors and staircases for trajectory planning (Bills et al., 2011). 
Also, feature point detectors such as SURF (Aguilar et al., 2017) and SIFT (Al-Kaff et al., 2017) are used to analyze the images and to determine collision-free trajectories. However, the most common technique is optic flow (Zufferey and Floreano, 2006; Zufferey et al., 2006; Beyeler et al., 2007; Green and Oh, 2008; Sagar and Visser, 2014; Bhavesh, 2015; Simpson and Sabo, 2016). Sometimes optic flow is combined with artificial neural networks (Oh et al., 2004) or other computer vision techniques (Soundararaj et al., 2009). Some of those techniques (de Croon, 2012) are based on the analysis of image textures to estimate the number of objects captured. Finally, as in other areas of research, deep learning techniques are also being used to explore alternatives to the traditional approaches (Yang et al., 2017). | [
"28481277",
"10607637"
] | [
{
"pmid": "28481277",
"title": "Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs.",
"abstract": "One of the most challenging problems in the domain of autonomous aerial vehicles is the designing of a robust real-time obstacle detection and avoidance... |
Research Synthesis Methods | 28677322 | PMC5589498 | 10.1002/jrsm.1252 | An exploration of crowdsourcing citation screening for systematic reviews | Systematic reviews are increasingly used to inform health care decisions, but are expensive to produce. We explore the use of crowdsourcing (distributing tasks to untrained workers via the web) to reduce the cost of screening citations. We used Amazon Mechanical Turk as our platform and 4 previously conducted systematic reviews as examples. For each citation, workers answered 4 or 5 questions that were equivalent to the eligibility criteria. We aggregated responses from multiple workers into an overall decision to include or exclude the citation using 1 of 9 algorithms and compared the performance of these algorithms to the corresponding decisions of trained experts. The most inclusive algorithm (designating a citation as relevant if any worker did) identified 95% to 99% of the citations that were ultimately included in the reviews while excluding 68% to 82% of irrelevant citations. Other algorithms increased the fraction of irrelevant articles excluded at some cost to the inclusion of relevant studies. Crowdworkers completed screening in 4 to 17 days, costing $460 to $2220, a cost reduction of up to 88% compared to trained experts. Crowdsourcing may represent a useful approach to reducing the cost of identifying literature for systematic reviews. | 2.1Related workOver the past decade, crowdsourcing has become an established methodology across a diverse set of domains.6 Indeed, researchers have demonstrated the promise of harnessing the “wisdom of the crowd” with respect to everything from conducting user studies7 to aiding disaster relief.8, 9
Perhaps most relevant to the task of citation screening for systematic reviews, crowdsourcing has also been used extensively to collect relevance judgements to build and evaluate information retrieval (IR) systems.10 In such efforts, workers are asked to determine how relevant retrieved documents are to a given query. In the context of IR system evaluation, crowdsourcing has now been established as a reliable, low cost means of acquiring “gold standard” relevance judgements.11 Using crowdsourcing to acquire assessments of the relevance of articles with respect to systematic reviews is thus a natural extension of this prior work. However, the notion of “relevance” is stricter here than in general IR tasks, because of a well‐defined set of inclusion criteria (codified in the specific questions).A related line of work concerns “citizen science” initiatives.12 These involve interested remote, distributed individuals—usually volunteers—to contribute to a problem by completing small tasks. A prominent example of this is the Galaxy zoo project,13 in which crowdworkers were tasked with classifying galaxies by their morphological features. This project has been immensely successful in turn demonstrating that having laypeople volunteer to perform scientific tasks is an efficient, scalable approach. While we have used paid workers in the present work, we believe that in light of the nature of systematic reviews, recruiting volunteer workers (citizen scientists) may represent a promising future direction.Indeed, members of the Cochrane collaboration have investigated leveraging volunteers to identify randomized controlled trials.14 This project has been remarkable in its success; over 200 000 articles have now been labeled as being randomized controlled trials (or not). Noel‐Stor et al of the Cochrane collaboration have also explored harnessing distributed workers to screen a small set of 250 citations for a diagnostic test accuracy review (Noel‐Stor, 2013). 
In this case, however, 92% of the workers had some knowledge of the subject matter, which contrasts with the use of laypeople in our project. The above work has demonstrated that crowdsourcing is a useful approach generally, and for some large‐scale scientific tasks specifically. However, as far as we are aware, ours is the first study to investigate crowdsourcing the citation screening for specific systematic reviews to laypersons. | [
"10517715",
"25588314",
"23305843",
"24236626",
"19586682",
"19755348"
] | [
{
"pmid": "25588314",
"title": "Using text mining for study identification in systematic reviews: a systematic review of current approaches.",
"abstract": "BACKGROUND\nThe large and growing number of published studies, and their increasing rate of publication, makes the task of identifying relevant stud... |
Scientific Reports | 28924196 | PMC5603591 | 10.1038/s41598-017-12141-9 | Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection | As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography. | Related WorkA wide range of approaches for projection alignment are employed in electron tomography14. The most common approach is to use cross-correlation between projections acquired at adjacent rotation angles15–18, or correlation of vertical variations in the mass of the sample19. However, two features in 3D space can have their apparent separation change in projections as a function of rotation angle, leading to ambiguities on which feature dominates the cross-correlation. These ambiguities exponentiate as the number of features are increased, or as the rotation angles between projection images are widened. As a result, while cross-correlation alignment can remove frame-to-frame “jitter” in tomographic datasets, it cannot be relied upon to find a common rotation axis for complete set of projections20.When specimens are mounted within semi-transparent capillary holders, one can use high-contrast capillary edges to correct for jitter21. 
An alternative approach is to place fiducial markers such as small gold beads22 or silica spheres23 on the specimen mount or directly on the specimen, and identify them either manually or automatically24,25; their positions can then be used to correct for alignment errors26. This approach is quite successful, and is frequently employed; however, it comes at the cost of adding objects that can complicate sample preparation, obscure specimen features in certain projections, and add material that may complicate analytical methods such as the analysis of fluorescent x-rays. For those situations where the addition of fiducial marker materials is problematic, one can instead use a variety of feature detection schemes to identify marker positions intrinsic to the specimen, after which various alignment procedures are applied27–30 including the use of bundle adjustment31. Among these natural feature selection schemes are object corner detection32, wavelet-based detection33, Canny edge detection34, feature curvature detection35, or a common-line approach for registration of features in Fourier space36. One can use Markov random fields37 or the Scale-Invariant Feature Transform (SIFT)38,39 to refine the correspondence of features throughout the tomographic dataset. Finally, in the much simpler case of very sparse and local features, one can simply fit sinusoidal curves onto features in the sinogram representation of a set of line projections, and shift projections onto the fitted sinusoid curve40. These methods are widely employed with good success in electron tomography, where the mean free path for inelastic scattering is often in the 100–200 nm range even for biological specimens with low atomic number, so that it is rare to study samples thicker than about 1 μm (high angle dark field can allow scanning transmission electron microscopes, or STEMs, to image somewhat thicker materials41,42). The situation can be more challenging in nanotomography with x-ray microscopes, where the great penetration of x-rays means that samples tens of micrometers or more in size can be studied43–48. This freedom to work with larger specimens means that while feature-based alignment can still be employed for imaging thin specimens with low contrast10,43, in STEM tomography and especially in hard x-ray nanotomography it becomes increasingly challenging to track fiducials or intrinsic features due to the overlap of a large number of features in depth as seen from any one projection. In all of the above techniques, the primary strategy is to perform an alignment that is as accurate as possible before tomographic reconstruction. In the last few decades, a new set of automatic alignment techniques has been introduced based on a “bootstrap” process49, now commonly referred to as “iterative reprojection”. These techniques attempt to achieve simultaneous alignment and reconstruction through an iterative refinement process. They are based on the fact that the measurement process (forward model) and object reconstruction (inverse model) should be consistent only for a correct alignment geometry. Already in its initial implementation for electron tomography49, a multiscale version of the method was used, where first a downsampled version of the projections is used to generate a lower-resolution 3D object reconstruction for a first pass of alignment; this first pass with a smaller dataset can align large features in images and works quickly, after which one can improve the alignment at higher resolution until finally the full-resolution dataset is used49,50. 
A variation on this approach is to generate a low-quality object reconstruction from a single projection and then use all the remaining projections for another object reconstruction, and to then align these 3D objects51. One can use projection cross-correlation and the common-line approach for an initial alignment so as to improve convergence times52. Within an optimization framework, iterative reprojection can incorporate a variety of criteria to seek optimal alignment parameters, including contrast maximization in the overall image49 or in sub-tomogram features53, cross-correlation of reprojection and original projection images54, and for cost function reduction a quasi-Newton distance minimization55 or a Levenberg-Marquardt distance minimization56. While initially developed for electron nanotomography, iterative reprojection schemes have also been applied in x-ray microscopy10,57 and with commercial x-ray microtomography systems58,59. As was noted above, all of these prior approaches used variations on Algorithm 1, whereas our approach using Algorithm 2 produces faster convergence rates and more robust reconstructions, and can yield better accuracy especially in the case of tomograms with a limited set of projection angles. | [
"22852697",
"23556845",
"5492997",
"5070894",
"18238264",
"22155289",
"20888968",
"4449573",
"20382542",
"22108985",
"6382732",
"27577781",
"6836293",
"24971982",
"21075783",
"26433028",
"2063493",
"10425746",
"19397789",
"17651988",
"20117216",
"16542854",
"11356060",
... | [
{
"pmid": "22852697",
"title": "An instrument for 3D x-ray nano-imaging.",
"abstract": "We present an instrument dedicated to 3D scanning x-ray microscopy, allowing a sample to be precisely scanned through a beam while the angle of x-ray incidence can be changed. The position of the sample is controlled... |
Toxics | 29051407 | PMC5606636 | 10.3390/toxics4010001 | Farmers’ Exposure to Pesticides: Toxicity Types and Ways of Prevention | Synthetic pesticides are extensively used in agriculture to control harmful pests and prevent crop yield losses or product damage. Because of high biological activity and, in certain cases, long persistence in the environment, pesticides may cause undesirable effects to human health and to the environment. Farmers are routinely exposed to high levels of pesticides, usually much greater than those of consumers. Farmers’ exposure mainly occurs during the preparation and application of the pesticide spray solutions and during the cleaning-up of spraying equipment. Farmers who mix, load, and spray pesticides can be exposed to these chemicals due to spills and splashes, direct spray contact as a result of faulty or missing protective equipment, or even drift. However, farmers can be also exposed to pesticides even when performing activities not directly related to pesticide use. Farmers who perform manual labor in areas treated with pesticides can face major exposure from direct spray, drift from neighboring fields, or by contact with pesticide residues on the crop or soil. This kind of exposure is often underestimated. The dermal and inhalation routes of entry are typically the most common routes of farmers’ exposure to pesticides. Dermal exposure during usual pesticide handling takes place in body areas that remain uncovered by protective clothing, such as the face and the hands. Farmers’ exposure to pesticides can be reduced through less use of pesticides and through the correct use of the appropriate type of personal protective equipment in all stages of pesticide handling. | 5. Pesticide-Related Work TasksPesticide use is typically associated with three basic stages: (i) mixing and loading the pesticide product, (ii) application of the spray solution, and (iii) clean-up of the spraying equipment. 
Mixing and loading are the tasks associated with the greatest intensity of pesticide exposure, given that during this phase farmers are exposed to the concentrated product and, therefore, often face high exposure events (e.g., spills). However, the total exposure during pesticide application may exceed that incurred during mixing and loading, given that pesticide application typically takes more time than the tasks of mixing and loading. Pesticide drift is also a permanent hazard in pesticide use, because it exists even in the most careful applications, and therefore, can increase the possibility of detrimental effects of pesticide use on the users and the environment [28]. There is also evidence that cleaning the equipment after spraying may also be an important source of exposure. The level of pesticide exposure to the operator depends on the type of spraying equipment used. Hand spraying with wide-area spray nozzles (when large areas need to be treated) is associated with greater exposure to the operator than narrowly focused spray nozzles. When pesticides are applied with tractors, the application equipment is mounted directly on the tractor and is associated with a higher degree of operator exposure than when the spray equipment is attached to a trailer. Pesticide deposition on different parts of the operator’s body may vary largely due to differences in individual work habits. Several studies on the contamination of the body in pesticide applicators showed that the hands and the forearms suffer the greatest pesticide contamination during preparation and application of pesticides. However, other body parts such as the thighs, the forearms, the chest, and the back may also be subject to significant contamination.Clean-up of the spraying equipment is an important task in the use of pesticides. The time given to the task of cleaning may occupy a considerable part of the basic stages of pesticide handling [29,30]. 
Despite considerable variation among farm workers, equipment cleaning has been found to contribute greatly to workers’ daily dermal exposure [29]. Unexpected events, such as spills and splashes, are also a major source of dermal contamination for pesticide applicators, and often the exposure from these events can result in significant acute and long-term health effects [30]. Spills and splashes usually occur during mixing or loading and application, but may also appear in the stage of equipment clean-up [29]. Farmers (or farm workers) who make the spray solutions and apply pesticides have been at the center of attention of most research thus far, but often farmers re-entering the sprayed fields may also face pesticide exposure, sometimes to significant levels [31,32]. It is not surprising that re-entry farm workers may face even greater exposure than pesticide applicators, possibly because safety training and the use of PPE are usually less, and the duration of exposure may be greater than that of the applicators [31,32,33]. Exposure by re-entry in the sprayed fields may become a serious problem if farm workers re-enter the treated fields soon after pesticide application [34]. Spray drift from neighboring fields and overexposure events of this kind, each involving groups of workers, have been documented as inadvertent events of farmers’ exposure to pesticides [35]. | [
"25096494",
"24462777",
"23664459",
"17017381",
"7713022",
"10414791",
"24287863",
"21655127",
"18666136",
"11246121",
"21217838",
"26296070",
"25666658",
"11884232",
"12826473",
"24993452",
"23820846",
"24106643",
"16175199",
"19022871",
"11385639",
"15551369",
"11675623... | [
{
"pmid": "25096494",
"title": "Global trends of research on emerging contaminants in the environment and humans: a literature assimilation.",
"abstract": "Available literature data on five typical groups of emerging contaminants (EMCs), i.e., chlorinated paraffins (CPs), dechlorane plus and related com... |
Scientific Reports | 28959011 | PMC5620056 | 10.1038/s41598-017-12569-z | Discriminative Scale Learning (DiScrn): Applications to Prostate Cancer Detection from MRI and Needle Biopsies | There has been recent substantial interest in extracting sub-visual features from medical images for improved disease characterization compared to what might be achievable via visual inspection alone. Features such as Haralick and Gabor can provide a multi-scale representation of the original image by extracting measurements across differently sized neighborhoods. While these multi-scale features are effective, on large-scale digital pathological images, the process of extracting these features is computationally expensive. Moreover for different problems, different scales and neighborhood sizes may be more or less important and thus a large number of features extracted might end up being redundant. In this paper, we present a Discriminative Scale learning (DiScrn) approach that attempts to automatically identify the distinctive scales at which features are able to best separate cancerous from non-cancerous regions on both radiologic and digital pathology tissue images. To evaluate the efficacy of our approach, our approach was employed to detect presence and extent of prostate cancer on a total of 60 MRI and digitized histopathology images. Compared to a multi-scale feature analysis approach invoking features across all scales, DiScrn achieved 66% computational efficiency while also achieving comparable or even better classifier performance. | Related Work and Brief Overview of DiScrnScale selection has been a key research issue in the computer vision community since the 1990s15. 
Early investigations in scale selection were based on identifying scale-invariant locations of interest10,13,16,17. Although the idea of locating high-interest points is appealing, it is not feasible for applications where every image pixel needs to be investigated, e.g., scenarios where one is attempting to identify the spatial location of cancer presence on a radiographic image. In these settings, identifying a single, most discriminating scale associated with each individual image pixel is computationally untenable. To address this challenge, Wang et al.18 presented a scale learning approach for finding the most discriminative scales for Local Binary Patterns (LBP) for prostate cancer detection on T2W MRI. While a number of recent papers have focused on computer-assisted and radiomic analysis of prostate cancer from MRI19,20, these approaches typically involve extraction of a number of different texture features (Haralick co-occurrence, Gabor filter, and LBP texture features) to define “signatures” for the cancerous and non-cancerous classes. Similarly, some researchers have taken a computer-based feature analysis approach to detecting and grading prostate cancer from digitized prostate pathology images using shape, morphologic, and texture-based features2,6,21–23. However, with all these approaches, features are typically extracted either at a single scale or across multiple scales, and feature selection is then employed to identify the most discriminating scales2,3. In this paper we present a new generalized discriminative scale learning (DiScrn) framework that can be applied across an arbitrary number of feature scales. The conventional dissimilarity measurement for multi-scale features is to assign a uniform weight to each scale. Building on this weighting idea, DiScrn invokes a scale selection scheme that retains the scales associated with large weights and ignores those with relatively trivial weights. 
Figure 1 illustrates the pipeline of the new DiScrn approach. It consists of two stages: training and testing. At each stage, we first perform superpixel detection on each image to cluster homogeneous neighboring pixels, which greatly reduces the overall computational cost of the approach. At the training stage, we sample an equal number of positive and negative pixels from each of the labeled training images via the superpixel-based approach. We subsequently extract four types of multi-scale features for each pixel: local binary patterns (LBP)12, Gabor wavelets (Gabor)8, Haralick9, and Pyramid Histogram of Visual Words (PHOW)24. The discriminability of these features has previously been substantively demonstrated for medical images2,3. For each feature type, the corresponding most discriminating scales are independently learned via the DiScrn algorithm. Figure 1: Pipeline of the new DiScrn approach. At the training stage, superpixel detection is performed. An equal number of positive and negative pixels based on the superpixels (see details in Section III.D) are selected during the training phase. Up to N different textural features are extracted at various scales for each sampled pixel. For each feature class, its most discriminating scales are learned via DiScrn. Subsequently, a cancer/non-cancer classifier is trained only with the features extracted at the learned scales. At the testing stage, with superpixels detected on a test image, the features and the corresponding scales identified during the learning phase are employed for creating new image representations. Exhaustive labeling over the entire input image is performed to generate a probability map reflecting the probability of cancerous and non-cancerous regions. Majority voting within each superpixel is finally applied to smooth the generated probability map.
DiScrn differs from traditional feature selection approaches25–28 in that it specifically aims at selecting the most discriminative feature scales, whereas traditional feature selection aims to directly select the most discriminating subset of features. Both can reduce the number of features, and therefore may significantly reduce the computational burden associated with feature extraction. However, only DiScrn guarantees that only the most predictive feature scales will be used for subsequent feature extraction during the testing phase, which is particularly beneficial for parallel feature extraction. Once the DiScrn approach has been applied, texture features are extracted only at the learned scales for both classifier training and subsequent detection. In particular, cancerous regions are detected via exhaustive classification over the entire input image. This results in a statistical probability heatmap, where coordinates having higher probabilities represent cancerous regions. Majority voting within each superpixel is finally applied to smooth the generated probability map. To evaluate the performance of DiScrn, multi-site datasets (MRI and histopathology) are employed for testing. | [
"20570758",
"17948727",
"18812252",
"20443509",
"20493759",
"19164079",
"21988838",
"17482752",
"16350920",
"22214541",
"24875018",
"25203987",
"16119262",
"22641706",
"21911913",
"21255974",
"21626933",
"22337003",
"21960175",
"23294985"
] | [
{
"pmid": "20570758",
"title": "A boosted Bayesian multiresolution classifier for prostate cancer detection from digitized needle biopsies.",
"abstract": "Diagnosis of prostate cancer (CaP) currently involves examining tissue samples for CaP presence and extent via a microscope, a time-consuming and sub... |
Frontiers in Plant Science | 29033961 | PMC5625571 | 10.3389/fpls.2017.01680 | Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction | Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3d reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called “Octree” with an efficient image region integration scheme called “Integral image.” Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy “Mark and refine.” Speedup is about a factor 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. | 1.1. Related workMeasuring plant geometry from single view-point 2D images often suffers from insufficient information, especially when plant organs occlude each other (self-occlusion). In order to achieve more detailed information and recover the plants 3D geometric structure volume carving is a well established method to generate 3D point clouds of plant shoots (Koenderink et al., 2009; Golbach et al., 2015; Klodt and Cremers, 2015), seeds (Roussel et al., 2015, 2016; Jahnke et al., 2016), and roots (Clark et al., 2011; Zheng et al., 2011; Topp et al., 2013). 
Volume carving can be applied in high-throughput scenarios (Golbach et al., 2015): for the reconstruction of relatively simple plant structures like tomato seedlings, image reconstruction takes ~25–60 ms, based on a well-thought-out camera geometry using 10 cameras and a suitably low voxel resolution of 240 × 240 × 300 voxels at 0.25 mm voxel width. Short reconstruction times are achieved by precomputing voxel-to-pixel projections for each of the fully calibrated cameras. However, precomputing lookup tables is not feasible for high voxel resolutions due to storage restrictions (Ladikos et al., 2008). Current implementations popular in plant sciences suffer from high computational complexity when voxel resolutions are high. We therefore implemented and tested a fast and reliable volume carving algorithm based on octrees (cmp. Klodt and Cremers, 2015) and integral images (cmp. Veksler, 2003), and investigated different refinement strategies. This work summarizes and extends our findings presented in Embgenbroich (2015). Visual hull reconstruction via volume carving is a well-known shape-from-silhouette technique (Martin and Aggarwal, 1983; Potmesil, 1987; Laurentini, 1994) and has found many applications. Octrees as a multigrid approach and integral images for reliable and fast foreground testing have also been used successfully with volume carving in medical applications (Ladikos et al., 2008) and human pose reconstruction (Kanaujia et al., 2013). Realtime applications at 512³ voxel resolution have been achieved where suitable caching strategies on GPUs can be applied, e.g., for video conferencing (Waizenegger et al., 2009). Here we demonstrate that even higher spatial resolutions are achievable on consumer computer hardware without prohibitively large computational cost. Subsequent octree-voxel-based processing allows extraction of plant structural features suitable for plant phenotypic trait extraction. | [
"27853175",
"25801304",
"24139902",
"25501589",
"26535051",
"24721154",
"23451789",
"22074787",
"21284859",
"27663410",
"21869096",
"25774205",
"27547208",
"22553969",
"27375628",
"23580618",
"17388907"
] | [
{
"pmid": "27853175",
"title": "Salinity tolerance loci revealed in rice using high-throughput non-invasive phenotyping.",
"abstract": "High-throughput phenotyping produces multiple measurements over time, which require new methods of analyses that are flexible in their quantification of plant growth an... |
Frontiers in Neuroscience | 29056897 | PMC5635061 | 10.3389/fnins.2017.00550 | Kernel-Based Relevance Analysis with Enhanced Interpretability for Detection of Brain Activity Patterns | We introduce Enhanced Kernel-based Relevance Analysis (EKRA) that aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses to a given stimulus condition. Regarding this, a Centered Kernel Alignment functional is adjusted to learning the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection by computing a relevance vector from extracted neural features to facilitating the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection to perform an additional transformation of relevant features aiming to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that allows improving the system performance while favoring the data interpretability. For the validation purpose, EKRA is tested in two well-known tasks of brain activity: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms the state-of-the-art methods regarding brain activity discrimination accuracy with the benefit of enhanced physiological interpretation about the task at hand. | 1.1. 
Related work
There are two alternative approaches to addressing the problem of a large amount of EEG data (Naeem et al., 2009): (i) Channel selection, which intends to choose a subset of electrodes contributing the most to the desired performance. Besides avoiding redundancy of non-focal/unnecessary channels, this procedure makes visual EEG monitoring more practical when the number of needed channels is small (Alotaiby et al., 2015). A significant disadvantage of decreasing the number of EEG channels is the unrealistic assumption that cortical activity is produced by EEG signals coming only from its immediate vicinity (Haufe et al., 2014). (ii) Dimensionality reduction, which projects the original feature space into a smaller space representation, aiming to reduce the overwhelming number of extracted features (Birjandtalab et al., 2017). Although either approach can be performed separately, there is a growing interest in minimizing together the number of channels and features to be handled by the classification algorithms (Martinez-Leon et al., 2015). According to the way the input data points are mapped into a lower-dimensional space, dimensionality reduction methods can be categorized as linear or non-linear. The former approaches (like Principal Component Analysis (Zajacova et al., 2015), Discriminant and Common Spatial Patterns (Liao et al., 2007; Zhang et al., 2015), and Spatio-Spectral Decomposition) are popular choices for either EEG representation case (channels or features), with the benefit of computational efficiency, numerical stabilization, and denoising capability. Nevertheless, they face a deficiency: the feature spaces extracted from EEG signals can induce significant and complex variations regarding the nonlinearity and sparsity of the manifolds, which can hardly be encoded by linear decompositions (Sturm et al., 2016). 
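For concreteness, the Centered Kernel Alignment functional that EKRA builds on (see the abstract) has a standard closed form: a normalized Frobenius inner product between centered kernel matrices. A minimal NumPy sketch under assumed choices — an RBF kernel and illustrative names, not the authors' code:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def center(K):
    """Double-center a kernel matrix: H K H with H = I - 11^T / n."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K, L):
    """Centered kernel alignment between two kernel matrices (in [0, 1])."""
    Kc, Lc = center(K), center(L)
    return float(np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc)))
```

In the supervised setting, one kernel is computed on the (projected) features and the other on the labels, and the projection is tuned to maximize this alignment.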
Moreover, based on their contribution to a linear regression model, linear dimensionality reduction methods usually select the most compact and relevant set of features, which might not be the best option for a non-linear classifier (Adeli et al., 2017). In turn, non-linear mappings can more precisely preserve the information about the local neighborhoods of data-points by introducing either locally linearized structures or pairwise distances along the subtle non-linear manifold, attempting to unfold more complex high-dimensional data as separable groups (Lee and Verleysen, 2007). Among machine learning approaches to dimensionality reduction, kernel-based analysis is promising because of the following properties (Chu et al., 2011): (i) kernel methods apply a non-linear mapping to a higher dimensional space where the original non-linear data become linear or near-linear; (ii) the kernel trick decreases the computational complexity of high dimensional data, as the parameter evaluation domain is lessened from the explicit feature space into the kernel space. In practice, an open issue is the definition of the kernel transformation that best matches the type of nonlinearity present in the application (Zimmer et al., 2015). Consequently, effort is being spent on developing metric learning that allows a kernel to adjust the importance of individual features for the task under consideration, usually exploiting a given amount of supervisory information (Hurtado-Rincón et al., 2016). Hence, kernel-based relevance analysis can use the estimated weights to highlight the features or dimensions relevant for improving the classification performance (Brockmeier et al., 2014). | [
"28120883",
"24967316",
"24165805",
"28161592",
"17475513",
"24684447",
"27138114",
"20348000",
"27103525",
"25168571",
"25799903",
"23844021",
"18269986",
"12574475",
"25003816",
"24302929",
"17518278",
"25977685",
"22438708",
"25769276",
"16235818",
"27746229",
"2256314... | [
{
"pmid": "28120883",
"title": "Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson's Disease.",
"abstract": "Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. T... |
Scientific Reports | 29042602 | PMC5645324 | 10.1038/s41598-017-13640-5 | The optimal window size for analysing longitudinal networks | The time interval between two snapshots is referred to as the window size. A given longitudinal network can be analysed from various actor-level perspectives, such as exploring how actors change their degree centrality values or participation statistics over time. Determining the optimal window size for the analysis of a given longitudinal network from different actor-level perspectives is a well-researched network science problem. Many researchers have attempted to develop a solution to this problem by considering different approaches; however, to date, no comprehensive and well-acknowledged solution that can be applied to various longitudinal networks has been found. We propose a novel approach to this problem that involves determining the correct window size when a given longitudinal network is analysed from different actor-level perspectives. The approach is based on the concept of actor-level dynamicity, which captures variability in the structural behaviours of actors in a given longitudinal network. The approach is applied to four real-world, variable-sized longitudinal networks to determine their optimal window sizes. The optimal window length for each network, determined using the approach proposed in this paper, is further evaluated via time series and data mining methods to validate its optimality. Implications of this approach are discussed in this article. | Related workThe temporal sampling of a longitudinal network is often performed opportunistically20 depending on a wide range of factors. 
Timmons and Preacher12 identified some of these factors, including types of social networks, competing objectives, processes and measurements, planned time horizons of the respective study, the availability of funds, logistic and organisational constraints, the availability and expected behaviours of study participants, and the desired accuracy and precision of study outcomes. Most theoretical and methodological approaches to defining optimal sliding windows of dynamic networks focus on the aggregation of links in time-window graphs18. This emphasis impacts the observation bias and the accuracy and significance of analyses, as dynamic network processes (e.g., the formation and dissolution of ties) can begin or end during inter-event periods21. Longitudinal networks are also analysed based on an assumption that more sampling generates better results15,22 or, in the case of randomised clinical trials, a sliding window size that maximises the efficiency of estimating treatment effects23. Frameworks of statistical analysis such as the separable temporal exponential random graph model24 can also be used by relating the timing of network snapshots to the precision of parameter estimates.
The aforementioned approaches to determining an appropriate or optimal time window for analysing longitudinal networks suffer from inherent limitations. For example, Timmons and Preacher12 found deteriorating outcomes from studies using more sampling windows and suggested that researchers consider the trade-off between precision and sampling time. On the other hand, statistical frameworks are parameter dependent and only work when applied to small networks with a few hundred actors. Recent studies focus on empirical analysis by comparing network statistics of temporal aggregations or graph metrics over time against threshold values in determining appropriate or meaningful sampling window sizes. For example, the Temporal Window in Network (TWIN) algorithm developed by Sulo, Berger-Wolf and Grossman17 analyses the compression ratio and variances in time series of graph metrics computed over a series of graph snapshots composed of temporal edges as a function of sampling window size. A window size is considered appropriate when the difference between variance and compression ratio for that window size is smaller than or equal to a user-defined threshold. Soundarajan et al.25 defined another algorithm that identifies variable-length aggregation intervals by considering a ‘structurally mature graph’ that represents the stability of a network with respect to network statistics. A detailed study by Caceres and Berger-Wolf20 illustrates this windowing problem in reference to different formalisations and initial approaches to identifying the optimal resolution of edge aggregations, including their corresponding advantages and limitations. | [
"24179229",
"15190252",
"19120606",
"26609742",
"20691543",
"24443639",
"27942416",
"24141695"
] | [
{
"pmid": "24179229",
"title": "Structural and functional brain networks: from connections to cognition.",
"abstract": "How rich functionality emerges from the invariant structural architecture of the brain remains a major mystery in neuroscience. Recent applications of network theory and theoretical ne... |
Scientific Reports | 29079836 | PMC5660213 | 10.1038/s41598-017-13923-x | Small Molecule Accurate Recognition Technology (SMART) to Enhance Natural Products Research | Various algorithms comparing 2D NMR spectra have been explored for their ability to dereplicate natural products as well as determine molecular structures. However, spectroscopic artefacts, solvent effects, and the interactive effect of functional group(s) on chemical shifts combine to hinder their effectiveness. Here, we leveraged Non-Uniform Sampling (NUS) 2D NMR techniques and deep Convolutional Neural Networks (CNNs) to create a tool, SMART, that can assist in natural products discovery efforts. First, an NUS heteronuclear single quantum coherence (HSQC) NMR pulse sequence was adapted to a state-of-the-art nuclear magnetic resonance (NMR) instrument, and data reconstruction methods were optimized, and second, a deep CNN with contrastive loss was trained on a database containing over 2,054 HSQC spectra as the training set. To demonstrate the utility of SMART, several newly isolated compounds were automatically located with their known analogues in the embedded clustering space, thereby streamlining the discovery pipeline for new natural products. | Related workAgain, the aforementioned grid-cell approaches28 are similar to ours in that the shifted grid positions can be thought of as corresponding to the first layer of convolutions, which have small receptive fields (like grid cells), and they are shifted across the input space like shifted grids. Also, our approach uses layers of convolutions that can capture multi-scale similarities. The grid-cell approaches, however, use hand-designed features (i.e. counts of peaks within each grid cell), and the similarities are computed by simple distance measures. In particular, PLSI and LSR are linear techniques applied to hand-designed features. 
Furthermore, other representations, for example the ‘tree-based’ method59, also rely on data structures designed by the researcher. Our approach, using deep networks and gradient descent, allows higher-level and nonlinear features to be learned in the service of the task. This approach is similar to modern approaches for computer vision, which since 2012 have shifted away from hand-designed features to deep networks and learned features, leading to orders of magnitude better performance. Similarly to how deep networks applied to computer vision tasks have learned to deal with common problems, such as recognizing objects and faces in different lighting conditions and poses, our CNN pattern recognition-based method can overcome solvent effects, instrumental artefacts, and weak signal issues. It is difficult to directly compare Wolfram et al.’s results to ours because they used a much smaller dataset (132 compounds) from 10 well-separated families. This is not enough data to train the deep network. To further compare our approach with other NMR pattern recognition approaches, we generated precision-recall curves (Fig. 5) using SMART trained with the SMART5 and SMART10 databases (Fig. 6). Considering SMART as a search engine, precision-recall curves help evaluate SMART’s performance in finding the most relevant chemical structures, while taking into account the non-relevant compounds that are retrieved. In our approach to HSQC spectra recognition/retrieval, precision is the percentage of correct compounds over the total number retrieved, while recall is the percentage of all relevant compounds that are retrieved. Therefore, higher precision indicates a lower false positive rate, and higher recall indicates a lower false negative rate. 
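As a reference point for the evaluation described here, the precision and recall values traced by such curves can be computed from a ranked retrieval list; a generic sketch, not the SMART implementation:

```python
def precision_recall_curve(ranked_relevance):
    """Given a ranked list of 0/1 relevance flags (best match first),
    return (precision, recall) pairs at every cutoff k = 1..n."""
    total_relevant = sum(ranked_relevance)
    curve, hits = [], 0
    for k, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        precision = hits / k              # fraction retrieved that is relevant
        recall = hits / total_relevant    # fraction of relevant items retrieved
        curve.append((precision, recall))
    return curve

def average_precision(ranked_relevance):
    """AUC-style summary: mean precision at the ranks of the relevant hits."""
    curve = precision_recall_curve(ranked_relevance)
    precs = [p for (p, _), rel in zip(curve, ranked_relevance) if rel]
    return sum(precs) / len(precs) if precs else 0.0
```

Each (precision, recall) pair is one point on the curve; averaging precision over the relevant hits gives a single area-under-curve-style score per query.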
The precision-recall curves of our approach show high precision peaks at low recall rates, suggesting that SMART retrieves at least some relevant structures in the first 10–20% of compounds retrieved, and thus indicating that SMART returns accurate chemical structures. To compare this to a linear embedding, we performed PCA on the SMART5 and SMART10 databases separately. The precision-recall curves of those PCA results are much worse than those processed by the CNN (Fig. 5).
Figure 5: Precision-recall curves measured across 10-fold validation for different dimensions (dim) of embeddings. (a) and (b) Mean precision-recall curves on test HSQC spectra for SMART5 and SMART10 datasets, respectively. (c) and (d) Mean precision-recall with error curves (grey) for SMART5 and SMART10, respectively. (e) and (f) Mean precision-recall curves for SMART5 and SMART10 clustered by Principal Component Analysis (PCA) without use of the CNN. AUC: area under the curve.
Figure 6: Distribution in the training dataset of the number of families containing different numbers of individual compounds. The SMART5 training set contains 238 compound subfamilies, giving rise to 2,054 HSQC spectra in total (blue and green). The SMART10 training set contains 69 compound subfamilies and is composed of 911 HSQC spectra in total (green only). | [
"26852623",
"24149839",
"26284661",
"26284660",
"20179874",
"23291908",
"22481242",
"21538743",
"25901905",
"21796515",
"22331404",
"21773916",
"21314130",
"18052244",
"26017442",
"25462637",
"9887531",
"19635374",
"24140622",
"22663190",
"25036668",
"24047259",
"24484201... | [
{
"pmid": "26852623",
"title": "Natural Products as Sources of New Drugs from 1981 to 2014.",
"abstract": "This contribution is a completely updated and expanded version of the four prior analogous reviews that were published in this journal in 1997, 2003, 2007, and 2012. In the case of all approved the... |
Scientific Reports | 29093451 | PMC5665979 | 10.1038/s41598-017-12884-5 | Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information | Semantic memory encompasses knowledge about both the properties that typify concepts (e.g. robins, like all birds, have wings) as well as the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigate the impact of sleep on new semantic learning using a property inference task in which both kinds of information are initially acquired equally well. Participants learned about three categories of novel objects possessing some properties that were shared among category exemplars and others that were unique to an exemplar, with exposure frequency varying across categories. In Experiment 1, memory for shared properties improved and memory for unique properties was preserved across a night of sleep, while memory for both feature types declined over a day awake. In Experiment 2, memory for shared properties improved across a nap, but only for the lower-frequency category, suggesting a prioritization of weakly learned information early in a sleep period. The increase was significantly correlated with amount of REM, but was also observed in participants who did not enter REM, suggesting involvement of both REM and NREM sleep. The results provide the first evidence that sleep improves memory for the shared structure of object categories, while simultaneously preserving object-unique information. | Other related workThere have been a few prior studies finding effects of sleep on semantic memory45,51, but they have focused on integrating new information with existing semantic memory networks, not learning an entirely novel domain, as in our study. 
One study that did look at novel conceptual learning found retention of memory for category exemplars, as well as retention of the ability to generalize to novel exemplars and never-seen prototypes, across a night of sleep, but not a day awake, in a dot pattern categorization task52. This study is consistent with ours in suggesting a benefit of overnight sleep for both unique and shared structure. They did not find any reliable above-baseline improvements, but there was numerical improvement in novel exemplar accuracy similar in magnitude to our shared feature effects, and statistical power may have been lower due to an across-subject design and fewer subjects. Another study found increased generalization in an object categorization task over the course of an afternoon delay in both nap and wake conditions, with no difference between the two53. The task assessed memory or inference for the locations of faces, where some locations were predicted by feature rules (e.g. faces at one location were all young, stout, and had no headwear) and other locations had no rules. Overall memory for studied faces decreased over time, but more so for faces at locations without feature rules, suggesting a benefit due to shared structure in the rule locations. While there are many differences between this paradigm and ours, our findings suggest the possibility that the forgetting and lack of sleep-wake differences observed may be due to averaging across all memories instead of focusing on the weaker memories; our Experiment 2 findings averaged across frequency are qualitatively similar to the findings from this study. | [
"16797219",
"7624455",
"14599236",
"22775499",
"23589831",
"24813468",
"17500644",
"17488206",
"20035789",
"26856905",
"25498222",
"15576888",
"19926780",
"23055117",
"18274266",
"18391183",
"12061756",
"1519015",
"2265922",
"21967958",
"21742389",
"26227582",
"17085038",... | [
{
"pmid": "16797219",
"title": "Theory-based Bayesian models of inductive learning and reasoning.",
"abstract": "Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of t... |
Scientific Reports | 29097661 | PMC5668259 | 10.1038/s41598-017-14411-y | GRAFENE: Graphlet-based alignment-free network approach integrates 3D structural and sequence (residue order) data to improve protein structural comparison | Initial protein structural comparisons were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly and are alignment-based. Instead, 3D structures can be modeled as protein structure networks (PSNs). Then, network approaches can compare proteins by comparing their PSNs. These can be alignment-based or alignment-free. We focus on the latter. Existing network alignment-free approaches have drawbacks: 1) They rely on naive measures of network topology. 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could improve comparison because the different data types capture complementary aspects of the protein structure. We address this by: 1) exploiting well-established graphlet measures via a new network alignment-free approach, 2) introducing normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary PSN data and sequence (specifically, residue order) data. We compare synthetic networks and real-world PSNs more accurately and faster than existing network (alignment-free and alignment-based), 3D contact, or sequence approaches. | Motivation and related workProteins perform important cellular functions. While understanding protein function is clearly important, doing so experimentally is expensive and time-consuming1,2. Because of this, the functions of many proteins remain unknown2,3. Consequently, computational prediction of protein function has received attention. 
In this context, protein structural comparison (PC) aims to quantify similarity between proteins with respect to their sequence or 3-dimensional (3D) structural patterns. Then, functions of unannotated proteins can be predicted based on functions of similar, annotated proteins. By “function”, we mean traditional notions of protein function, such as its biological process, molecular function, or cellular localization4, or any protein characteristic (e.g., length, hydrophobicity/hydrophilicity, or folding rate), as long as the given characteristic is expected to correlate well with the protein structure. In this study, we propose a new PC approach, which we evaluate in an established way: by measuring how accurately it captures expected (dis)similarities between known groups of structurally (dis)similar proteins5, such as protein structural classes from Class, Architecture, Topology, Homology (CATH)6,7, or Structural Classification of Proteins (SCOP)8. Application of our proposed PC approach to protein function prediction is out of the scope of the current study and is the subject of future work. Early PC relied on sequence analyses9–11. Due to advancements of high-throughput sequencing technologies, rich sequence data are available for many species, and thus, comprehensive sequence pattern searches are possible. However, amino acids that are distant in the linear sequence can be close in the 3D structure. Thus, 3D structural analyses can reveal patterns that might not be apparent from the sequence alone12. For example, while high sequence similarity between proteins typically indicates their high structural and functional similarity3, proteins with low sequence similarity can still be structurally similar and perform similar functions13,14. In this case, 3D structural approaches, unlike sequence approaches, can correctly identify structurally and thus functionally similar proteins. 
At the other extreme, proteins with high sequence similarity can be structurally dissimilar and perform different functions15–19. In this case, 3D structural approaches, unlike sequence approaches, can correctly identify structurally and thus functionally different proteins. 3D structural approaches can be categorized into traditional 3D contact approaches, which are alignment-based, and network approaches, which can be alignment-based or alignment-free. By alignment-based (3D contact or network) approaches, we mean approaches whose main goal is to map amino acid residues between the compared proteins in a way that conserves the maximum amount of common substructure. In the process, alignment-based approaches can and typically do quantify similarity between the compared protein structures, and they do so under their resulting residue mappings. Given this, alignment-based approaches can be and have been used in the task of PC as we define it5,20, even though they are not necessarily directly designed for this task. On the other hand, by alignment-free (network) approaches, we mean approaches whose main goal is to quantify similarity between the compared protein structures independent of any residue mapping, typically by extracting from each structure some network patterns (also called network properties, network features, network fingerprints, or measures of network topology) and comparing the patterns between the structures. Alignment-free approaches are directly designed for the task of PC as we define it. We note that there exist approaches that are alignment-free but not network-based21, which are out of the scope of our study. Below, we discuss 3D contact alignment-based PC approaches, followed by network alignment-based PC approaches, followed by network alignment-free PC approaches. 3D contact alignment-based PC approaches are typically rigid-body approaches22,23, meaning that they treat proteins as rigid objects. 
Such approaches aim to identify alignments that satisfy two objectives: 1) they maximize the number of mapped residues, and 2) they minimize deviations between the mapped structures (with respect to, e.g., Root Mean Square Deviation). Different rigid-body approaches mainly differ in how they combine these two objectives. There exist many approaches of this type5,24–26. Prominent ones are DaliLite27 and TM-align28. These two approaches have explicitly been used and evaluated in the task of PC as we define it5,20, and are thus directly relevant for our study. Network alignment-based PC approaches are typically flexible alignment methods, meaning that they treat proteins as flexible (rather than rigid) objects, because proteins can undergo large conformational changes. These approaches align local protein regions in a rigid-body manner but account for flexibility by allowing for twists between the locally aligned regions5,29. Also, these approaches are typically of the Contact Map Overlap (CMO) type. That is, first, they represent a 3D protein structure consisting of n residues as a contact map, i.e., an n × n matrix C, where position Cij has a value of 1 if residues i and j are close enough and are thus in contact, and a value of 0 otherwise. Note that contact maps are equivalent to protein structure networks (PSNs), in which nodes are residues and edges link spatially close amino acids30. Second, CMO approaches aim to compare the contact maps of two proteins in order to align a subset of residues in one protein to a subset of residues in another protein in a way that maximizes the number of common contacts and also conserves the order of the aligned residues31. Prominent CMO approaches are Apurva, MSVNS, AlEigen7, and GR-Align5. When evaluated in the task of PC as we define it, i.e., when used to compare proteins labeled with structural classes of CATH or SCOP, GR-Align outperformed Apurva, MSVNS, and AlEigen7 in terms of both accuracy and running time5. 
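The contact-map definition above maps directly to code; a minimal sketch, where the distance cutoff (7.5 here, in the units of the coordinates, typically Angstroms) is an assumed common choice rather than the one used by the tools discussed:

```python
def contact_map(coords, cutoff=7.5):
    """Build the n x n binary contact matrix C from residue coordinates
    (e.g., C-alpha positions): C[i][j] = 1 iff residues i and j are
    within `cutoff` of each other (diagonal left at 0)."""
    n = len(coords)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            if d2 <= cutoff ** 2:
                C[i][j] = C[j][i] = 1
    return C
```

The resulting matrix doubles as the PSN adjacency matrix: nodes are residues, and edges link spatially close pairs.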
So, we consider GR-Align to be the state-of-the-art CMO (i.e., network alignment-based) approach. In addition to these network alignment-based approaches, GR-Align was evaluated in the same manner against the existing 3D contact alignment-based approaches (DaliLite and TM-Align mentioned above, as well as three additional approaches, MATT, Yakusa, and FAST)5. In terms of running time, GR-Align was the fastest. In terms of accuracy, GR-Align was superior to MATT, Yakusa, and FAST, but it was inferior or comparable to DaliLite and TM-Align. So, while GR-Align remains the state-of-the-art network alignment-based PC approach, DaliLite and TM-Align remain state-of-the-art 3D contact alignment-based PC approaches, and we continue to consider all three in our study. Network alignment-free approaches also deal with PSN representations of compared proteins, but they aim to quantify protein similarity without accounting for any residue mapping. We propose a novel network alignment-free PC approach (see below). We first compare our approach to its most direct competitors, i.e., existing alignment-free approaches. Then, we compare our approach to existing alignment-based approaches. We recognize that evaluation of alignment-free against alignment-based approaches should be taken with caution32,33 (because the two comparison types quantify protein similarity differently – see above). Yet, as we show later in our evaluation, existing alignment-based PC approaches are superior to existing alignment-free PC approaches and are thus our strongest (though not necessarily fairest) competitors. Next, we discuss existing network alignment-free PC approaches. 
Such approaches have already been developed14,34 to compare network topological patterns within a protein or across proteins, for example to study differences in network properties between transmembrane and globular proteins, analyze the packing topology of structurally important residues in membrane proteins, or refine homology models of transmembrane proteins35–37. Existing network alignment-free PC approaches, however, have the following limitations:
1) They rely on naive measures of network topology, such as average degree, average or maximum distance (diameter), or average clustering coefficient of a network, which capture the global view of a network but ignore complex local interconnectivities that exist in real-world networks, including PSNs38–40.
2) They can bias PC by PSN size: networks of similar topology but different sizes can be mistakenly identified as dissimilar by the existing approaches simply because of their size differences alone.
3) Because different network measures quantify the same PSN topology from different perspectives41, and because each existing approach uses a single measure, PC could be biased towards the perspective captured by the given measure.
4) They ignore valuable sequence information (also, the existing sequence approaches ignore valuable PSN information). | [
"10802651",
"18037900",
"25428369",
"24443377",
"25348408",
"9847200",
"7723011",
"18811946",
"15987894",
"18506576",
"18004789",
"22817892",
"18923675",
"24392935",
"8377180",
"8506262",
"24629187",
"19481444",
"8819165",
"20457744",
"15849316",
"14534198",
"19557139",
... | [
{
"pmid": "18037900",
"title": "Predicting protein function from sequence and structure.",
"abstract": "While the number of sequenced genomes continues to grow, experimentally verified functional annotation of whole genomes remains patchy. Structural genomics projects are yielding many protein structure... |
JMIR Medical Informatics | 29089288 | PMC5686421 | 10.2196/medinform.8531 | Ranking Medical Terms to Support Expansion of Lay Language Resources for Patient Comprehension of Electronic Health Record Notes: Adapted Distant Supervision Approach | Background: Medical terms are a major obstacle for patients to comprehend their electronic health record (EHR) notes. Clinical natural language processing (NLP) systems that link EHR terms to lay terms or definitions allow patients to easily access helpful information when reading through their EHR notes, and have been shown to improve patient EHR comprehension. However, high-quality lay language resources for EHR terms are very limited in the public domain. Because expanding and curating such a resource is a costly process, it is beneficial and even necessary to identify terms important for patient EHR comprehension first. Objective: We aimed to develop an NLP system, called adapted distant supervision (ADS), to rank candidate terms mined from EHR corpora. We will give EHR terms ranked as high by ADS a higher priority for lay language annotation, that is, creating lay definitions for these terms. Methods: Adapted distant supervision uses distant supervision from consumer health vocabulary and transfer learning to adapt itself to solve the problem of ranking EHR terms in the target domain. We investigated 2 state-of-the-art transfer learning algorithms (ie, feature space augmentation and supervised distant supervision) and designed 5 types of learning features, including distributed word representations learned from large EHR data for ADS. For evaluating ADS, we asked domain experts to annotate 6038 candidate terms as important or nonimportant for EHR comprehension. We then randomly divided these data into the target-domain training data (1000 examples) and the evaluation data (5038 examples).
We compared ADS with 2 strong baselines, including standard supervised learning, on the evaluation data. Results: The ADS system using feature space augmentation achieved the best average precision, 0.850, on the evaluation set when using 1000 target-domain training examples. The ADS system using supervised distant supervision achieved the best average precision, 0.819, on the evaluation set when using only 100 target-domain training examples. The 2 ADS systems both performed significantly better than the baseline systems (P<.001 for all measures and all conditions). Using a rich set of learning features contributed to ADS’s performance substantially. Conclusions: ADS can effectively rank terms mined from EHRs. Transfer learning improved ADS’s performance even with a small number of target-domain training examples. EHR terms prioritized by ADS were used to expand a lay language resource that supports patient EHR comprehension. The top 10,000 EHR terms ranked by ADS are available upon request. | Related Work: Natural Language Processing to Facilitate Creation of Lexical Entries. Previous studies have used both unsupervised and supervised learning methods to prioritize terms for inclusion in biomedical and health knowledge resources [32-35]. Term recognition methods, which are widely used unsupervised methods for term extraction, use rules and statistics (eg, corpus-level word and term frequencies) to prioritize technical terms from domain-specific text corpora. Since these methods do not use manually annotated training data, they have better domain portability but are less accurate than supervised learning [32]. The contribution of this study is to propose a new learning-based method for EHR term prioritization, which is more accurate than supervised learning while also having good domain portability. Our work is also related to previous studies that have used distributional semantics for lexicon expansion [35-37].
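The average precision metric reported in the Results can be computed for a ranked term list in a few lines; this is a generic sketch over toy binary labels, not the paper's evaluation data:

```python
def average_precision(ranked_labels):
    """Average precision over a ranked list of binary judgments
    (1 = expert judged the term important, 0 = not important):
    the mean of precision@k taken at each relevant rank k."""
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Toy ranking of five candidate terms, three of them important:
print(round(average_precision([1, 1, 0, 1, 0]), 3))  # 0.917
```

A ranker that places all important terms first scores 1.0, so higher values mean the annotation budget is spent on the terms experts would prioritize.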
In this work, we used word embedding, one technique for distributional semantics, to generate one type of learning features for the ADS system to rank EHR terms.Ranking Terms in Electronic Health RecordsWe previously developed NLP systems to rank and identify important terms from each EHR note of individual patients [38,39]. This study is different in that it aimed to rank terms at the EHR corpus level for the purpose of expanding a lay language resource to improve health literacy and EHR comprehension of the general patient population. Notice that both types of work are important for building NLP-enabled interventions to support patient EHR comprehension. For example, a real-world application can link all medical jargon terms in a patient’s EHR note to lay terms or definitions, and then highlight the terms most important for this patient and provide detailed information for these important terms.Distant SupervisionOur ADS system uses distant supervision from the CHV. Distant supervision refers to the learning framework that uses information from knowledge bases to create labeled data to train machine learning models [40-42]. Previous work often used this technique to address context-based classification problems such as named entity detection and relation detection. In contrast, we used it to rank terms without considering context. However, our work is similar in that it uses heuristic rules and knowledge bases to create training data. Although training data created this way often contain noise, distant supervision has been successfully applied to several biomedical NLP tasks to reduce human annotation efforts, including extraction of entities [40,41,43], relations [44-46], and important sentences [47] from the biomedical literature. In this study, we made novel use of the non-EHR-centric lexical resource CHV to create training data for ranking terms from EHRs. 
This approach has greater domain portability than conventional distant supervision methods because it places fewer demands on the similarity between the knowledge base and the target-domain learning task. On the other hand, learning from the distantly labeled data with a mismatch to the target task is more challenging. We address this challenge by using transfer learning. Transfer Learning: Transfer learning is a learning framework that transfers knowledge from the source domain DS (the training data derived from the CHV, in our case) to the target domain DT to help improve the learning of the target-domain task TT [48]. We followed Pan and Yang [48] to distinguish between inductive transfer learning, where the source- and target-domain tasks are different, and domain adaptation, where the source- and target-domain tasks are the same but the source and target domains (ie, data distributions) are different. Our approach belongs to the first category because our source-domain and target-domain tasks define positive and negative examples in different ways. Transfer learning has been applied to important bioinformatics tasks such as DNA sequence analysis and gene interaction network analysis [49]. It has also been applied to several clinical and biomedical NLP tasks, including part-of-speech tagging [50] and key concept identification for clinical text [51], semantic role labeling for biomedical articles [52] and clinical text [53], and key sentence extraction from biomedical literature [47]. In this work, we investigated 2 state-of-the-art transfer learning algorithms that have shown superior performance in recent studies [47,53]. We aimed to empirically show that they, in combination with distant supervision, are effective in ranking EHR terms.
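The feature space augmentation idea mentioned above (in the style of Daume III's "frustratingly easy" domain adaptation) can be sketched in a few lines. This is an illustration of the general technique, not the authors' implementation, and the feature names are invented:

```python
def augment(features, domain):
    """Feature space augmentation: every feature is duplicated into a
    shared copy and a domain-specific copy, so a single linear model can
    learn weights shared across domains alongside weights specific to
    the source or target domain."""
    augmented = {}
    for name, value in features.items():
        augmented[("shared", name)] = value
        augmented[(domain, name)] = value
    return augmented

# Hypothetical term features from each domain, duplicated per domain:
src_example = augment({"term_frequency": 3.0, "term_length": 9}, "source")
tgt_example = augment({"term_frequency": 1.0, "term_length": 4}, "target")
print(sorted(src_example))
```

Source and target examples then live in one combined feature space, so the model can be trained on the large distantly labeled source data plus the small expert-labeled target data at once.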
"19224738",
"24359554",
"26104044",
"23027317",
"23407012",
"23535584",
"14965405",
"18693866",
"12923796",
"11103725",
"1517087",
"20845203",
"23978618",
"26681155",
"27702738",
"20442139",
"15561782",
"18436895",
"18693956",
"21347002",
"23920650",
"22357448",
"25953147... | [
{
"pmid": "24359554",
"title": "The Medicare Electronic Health Record Incentive Program: provider performance on core and menu measures.",
"abstract": "OBJECTIVE\nTo measure performance by eligible health care providers on CMS's meaningful use measures.\n\n\nDATA SOURCE\nMedicare Electronic Health Recor... |
GigaScience | 29048555 | PMC5691353 | 10.1093/gigascience/gix099 | An architecture for genomics analysis in a clinical setting using Galaxy and Docker | Abstract: Next-generation sequencing is used on a daily basis to perform molecular analysis to determine subtypes of disease (e.g., in cancer) and to assist in the selection of the optimal treatment. Clinical bioinformatics handles the manipulation of the data generated by the sequencer, from the generation to the analysis and interpretation. Reproducibility and traceability are crucial issues in a clinical setting. We have designed an approach based on Docker container technology and Galaxy, the popular bioinformatics analysis support open-source software. Our solution simplifies the deployment of a small-size analytical platform and simplifies the process for the clinician. From the technical point of view, the tools embedded in the platform are isolated and versioned through Docker images. Alongside the Galaxy platform, we also introduce the AnalysisManager, a solution that allows single-click analysis for biologists and leverages standardized bioinformatics application programming interfaces. We added a Shiny/R interactive environment to ease the visualization of the outputs. The platform relies on containers and ensures the data traceability by recording analytical actions and by associating inputs and outputs of the tools to EDAM ontology through ReGaTe. The source code is freely available on Github at https://github.com/CARPEM/GalaxyDocker. | Related works: Workflow management systems. The development of high-throughput methods in molecular biology has considerably increased the volume of molecular data produced daily by biologists. Many analytical scripts and software have been developed to assist biologists and clinicians in their tasks. Commercial and open-source solutions have emerged, allowing the user to combine analytical tools and build pipelines using graphical interfaces.
In addition, workflow management systems (such as Taverna [12], Galaxy [11], SnakeMake [13], NextFlow [14]) also ensure the traceability and reproducibility of the analytical process. The efficient use of a workflow management system remains limited to trained bioinformaticians. Docker and Galaxy: Docker provides a standard way to supply ready-to-use applications, and it is becoming a common way to share work [15–22]. In Aranguren and Wilkinson [23], the authors argue that reproducibility can be implemented at 2 levels: (1) at the Docker container level, the encapsulation of a tool with all its dependencies would ensure the sustainability, traceability, and reproducibility of the tool; and (2) at the workflow level, the reproducibility is ensured by the Galaxy workflow definition. They developed a containerized Galaxy Docker platform in the context of the OpenLifeData2SADI research project. In Kuenzi et al. [16], the authors distribute a Galaxy Docker container hosting a tool suite called APOSTL, which is dedicated to proteomics analysis of mass spectrometry data. They implemented R/Shiny [24] environments inside Galaxy. Grüning et al. [17] provide a standard Dockerized Galaxy application that can be extended in many flavors [18, 25, 26]. Some Galaxy Dockerized applications already exist in containers, e.g., deepTools2 [18]. The integration of new tools in Galaxy can be simplified by applications generating configuration files that are near-ready for integration [27, 28]. We propose a similar tool (the DockerTools2Galaxy script), dedicated to Dockerized tools. In this article, we present our architecture to deploy a bioinformatics platform in a clinical setting, leveraging the worldwide-known bioinformatics workflow management solution Galaxy and Docker virtualization technology, standardized bioinformatics application programming interfaces (APIs), and graphical interfaces developed in R SHINY.
"15118073",
"20619739",
"12860957",
"21639808",
"20818844",
"19553641",
"20609467",
"24204232",
"27137889",
"23640334",
"22908215",
"27079975",
"28045932",
"26335558",
"26640691",
"26336600",
"25423016",
"28299179",
"26280450",
"28542180",
"23479348",
"27994937",
"2802731... | [
{
"pmid": "15118073",
"title": "Activating mutations in the epidermal growth factor receptor underlying responsiveness of non-small-cell lung cancer to gefitinib.",
"abstract": "BACKGROUND\nMost patients with non-small-cell lung cancer have no response to the tyrosine kinase inhibitor gefitinib, which t... |
PLoS Computational Biology | 29131816 | PMC5703574 | 10.1371/journal.pcbi.1005857 | Automated visualization of rule-based models | Frameworks such as BioNetGen, Kappa and Simmune use “reaction rules” to specify biochemical interactions compactly, where each rule specifies a mechanism such as binding or phosphorylation and its structural requirements. Current rule-based models of signaling pathways have tens to hundreds of rules, and these numbers are expected to increase as more molecule types and pathways are added. Visual representations are critical for conveying rule-based models, but current approaches to show rules and interactions between rules scale poorly with model size. Also, inferring design motifs that emerge from biochemical interactions is an open problem, so current approaches to visualize model architecture rely on manual interpretation of the model. Here, we present three new visualization tools that constitute an automated visualization framework for rule-based models: (i) a compact rule visualization that efficiently displays each rule, (ii) the atom-rule graph that conveys regulatory interactions in the model as a bipartite network, and (iii) a tunable compression pipeline that incorporates expert knowledge and produces compact diagrams of model architecture when applied to the atom-rule graph. The compressed graphs convey network motifs and architectural features useful for understanding both small and large rule-based models, as we show by application to specific examples. Our tools also produce more readable diagrams than current approaches, as we show by comparing visualizations of 27 published models using standard graph metrics. We provide an implementation in the open source and freely available BioNetGen framework, but the underlying methods are general and can be applied to rule-based models from the Kappa and Simmune frameworks also. 
We expect that these tools will promote communication and analysis of rule-based models and their eventual integration into comprehensive whole-cell models. | Related work: In addition to the approaches discussed in Introduction (Fig 1A–1D) and Methods (Fig 2C), we show examples of other currently available tools (Fig 11) and how they compare with compact rule visualizations and atom-rule graphs. 10.1371/journal.pcbi.1005857.g011 Fig 11: Other visualization approaches applied to the enzyme-substrate phosphorylation model of Fig 2. (A) The binding rule drawn using SBGN Process Description conventions, which require visual graph comparison. (B) Kappa story, showing the causal order of rules that produces sub_pp, which refers to doubly phosphorylated substrate. (C) Simmune Network Viewer diagram, which merges patterns across rules and hides certain causal dependencies (details in S4 Fig). Here, the Enz.Sub node merges all enzyme-substrate patterns shown in Fig 2. (D) SBGN Entity Relationship. (E) Molecular Interaction Map. Panels D-E require manual interpretation of the model, like the extended contact map. (F) rxncon regulatory graph visualization of the rxncon model format, which can only depict a limited subset of reaction rules (details in S5 Fig). The SBGN Process Description (Fig 11A) [24] is a visualization standard for reacting entities. It has the same limitation as conventional rule visualization, namely the need for visual graph comparison. The Kappa story (Fig 11B) [22] shows the causal order in which rules can be applied to generate specific outputs, and these are derived by analysis of model simulation trajectories. It is complementary to the statically derived AR graph for showing interactions between rules, but it does not show the structures that mediate these interactions nor does it provide a mechanism for grouping rules.
Integrating Kappa stories with AR graphs is an interesting area for future work. The Simmune Network Viewer (Fig 11C) [26] compresses the representation of rules differently from the AR graph: it merges patterns that have the same molecules and bonds, but differ in internal states. Like the AR graph, it shows both structures and rules, and it produces diagrams with much lower density (‘sim’ in Fig 10), but it obscures causal dependencies on internal states (S4 Fig). The SBGN Entity Relationship diagram (Fig 11D) [24] and the Molecular Interaction Map (Fig 11E) [25], like the Extended Contact Map [23], are diagrams of model architecture that rely on manual analysis. The rxncon regulatory graph (Fig 11F) visualizes the rxncon model format [27], which uses atoms (called elemental states in rxncon) to specify contextual influences on processes. This approach, which is also followed in the Process Interaction Model [49], is less expressive than the graph transformation approach used in BioNetGen, Kappa and Simmune (S5 Fig). The AR graph we have developed generalizes the regulatory graph visualization so it can be derived from arbitrary types of rules found in BioNetGen, Kappa and Simmune models.
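The bipartite structure of an atom-rule graph can be sketched in a few lines of Python. The two rules and atom names below are toy examples loosely inspired by the enzyme-substrate illustration, not the BioNetGen implementation:

```python
# Toy bipartite atom-rule (AR) graph: one node set holds "atoms"
# (structural features such as bonds and phosphorylation states), the
# other holds rules; each edge records whether a rule reads or writes
# an atom. Names are illustrative, not from any published model.
rules = {
    "bind":          {"reads": [],               "writes": ["Enz-Sub bond"]},
    "phosphorylate": {"reads": ["Enz-Sub bond"], "writes": ["Sub site P"]},
}

edges = [(rule, kind, atom)
         for rule, io in rules.items()
         for kind in ("reads", "writes")
         for atom in io[kind]]

for e in edges:
    print(e)
```

Because the "phosphorylate" rule reads an atom that the "bind" rule writes, the graph exposes the regulatory dependency between the two rules without requiring a visual comparison of their full patterns.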
"15217809",
"19399430",
"27402907",
"16854213",
"23508970",
"12646643",
"24782869",
"19348745",
"25147952",
"16233948",
"22913808",
"26928575",
"22114196",
"22817898",
"19045830",
"27497444",
"22607382",
"21647530",
"19668183",
"12779444",
"24934175",
"22531118",
"2241285... | [
{
"pmid": "15217809",
"title": "BioNetGen: software for rule-based modeling of signal transduction based on the interactions of molecular domains.",
"abstract": "BioNetGen allows a user to create a computational model that characterizes the dynamics of a signal transduction system, and that accounts com... |
International Journal for Equity in Health | 29183335 | PMC5706427 | 10.1186/s12939-017-0702-z | Evaluating medical convenience in ethnic minority areas of Southwest China via road network vulnerability: a case study for Dehong autonomous prefecture | Background: Southwest China is home to more than 30 ethnic minority groups. Since most of these populations reside in mountainous areas, convenient access to medical services is an important metric of how well their livelihoods are being protected. Methods: This paper proposes a medical convenience index (MCI) and computation model for mountain residents, taking into account various conditions including topography, geology, and climate. Data on road networks were used for comprehensive evaluation from three perspectives: vulnerability, complexity, and accessibility. The model is innovative for considering road network vulnerability in mountainous areas, and proposing a method of evaluating road network vulnerability by measuring the impacts of debris flows based on only links. The model was used to compute and rank the respective MCIs for settlements of each ethnic population in the Dehong Dai and Jingpo Autonomous Prefecture of Yunnan Province, in 2009 and 2015. Data on the settlements over the two periods were also used to analyze the spatial differentiation of medical convenience levels within the study area. Results: The medical convenience levels of many settlements improved significantly. 80 settlements were greatly improved, while another 103 showed slight improvement. Areas with obvious improvement were distributed in clusters, and mainly located in the southwestern part of Yingjiang County, northern Longchuan County, eastern Lianghe County, and the region where Lianghe and Longchuan counties and Mang City intersect. Conclusions: Development of the road network was found to be a major contributor to improvements in MCI for mountain residents over the six-year period.
| Related work: Current studies by Chinese and international scholars on medical and healthcare services for residents mostly focus on accessibility to hospitals, or uniform distribution of medical and healthcare facilities [5]. When analyzing medical accessibility, the minimum travel time/distance is commonly used because the requisite data are easily available, the calculation method is simple, and the results are readily understood [6–11]. In recent years, the most widely used method for studying medical accessibility and the balance between the demand and supply of medical and healthcare facilities is the floating catchment area (FCA), and associated enhancements such as the two-step floating catchment area (2SFCA) method, enhanced two-step floating catchment area (E2SFCA) method and the three-step floating catchment area (3SFCA) method. FCA originated from spatial decomposition [12], and is a special case of the gravity model [13]. The application and improvement of this method made calculations simpler and the results more rational [5, 6, 14–20]. Methods used in other studies included the gravity model [6, 16, 21] and kernel density estimation (KDE) [20, 22, 23]. Neutens (2015) [24] further analyzed the advantages and disadvantages of the aforementioned methods when studying medical accessibility. However, there remains a lack of research on road network vulnerability and its impact on residents’ medical convenience levels. Despite numerous studies on road network vulnerability in the past two decades, the concept of vulnerability has yet to be clearly defined. It is often jointly explained with other related terms such as risk, reliability, flexibility, robustness, and resilience. Many scholars have also attempted to explore the inter-relationships between those terms [25–28].
A review of the literature indicated that research on road network vulnerability generally adopts one of the following perspectives:
i. The connectivity of the road network, taking into account its topological structure. For example, Kurauchi et al. (2009) [29] determined the critical index of each road segment by calculating the number of connecting links between journey origin and destination, thereby identifying critical segments in the road network. Rupi et al. (2015) [30] evaluated the vulnerability of mountain road networks by examining the connectivity between start and end points, before grading them.
ii. After a segment has deteriorated, the road network becomes disrupted or its traffic capacity declines. This reduces regional accessibility and leads to socioeconomic losses. These losses are used to determine and grade critical segments of the road network. For example, Jenelius et al. (2006) [27] ranked the importance of different roads based on their daily traffic volumes. Next, the impact of each road grading on traveling options and durations under various scenarios was simulated. Chen et al. (2007) [31] determined the vulnerability level of a road segment by the impact of its failure on regional accessibility, while Qiang and Nagurney (2008) [32] identified the relative importance and ranking of nodes and links within a road network by documenting the traffic volumes and behaviors of the network. Similarly, Jenelius and Mattsson (2012) [33] used traffic volumes to calculate the importance of the road network within each grid.
iii. The impact of a road network’s deterioration or obstruction on regional accessibility is assessed by simulating a particular scenario, for example the occurrence of a natural disaster or deliberate attack. The results provide decision support on the transportation and delivery of relief provisions, as well as disaster recovery efforts.
Bono and Gutiérrez (2011) [34] analyzed the impacts of road network disruption caused by the Haiti earthquake on the accessibility of the urban area of Port-au-Prince.
iv. Optimization of computational models for evaluating road network vulnerability. Some scholars have focused on model optimization because they believe that the computational burden is very heavy when grading the vulnerability of each segment within the overall road network. On the basis of the Hansen integral index, Luathep (2011) [35] used the relative accessibility index (AI) to analyze the socioeconomic impacts subsequent to road network deterioration, which causes network disruption or reduction in traffic capacity. Next, the AIs of all critical road segments before and after network deterioration were compared for categorization and ranking. This method reduces both computational burden and memory requirements.
In the present study, road network vulnerability is determined by combining data on the paths of debris flow hazards with only links in the topological structure of a road network. | [
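The second perspective above (grading a link by the accessibility loss its failure causes) can be illustrated with a minimal, dependency-free sketch. The four-node road network is a toy example, and a real implementation would also need a penalty for links whose failure disconnects the network:

```python
from collections import deque

def hop_distances(adj, src):
    """Shortest-path hop counts from src via breadth-first search."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def total_cost(adj):
    """Sum of shortest-path distances over all reachable origin-destination pairs."""
    return sum(d for u in adj for d in hop_distances(adj, u).values())

def link_importance(adj, u, v):
    """Accessibility loss (extra total travel cost) when link (u, v) fails."""
    cut = {k: [w for w in nb if {k, w} != {u, v}] for k, nb in adj.items()}
    return total_cost(cut) - total_cost(adj)

# Toy network: a ring of 4 settlements plus a shortcut between 0 and 2.
road = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
print(link_importance(road, 0, 2))  # 2 extra hops in total when the shortcut fails
```

Ranking all links by this importance score identifies the critical segments, which is the idea behind the accessibility-based grading methods cited above; real applications would weight distances by travel time and traffic volume rather than hop counts.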
"26286033",
"18190678",
"22587023",
"23964751",
"17018146",
"16574290",
"7100960",
"22469488",
"24077335",
"16548411",
"15446621"
] | [
{
"pmid": "26286033",
"title": "Spatial inequity in access to healthcare facilities at a county level in a developing country: a case study of Deqing County, Zhejiang, China.",
"abstract": "BACKGROUND\nThe inequities in healthcare services between regions, urban and rural, age groups and diverse income ... |
Frontiers in Neurorobotics | 29311888 | PMC5742219 | 10.3389/fnbot.2017.00066 | Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots | In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. | 2 Related Work. 2.1 Lexical Acquisition by Robot. Studies of language acquisition also constitute a constructive approach to the human developmental process (Cangelosi and Schlesinger, 2015), language grounding (Steels and Hild, 2012), and symbol emergence (Taniguchi et al., 2016c). One approach to studying language acquisition focuses on the estimation of phonemes and words from speech signals (Goldwater et al., 2009; Heymann et al., 2014; Taniguchi et al., 2016d).
However, these studies used only continuous speech signals without using co-occurrence based on other sensor information, e.g., visual, tactile, and proprioceptive information. Therefore, the robot was not required to understand the meaning of words. Yet, it is important for a robot to understand word meanings, i.e., grounding the meanings to words, for human–robot interaction (HRI).Roy and Pentland (2002) proposed a computational model by which a robot could learn the names of objects from images of the object and natural infant-directed speech. Their model could perform speech segmentation, lexical acquisition, and visual categorization. Hörnstein et al. (2010) proposed a method based on pattern recognition and hierarchical clustering that mimics a human infant to enable a humanoid robot to acquire language. Their method allowed the robot to acquire phonemes and words from visual and auditory information through interaction with the human. Nakamura et al. (2011a,b) proposed multimodal latent Dirichlet allocation (MLDA) and a multimodal hierarchical Dirichlet process (MHDP) that enables the categorization of objects from multimodal information, i.e., visual, auditory, haptic, and word information. Their methods enabled more accurate object categorization by using multimodal information. Taniguchi et al. (2016a) proposed a method for simultaneous estimation of self-positions and words from noisy sensory information and an uttered word. Their method integrated ambiguous speech recognition results with the self-localization method for learning spatial concepts. However, Taniguchi et al. (2016a) assumed that the name of a place would be learned from an uttered word. Taniguchi et al. (2016b) proposed a nonparametric Bayesian spatial concept acquisition method (SpCoA) based on place categorization and unsupervised word segmentation. SpCoA could acquire the names of places from spoken sentences including multiple words. 
In the above studies, the robot was taught to focus on one target, e.g., an object or a place, by a tutor using one word or one sentence. However, considering a more realistic problem, the robot needs to know which event in a complicated situation is associated with which word in the sentence. CSL, which extends the aforementioned studies on lexical acquisition, is a more difficult and important problem in robotics. Our research concerns the CSL problem because of its importance for lexical acquisition by a robot. 2.2 Cross-Situational Learning. 2.2.1 Conventional Cross-Situational Learning Studies. Frank et al. (2007, 2009) proposed a Bayesian model that unifies statistical and intentional approaches to cross-situational word learning. They conducted basic CSL experiments with the purpose of teaching an object name. In addition, they discussed the effectiveness of mutual exclusivity for CSL in probabilistic models. Fontanari et al. (2009) performed object-word mapping from the co-occurrence between objects and words by using a method based on neural modeling fields (NMF). In “modi” experiments using iCub, their findings were similar to those reported by Smith and Samuelson (2010). The abovementioned studies are CSL studies that were inspired by studies based on experiments with human infants. These studies assumed a simple situation such as learning the relationship between objects and words as the early stage of CSL. However, the real environment is varied and more complex. In this study, we focus on the problem of CSL in utterances including multiple words and observations from multiple sensory-channels. 2.2.2 Probabilistic Models. Qu and Chai (2008, 2010) proposed a learning method that automatically acquires novel words for an interactive system. They focused on the co-occurrence between word-sequences and entity-sequences tracked by eye-gaze in lexical acquisition.
Qu and Chai’s method, which is based on the IBM-translation model (Brown et al., 1993), estimates the word-entity association probability. However, their studies did not result in perfect unsupervised lexical acquisition because they used domain knowledge based on WordNet. Matuszek et al. (2012) presented a joint model of language and perception for grounded attribute learning. This model enables the identification of which novel words correspond to color, shape, or no attribute at all. Celikkanat et al. (2014) proposed an unsupervised learning method based on latent Dirichlet allocation (LDA) that allows many-to-many relationships between objects and contexts. Their method was able to predict the context from the observation information and plan the action using learned contexts. Chen et al. (2016) proposed an active learning method for cross-situational learning of object-word association. In experiments, they showed that LDA was more effective than non-negative matrix factorization (NMF). However, they did not perform any HRI experiment using the learned language. In our study, we perform experiments that use word meanings learned in CSL to generate an action and explain a current situation. 2.2.3 Neural Network Models. Yamada et al. (2015, 2016) proposed a learning method based on a stochastic continuous-time recurrent neural network (CTRNN) and a multiple time-scales recurrent neural network (MTRNN). They showed that the learned network formed an attractor structure representing both the relationships between words and action and the temporal pattern of the task. Stramandinoli et al. (2017) proposed partially recurrent neural networks (P-RNNs) for learning the relationships between motor primitives and objects. Zhong et al. (2017) proposed multiple time-scales gated recurrent units (MTGRU) inspired by MTRNN and long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997).
They showed that the MTGRU could learn long-term dependencies in large-dimensional multimodal datasets by conducting multimodal interaction experiments using iCub. The learning results of the above studies using neural networks (NNs) are difficult to interpret because time-series data is mapped to continuous latent space. These studies implicitly associate words with objects and actions. Generally, NN methods require a massive amount of learning data in many cases. On the other hand, the learning result is easier to interpret when Bayesian methods rather than NN methods are used. In addition, Bayesian methods require less data to learn efficiently. We propose a Bayesian generative model that can perform CSL, including action learning.2.2.4Robot-to-Robot InteractionSpranger (2015) and Spranger and Steels (2015) proposed a method for the co-acquisition of semantics and syntax in the spatial language. The experimental results showed that the robot could acquire spatial grammar and categories related to spatial direction. Heath et al. (2016) implemented mobile robots (Lingodroids) capable of learning a lexicon through robot-to-robot interaction. They used two robots equipped with different sensors and simultaneous localization and mapping (SLAM) algorithms. These studies reported that the robots created their lexicons in relation to places and the distance in terms of time. However, these studies did not consider lexical acquisition by HRI. We consider HRI to be necessary to enable a robot to learn human language.2.2.5Multimodal Categorization and Word LearningAttamimi et al. (2016) proposed multilayered MLDA (mMLDA) that hierarchically integrates multiple MLDAs as an extension of Nakamura et al. (2011a). They performed an estimation of the relationships among words and multiple concepts by weighting the learned words according to their mutual information as a post-processing step. 
In their model, the same uttered words are generated from three kinds of concepts, i.e., this model has three variables for same word information in different concepts. We consider this to be an unnatural assumption as the generative model for generating words. However, in our proposed model, we assume that the uttered words are generated from one variable. We consider our proposed model to involve a more natural assumption than Attamimi’s model. In addition, their study did not use data that were autonomously obtained by the robot. In Attamimi et al. (2016), it was not possible for the robot to learn the relationships between self-actions and words because human motions obtained by the motion capture system based on Kinect and a wearable sensor device attached to a human were used as action data. In our study, the robot learns the action category based on subjective self-action. Therefore, the robot can perform a learned action based on a sentence of human speech. In this paper, we focus on complicated CSL problems arising from situations with multiple objects and sentences including words related to various sensory-channels such as the names, position, and color of objects, and the action carried out on the object. | [
"19596549",
"19389131",
"19409539",
"24478693",
"9377276",
"21635302",
"3365937",
"3802971",
"27471463"
] | [
{
"pmid": "19596549",
"title": "Cross-situational learning of object-word mapping using Neural Modeling Fields.",
"abstract": "The issue of how children learn the meaning of words is fundamental to developmental psychology. The recent attempts to develop or evolve efficient communication protocols among... |
Frontiers in Neurorobotics | 29311889 | PMC5742615 | 10.3389/fnbot.2017.00067 | Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes | Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. | 2. Related work

Various studies have focused on learning motion primitives from manually segmented motions (Gräve and Behnke, 2012; Manschitz et al., 2015). Manschitz et al. proposed a method to generate sequential skills by using motion primitives that are learned in a supervised manner. Gräve et al.
proposed segmenting motions using motion primitives learned by a supervised hidden Markov model. In these studies, the motions are segmented and labeled in advance. However, we consider it difficult to segment and label all possible motion primitives.

Additionally, some studies have proposed unsupervised motion segmentation; however, these studies rely on heuristics. For instance, Wächter et al. have proposed a method to segment human manipulation motions based on contact relations between the end-effectors and objects in a scene (Wachter and Asfour, 2015); in their method, the points at which the end-effectors make contact with an object are determined as boundaries of motions. We believe this method works well in limited scenes; however, there are many motions, such as gestures and dances, in which objects are not manipulated. Lioutikov et al. proposed unsupervised segmentation; however, to reduce computational costs, this technique requires the possible boundary candidates between motion primitives to be specified in advance (Lioutikov et al., 2015). Therefore, the segmentation depends on those candidates, and motions cannot be segmented correctly if the correct candidates are not selected. In contrast, our proposed method does not require such candidates; all possible cutting points are considered by use of forward filtering-backward sampling, which uses the principles of dynamic programming. In some methods (Fod et al., 2002; Shiratori et al., 2004; Lin and Kulić, 2012), motion features (such as the zero velocity of joint angles) are used for motion segmentation. However, these features cannot be applied to all motions. Takano et al. use the error between actual movements and predicted movements as the criterion for specifying boundaries (Takano and Nakamura, 2016). However, the threshold must be manually tuned according to the motions to be segmented. Moreover, although they used an HMM, which is a stochastic model, the boundaries are determined by this heuristic criterion. We consider such heuristics unnatural from the viewpoint of stochastic models; boundaries should instead be determined by the stochastic model itself. In our proposed method, we do not use such heuristics and assumptions, and instead formulate the segmentation based on a stochastic model.

Fox et al. have proposed unsupervised segmentation for the discovery of a set of latent, shared dynamical behaviors in multiple time series data (Fox et al., 2011). They introduce a beta process, which represents the sharing of motion primitives across multiple motions, into an autoregressive HMM. They formulate the segmentation using a stochastic model, and no heuristics are used in their proposed model. However, in their method, continuous data points that are classified into the same states are extracted as segments, and the lengths of the segments are not estimated. The states can change over short time spans, and therefore shorter segments are estimated; they reported that in their experiment some true segments were split into two or more categories, and that those shorter segments were bridged. In contrast, our proposed method classifies data points into states and uses an HSMM to estimate segment lengths. Hence, our proposed method can prevent states from changing over short time spans.

Matsubara et al. proposed an unsupervised segmentation method called AutoPlait (Matsubara et al., 2014). This method uses multiple HMMs, each of which represents a fixed pattern; moreover, transitions between the HMMs are allowed. Therefore, time series data is segmented at points at which the state changes to another HMM's state. However, we believe that HMMs are too simple to represent complicated sequences such as motions. Figure 2 illustrates an example of the representation of time series data by an HMM. The graph on the right in Figure 2 shows the mean and standard deviation learned by an HMM from the data points shown in the graph on the left. An HMM represents time series data using only the mean and standard deviation; therefore, details of the time series data can be lost. Hence, we use Gaussian processes, which are non-parametric methods that can represent complex time series data.

Figure 2. Example of the representation of time series data by an HMM. Left: data points for learning the HMM. Right: mean and standard deviation learned by the HMM.

The field of natural language processing has also produced literature related to sequence data segmentation. For example, unsupervised morphological analysis has been proposed for segmenting sequence data (Goldwater, 2006; Mochihashi et al., 2009; Uchiumi et al., 2015). Goldwater et al. proposed a method to divide sentences into words by estimating the parameters of a 2-gram language model based on a hierarchical Dirichlet process; the parameters are estimated in an unsupervised manner by Gibbs sampling (Goldwater, 2006). Mochihashi et al. proposed a nested Pitman-Yor language model (NPYLM) (Mochihashi et al., 2009). In this method, the parameters of an n-gram language model based on the hierarchical Pitman-Yor process are estimated via the forward filtering-backward sampling algorithm. NPYLM can thus divide sentences into words more quickly and accurately than the method proposed in Goldwater (2006). Moreover, Uchiumi et al. extended the NPYLM to a Pitman-Yor hidden semi-Markov model (PY-HSMM) (Uchiumi et al., 2015), which can divide sentences into words and estimate the parts of speech (POS) of the words by sampling not only words but also POS in the sampling phase of the forward filtering-backward sampling algorithm. However, these studies aimed to divide symbolized sequences (such as sentences) into segments, and did not consider analogous divisions of continuous sequence data, such as that obtained from human motion.

Taniguchi et al. proposed a method to divide continuous sequences into segments by utilizing NPYLM (Taniguchi and Nagasaka, 2011). In their method, continuous sequences are discretized and converted into discrete-valued sequences using the infinite hidden Markov model (Fox et al., 2007). The discrete-valued sequences are then divided into segments by using NPYLM. With this method, motions can be recognized by the learned model, but they cannot be generated directly because they are discretized. Moreover, segmentation based on NPYLM does not work well if errors occur in the discretization step.

Therefore, we propose a method that divides a continuous sequence into segments without discretization. This method divides continuous motions into unit actions. Our proposed method is based on an HSMM whose emission distribution is a GP, which represents continuous unit actions. To learn the model parameters, we use forward filtering-backward sampling, in which segment points and classes are sampled simultaneously. However, our proposed method also has limitations. One limitation is that the number of motion classes must be specified in advance, whereas it is estimated automatically in methods such as those of Fox et al. (2011) and Matsubara et al. (2014). Another limitation is that the computational cost is very high, owing to the numerous recursive calculations. We discuss these limitations in the experiments. | [] | []
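The forward filtering-backward sampling scheme that GP-HSMM shares with NPYLM and PY-HSMM can be illustrated on a toy semi-Markov model. In this sketch the paper's Gaussian-process emissions are replaced by i.i.d. Gaussians, transitions are uniform and there is no duration prior, so the sampler recovers class boundaries but may oversegment within a region; all data and parameters are invented for illustration:

```python
import numpy as np

def logsumexp(a):
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m if np.isinf(m) else m + np.log(np.exp(a - m).sum())

rng = np.random.default_rng(0)
# Toy 1-D signal: three segments drawn from two "unit action" models
data = np.concatenate([rng.normal(0, 0.3, 30), rng.normal(3, 0.3, 20), rng.normal(0, 0.3, 25)])

means, std = np.array([0.0, 3.0]), 0.5   # stand-in emission models (GPs in the paper)
C, K, T = len(means), 40, len(data)      # classes, max segment length, sequence length

def log_emission(seg, c):
    return -0.5 * np.sum((seg - means[c]) ** 2 / std ** 2)

# Forward filtering: alpha[t, k, c] = log P(a segment of length k+1, class c ends at t)
alpha = np.full((T, K, C), -np.inf)
for t in range(T):
    for k in range(min(K, t + 1)):
        seg = data[t - k : t + 1]
        prev = 0.0 if t - k == 0 else logsumexp(alpha[t - k - 1])  # uniform transitions
        for c in range(C):
            alpha[t, k, c] = log_emission(seg, c) + prev

# Backward sampling: draw (length, class) pairs from the end of the sequence
def sample(logp):
    p = np.exp(logp - logsumexp(logp)).ravel()
    return np.unravel_index(rng.choice(p.size, p=p / p.sum()), logp.shape)

t, segments = T - 1, []
while t >= 0:
    k, c = sample(alpha[t])
    segments.append((int(t - k), int(t), int(c)))  # (start, end, class)
    t -= k + 1
segments.reverse()
print(segments)
```

By construction, the sampled segments tile the whole sequence contiguously; replacing `log_emission` with a GP predictive likelihood and adding a duration model yields the structure of the full GP-HSMM sampler.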
PLoS Computational Biology | 29240763 | PMC5746283 | 10.1371/journal.pcbi.1005904 | Costs of task allocation with local feedback: Effects of colony size and extra workers in social insects and other multi-agent systems | Adaptive collective systems are common in biology and beyond. Typically, such systems require a task allocation algorithm: a mechanism or rule-set by which individuals select particular roles. Here we study the performance of such task allocation mechanisms measured in terms of the time for individuals to allocate to tasks. We ask: (1) Is task allocation fundamentally difficult, and thus costly? (2) Does the performance of task allocation mechanisms depend on the number of individuals? And (3) what other parameters may affect their efficiency? We use techniques from distributed computing theory to develop a model of a social insect colony, where workers have to be allocated to a set of tasks; however, our model is generalizable to other systems. We show, first, that the ability of workers to quickly assess demand for work in tasks they are not currently engaged in crucially affects whether task allocation is quickly achieved or not. This indicates that in social insect tasks such as thermoregulation, where temperature may provide a global and near instantaneous stimulus to measure the need for cooling, for example, it should be easy to match the number of workers to the need for work. In other tasks, such as nest repair, it may be impossible for workers not directly at the work site to know that this task needs more workers. We argue that this affects whether task allocation mechanisms are under strong selection. Second, we show that colony size does not affect task allocation performance under our assumptions. 
This implies that when effects of colony size are found, they are not inherent in the process of task allocation itself, but due to processes not modeled here, such as higher variation in task demand for smaller colonies, benefits of specialized workers, or constant overhead costs. Third, we show that the ratio of the number of available workers to the workload crucially affects performance. Thus, workers in excess of those needed to complete all tasks improve task allocation performance. This provides a potential explanation for the phenomenon that social insect colonies commonly contain inactive workers: these may be a ‘surplus’ set of workers that improves colony function by speeding up optimal allocation of workers to tasks. Overall our study shows how limitations at the individual level can affect group level outcomes, and suggests new hypotheses that can be explored empirically. | Related work

The process of task allocation and its typical outcome, division of labor, have received a lot of attention in the social insect literature. Empirical studies typically focus on determining the individual traits or experiences that shape, or at least correlate with, individual task specialization: e.g. when larger or older individuals are more likely to forage (e.g. [53]), or when interaction rates or positive experience in performing a task affect task choices [32, 64]. Generally, the re-allocation of workers to tasks after changes in the demand for work often needs to happen on a time scale that is shorter than the production of new workers (which, in bees or ants, takes weeks or months [65]), and indeed empirical studies have found that the traits of new workers do not seem to be modulated by colonies to match the need for work in particular tasks [66]. Therefore, more recent empirical and most modeling studies focus on finding simple, local behavior rules that generate individual task specialization (i.e. result in division of labor at the colony level), while simultaneously also enabling group-level responsiveness to the changing needs for work in different tasks [35, 67, 68]. For example, in classic papers, Bonabeau et al. [69] showed theoretically that differing task stimulus response thresholds among workers enable both task specialization and a flexible group-level response to changing task needs; and Tofts and others [70, 71] showed that if workers inhabit mutually-avoiding spatial fidelity zones, and tasks are spread over a work surface, this also enables both task specialization and flexible response to changing needs for work.

In this paper we examined how well we should expect task allocation to be able to match actual demands for work, and how this will depend on group size and the number of ‘extra’, thus inactive, workers. Neither of the modeling studies cited above explicitly considered whether task allocation is improved or hindered by colony size and inactive workers. In addition, while several studies find increasing levels of individual specialization in larger groups, the empirical literature overall does not show a consensus on how task allocation or the proportion of inactive workers is or should be affected by group size (reviewed in [14, 22]).

In general, few studies have considered the efficiency of the task allocation process itself and how it relates to the algorithm employed [72], often in the context of comparing bio-(ant-)inspired algorithms to algorithms of an entirely different nature [73, 74]. For example, Pereira and Gordon, assuming task allocation by social interactions, demonstrate that speed and accuracy of task allocation may trade off against each other, mediated by group size, and thus ‘optimal’ allocation of workers to tasks is not achieved [72]. Duarte et al. also find that task allocation by response thresholds does not achieve optimal allocation, and they also find no effect of colony size on task allocation performance [75].

Some papers on task allocation in social insects do not examine how group size per se influences task allocation, but look at factors such as the potential for selfish worker motives [76], which may be affected by group size, and which imply that the task allocation algorithm is not shaped by what maximizes collective outcomes. When interpreting modeling studies on task allocation, it is also important to consider whether the number of inactive workers is an outcome emerging from the particular task allocation mechanisms studied, or whether it is an assumption put into the model to study its effect on the efficiency of task allocation. In our study, we examined how an assumed level of ‘superfluous’, thus by definition ‘inactive’, workers would affect the efficiency of re-allocating workers to tasks after demands had changed.

While the above models concern the general situation of several tasks, such as building, guarding, and brood care, being performed in parallel but independently of one another, several published models of task allocation specifically consider the case of task partitioning [77], defined in the social insect literature as a situation where, in an assembly-line fashion, products of one task have to be directly passed to workers in the next task, such that a tight integration of the activity in different tasks is required. This is, for example, the case in wasp nest building, where water and pulp are collected by different foragers; these then have to be handed to a construction worker (who mixes the materials and applies them to the nest). Very limited buffering is possible because the materials are not stored externally to the workers, and a construction worker cannot proceed with its task until it receives a packet of water and pulp. One would expect different, better-coordinated mechanisms of task allocation to be at work in this case.
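For illustration, the fixed response-threshold rule attributed above to Bonabeau et al. [69] can be simulated in a few lines. All parameters below (thresholds, quit probability, demand rates, per-worker work rate) are toy assumptions, not values from the cited models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Worker i starts task j with probability s_j^2 / (s_j^2 + theta_ij^2),
# where s_j is the task stimulus and theta_ij the worker's threshold.
n_workers, n_tasks, T = 50, 2, 200
theta = rng.uniform(0.1, 2.0, size=(n_workers, n_tasks))  # heterogeneous thresholds
stimulus = np.ones(n_tasks)
demand = np.array([0.3, 0.3])        # stimulus growth per step if unattended
active = np.full(n_workers, -1)      # -1 = idle, otherwise the task index

for t in range(T):
    if t == T // 2:
        demand = np.array([0.6, 0.1])              # demand shifts towards task 0
    for i in range(n_workers):
        if active[i] >= 0 and rng.random() < 0.2:  # fixed per-step quit probability
            active[i] = -1
        if active[i] < 0:
            j = rng.integers(n_tasks)              # inspect one random task
            p = stimulus[j] ** 2 / (stimulus[j] ** 2 + theta[i, j] ** 2)
            if rng.random() < p:
                active[i] = j
    workers = np.bincount(active[active >= 0], minlength=n_tasks)
    stimulus = np.maximum(stimulus + demand - 0.02 * workers, 0.0)

print(workers)  # allocation follows the shifted demand: more workers on task 0
```

Because an unattended task's stimulus keeps rising until enough workers join, the head count redistributes toward the needier task after the demand shift without any central controller — the group-level flexibility described above.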
In task partitioning situations, a higher level of noise (variation in availability of materials, or in worker success at procuring them) increases the optimal task switching rate as well as the number of inactive workers, although this might reverse at very high noise levels [78]. Generally larger groups are expected to experience relatively lower levels of noise [79]. In this line of reasoning, inactive workers are seen as serving a function as ‘buffer’ (or ‘common stomach’, as they can hold materials awaiting work) [79, 80]; this implies that as noise or task switching rate increase, so does the benefit (and optimal number) of inactive workers. | [
"10221902",
"18330538",
"18031303",
"21888521",
"11112175",
"28127225",
"28877229",
"21233379",
"26213412",
"19018663",
"24417977",
"25489940",
"17629482",
"1539941",
"11162062",
"22661824",
"23666484",
"22079942",
"15024125",
"26218613"
] | [
{
"pmid": "10221902",
"title": "Notch signaling: cell fate control and signal integration in development.",
"abstract": "Notch signaling defines an evolutionarily ancient cell interaction mechanism, which plays a fundamental role in metazoan development. Signals exchanged between neighboring cells throu... |
Royal Society Open Science | 29308229 | PMC5748960 | 10.1098/rsos.170853 | Quantifying team cooperation through intrinsic multi-scale measures: respiratory and cardiac synchronization in choir singers and surgical teams | A highly localized data-association measure, termed intrinsic synchrosqueezing transform (ISC), is proposed for the analysis of coupled nonlinear and non-stationary multivariate signals. This is achieved based on a combination of noise-assisted multivariate empirical mode decomposition and short-time Fourier transform-based univariate and multivariate synchrosqueezing transforms. It is shown that the ISC outperforms six other combinations of algorithms in estimating degrees of synchrony in synthetic linear and nonlinear bivariate signals. Its advantage is further illustrated in the precise identification of the synchronized respiratory and heart rate variability frequencies among a subset of bass singers of a professional choir, where it distinctly exhibits better performance than the continuous wavelet transform-based ISC. We also introduce an extension to the intrinsic phase synchrony (IPS) measure, referred to as nested intrinsic phase synchrony (N-IPS), for the empirical quantification of physically meaningful and straightforward-to-interpret trends in phase synchrony. The N-IPS is employed to reveal physically meaningful variations in the levels of cooperation in choir singing and performing a surgical procedure. Both the proposed techniques successfully reveal degrees of synchronization of the physiological signals in two different aspects: (i) precise localization of synchrony in time and frequency (ISC), and (ii) large-scale analysis for the empirical quantification of physically meaningful trends in synchrony (N-IPS). | 2. Related work

In addition to our recently proposed data association measures, IPS and ICoh, there also exist several other synchrony measures. Cross-correlation is a simple measure of linear synchronization between two signals; hence, it cannot effectively capture nonlinear coupling behaviour, resulting in an undesirably low correlation coefficient. The phase synchronization index (PSI) proposed in [32] is obtained by considering the time-averaged phase difference between two signals, instead of considering the distribution of phase differences as employed in IPS and the proposed N-IPS (see §3.2 for more details). This technique can underestimate synchrony if the distribution of phase differences between two signals has more than one peak, and because phase differences can cancel out when averaged over time, resulting in an undesirably low value of PSI. Note that the estimation of PSI in IPS, N-IPS and [32] is achieved via the calculation of the instantaneous phase of the analytic signal generated using the Hilbert transform. A wavelet-based PSI was introduced in [33], whereby the instantaneous phase is calculated by convolving each signal with a complex wavelet function and the PSI is obtained in the same manner as in IPS and [32]. As a central frequency and a width of the wavelet function must be specified, this approach for estimating PSI is sensitive to phase synchrony only in a certain frequency band.

Synchrony can also be measured by means of information-theoretic concepts [34], whereby the mutual information between two signals indicates the amount of information about one signal that can be obtained by knowing the other, and vice versa. Synchrony quantified using this approach, however, lacks a direct physical interpretation.

General synchronization—the existence of a functional relationship between the systems generating the signals of interest—can be characterized by the conditional stability of the driven chaotic oscillator if the equations of the systems are known [35]. For real-world data, however, the model equations are typically unavailable.
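As a concrete reference point, the Hilbert-transform-based PSI described above is commonly estimated as the mean resultant length |⟨e^{jΔφ(t)}⟩| of the instantaneous phase differences. The sketch below, with arbitrary test signals, shows how a constant phase offset yields a PSI near 1 while a drifting phase yields a small value:

```python
import numpy as np

def inst_phase(s):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert transform)."""
    n = len(s)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0   # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(s) * h))

def psi(x, y):
    """Phase synchronization index: 1 = perfect phase locking, 0 = none."""
    dphi = inst_phase(x) - inst_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 10, 2000, endpoint=False)
locked = psi(np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 1.0 * t + 0.7))
drifting = psi(np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 1.37 * t))
print(round(locked, 2), round(drifting, 2))
```

Replacing the time average with the distribution of `dphi` over sliding windows is, per the text above, the direction taken by IPS and N-IPS to avoid cancellation when the phase-difference distribution is multimodal.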
The non-parametric method of mutual false nearest neighbours [36], which is based on delay embedding and conditional neighbours, has therefore been proposed to characterize general synchronization; yet this technique might produce errors if the signals of interest have more than one predominant time scale [37]. Phase and general synchronization can also be quantified using recurrence quantification analysis, whereby two signals are deemed: (i) phase synchronized if the distances between the diagonal lines in their respective recurrence plots coincide, and (ii) generally synchronized if their recurrence plots are very similar or approximately the same.

All of the described measures are limited to quantifying synchrony between the signals as a whole, and cannot yield TF representations of synchrony. Although such representations can be generated from the IPS algorithm through the Hilbert transform, we have empirically found that effective estimation of time-varying synchrony using IPS requires relatively long sliding windows; hence, its time localization is poor. Furthermore, a number of realizations of IPS must be performed for statistical relevance, thus inevitably blurring out TF representations of synchrony. On the other hand, the ISC proposed here generates highly localized TF representations of synchrony and is suitable for the analysis of synchrony in nonlinear and non-stationary multivariate signals. | [
"23847555",
"23431279",
"28154536",
"16235655",
"23204288",
"10619414",
"10060528",
"12796727",
"15465935",
"22200377",
"23083794",
"10783543"
] | [
{
"pmid": "23847555",
"title": "Music structure determines heart rate variability of singers.",
"abstract": "Choir singing is known to promote wellbeing. One reason for this may be that singing demands a slower than normal respiration, which may in turn affect heart activity. Coupling of heart rate vari... |
Royal Society Open Science | 29308250 | PMC5750017 | 10.1098/rsos.171200 | A brittle star-like robot capable of immediately adapting to unexpected physical damage | A major challenge in robotic design is enabling robots to immediately adapt to unexpected physical damage. However, conventional robots require considerable time (more than several tens of seconds) for adaptation because the process entails high computational costs. To overcome this problem, we focus on a brittle star—a primitive creature with expendable body parts. Brittle stars, most of which have five flexible arms, occasionally lose some of them and promptly coordinate the remaining arms to escape from predators. We adopted a synthetic approach to elucidate the essential mechanism underlying this resilient locomotion. Specifically, based on behavioural experiments involving brittle stars whose arms were amputated in various ways, we inferred the decentralized control mechanism that self-coordinates the arm motions by constructing a simple mathematical model. We implemented this mechanism in a brittle star-like robot and demonstrated that it adapts to unexpected physical damage within a few seconds by automatically coordinating its undamaged arms similar to brittle stars. Through the above-mentioned process, we found that physical interaction between arms plays an essential role for the resilient inter-arm coordination of brittle stars. This finding will help develop resilient robots that can work in inhospitable environments. Further, it provides insights into the essential mechanism of resilient coordinated motions characteristic of animal locomotion. | 2. Related works on brittle stars

2.1. Anatomical studies

Brittle stars have a circular body disc and typically five radiating arms (figure 1a). Each arm consists of a series of segments, each containing a roughly discoidal vertebral ossicle. Adjacent ossicles are linked by four muscle blocks, which enable the arm to bend in both the horizontal and vertical directions [28]. The arm movements are innervated by a simple distributed nervous system. Specifically, brittle stars have a ‘circumoral nerve ring’ that surrounds the disc and connects to ‘radial nerves’ running along the arms (figure 1b; electronic supplementary material, video S1, 1:13–1:31) [27]. Each arm communicates with its two adjacent arms via the nerve ring [22].

2.2. Behavioural studies

Brittle stars have a locomotion strategy distinguished from that of any other metazoan: arms with highly muscularized internal skeletons coordinate powerful strides for rapid movement across the ocean floor [23]. Despite the lack of a sophisticated centralized controller, brittle stars assign distinct roles to individual arms and coordinate their movements to propel the body [21–26]. When a stimulus is encountered and locomotion becomes necessary, each arm is assigned one of three roles in the gait corresponding with its position relative to the requisite direction of movement [21,22,25,26]. One arm is designated as the centre limb, two as the forelimbs and two as hindlimbs. The centre limb is the arm parallel to the direction of movement. The forelimbs are the primary structures that work in coordination to move the organism forward, and the hindlimbs take a minimal role in propulsion. When the centre limb is anterior to the direction of desired disc movement (during the locomotor mode referred to as ‘rowing’), the left and right forelimbs are adjacent to the centre limb, and the remaining two take the role of hindlimbs. When the centre limb is posterior to the direction of movement (referred to as ‘reverse rowing’), the forelimbs are the most anterior limbs, while the hindlimbs flank the centre limb [21,23]. Each arm is capable of assuming any of the three roles. Therefore, to change direction, the animal simply reassigns the roles of the arms [23].
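As an aside on how such rhythmic versus quiescent arm roles can be modelled, the active rotator unit dθ/dt = ω − a·sin θ (used in the mathematical studies discussed below) is oscillatory for a < ω and excitable — resting at sin θ* = ω/a — for a > ω. A minimal Euler-integration sketch with toy parameters:

```python
import numpy as np

def simulate(omega, a, theta0=0.1, dt=0.01, steps=5000):
    """Euler integration of the active rotator dθ/dt = ω − a·sin θ."""
    theta = theta0
    for _ in range(steps):
        theta += dt * (omega - a * np.sin(theta))
    return theta

oscillatory = simulate(omega=1.0, a=0.5)  # a < ω: the phase keeps rotating
excitable = simulate(omega=0.5, a=1.0)    # a > ω: the phase relaxes to arcsin(ω/a)
print(oscillatory, excitable)
```

Coupling several such units (e.g. through the phase differences of neighbouring arms) is the kind of extension used to model spontaneous role assignment; the single-unit dynamics above only demonstrate the oscillatory/excitatory dichotomy.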
This system allows these organisms to move equally well in every direction without rotating the body to turn, as would need to occur in a bilateral organism.

Further, brittle stars can seamlessly modify their locomotion strategy to accommodate a lost or inoperative arm [19,20,22,31]. For example, a brittle star can autotomize some of its arms and coordinate the remaining arms to evade predators or harmful stimuli (figure 1c; electronic supplementary material, video S1, 0:55–1:13) [19,20]. Brittle stars whose arms are amputated surgically in various ways can also maintain locomotion by coordinating the remaining arms [22].

2.3. Mathematical and robotic studies

Brittle star locomotion has also attracted attention in the fields of mathematics and robotics. For example, Lal et al. [32] developed a brittle star-like modular robot. They let the robots learn their movements by using a genetic algorithm so that the modules coordinate with each other and generate locomotion. However, as the ‘performance phase’ of the robot is completely separated from the ‘learning phase’, which requires a certain amount of time, the robot cannot behave adaptively in real time.

In contrast, we have proposed a decentralized control mechanism for the locomotion of brittle stars with five arms, based on a synthetic approach [25,26]. Spontaneous role assignment of rhythmic and non-rhythmic arm movements was modelled by using an active rotator model that can describe both oscillatory and excitatory properties. The proposed mechanism was validated via simulations [25] and robot experiments [26]. | [
"20130624",
"17110570",
"26017452",
"18929578",
"25845627",
"22573771",
"22617431",
"7285093",
"18006736",
"28412715",
"28325917",
"19545439"
] | [
{
"pmid": "17110570",
"title": "Resilient machines through continuous self-modeling.",
"abstract": "Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the... |
Scientific Reports | 29343692 | PMC5772550 | 10.1038/s41598-018-19194-4 | Symmetric Decomposition of Asymmetric Games | We introduce new theoretical insights into two-population asymmetric games allowing for an elegant symmetric decomposition into two single population symmetric games. Specifically, we show how an asymmetric bimatrix game (A,B) can be decomposed into its symmetric counterparts by envisioning and investigating the payoff tables (A and B) that constitute the asymmetric game, as two independent, single population, symmetric games. We reveal several surprising formal relationships between an asymmetric two-population game and its symmetric single population counterparts, which facilitate a convenient analysis of the original asymmetric game due to the dimensionality reduction of the decomposition. The main finding reveals that if (x,y) is a Nash equilibrium of an asymmetric game (A,B), this implies that y is a Nash equilibrium of the symmetric counterpart game determined by payoff table A, and x is a Nash equilibrium of the symmetric counterpart game determined by payoff table B. Also the reverse holds and combinations of Nash equilibria of the counterpart games form Nash equilibria of the asymmetric game. We illustrate how these formal relationships aid in identifying and analysing the Nash structure of asymmetric games, by examining the evolutionary dynamics of the simpler counterpart games in several canonical examples. | Related WorkThe most straightforward and classical approach to asymmetric games is to treat agents as evolving separately: one population per player, where each agent in a population interacts by playing against agent(s) from the other population(s), i.e. co-evolution21. This assumes that players of these games are always fundamentally attached to one role and never need to know/understand how to play as the other player. In many cases, though, a player may want to know how to play as either player. 
For example, a good chess player should know how to play as white or black. This reasoning inspired the role-based symmetrization of asymmetric games22. The role-based symmetrization of an arbitrary bimatrix game defines a new (extensive-form) game where, before choosing actions, the roles of the two players are decided by uniform random chance. If two roles are available, an agent is assigned one specific role with probability 1/2. Then, the agent plays the game under that role and collects the role-specific payoff appropriately. A new strategy space is defined, which is the product of both players’ strategy spaces, and a new payoff matrix is constructed, computing (expected) payoffs for each combination of pure strategies that could arise under the different roles. There are relationships between the sets of evolutionarily stable strategies and rest points of the replicator dynamics between the original and symmetrized game19,23. This single-population model forces the players to be general: able to devise a strategy for each role, which may unnecessarily complicate algorithms that compute strategies for such players. In general, the payoff matrix in the resulting role-based symmetrization is n! (n being the number of agents) times larger due to the number of permutations of player role assignments. There are two-population variants that formulate the problem slightly differently: a new matrix that encapsulates both players’ utilities assigns 0 utility to combinations of roles that are not in one-to-one correspondence with players24. This too, however, results in an unnecessarily larger (albeit sparse) matrix. Lastly, there are approaches with structured asymmetry that arises due to ecological constraints such as locality in a network and genotype/genetic relationships between population members25. Similarly here, replicator dynamics and their properties are derived by transforming the payoff matrix into a larger symmetric matrix. Our primary motivation is to enable analysis techniques for asymmetric games. However, we do this by introducing new symmetric counterpart dynamics rather than using standard dynamics on a symmetrised game. Therefore, the traditional role interpretation, as well as any method that enlarges the game for the purpose of obtaining symmetry, is unnecessarily complex for our purposes. 
Consequently, we consider the original co-evolutionary interpretation, and derive new (lower-dimensional) strategy space mappings. | [
"23519283",
"27892509",
"16843499",
"7412323",
"26308326",
"27161141",
"28355181"
] | [
{
"pmid": "23519283",
"title": "Evolution of collective action in adaptive social structures.",
"abstract": "Many problems in nature can be conveniently framed as a problem of evolution of collective cooperative behaviour, often modelled resorting to the tools of evolutionary game theory in well-mixed p... |
Plant Methods | 29375647 | PMC5773030 | 10.1186/s13007-018-0273-z | The use of plant models in deep learning: an application to leaf counting in rosette plants | Deep learning presents many opportunities for image-based plant phenotyping. Here we consider the capability of deep convolutional neural networks to perform the leaf counting task. Deep learning techniques typically require large and diverse datasets to learn generalizable models without providing a priori an engineered algorithm for performing the task. This requirement is challenging, however, for applications in the plant phenotyping field, where available datasets are often small and the costs associated with generating new data are high. In this work we propose a new method for augmenting plant phenotyping datasets using rendered images of synthetic plants. We demonstrate that the use of high-quality 3D synthetic plants to augment a dataset can improve performance on the leaf counting task. We also show that the ability of the model to generate an arbitrary distribution of phenotypes mitigates the problem of dataset shift when training and testing on different datasets. Finally, we show that real and synthetic plants are significantly interchangeable when training a neural network on the leaf counting task.Electronic supplementary materialThe online version of this article (10.1186/s13007-018-0273-z) contains supplementary material, which is available to authorized users. | Related workThe use of synthetic or simulation data has been explored in several visual learning contexts, including pose estimation [29] as well as viewpoint estimation [30]. In the plant phenotyping literature, models have been used as testing data to validate image-based root system descriptions [23], as well as to train machine learning models for root description tasks [31]. 
However, when using synthetic images, the model was both trained and tested on synthetic data, leaving it unclear whether the use of synthetic roots could offer advantages to the analysis of real root systems, or how a similar technique would perform on shoots. The specialized root system models used by Benoit et al. [23] and Lobet et al. [31] are not applicable to tasks involving the aerial parts of a plant—the models have not been generalized to produce structures other than roots. Nonetheless, for image-based tasks Benoit et al. [23] were the first to employ a model [24] based on the L-system formalism. Because of its effectiveness in modelling the structure and development of plants, we chose the same formalism for creating our Arabidopsis rosette model. | [
"22074787",
"28066963",
"28736569",
"23286457",
"14732445",
"22235985",
"25469374",
"5659071",
"22670147",
"28220127",
"22942389"
] | [
{
"pmid": "22074787",
"title": "Phenomics--technologies to relieve the phenotyping bottleneck.",
"abstract": "Global agriculture is facing major challenges to ensure global food security, such as the need to breed high-yielding crops adapted to future climates and the identification of dedicated feedsto... |
Scientific Reports | 29374175 | PMC5786031 | 10.1038/s41598-018-19440-9 | Three-dimensional reconstruction and NURBS-based structured meshing of coronary arteries from the conventional X-ray angiography projection images | Despite its two-dimensional nature, X-ray angiography (XRA) has served as the gold standard imaging technique in the interventional cardiology for over five decades. Accordingly, demands for tools that could increase efficiency of the XRA procedure for the quantitative analysis of coronary arteries (CA) are constantly increasing. The aim of this study was to propose a novel procedure for three-dimensional modeling of CA from uncalibrated XRA projections. A comprehensive mathematical model of the image formation was developed and used with a robust genetic algorithm optimizer to determine the calibration parameters across XRA views. The frames correspondences between XRA acquisitions were found using a partial-matching approach. Using the same matching method, an efficient procedure for vessel centerline reconstruction was developed. Finally, the problem of meshing complex CA trees was simplified to independent reconstruction and meshing of connected branches using the proposed nonuniform rational B-spline (NURBS)-based method. Because it enables structured quadrilateral and hexahedral meshing, our method is suitable for the subsequent computational modelling of CA physiology (i.e. coronary blood flow, fractional flow reverse, virtual stenting and plaque progression). Extensive validations using digital, physical, and clinical datasets showed competitive performances and potential for further application on a wider scale. 
| Related work

The majority of the available methods for reconstructing CA from XRA are semi-automatic and consist of five steps: (1) pairing of frames acquired from different views; (2) vessel segmentation, decomposition and tracking in the XRA dynamic runs; (3) calibration of the parameters defining the device orientations; (4) modeling of the CA centerline from its synchronized segmentations; and (5) reconstruction of the CA tree surface. In this part of the section, we briefly review state-of-the-art approaches for solving these reconstruction tasks (see also Table 1).

Table 1. Comparative overview of the features provided by the studies.

Method | Calibration | Centerline | Lumen approximation: cross-section (# views) | 4D | Delivered mesh format | Surface validation: approach (referent)
Proposed | Opt. I & E | PM (B-Spline) | C (2), E (2–4), P (4+) | + | PS (NURBS, TRI, TET, HEX, QUAD) | DP QA (GT), PP QA (CT), RP QA (CT)
Chen & Carroll7 | Opt. E | EM (poly) | C (2) | — | PCT | RP VA
Cañero et al.10 | + | AC (poly) | — | — | N/A | —
Chen & Carroll16 | Opt. E | EM (poly) | C (2) | + | PCT | RP VA
Andriotis et al.18 | Opt. E | EM (poly) | C (2) | + | PCT | PP QA (CT), RP QA (CT)
Yang et al.9 | Opt. I & E | EM (poly) | C (2) | — | PCT | PP VA (GT), RP VA
Zheng et al.14 | Opt. E | AC (poly) | — | + | N/A | —
Yang et al.22 | Opt. I & E | AC (poly) | N/A | + | PCT | DP QA (GT), RP VA
Cong et al.12 | + | AC (poly) | E (2–5) | — | PCT | DP QA (GT), RP VA

Abbreviations: I-intrinsic, E-extrinsic, Opt.-optimization, +-precalibrated, AC-active contours, PM-partial matching, EM-epipolar matching, C-circle, E-ellipse, P-polyline, PS-parametric surface, PCT-point cloud triangulation, DP-digital phantom, PP-physical phantom, RP-real patient, GT-ground truth, CT-computed tomography, VA-visual assessment, QA-quantitative assessment. | [
"19573711",
"2210785",
"10909927",
"15822802",
"19414289",
"12564886",
"20053531",
"12774895",
"21227652",
"12872946",
"15575409",
"19060360",
"12200927",
"18583730",
"24503518",
"12194670",
"11607632",
"8006269",
"24771202",
"15191145",
"21327913",
"20300489",
"20964209"... | [
{
"pmid": "19573711",
"title": "Coronary angiography: the need for improvement and the barriers to adoption of new technology.",
"abstract": "Traditional coronary angiography presents a variety of limitations related to image acquisition, content, interpretation, and patient safety. These limitations we... |
Scientific Reports | 29391406 | PMC5794926 | 10.1038/s41598-018-20037-5 | Advanced Steel Microstructural Classification by Deep Learning Methods | The inner structure of a material is called microstructure. It stores the genesis of a material and determines all its physical and chemical properties. While microstructural characterization is widely spread and well known, the microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since the microstructure could be a combination of different phases or constituents with complex substructures its automatic classification is very challenging and only a few prior studies exist. Prior works focused on designed and engineered features by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification in the examples of certain microstructural constituents of low carbon steel. This novel method employs pixel-wise segmentation via Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method of 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and first of all objective way for the difficult task of steel quality appreciation. | Related WorksBased on the instrument used for imaging, we can categorize the related works into Light Optical Microscopy (LOM) and Scanning Electron Microscopy (SEM) imaging. High-resolution SEM imaging is very expensive compared with LOM imaging in terms of time and operating costs. 
However, low-resolution LOM imaging makes distinguishing microstructures based on their substructures even more difficult. Nowadays, the task of microstructural classification is performed by an expert observing a sample image and assigning one of the microstructure classes to it. As experts differ in their level of expertise, their assessments of the same sample can differ as well. Nevertheless, thanks to highly professional human experts, this task has so far been accomplished with low error. Regarding automatic microstructural classification, microstructures are typically defined by means of standard procedures in metallography. Vander Voort11 used Light Optical Microscopy (LOM), but without any learning of the microstructural features, which is actually still the state of the art in materials science for the classification of microstructures in most institutes as well as in industry. His method defined only procedures with which an expert can decide on the class of the microstructure. Moreover, additional chemical etching12 made it possible to distinguish second phases using different contrasts; however, etching is constrained to empirical methods and cannot be used to distinguish the various phases in steels with more than two phases. Nowadays, different techniques and approaches have made morphological and crystallographic properties accessible4,13–18. Approaches for the identification of phases in multiphase steel rely on these methods, aim at the development of advanced metallographic methods for morphological analysis using the common characterization techniques, and are accompanied by pixel- and context-based image analysis steps. Previously, Velichko et al.19 proposed a method based on data mining, extracting morphological features followed by a feature classification step on cast iron using Support Vector Machines (SVMs) - a well-established method in the field of machine learning20. 
More recently, Pauly et al.21 followed the same approach, applying it to a contrasted and etched steel dataset acquired by SEM and LOM imaging, which is also used in this work. However, it could only reach 48.89% accuracy in microstructural classification on the given dataset with four different classes, due to the high complexity of the substructures and insufficiently discriminative features. Deep Learning methods have been applied to object classification and image semantic segmentation in many applications. AlexNet, a CNN with 7 layers proposed by Alex Krizhevsky et al.22, was the winner of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)23 in 2012, one of the best-known object classification challenges in the computer vision community. AlexNet improved the accuracy on ILSVRC 2012 by 10 percentage points, a huge increase in this challenge, and this result is a main reason that Deep Learning methods have drawn so much attention. VGGNet, a CNN architecture proposed by Simonyan et al.8, has even more layers than AlexNet and achieves better accuracy. Fully Convolutional Neural Network (FCNN) architectures, proposed by Long et al.24, are among the first and best-known works to adapt object classification CNNs to semantic segmentation tasks. FCNNs and their extensions are currently the state of the art in semantic segmentation on a range of benchmarks, including the Pascal VOC image segmentation challenge25 and Cityscapes26. Our method transfers the success of Deep Learning for segmentation tasks to the challenging problem of microstructural classification in the context of steel quality appraisal. It is the first demonstration of a Deep Learning technique in this context and, in particular, shows substantial gains over the previous state of the art. | [] | []
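The max-voting scheme described in the steel-microstructure abstract above (turning pixel-wise FCNN predictions into a single object-level label) amounts to a majority vote over the predicted classes of an object's pixels. The class names and per-pixel predictions below are made up for illustration.

```python
from collections import Counter

def max_vote(pixel_labels):
    """Assign an object the class predicted for the majority of its pixels."""
    counts = Counter(pixel_labels)
    return counts.most_common(1)[0][0]

# Hypothetical per-pixel predictions inside one segmented object.
pixels = ["martensite"] * 60 + ["pearlite"] * 25 + ["bainite"] * 15
assert max_vote(pixels) == "martensite"
```

Voting over all pixels of an object makes the final label robust to scattered pixel-level misclassifications, which is the point of combining segmentation with max-voting.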
Frontiers in Neuroscience | 29467600 | PMC5808221 | 10.3389/fnins.2017.00754 | White Matter Tract Segmentation as Multiple Linear Assignment Problems | Diffusion magnetic resonance imaging (dMRI) allows to reconstruct the main pathways of axons within the white matter of the brain as a set of polylines, called streamlines. The set of streamlines of the whole brain is called the tractogram. Organizing tractograms into anatomically meaningful structures, called tracts, is known as the tract segmentation problem, with important applications to neurosurgical planning and tractometry. Automatic tract segmentation techniques can be unsupervised or supervised. A common criticism of unsupervised methods, like clustering, is that there is no guarantee to obtain anatomically meaningful tracts. In this work, we focus on supervised tract segmentation, which is driven by prior knowledge from anatomical atlases or from examples, i.e., segmented tracts from different subjects. We present a supervised tract segmentation method that segments a given tract of interest in the tractogram of a new subject using multiple examples as prior information. Our proposed tract segmentation method is based on the idea of streamline correspondence i.e., on finding corresponding streamlines across different tractograms. In the literature, streamline correspondence has been addressed with the nearest neighbor (NN) strategy. Differently, here we formulate the problem of streamline correspondence as a linear assignment problem (LAP), which is a cornerstone of combinatorial optimization. With respect to the NN, the LAP introduces a constraint of one-to-one correspondence between streamlines, that forces the correspondences to follow the local anatomical differences between the example and the target tract, neglected by the NN. 
In the proposed solution, we combined the Jonker-Volgenant algorithm (LAPJV) for solving the LAP with an efficient way of computing the nearest neighbors of a streamline, which massively reduces the total amount of computation needed to segment a tract. Moreover, we propose a ranking strategy to merge correspondences coming from different examples. We validate the proposed method on tractograms generated from the human connectome project (HCP) dataset and compare the segmentations with the NN method and the ROI-based method. The results show that LAP-based segmentation is vastly more accurate than ROI-based segmentation and substantially more accurate than the NN strategy. We provide a Free/OpenSource implementation of the proposed method. | 2. Related works

2.1. Supervised tract segmentation

Here we review the literature on supervised tractogram segmentation and on the linear assignment problem. In order to organize the body of work in this field, we articulate the discussion on supervised tract segmentation along these five topics: alignment, embedding space, similarity/distance, correspondence techniques, and refinement step.

2.1.1. Alignment

In supervised tract segmentation, tractograms are initially aligned to an atlas. Both voxel-based and streamline-based atlases have been used in the literature, e.g., the white matter ROI-based anatomical atlas (Maddah et al., 2005), the high dimensional atlas (O'Donnell and Westin, 2007; Vercruysse et al., 2014), the example-based single atlas (Guevara et al., 2012; Labra et al., 2017) and the example-based multi-atlas (Yoo et al., 2015). To the best of our knowledge, the specific step of alignment has been conducted with standard methods: in most cases with voxel-based linear registration (O'Donnell and Westin, 2007; Guevara et al., 2012; Yoo et al., 2015) and in the others with nonlinear voxel-based registration (Vercruysse et al., 2014).

2.1.2. 
Embedding space

Streamlines are complex geometrical objects, each with a different number of points. They are unfit to be directly given as input to many efficient data analysis algorithms, which instead require vectors all with the same number of dimensions. Tractograms are large collections of streamlines, from hundreds of thousands to millions of streamlines, and their analysis is often limited by the required computational cost. A common preprocessing step before using algorithms like clustering, or nearest neighbor, is to transform streamlines into vectors, a process called Euclidean embedding. Different authors opted for different embedding approaches, like the spectral embedding (O'Donnell and Westin, 2007), the re-sampling of all streamlines to the same number of points (Guevara et al., 2012; Yoo et al., 2015; Labra et al., 2017), the use of B-splines with re-sampling (Maddah et al., 2005) and the dissimilarity representation (Olivetti and Avesani, 2011). Re-sampling all streamlines to a fixed number of points is the most common approach to obtain the embedding. In principle, up-sampling/down-sampling to a particular number of points may cause the loss of information. On the other hand, spectral embedding has a high computational cost. The dissimilarity representation has shown remarkable results in terms of machine learning applications (Olivetti et al., 2013) and exploration of tractograms (Porro-Muñoz et al., 2015), at a moderate computational cost.

2.1.3. Streamline distance

In order to find corresponding streamlines from one tractogram to another, the definition of the streamline distance plays a crucial role. Most commonly, the corresponding streamline in the new tractogram is defined as the closest one, for the given streamline distance function. Similarly, when doing clustering for tract segmentation, the streamline distance function is a fundamental building block. 
Different streamline distance functions have been used in the supervised tract segmentation literature, e.g., the minimum closest point (MCP) distance (O'Donnell and Westin, 2007), the symmetric minimum average distance (MAM) (Olivetti and Avesani, 2011), the minimum average flip distance (MDF) (Yoo et al., 2015; Garyfallidis et al., in press), the Hausdorff distance (Maddah et al., 2005), normalized Euclidean distances (Labra et al., 2017) and the Mahalanobis distance (Yoo et al., 2015).

2.1.4. Correspondence technique

One crucial aspect of supervised tract segmentation is the mechanism to find the corresponding streamline between the tractograms of different subjects, in order to transfer anatomical knowledge. A common approach for addressing such a problem is to use the nearest neighbor strategy, i.e., finding the nearest streamline or centroid in the atlas and labeling the streamlines of the new subject based on that. In O'Donnell and Westin (2007) and O'Donnell et al. (2017), a high dimensional atlas was reconstructed from multiple tractograms. Then the new subject was aligned with the atlas and the closest cluster centroids from the atlas were computed to assign the anatomical label. In Guevara et al. (2012), the nearest centroids of the new subject were computed from a single atlas built from multiple subjects, with the normalized Euclidean distance. Recently, a faster implementation has been proposed in Labra et al. (2017). There, they proposed to label each single streamline instead of cluster centroids and to accelerate the computation by filtering the streamlines in advance, using properties of the normalized Euclidean distance. A limitation is that an appropriate threshold has to be defined for each tract to be segmented. Similarly, in Yoo et al. (2015), the nearest neighbor strategy is used to find corresponding streamlines between those of the tractogram of a new subject and those of multiple example-subjects (12, in their experiments). 
Again, two thresholds, i.e., a distance threshold and a voting threshold, are required to be set in order to obtain the segmentation. The proposed implementation requires a GPU. A different approach, based on graph matching instead of the nearest neighbor, was proposed by us for tractogram alignment, see Olivetti et al. (2016). Such an idea could be extended to the tract segmentation problem.

2.1.5. Refinement

After segmentation, in order to improve the accuracy of the segmented tract, some authors propose a refinement step, for example, to identify and remove outliers. In Mayer et al. (2011), a tree-based refinement step was introduced. Initially, they padded the segmented tract with the nearest neighbors and then used a probabilistic boosting tree classifier to identify the outliers. Another approach to increase the accuracy of the segmented tract is majority voting (Rohlfing et al., 2004; Jin et al., 2012; Vercruysse et al., 2014; Yoo et al., 2015). The main concept of majority voting is to reach agreement on the segmented streamlines (or voxels) coming from different examples, usually removing the infrequent ones. The accuracy of the outcome after the refinement step is closely related to the number of examples. This relation has been investigated in the vast literature on multi-atlas segmentation (MAS). The intuitive idea is that the behavior of the segmentation error is connected to the size of the atlas dataset. A first attempt to characterize such a relationship, with a first-principles approach, was proposed by Awate and Whitaker (2014). In their proposal the size of the atlases is predicted against the segmentation error by formulating the multi-atlas segmentation as a nonparametric regression problem. More recently, Zhuang and Shen (2016) combined the idea of multi-atlas with multi-modality and multi-scale patches for heart segmentation. 
For a comprehensive survey of multi-atlas segmentation in the broader field of medical imaging, see Iglesias and Sabuncu (2015).

2.2. Linear assignment problem solutions

The linear assignment problem (LAP) computes the optimal one-to-one assignment between the N elements of two sets of objects, minimizing the total cost. The LAP takes as input the cost matrix that describes the cost of assigning each object of the first set to each object of the second set. Various algorithms for solving the LAP in polynomial time have been proposed in the literature. A comprehensive review of the proposed algorithms can be found in Burkard et al. (2009) and Burkard and Cela (1999). An extensive computational comparison among eight well-known algorithms is in Dell'Amico and Toth (2000). The algorithms are: Hungarian, signature, auction, pseudoflow, interior point method, and Jonker-Volgenant (LAPJV). According to that survey and to Serratosa (2015), the Hungarian algorithm (Kuhn, 1955) and the Jonker-Volgenant algorithm (LAPJV, Jonker and Volgenant, 1987) are the most efficient ones, with time complexity O(N^3). Nevertheless, in practice, LAPJV is much faster than the Hungarian algorithm, as reported in Serratosa (2015) and Dell'Amico and Toth (2000). This occurs because, despite the same time complexity class, i.e., O(N^3), the respective constants of the 3rd-order polynomials describing the exact running time of each algorithm are much different, giving a large advantage to LAPJV. We have directly observed this behavior in our experiments with LAPJV, compared to those with the Hungarian algorithm that we published in Sharmin et al. (2016). According to Dell'Amico and Toth (2000) and Burkard et al. (2009), LAPJV is faster than the other algorithms in multiple applications, see also Bijsterbosch and Volgenant (2010). Moreover, in many practical applications, the two sets of objects1 on which to compute the LAP have different sizes, i.e., the related cost matrix is rectangular. 
In Bijsterbosch and Volgenant (2010), the rectangular version of LAPJV was proposed with a more efficient and robust solution than the original one in Jonker and Volgenant (1987). In this work, we adopted the rectangular version of LAPJV because of its efficiency and because we need to compute the correspondence between an example tract and the target tractogram which, clearly, have different number of streamlines. | [
"24157921",
"24802528",
"8130344",
"12023417",
"12482069",
"18041270",
"18357821",
"24600385",
"23248578",
"28712994",
"23668970",
"28034765",
"22414992",
"26201875",
"24297904",
"24821529",
"27722821",
"20716499",
"23631987",
"27981029",
"18041271",
"27994537",
"15325368... | [
{
"pmid": "24157921",
"title": "Automated longitudinal intra-subject analysis (ALISA) for diffusion MRI tractography.",
"abstract": "Fiber tractography (FT), which aims to reconstruct the three-dimensional trajectories of white matter (WM) fibers non-invasively, is one of the most popular approaches for... |
Scientific Reports | 29467401 | PMC5821733 | 10.1038/s41598-018-21715-0 | dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data | The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability. | Related worksLike dynGENIE3, many network inference approaches for time series data are based on an ODE model of the type (7) 8,21. These methods mainly differ in the terms present in the right-hand side of the ODE (such as decay rates or the influence of external perturbations), the mathematical form of the models fj, the algorithm used to train these models, and the way a network is inferred from the resulting models. dynGENIE3 adopts the same ODE formulation as in the Inferelator approach16: each ODE includes a term representing the decay of the target gene and the functions fj take as input the expression of all the genes at some time point t. 
In the specific case of dynGENIE3, the functions fj are represented by ensembles of regression trees, which are trained to minimize the least-squares error using the Random forest algorithm, and a network is inferred by thresholding variable importance scores derived from the Random forest models. As for standard GENIE3, dynGENIE3 has a reasonable computational complexity, which is at worst O(prN log N), where p is the total number of genes, r is the number of candidate regulators and N is the number of observations. In comparison, most methods in the literature (including Inferelator) assume that the models fj are linear and train these models by jointly maximizing the quality of the fit and minimizing some sparsity-inducing penalty (e.g., using an L1 penalty term or some appropriate Bayesian priors). After training the linear models, a network can be obtained by analysing the weights within the models, many of which have been forced to zero during training. In contrast to these methods, dynGENIE3 does not make any prior hypothesis about the form of the fj models. This is an advantage in terms of representational power, but it could also result in a higher variance, and therefore worse performance because of overfitting, especially when data are scarce. A few methods also exploit non-linear/non-parametric models within a similar framework, among which Jump320, OKVAR-Boost22 and CSI13. Like dynGENIE3, Jump3 incorporates a (different) dynamical model within a non-parametric, tree-based approach. In the model used by Jump3, the functions fj represent latent variables, which necessitated the development of a new type of decision tree, whereas Random forests can be applied as such in dynGENIE3. One drawback of Jump3 is its high computational complexity with respect to the number N of observations, being O(N^4) in the worst-case scenario. Moreover, Jump3 cannot be used for the joint analysis of time series and steady-state data. 
OKVAR-Boost jointly represents the models fj for all genes using an ensemble of operator-valued kernel regression models trained with a randomized boosting algorithm. The network structure is then estimated from the resulting model by computing its Jacobian matrix. One of the drawbacks of this method with respect to dynGENIE3 is that it requires tuning several meta-parameters; the authors have nevertheless proposed an original approach to tune them based on a stability criterion. Finally, CSI is a Bayesian inference method that learns the fj models in the form of Gaussian processes. Since learning Gaussian processes does not embed any feature selection mechanism, network inference is performed in CSI by searching combinatorially through all the potential sets of regulators for each gene in turn and constructing a posterior probability distribution over these potential sets of regulators. As a consequence, the complexity of the method is O(pN^3 r^d/(d − 1)!), where d is a parameter defining the maximum number of regulators per gene8. Its high complexity makes CSI unsuitable when the number of candidate regulators (r) or the number of observations (N) is too high. Supplementary Table S1 compares the running times of dynGENIE3 and CSI for different datasets. The most striking difference is observed when inferring the DREAM4 100-gene networks: while dynGENIE3 takes only several minutes to infer one network, CSI can take more than 48 hours per target gene. The CSI algorithm can be parallelised over the different target genes (like dynGENIE3), but even in that case the computational burden remains an issue when inferring large networks containing thousands of genes and hundreds of transcription factors (such as the E. coli network). | [
"22805708",
"17214507",
"20927193",
"22796662",
"23226586",
"24176667",
"24786523",
"16686963",
"21049040",
"20186320",
"24400020",
"19961876",
"20949005",
"24529382",
"17224916",
"21036869",
"25896902",
"20461071",
"23203884",
"24243845",
"27682842",
"11911796"
] | [
{
"pmid": "22805708",
"title": "Studying and modelling dynamic biological processes using time-series gene expression data.",
"abstract": "Biological processes are often dynamic, thus researchers must monitor their activity at multiple time points. The most abundant source of information regarding such ... |
Frontiers in Neurorobotics | 29515386 | PMC5825909 | 10.3389/fnbot.2018.00004 | Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors | In order to safely navigate and orient in their local surroundings autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammal retinae. All pixels of DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges which in man-made environments often form lines they were chosen as feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low latency robotics. 
The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. | 1.3Related WorkThere is a variety of algorithms to extract lines from frames, most notably the Hough transform (Duda and Hart, 1972; Matas et al., 2000). In Grompone von Gioi et al. (2012), a line segment detector (called LSD) is proposed that works stably without parameter tuning. In Section 3, we compare the results of these algorithms with our method. Different methods that use line segments for interframe tracking are described in Neubert et al. (2008), Hirose and Saito (2012), and Zhang and Koch (2013). In recent years, several trackers for different shapes have been developed for event-based systems. An early example of this can be found in Litzenberger et al. (2006). Based on this, Delbruck and Lang (2013) show how to construct a robotic goalie with a fast reaction time of only 3 ms. Conradt et al. (2009) focus explicitly on detecting lines from events and describe a pencil balancer; estimates of the pencil position are performed in Hough space. In a more recent work, Brändli et al. (2016) describe a line segment detector to detect multiple lines in arbitrary scenes. They use Sobel operators to find the local orientation of events and cluster events with similar angles to form line segments. 
Events are stored in a circular buffer of fixed size, so that old events are overwritten when new ones arrive, and the position and orientation of lines are updated through this process; however, the authors do not focus on tracking (see Section 3 for a comparison with the method proposed here). There are also increasing efforts to track other basic geometric shapes in event-based systems: corners have been a focus of multiple works, as they generate distinct features that do not suffer from the aperture problem, can be tracked quickly and find use in robotic navigation. Clady et al. (2015) use a corner matching algorithm based on a combination of geometric constraints to detect events caused by corners and reduce the event stream to a corner event stream. Vasco et al. (2016) transfer the well-known Harris corner detector (Harris and Stephens, 1988) to the event domain, while Mueggler et al. (2017) present a rapid corner detection method inspired by FAST (Rosten and Drummond, 2006), which is capable of processing more than one million events per second. Lagorce et al. (2015) introduce a method to track visual features using different kernels such as Gaussians, Gabors, or other hand-designed kernels. Tedaldi et al. (2016) use a hybrid approach combining frames and the event stream. It does not require features to be specified beforehand but extracts them using the grayscale frames. The extracted features are subsequently tracked asynchronously using the stream of events. This permits smooth tracking through time between two frames. | [
"28704206",
"26120965",
"21869365",
"25828960",
"24311999",
"25248193"
] | [
{
"pmid": "28704206",
"title": "An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments.",
"abstract": "OBJECTIVE\nMany computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platfo... |
PLoS Computational Biology | 29447153 | PMC5831643 | 10.1371/journal.pcbi.1005935 | A model of risk and mental state shifts during social interaction | Cooperation and competition between human players in repeated microeconomic games offer a window onto social phenomena such as the establishment, breakdown and repair of trust. However, although a suitable starting point for the quantitative analysis of such games exists, namely the Interactive Partially Observable Markov Decision Process (I-POMDP), computational considerations and structural limitations have limited its application, and left unmodelled critical features of behavior in a canonical trust task. Here, we provide the first analysis of two central phenomena: a form of social risk-aversion exhibited by the player who is in control of the interaction in the game; and irritation or anger, potentially exhibited by both players. Irritation arises when partners apparently defect, and it potentially causes a precipitate breakdown in cooperation. Failing to model one’s partner’s propensity for it leads to substantial economic inefficiency. We illustrate these behaviours using evidence drawn from the play of large cohorts of healthy volunteers and patients. We show that for both cohorts, a particular subtype of player is largely responsible for the breakdown of trust, a finding which sheds new light on borderline personality disorder. | Earlier and related workTrust games of various kinds have been used in behavioural economics and psychology research (see [34]). In particular, the MRT we used was based on variants in several earlier studies (see examples in [17, 35, 36]).The current MRT was first modeled using regression models (see [16]) of various depths: one step models for the increase/decrease of the amount sent to the partner and models which track the effects of more distant investments/repayments. 
These models generated signals of increases and decreases in investments and returns that were correlated with fMRI data. One seminal study on the effect of BPD in the trust game by King-Casas et al. (see [6]) included the concept of “coaxing” the partner back into cooperation (repaying substantially more than the fair split) whenever trust was running low, as signified by small investments. Furthermore, an earlier study (see [37]) used clustering to associate trust game investment and repayment levels with various clinical populations. An I-POMDP generative model for the trust task, which included inequality aversion, inference and theory-of-mind level, was previously proposed [8]. This model was later refined rather substantially to include faster calculation and planning as a parameter [10]. The I-POMDP framework itself has been used in a considerable number of studies. Notable among these are investigations of the depth of tactical reasoning directly in competitive games (see [38–40]). It has also been used for deriving optimal strategies in repeated games (see [41]). The benefits of a variant of the framework for fitting human behavioural data were recently exhibited in [42]. | [
"18255038",
"23300423",
"26053429",
"16470710",
"15802598",
"11562505",
"16588946",
"23325247",
"20510862",
"21556131",
"15631562",
"20975934"
] | [
{
"pmid": "18255038",
"title": "Self responses along cingulate cortex reveal quantitative neural phenotype for high-functioning autism.",
"abstract": "Attributing behavioral outcomes correctly to oneself or to other agents is essential for all productive social exchange. We approach this issue in high-f... |
Frontiers in Genetics | 29559993 | PMC5845696 | 10.3389/fgene.2018.00039 | Griffin: A Tool for Symbolic Inference of Synchronous Boolean Molecular Networks | Boolean networks are important models of biochemical systems, located at the high end of the abstraction spectrum. A number of Boolean gene networks have been inferred following essentially the same method. Such a method first considers experimental data for a typically underdetermined “regulation” graph. Next, Boolean networks are inferred by using biological constraints to narrow the search space, such as a desired set of (fixed-point or cyclic) attractors. We describe Griffin, a computer tool enhancing this method. Griffin incorporates a number of well-established algorithms, such as Dubrova and Teslenko's algorithm for finding attractors in synchronous Boolean networks. In addition, a formal definition of regulation allows Griffin to employ “symbolic” techniques, able to represent both large sets of network states and Boolean constraints. We observe that when the set of attractors is required to be an exact set, prohibiting additional attractors, a naive Boolean coding of this constraint may be unfeasible. Such cases may be intractable even with symbolic methods, as the number of Boolean constraints may be astronomically large. To overcome this problem, we employ an Artificial Intelligence technique known as “clause learning” considerably increasing Griffin's scalability. Without clause learning only toy examples prohibiting additional attractors are solvable: only one out of seven queries reported here is answered. With clause learning, by contrast, all seven queries are answered. We illustrate Griffin with three case studies drawn from the Arabidopsis thaliana literature. Griffin is available at: http://turing.iimas.unam.mx/griffin. | 3.2. Related workAs a first comparison between our work and related articles, it is important to point out a difference in the type of input data. 
In this work, Griffin input is composed of (partial) information about the network topology (R-graphs) along with other data representing biological constraints. R-graphs contain information on genetic connectivity that was not necessarily inferred from data obtained by direct measurement of gene expression or protein interaction. By contrast, inference methods proposed by other authors (e.g., Laubenbacher and Stigler, 2004) have an input composed of time series together with other data that capture information on the network dynamics. Time series, unlike the R-graphs, represent experimental data obtained by direct measurement of gene expression or protein interaction. There are a number of tutorials on Boolean-network inference (D'haeseleer et al., 2000; Markowetz and Spang, 2007; Karlebach and Shamir, 2008; Hecker et al., 2009; Hickman and Hodgman, 2009; Berestovsky and Nakhleh, 2013). From these tutorials we can classify algorithms according to (1) the expected input, (2) the kind of model inferred, and (3) the search strategy. Much of the effort in Boolean-network inference has been aimed at having binarized time-series data as input. Hence, multiple methods have been proposed and some of them have been compared with each other (Berestovsky and Nakhleh, 2013). An influential method in this category is reveal (Liang et al., 1998), which employs Shannon's mutual information between all pairs of molecules to extract an influence graph; the truth tables of the update functions for each molecule are simply taken from the time series. The use of mutual information and time series for the inference of Boolean networks continues to develop (Barman and Kwon, 2017), as do alternative methods based on time series. Examples are Han et al. (2014) (using a Bayesian approximation), Shmulevich et al. (2003), Lähdesmäki et al. (2003), Akutsu et al. (1999), Laubenbacher and Stigler (2004), and Layek et al. 
(2011) (using a generate-and-test method, generating all possible update functions for one gene and testing with the input data). Extra information can be included in addition to time-series data. For example, an expected set of stable states (Layek et al., 2011), previously known regulations (Haider and Pal, 2012) and gene expression data (Chueh and Lu, 2012) are used as an aid to curtail the number of possible solutions. Griffin belongs to a second family of methods taking as input a possibly partial regulation graph (perhaps obtained from the literature). There are also approaches employing both time-series data and a regulation graph, such as Ostrowski et al. (2016). A third important area of research is the development of algorithms taking as input temporal-logic specifications, based on Model Checking (Clarke et al., 1999). Works following this approach are Calzone et al. (2006b), Mateus et al. (2007), and Streck and Siebert (2015). As for the kind of model inferred, here we are concerned with Boolean networks and similar formalisms. Among the nonprobabilistic approaches, we find synchronous Boolean networks, asynchronous networks based on Thomas's formalism (Bernot et al., 2004; Khalis et al., 2009; Corblin et al., 2012; Richard et al., 2012), and polynomial dynamical systems (Laubenbacher and Stigler, 2004). Typically, methods based on temporal logic infer Kripke structures, which are closely related to Boolean networks. Probabilistic models, on the other hand, include Bayesian networks, and have the advantage of being able to deal with noise and uncertainty. From the search-strategy point of view, Boolean-network inference methods may employ a simple random-value assignment (Pal et al., 2005), exhaustive search (Akutsu et al., 1999) or more elaborate algorithms. The work of Chueh and Lu (2012) is based on p-scores and that of Liang et al. (1998) guides search with Shannon's mutual information. Genetic algorithms are used by Saez-Rodriguez et al. 
(2009) and Ghaffarizadeh et al. (2017). Linear programming is the basis of Tarissan et al. (2008). The methods of Ostrowski et al. (2016) and Corblin et al. (2012) are based on Answer Set Programming (Brewka et al., 2011). Algebraic methods use reductions (of polynomials over a finite field) modulo an ideal of vanishing polynomials. Approaches based on temporal logic (sometimes augmented with constraints), such as Calzone et al. (2006b) and Mateus et al. (2007), normally employ Model Checking (Clarke et al., 1999). Model Checking, in turn, is often based on symbolic approaches: BDDs and SAT solvers. Biocham (Calzone et al., 2006b) and SMBioNet (Bernot et al., 2004; Khalis et al., 2009; Richard et al., 2012) use a model checker as part of a generate-and-test method. Having classified various approaches, we now mention a work similar to ours in spirit: Pal et al. (2005). These authors also propagate fixed-point constraints onto the truth tables of the update functions of each variable (molecule species). There is, however, no search technique: values are assigned randomly to the remaining entries of such truth tables, followed by a check for unwanted attractors. Neither is there a formal definition of regulation. In contrast with the logical approach of our work, we now devote our attention to an algebraic approach to the problem of inference (reverse engineering) of Boolean networks. Instead of using a Boolean network to model the dynamics of a gene network, the algebraic approach (Laubenbacher and Stigler, 2004; Jarrah et al., 2007; Veliz-Cuba, 2012) uses a polynomial dynamical system F: S^n → S^n, where S has the structure of a finite field (ℤ/p for an appropriate prime number p). 
The first benefit of this algebraic approach is that each component of F can be expressed by a polynomial (in n variables with coefficients in S) such that the degree of each variable is at most equal to the cardinality of the field S. Following a computational algebra approach, in a framework of modeling with polynomial dynamical systems, Laubenbacher and Stigler (2004) propose a reverse-engineering algorithm. This algorithm takes as input a time series s_1, …, s_m ∈ S^n of network states (where S is a finite field), and produces as output a polynomial dynamical system F = (F_1, …, F_n): S^n → S^n such that ∀i ∈ {1, …, n}: F_i ∈ S[x_1, …, x_n] and F_i(s_j) = s_{j+1, i} for j ∈ {1, …, m}. An advantage of the algebraic approach of this algorithm is that there exists a well-developed theory of algorithmic polynomial algebra, with a variety of procedures already implemented, supporting the implementation task. The time complexity of this algorithm is quadratic in the number of variables (n) and exponential in the number of time points (m). Comparing Griffin with the reverse-engineering algebraic algorithms proposed by Laubenbacher and Stigler (2004) and Veliz-Cuba (2012), we found three basic differences. (1) Algebraic algorithms can handle discrete multi-valued variables, while Griffin only handles Boolean (two-valued) variables. Multi-valued variables give more flexibility and detail to the modeling process, but Boolean variables (of Boolean networks) lead to simpler models (see section 1). (2) The input of the algebraic algorithms, typically time series, provides simple information coming directly from experimental measurements, while the input of Griffin, R-graphs and Griffin queries, provides structured information allowing a more precise specification of the required Boolean network. (3) The algebraic algorithm of Veliz-Cuba (2012) uses a formal definition of regulation, but this definition does not match the definition of regulation used by Griffin. 
While Griffin allows for R-regulations based on Boolean combinations of positive and negative regulations, Veliz-Cuba (2012) uses regulations restricted to unate functions h such that, for every variable x: h does not depend on x, or h depends positively on x, or h depends negatively on x. Finally, we observe that results have sometimes been reported overoptimistically. There have been some doubts cast upon the effectiveness of a number of methods of inference of network dynamics (Wimburly et al., 2003), especially those based on more general-purpose learning methods. It is therefore important to establish tests such as the DREAM challenges (Stolovitzky et al., 2007) emphasizing reproducibility. | [
"12782112",
"22303253",
"22192526",
"20056001",
"28186191",
"23658556",
"28178334",
"23805196",
"15234201",
"11688712",
"16239464",
"18508746",
"16672256",
"22833747",
"22952589",
"18301750",
"11099257",
"21778527",
"15486106",
"17989686",
"28426669",
"18614585",
"2825436... | [
{
"pmid": "12782112",
"title": "The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster.",
"abstract": "Expression of the Drosophila segment polarity genes is initiated by a pre-pattern of pair-rule gene products and maintained... |