Research
Exascale volumes of diverse data are continuously produced by distributed sources. Healthcare data stand out for the volume produced (more than 2,000 exabytes in 2020), their heterogeneity (many media types and acquisition methods), the knowledge they contain (e.g. diagnostic reports) and their commercial value. The supervised nature of deep learning models requires large labeled, annotated datasets, which prevents models from extracting this knowledge and value. ExaMode addresses this by enabling easy and fast, weakly supervised knowledge discovery from exascale heterogeneous data provided by the partners, limiting human interaction. Its objectives include the development and release of extreme-scale analytic methods and tools that can be adopted for decision making by industry and hospitals.

Deep learning naturally allows building semantic representations of entities and relations in multimodal data. Knowledge discovery is performed via document-level semantic networks in text and the extraction of homogeneous features from heterogeneous images. The results are fused, aligned to medical ontologies, visualized and refined. The extracted knowledge is then applied through a semantic middleware to compress, segment and classify images, and is exploited in decision support and semantic knowledge management prototypes.

The ExaMode project is supported by the European Union through the Horizon 2020 framework.
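The weak supervision idea described above can be illustrated with a minimal sketch: labels for images are derived automatically from the free text of the accompanying diagnostic reports, so no manual annotation is needed. The label names and keyword lists below are illustrative assumptions, not ExaMode's actual vocabulary or pipeline.

```python
# Hypothetical sketch of weak labeling from diagnostic reports.
# Keywords and label names are assumptions for illustration only.
LABEL_KEYWORDS = {
    "adenocarcinoma": ["adenocarcinoma"],
    "high_grade_dysplasia": ["high grade dysplasia", "high-grade dysplasia"],
    "benign": ["no evidence of malignancy", "benign"],
}

def weak_labels(report_text: str) -> set[str]:
    """Return the set of labels whose keywords appear in the report text."""
    text = report_text.lower()
    return {
        label
        for label, keywords in LABEL_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

# Each (image, report) pair yields labels without human annotation;
# in a weakly supervised setup these labels would then supervise
# a downstream image classifier.
labels = weak_labels("Biopsy shows adenocarcinoma with high-grade dysplasia.")
```

In practice such keyword rules would be replaced by concept extraction aligned to medical ontologies, but the principle is the same: the report text, not a human annotator, supplies the training signal.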