Research - Palo Alto, California, United States
When we sequence human genomes, we derive complex, large-scale data: billions of bytes of character strings. This is just one of the many types of data that researchers across academia and industry generate in-house or obtain from secondary sources every day. Analyzing such data requires an expert's view, yet for an expert the analysis of complex, large-scale data is laborious, repetitive, and error-prone because of its size, its composition, and personal biases. In addition, certain types of data are unsuited to the bioinformatics methods employed to date, as the information is lost in a high level of noise. Assigning specific sets of tasks to humans and algorithms in seamless, secure, combinatorial software pipelines enables the exploration of large-scale data at unprecedented depth, setting the stage to uncover fundamental biological mechanisms and novel therapeutics.
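The division of labor described above can be sketched as a simple routing pipeline: an automated step scores each record, and records the algorithm is confident about proceed automatically while the rest are queued for expert review. This is a minimal, hypothetical illustration; the `Record` class, the GC-content stand-in for a real model, and the 0.5 threshold are all assumptions, not the actual methods used here.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop pipeline: records above a
# confidence threshold pass automatically; the rest go to a human queue.

@dataclass
class Record:
    sequence: str
    score: float = 0.0

def algorithmic_step(record: Record) -> Record:
    # Placeholder scoring: GC fraction of the sequence, standing in for
    # a real bioinformatics model's confidence output.
    gc = sum(base in "GC" for base in record.sequence)
    record.score = gc / max(len(record.sequence), 1)
    return record

def route(records, threshold=0.5):
    """Split records into auto-accepted and human-review queues."""
    auto, human = [], []
    for r in map(algorithmic_step, records):
        (auto if r.score >= threshold else human).append(r)
    return auto, human

auto, human = route([Record("GGCC"), Record("ATAT")])
# "GGCC" is auto-accepted; "ATAT" is routed to the human-review queue.
```

In a production pipeline the routing decision would hinge on model uncertainty, data provenance, and security constraints rather than a single fixed threshold.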