Extensible GUIs accelerate all the lab’s work – such as taking notes on animal care, analyzing data in Jupyter, tuning an ML model, or preparing figures for publication.
Automation eliminates tedious and risky DIY processing – protecting data integrity and managing change while you focus on your research.
Share or publish a complete digital replica of a study that can be validated, reproduced, or extended – and achieve compliance with NIH rules on Data Management and Sharing.
DataJoint is a general-purpose data operations platform engineered for reproducible computation. Its roots lie in systems neuroscience, with experiments that integrate multiple data modalities – electrophysiology, calcium imaging (miniscope, single-photon, and multi-photon microscopy), optogenetics, histology, behavior, and more. DataJoint Elements includes open-source reference implementations for numerous modalities, ready to be combined and customized to suit your experiment.
Yes. Most pipelines written in open-source DataJoint Python can readily be set up and operated on the platform. In other cases, some development effort is typically required to define the data models and computational dependencies that capture your pipeline. Your existing processing and analysis code is fully reusable.
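For illustration, here is a minimal sketch of what a pipeline definition looks like in open-source DataJoint Python; the schema name, the tables, and the estimate_rate helper are hypothetical stand-ins for your own data model and analysis code.

```python
import datajoint as dj

schema = dj.schema('my_lab_ephys')  # hypothetical schema name


def estimate_rate(raw_path):
    """Placeholder for your existing analysis code (hypothetical)."""
    return 0.0


@schema
class Session(dj.Manual):
    definition = """
    # A recording session, entered manually or by an acquisition script
    subject_id   : int            # subject identifier
    session_idx  : int            # session number within subject
    ---
    session_date : date
    raw_path     : varchar(255)   # location of the raw recording
    """


@schema
class SpikeRate(dj.Computed):
    definition = """
    # Firing-rate summary computed automatically for each Session
    -> Session
    ---
    mean_rate : float             # spikes per second
    """

    def make(self, key):
        # Your existing processing code plugs in here unchanged.
        raw_path = (Session & key).fetch1('raw_path')
        self.insert1(dict(key, mean_rate=estimate_rate(raw_path)))
```

Calling `SpikeRate.populate()` then computes results for every session that does not yet have them, which is how DataJoint keeps track of computational dependencies.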
Contact us today! When you engage with DataJoint, our expert SciOps engineers will train your team and assist you in defining data pipelines, GUIs, and processes best suited to the needs of your lab.
The DataJoint platform offers full electronic lab notebook (ELN) capabilities with GUIs for data entry, curation, visualization, and dashboards. These can be customized to your lab's workflow by our team of SciOps engineers, or you can do it yourself with a bit of Python skill and knowledge of the open-source Plotly Dash framework.
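As a rough sketch of the do-it-yourself route, a few lines of Plotly Dash can turn a pipeline table into a simple dashboard. The schema and table names below are hypothetical, and standard Dash and pandas installations are assumed.

```python
import datajoint as dj
from dash import Dash, html, dash_table

# Attach to an existing pipeline without re-declaring its tables
# ('my_lab_ephys' and Session are hypothetical names).
ephys = dj.VirtualModule('ephys', 'my_lab_ephys')
sessions = ephys.Session.fetch(format='frame').reset_index()

app = Dash(__name__)
app.layout = html.Div([
    html.H3('Recording sessions'),
    dash_table.DataTable(
        data=sessions.to_dict('records'),
        columns=[{'name': c, 'id': c} for c in sessions.columns],
    ),
])

if __name__ == '__main__':
    app.run(debug=True)
```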
Full use of the DataJoint platform requires Python proficiency at the level needed for other scientific packages (e.g., NumPy, pandas, Matplotlib). It also helps if someone in your lab understands basic database principles (e.g., primary keys, foreign keys, joins, normalization). Training materials and our SciOps team can help you quickly climb the learning curve and get the most out of DataJoint.
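To give a sense of that skill level, everyday queries look roughly like this; the tables and attributes are the hypothetical ones from the pipeline sketch above.

```python
import datajoint as dj

ephys = dj.VirtualModule('ephys', 'my_lab_ephys')

# Restriction: pick out sessions for one subject recorded this year.
recent = ephys.Session & 'subject_id = 101' & 'session_date >= "2024-01-01"'

# Join: the foreign key from SpikeRate to Session determines how rows
# are matched, so no explicit join condition is needed.
results = (recent * ephys.SpikeRate).fetch(format='frame')
print(results.head())
```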
Start with the documentation on DataJoint Python and check out the DataJoint Tutorials. You can use Codespaces to run a learning environment right on GitHub. In addition, you can review and use the documentation and source code from DataJoint Elements, our NIH-funded library of reference pipeline implementations for numerous neurophysiology data modalities and analyses.
The DataJoint platform is a computational database that integrates the management of data, metadata, processing and analysis code, and the structure of the computational pipeline.
Large data files (e.g., raw data recordings or bulky processed results) are stored as files in cloud-based object storage or on-premises file servers.
Metadata, parameters, and computational results reside in a relational database that can be hosted in the cloud or on-premises.
The code defining the pipeline and its processing and analysis steps resides in a source code management system, typically GitHub.com or GitHub Enterprise.
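In the open-source client, this separation is visible directly in the configuration: the relational database and the bulk-file stores are declared independently, and table definitions reference the stores by name. All hosts, buckets, and paths below are hypothetical.

```python
import datajoint as dj

# Relational database holding metadata, parameters, and results.
dj.config['database.host'] = 'db.mylab.example.org'
dj.config['database.user'] = 'alice'
dj.config['database.password'] = '...'

# Stores for large data files, referenced from table definitions
# with attribute types such as `recording : blob@raw`.
dj.config['stores'] = {
    'raw': {                      # cloud object storage
        'protocol': 's3',
        'endpoint': 's3.amazonaws.com',
        'bucket': 'mylab-raw-data',
        'location': 'ephys',
        'access_key': '...',
        'secret_key': '...',
    },
    'local': {                    # on-premises file server
        'protocol': 'file',
        'location': '/mnt/lab-server/datajoint-stores',
    },
}
```

The pipeline code itself carries no such configuration; it stays in ordinary Python modules under version control.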
No. With DataJoint, your data, code, and workflow are FAIR: findable, accessible, interoperable, and reusable. Only open-source software, or code that you create yourself, is used to represent your pipeline, store your data, and transform your data. Your pipeline is written in open-source DataJoint Python, and your data is stored in the open-source MySQL database. Both can be exported at any time and set up independently of the platform.
Depending on IT security requirements, the platform supports external API access to data using DataJoint Python, DataJoint MATLAB, or other programmatic interfaces. Our SciOps engineers can create bi-directional integrations with a variety of external systems, from lab instruments and coding tools to ELNs (e.g., Benchling) and analysis engines (e.g., Palantir Foundry).
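As a sketch of what such an integration can look like, an external script, for example one running on an acquisition rig or inside another analysis tool, might interact with a pipeline as shown below. The host, credentials, and table names are hypothetical, and access is subject to your institution's IT security policy.

```python
import datajoint as dj

dj.config['database.host'] = 'datajoint.mylab.example.org'
dj.config['database.user'] = 'rig_computer'
dj.config['database.password'] = '...'

ephys = dj.VirtualModule('ephys', 'my_lab_ephys')

# Inbound: an acquisition script registers a new session as it is recorded.
ephys.Session.insert1(dict(
    subject_id=101,
    session_idx=7,
    session_date='2024-05-14',
    raw_path='/acquisition/subject101/session7',
))

# Outbound: an external analysis engine or ELN pulls computed results.
recent = ephys.Session & 'session_date >= "2024-01-01"'
results = (ephys.SpikeRate & recent).fetch(format='frame')
```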