This article was originally written by Dr. Astrid Sivertsen.
Digitalizing a lab can increase productivity, ensure knowledge continuity in labs with a high turnover of personnel, and make it easier to automate workflows.
Scientific funding agencies such as the NIH (National Institutes of Health) and the DFG (Deutsche Forschungsgemeinschaft) increasingly require researchers to have a Data Management Plan in place and to keep their data FAIR: Findable, Accessible, Interoperable, and Reusable.
The push towards lab digitalization is thus motivated not only by these benefits but also by the need for compliance. For many lab leaders, however, the road to digitalization remains unclear.
Any one lab may work with many types of data, ranging from human observations to extremely large datasets generated by equipment such as NMR or mass spectrometers, and several researchers may work on hundreds of samples over long periods of time.
Another stumbling block is that the boundaries between the different systems available for lab digitalization are not clearly defined, so it is not always obvious which system should provide a given piece of functionality.
Full lab digitalization “will require four different platforms,” says Jakob J. Lopez, Founder and Managing Director of Signals. For the daily experimental work in a lab, two platforms are responsible for safely storing the data: an electronic lab notebook (ELN) for human input, and a Scientific Data Management System (SDMS) for data from all equipment. A Laboratory Information Management System (LIMS) keeps track of samples and workflows, and finally, a platform for data analysis is needed.
In its simplest form, an SDMS gathers data from instruments by automated upload and stores it centrally. This functionality has been central to the SDMS since its earliest forms, in which simple file synchronisation transferred data off the instruments so that their limited storage space could be cleared for new measurements.
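The core of that early file-synchronisation idea can be sketched in a few lines. The following is a minimal illustration, not the actual mechanism of any particular SDMS product: it copies any file from an instrument folder into a central store unless a file with identical content is already archived there, so the instrument copy can safely be deleted afterwards.

```python
import hashlib
import shutil
from pathlib import Path

def sync_instrument_data(instrument_dir: Path, central_dir: Path) -> list[Path]:
    """Copy every file from the instrument folder that is not yet in the
    central store (compared by content hash); return the new copies."""
    central_dir.mkdir(parents=True, exist_ok=True)
    # Hash everything already archived so re-runs skip known data.
    known = {hashlib.sha256(p.read_bytes()).hexdigest()
             for p in central_dir.iterdir() if p.is_file()}
    copied = []
    for src in sorted(instrument_dir.iterdir()):
        if not src.is_file():
            continue
        digest = hashlib.sha256(src.read_bytes()).hexdigest()
        if digest in known:
            continue  # already archived; instrument copy can be cleared
        dest = central_dir / src.name
        shutil.copy2(src, dest)  # copy2 preserves timestamps for provenance
        known.add(digest)
        copied.append(dest)
    return copied
```

A real SDMS adds much more on top of this (scheduling, format recognition, access control), but the essential guarantee is the same: once the sync has run, the data exists in a backed-up central location.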
Even in this simplest form, an SDMS solves several problems: “Just ask any scientist who has struggled to find research data produced by a former colleague, or even by themselves some years ago, or who has accidentally lost data,” says Jakob J. Lopez.
“Automated upload straight from the instruments eliminates the risk of human error and ensures data completeness for a lab.” – Jakob J. Lopez
Instrument makers, for example spectrometer manufacturers, are aware of the benefits of an SDMS. Many have started adding SDMS capabilities to their software, but these capabilities generally work only with the manufacturer’s own data format.
Signals is developing LOGS, a general SDMS that works with data formats from multiple instrument manufacturers and allows web-based access to all data without the need for third-party software.
Data kept in LOGS remains readily accessible: for the individual researcher, for calling up data in day-to-day discussions within a research group, and for sharing with collaborators.
Metadata is stored in LOGS alongside the data. Some metadata, such as information about the sample, experiment, and operator, is extracted automatically from the original data; further metadata can be added manually by the user. Data is searchable by these metadata parameters, giving researchers a new kind of access to their own data and to that of past and present collaborators: it can be sorted by sample, by experiment type, or by publication.
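Metadata-based search of this kind amounts to filtering a catalogue of records by field values. The sketch below is purely illustrative and does not reflect the actual LOGS schema or API; the record fields and sample names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A stored dataset with a few illustrative metadata fields
    (hypothetical; not the actual LOGS schema)."""
    sample: str
    experiment: str
    operator: str

def search(records, **criteria):
    """Return the records whose metadata matches every given criterion."""
    return [r for r in records
            if all(getattr(r, key) == value for key, value in criteria.items())]

catalog = [
    Record("compound-17", "1H NMR", "A. Chen"),
    Record("compound-17", "mass spec", "A. Chen"),
    Record("compound-42", "1H NMR", "B. Okoro"),
]

# All 1H NMR data, regardless of sample or operator:
nmr_hits = search(catalog, experiment="1H NMR")

# All data on one sample recorded by one operator:
sample_hits = search(catalog, sample="compound-17", operator="A. Chen")
```

The same filtering idea, applied across instruments and years of records, is what turns a central data store into something a researcher can actually query by sample or experiment type rather than browse folder by folder.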
In summary, a general SDMS gathers and stores data centrally. At a central location, data can be backed up and accessed for analysis and for sharing with collaborators, publishers, and funding bodies. It is searchable by multiple parameters and truly becomes Findable, Accessible, Interoperable, and Reusable, in line with the FAIR principles.