Overview
This documentation guides computational biologists through common biotech data workflows using Scispot’s Python interface.
It focuses on scenarios where raw data comes from contract research organizations (CROs), internal labs, or external collaborators, and shows how to streamline data ingestion, transformation, analysis, and collaboration.
All examples assume familiarity with Python, CSV files, and basic assay workflows.
Getting Started
Before diving into creation workflows, ensure the following:
API Key: Generate your API key.
Go to Scispot and log in.
Click the "Account" button in the navigation bar at the bottom left corner of your screen.
Click "Personal Tokens" on the left side of the pop-up modal.
Click "Generate New Token".
Name your token, select its access level, and click "Generate".
Copy the API key and save it somewhere safe; the token cannot be retrieved after it is generated.
Import Python Libraries: We use the requests library to perform API calls:
import requests
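To avoid hard-coding credentials, the examples below reference a YOUR_API_TOKEN variable. One way to supply it is through an environment variable; this is a sketch, and the variable name SCISPOT_API_TOKEN is simply our choice, not a Scispot convention.

```python
import os

# Read the token from an environment variable so it never lands in source control.
# "SCISPOT_API_TOKEN" is just the name chosen for this guide; any name works.
YOUR_API_TOKEN = os.environ.get("SCISPOT_API_TOKEN", "")

# All later examples reuse the same header structure, so it can be built once.
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}",
}
```

Set the variable in your shell (for example, export SCISPOT_API_TOKEN=...) before running any of the scripts below.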
Creation Workflows
1. Creating Experiments and Protocols
Experiments and protocols form the backbone of lab workflows. Define them programmatically.
Steps:
Use Scispot's GUI to create a Labspace. Labspaces are where you organize protocols, experiments, and documentation.
Programmatically create or update experiments.
Example:
In this example, a labspace called "My Labspace" has already been created in Scispot.
You can then use the following script to create an experiment:
Code Example:
url = "https://cloudlab.scispot.io/experiment/new"
payload = {
    "name": "My New Experiment",
    "location": "My Labspace"
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
Here is a similar example for creating a protocol:
Code Example:
url = "https://cloudlab.scispot.io/protocol/new"
payload = {
    "name": "My New Protocol",
    "location": "My Labspace"
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
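Since the two endpoints above differ only in their path, both calls can be wrapped in one small helper that also surfaces HTTP errors instead of silently printing them. This is a sketch, not part of any Scispot client library, and the function name is ours.

```python
import requests

BASE_URL = "https://cloudlab.scispot.io"

def create_resource(kind, name, labspace, token):
    """Create an experiment or protocol in the given Labspace.

    `kind` is "experiment" or "protocol"; both endpoints accept the
    same payload shape in the examples above.
    """
    if kind not in ("experiment", "protocol"):
        raise ValueError(f"unsupported resource kind: {kind}")
    payload = {"name": name, "location": labspace}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    }
    response = requests.post(f"{BASE_URL}/{kind}/new", headers=headers, json=payload)
    response.raise_for_status()  # raise on 4xx/5xx rather than printing an error body
    return response.json()
```

Usage would then be create_resource("experiment", "My New Experiment", "My Labspace", YOUR_API_TOKEN).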
2. Consumables and Inventory Library
Centralize consumables such as reagents, plasmids, primers, antibodies, and kits, and assign them to freezers for better management and traceability.
In this example we will set up an ADC data model: a few labsheets linked to one another through connection columns.
Set up the main labsheet, "ADC Library", with a few metadata columns.
url = "https://cloudlab.scispot.io/labsheets/create"
json= {"name" : "ADC Library",
"columns": [
{
"position": 0,
"name": "ADC ID",
"type": "ID"
},
{
"position": 1,
"name": "Name",
"type": "TEXT"
},
{
"position": 2,
"name": "Description",
"type": "TEXT"
},
{
"position": 3,
"name": "Application Area",
"type": "TEXT"
}
]
}
headers = {
"Content-Type": "application/json",
"Authorization" : f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=json)
print(response.json())
Next we will set up the Antibody Library. It contains an "ADC ID" column that serves as a foreign key back to the ADC, plus a connection column that gives easy access to the linked rows of the "ADC Library".
url = "https://cloudlab.scispot.io/labsheets/create"
json = {"name" : "Antibody Library",
"columns": [
{
"position": 0,
"name": "Antibody ID",
"type": "ID"
},
{
"position": 1,
"name": "ADC ID",
"type": "TEXT"
},
{
"position": 2,
"name": "ADC Reference",
"type": "CONNECTION"
},
{
"position": 3,
"name": "Source",
"type": "TEXT"
},
{
"position": 4,
"name": "Target Antigen",
"type": "TEXT"
},
{
"position": 5,
"name": "FC Modification",
"type": "TEXT"
}
]
}
headers = {
"Content-Type": "application/json",
"Authorization" : f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=json)
print(response.json())
Next, set up the Linker Library. Like the Antibody Library, it contains both the ADC ID as a foreign key and a connection column, as well as other metadata.
url = "https://cloudlab.scispot.io/labsheets/create"
json = {"name" : "Linker Library",
"columns": [
{
"position": 0,
"name": "Linker ID",
"type": "ID"
},
{
"position": 1,
"name": "ADC ID",
"type": "TEXT"
},
{
"position": 2,
"name": "ADC Reference",
"type": "CONNECTION"
},
{
"position": 3,
"name": "Source",
"type": "TEXT"
},
{
"position": 4,
"name": "Target Antigen",
"type": "TEXT"
},
{
"position": 5,
"name": "FC Modification",
"type": "TEXT"
}
]
}
headers = {
"Content-Type": "application/json",
"Authorization" : f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=json)
print(response.json())
We have finished setting up the core data model. The "Sync Labsheets to Internal Libraries" section below shows the row-update pattern you can use from a Python script to keep columns like these in sync.
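To illustrate the foreign-key pattern, here is a sketch of row payloads for the two linked labsheets. The sample values are ours, and we assume rows are supplied as position-ordered lists matching the column definitions above (as in the labsheets/add-rows example used later in this guide); how Scispot populates the CONNECTION column itself may differ.

```python
# Hypothetical sample values; entries are position-ordered to match the
# columns defined above (ADC ID, Name, Description, Application Area).
adc_row = ["ADC-001", "Example ADC", "Illustrative entry", "Oncology"]

# Antibody row order: Antibody ID, ADC ID, ADC Reference, Source,
# Target Antigen, FC Modification. The "ADC ID" value repeats the parent
# row's ID so the two labsheets can be joined on that key.
antibody_row = ["AB-001", adc_row[0], adc_row[0], "Humanized IgG1", "HER2", "None"]

adc_payload = {"labsheet": "ADC Library", "rows": [adc_row]}
antibody_payload = {"labsheet": "Antibody Library", "rows": [antibody_row]}
# Each payload would then be POSTed to the labsheets/add-rows endpoint
# shown later in this guide.
```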
3. Multi-Well Plate Creation
Create well plates programmatically.
Requirements:
Select a pre-designed manifest (well plate) template or create one yourself. These templates can define controls or tags for specific well positions.
One or more labsheets containing the samples that will be assigned to the wells.
Example:
We have selected the "96 Well Plate" template for this plate.
Using the Sample Manager labsheet, we will assign three samples to positions A4, A5, and A6.
Code Example:
url = "https://cloudlab.scispot.io/manifest/create"
json = {"name" : "Assay Well Plate",
"template" : "96 Well Plate",
"plates" : [{
"template" : "96 Well Plate",
"wells" : 96,
"labsheets" : [
{
"idType" : "ID",
"labsheet" : "Sample Manager",
"items" : [
{
"name" : "SMP_001",
"well" : "A4"
},
{
"name" : "SMP_002",
"well" : "A5"
},
{
"name" : "SMP_003",
"well" : "A6"
},
]
}
]
}]
}
headers = {
"Content-Type": "application/json",
"Authorization" : f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=json)
print(response.json())
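For larger plates, the well-to-sample mapping can be generated rather than written by hand. The helpers below are a sketch (the function names are ours); they produce items in the same shape as the "items" list in the manifest payload above.

```python
from itertools import product

def well_positions(rows="ABCDEFGH", cols=12):
    """Yield 96-well positions in row-major order: A1, A2, ..., H12."""
    for r, c in product(rows, range(1, cols + 1)):
        yield f"{r}{c}"

def assign_samples(sample_ids, start="A1"):
    """Map sample IDs onto consecutive wells beginning at `start`."""
    wells = list(well_positions())
    offset = wells.index(start)  # raises ValueError if `start` is not a valid well
    return [{"name": sid, "well": wells[offset + i]} for i, sid in enumerate(sample_ids)]
```

For example, assign_samples(["SMP_001", "SMP_002", "SMP_003"], start="A4") reproduces the A4, A5, A6 assignment used above.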
4. Sync Labsheets to Internal Libraries
Programmatically sync Scispot databases with internal systems.
In this example we fetch only the records modified in the last 6 hours and sync those entries.
First, we get the data from the internal database.
Code Example:
from datetime import datetime, timedelta, timezone

# Only fetch records modified within the last 6 hours
cutoff_time = datetime.now(timezone.utc) - timedelta(hours=6)
cutoff_time_str = cutoff_time.isoformat()
url = "https://internal-api.example.com/records"
params = {"modified_since": cutoff_time_str}
headers = {"Authorization": "Bearer YOUR_TOKEN_HERE"}
response = requests.get(url, params=params, headers=headers)
data_dict = response.json()
print(data_dict)
Code Output:
[
{"Sample ID": "20c20", "Quantity": "78", "Comp": "c24"},
{"Sample ID": "30d40", "Quantity": "53", "Comp": "c35"},
{"Sample ID": "10b15", "Quantity": "67", "Comp": "c18"},
{"Sample ID": "40e25", "Quantity": "92", "Comp": "c29"},
{"Sample ID": "50f30", "Quantity": "34", "Comp": "c42"},
{"Sample ID": "60g45", "Quantity": "88", "Comp": "c51"},
{"Sample ID": "70h50", "Quantity": "41", "Comp": "c64"},
{"Sample ID": "80i60", "Quantity": "75", "Comp": "c72"},
{"Sample ID": "90j70", "Quantity": "60", "Comp": "c81"},
{"Sample ID": "100k80", "Quantity": "49", "Comp": "c91"}
]
Now we can use this list of dictionaries directly to update the labsheet. Note: every key in data_dict must correspond to a column name in the labsheet, and the labsheet's ID column must be present in each entry of data_dict.
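Before posting, those two requirements can be enforced with a small validation helper so a bad payload fails locally rather than at the API. This is a sketch; the column names come from the example output above.

```python
def validate_rows(rows, columns, id_column):
    """Check each row dict before syncing.

    Every key must match a labsheet column name, and the ID column must
    be present so update-rows-by-id can locate the row to update.
    """
    allowed = set(columns)
    for i, row in enumerate(rows):
        unknown = set(row) - allowed
        if unknown:
            raise ValueError(f"row {i}: keys not in labsheet: {sorted(unknown)}")
        if id_column not in row:
            raise ValueError(f"row {i}: missing ID column {id_column!r}")
    return rows
```

For example, validate_rows(data_dict, ["Sample ID", "Quantity", "Comp"], "Sample ID") would pass for the output shown above.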
url = "https://cloudlab.scispot.io/labsheets/update-rows-by-id"
payload = {
    "labsheet": "Instrument Outputs",
    "rows": data_dict
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
This script can then be run every 6 hours to keep the labsheet in sync with the internal database.
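One way to run the sync on that cadence is a simple loop. This is a sketch (the function name is ours); in production, a cron job or workflow scheduler is usually a better fit than a long-lived process.

```python
import time

SYNC_INTERVAL_SECONDS = 6 * 60 * 60  # match the 6-hour modification window above

def run_sync_loop(sync_once, cycles=None, sleep=time.sleep):
    """Call `sync_once` every six hours.

    `cycles=None` runs forever; a number limits iterations (handy for tests).
    Failures are logged rather than re-raised so one bad API response does
    not stop future syncs.
    """
    n = 0
    while cycles is None or n < cycles:
        try:
            sync_once()
        except Exception as exc:
            print(f"sync failed: {exc}")
        n += 1
        sleep(SYNC_INTERVAL_SECONDS)
```

The equivalent crontab entry would be `0 */6 * * * python sync_labsheet.py` (script name hypothetical).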
Updating and Processing Experiments
1. Updating Experiment Pages
Define experiments in the GUI, then programmatically update their content.
In this example we will append some text to the end of the experiment. We will need the hrid (human-readable ID) of the experiment page, which can be found under the "Copy Page ID" option on the experiment page.
Code Example:
url = "https://cloudlab.scispot.io/labspace/experiment/write"
payload = {
    "hrid": "jeffrey/new-Experiment-11-26-24/13:59:33",
    "contentToAppend": "This experiment is complete"
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
2. Processing Completed Experiments
Integrate wet lab data into Scispot workflows.
Wet Lab Workflow:
The wet lab user updates the protein expression experiment template with plate reader raw data.
Upload the instrument CSV to Scispot.
Run the Python script that processes the data, updates the metadata, and visualizes the results.
Script Creation:
To create a script for this workflow, we first fetch the data in the experiment. We will use the hrid of the experiment page, which can be found under the "Copy Page ID" option on the experiment page.
Code Example:
url = "https://cloudlab.scispot.io/labspace/experiment/fetch"
params = {
"hrid" : "jeffrey/new-Experiment-11-26-24/13:59:33",
}
headers = {
"Content-Type": "application/json",
"Authorization" : f"Bearer {YOUR_API_TOKEN}"
}
experiment_response = requests.get(url, headers=headers, params=params)
print(experiment_response.json())
Example Output:
{'uuid': 'be6a2709-aa2c-4b96-88f5-9b9a404df593',
'name': 'AUG 27 TEST 3', 'hrid': 'JeffreyChingLok.Ng/new-Experiment-08-27-24/22:58:20',
'parent': 'Jeffrey test',
'status': 'preparing',
'created_at': '2024-08-28T02:58:34.012Z',
'created_by': '[email protected]',
'last_updated': '2025-01-03T21:38:42.485Z',
'description': '',
'content': '<div id="start-mark" /><section><h5 class="editor-heading-class">Objective</h5><p>Enter your objective of the experiment here</p></section><p></p><section><h5 class="editor-heading-class">Materials</h5><p>Enter all your materials by entering \\Labsheets here</p></section><embed-file uuid="5ad857cc-3452-415b-a410-4b193f1d3c6d" name="output.zip" type="zip" caption="" alt_text="" creator="CAD0AC6F-E97C-4E48-966E-2F737877A6AE" creation_date="2025-01-03T21:38:36.145Z"></embed-file><section><h5 class="editor-heading-class">Protocols</h5><p>Attach your protocols by typing \\protocol or \\templates if they already exist or create them from scratch. You can also attach images by entering \\image or dragging and dropping images to any step</p><p></p></section><p></p><section><h5 class="editor-heading-class">Results</h5><p>Enter your experiment results here. You can easily ingest your instrument data into Labsheets and bring that Labsheets here by entering \\labsheet. You can also attach images by entering \\image or dragging and dropping images, and add tables by entering \\table.</p><p></p></section><p></p>', 'labsheets': [],
'sheets': [],
'protocols': [],
'manifests': [],
'files': [{'position': 1, 'uuid': '5ad857cc-3452-415b-a410-4b193f1d3c6d', 'name': 'output_instrument.csv'}],
'success': True}
This API call provides the full metadata and content of the experiment page. Here we are only interested in the instrument file. Next we download the file content through the API using the uuid obtained above.
Code Example:
url = "https://new.scispot.io/download/file"
file_uuid = experiment_response.json()['files'][0]['uuid']
params = {
    "uuid": file_uuid
}
headers = {
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
file_response = requests.get(url, headers=headers, params=params)
file_response.raise_for_status()  # fail fast if the download did not succeed
print(file_response.status_code)
We can then convert the downloaded bytes into a pandas DataFrame for further processing:
Code Example:
from io import BytesIO
import pandas as pd
csv_file = BytesIO(file_response.content)
df = pd.read_csv(csv_file)
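What the processing step looks like depends on the assay. As a purely illustrative sketch, here is a linear standard-curve conversion from absorbance to concentration; the column names and calibration constants are hypothetical, not part of any Scispot API.

```python
import pandas as pd

# Illustrative calibration constants from a hypothetical linear standard curve.
SLOPE, INTERCEPT = 2.5, 0.1

def add_concentration(df: pd.DataFrame) -> pd.DataFrame:
    """Derive a Concentration column from raw Absorbance readings."""
    out = df.copy()
    out["Concentration"] = (out["Absorbance"] - INTERCEPT) / SLOPE
    return out
```

In the workflow above this would be applied as df = add_concentration(df) before pushing the results back.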
After processing the data, we push the processed results back into the labsheet.
Code Example:
rows = df.values.tolist()
url = "https://cloudlab.scispot.io/labsheets/add-rows"
payload = {
    "labsheet": "Instrument Outputs",
    "rows": rows
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
We can also plot the data and upload the resulting image to Scispot.
Code Example:
import io
import matplotlib.pyplot as plt

plt.plot(df['Absorbance'], df['Concentration'], marker='o')
plt.xlabel('Absorbance')
plt.ylabel('Concentration')
plt.title('Absorbance vs Concentration')
buf = io.BytesIO()
plt.savefig(buf, format='png')
buf.seek(0)
plt.close()
url = "https://new.scispot.io/files/store"
headers = {
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
files = {
    "files": ("output.png", buf, "image/png"),
}
image_response = requests.post(url, headers=headers, files=files)
print(image_response.json())
Example Output:
[{'fileName': 'output.png', 'fileId': '4ad48825-7d65-45a7-85ca-75c93aed9e15', 'status': 'success', 'message': 'File uploaded successfully'}]
We can now use the fileId to upload this image back to the experiment page.
Code Example:
image = image_response.json()[0]
url = "https://cloudlab.scispot.io/labspace/experiment/write"
payload = {
    "contentToHead": {
        "uuid": image["fileId"],
        "name": "output"
    },
    "hrid": "jeffrey/new-Experiment-11-26-24/13:59:33",
    "contentType": "image"
}
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {YOUR_API_TOKEN}"
}
response = requests.post(url, headers=headers, json=payload)
print(response.json())
The script is now complete. In this workflow you have seen how to fetch metadata from an experiment, download an instrument file uploaded by the wet lab scientist, process that file, update the labsheet with the processed data, and finally plot a graph and upload it directly back into the experiment page.
Best Practices
Define Templates First: Use templates to ensure experiments and protocols are reusable.
Integrate Early: Sync data from instruments immediately to avoid bottlenecks.
Automate Routinely: Schedule workflows for processing and analysis.
Centralize Metadata: Maintain consistency in naming and field definitions.
For more details, visit the Scispot API Documentation.