1.8 Resource consumption
Drylab tracks your resource consumption across three categories:
| Category | What it measures |
|---|---|
| Tokens | AI model usage (chat, code generation, analysis) |
| Compute | CPU/GPU processing power consumed by jobs |
| Storage | Vault disk space used by your files |
1. Tokens
What it is: The unit measuring how much AI processing your prompts and responses consume. Every message you send, every code cell generated, every database query interpreted costs tokens.
What consumes tokens:
Sending a prompt to the AI assistant
Generating or editing code cells
AI planning and multi-step analysis
Reading and parsing large files or notebooks
Literature search and research queries
Tips to use tokens efficiently:
Be specific and concise in prompts — vague prompts require more back-and-forth
Avoid asking the AI to re-read large files unnecessarily
Use Default Analysis mode for simple tasks (uses fewer tokens than Advanced)
Break large analyses into focused steps rather than one giant prompt
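A quick way to reason about prompt size is the rough rule of thumb that English text averages about 4 characters per token. This is a heuristic sketch only — actual tokenizers vary by model, and `estimate_tokens` is an illustrative helper, not a Drylab API:

```python
def estimate_tokens(text: str) -> int:
    # ~4 characters per token is a common heuristic for English text;
    # real tokenizers (and real token bills) will differ.
    return max(1, len(text) // 4)

short_prompt = "Plot QC metrics for sample A."
long_prompt = short_prompt + " Also re-read the entire notebook and every FASTQ header."

print(estimate_tokens(short_prompt))  # a small, focused request
print(estimate_tokens(long_prompt))   # a broader request costs noticeably more
```

The same idea explains why re-reading large files is expensive: every character the AI must process counts toward the token total.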
2. Compute
What it is: The processing power consumed when running jobs — CPU cycles, GPU time, and memory usage during active computation.
What consumes compute:
Running accelerated tools (protein folding, docking, alignment)
Executing Nextflow pipelines (RNA-seq, variant calling)
Heavy notebook computations (large matrix operations, ML training)
Parallel jobs submitted to the cloud batch system
Compute is measured by:
Job duration (how long a task runs)
Instance type used (GPU costs more than CPU)
Number of parallel jobs
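As a back-of-envelope illustration of how those three factors multiply together — the per-hour rates below are invented placeholders, not Drylab's actual pricing:

```python
# Illustrative cost model only: rates are made-up placeholders.
RATES_PER_HOUR = {"cpu-small": 1.0, "cpu-large": 4.0, "gpu": 12.0}

def compute_cost(instance: str, hours: float, parallel_jobs: int = 1) -> float:
    """Cost grows with duration, instance type, and number of parallel jobs."""
    return RATES_PER_HOUR[instance] * hours * parallel_jobs

# A 2-hour GPU job costs far more than the same run on a small CPU node.
print(compute_cost("gpu", 2.0))        # 24.0
print(compute_cost("cpu-small", 2.0))  # 2.0
# Fanning out 8 parallel jobs multiplies the total.
print(compute_cost("cpu-large", 0.5, parallel_jobs=8))  # 16.0
```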
Typical token costs by analysis mode:

| Mode | Best for | Typical cost |
|---|---|---|
| Default Analysis | Day-to-day analysis; more cost-effective | 400-1200 token credits |
| Advanced Analysis | Longer, deeper reasoning for discovery tasks | 50-200 token credits |
| Research | Literature review and research synthesis | 100-200 token credits |
Tips to reduce compute:
Test on small data subsets before running full jobs
Choose the smallest sufficient machine instance
Cancel jobs that are clearly failing early
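The "test on a subset first" tip can be sketched with standard Unix tools. The file names are placeholders; the `printf` line only builds a tiny example FASTQ so the commands run anywhere:

```shell
# Make a tiny example FASTQ (3 records) so the commands below run anywhere.
printf '@r%s\nACGT\n+\nIIII\n' 1 2 3 | gzip > reads_R1.fastq.gz

# Keep only the first 2 reads (4 lines per FASTQ record = 8 lines)
# and run your pipeline on this subset before launching the full dataset.
zcat reads_R1.fastq.gz | head -n 8 | gzip > subset_R1.fastq.gz

# Sanity-check the subset: 8 lines = 2 records.
zcat subset_R1.fastq.gz | wc -l
```

A subset run surfaces configuration errors in minutes instead of hours, before any large instance is billed.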
3. Storage
What it is: The disk space your files occupy in the Vault (/home/user/user_data/).
What consumes storage:
Raw data files (FASTQ, BAM, H5AD)
Pipeline outputs (alignment files, count matrices)
Saved figures and result tables
Notebooks and scripts
Tips to manage storage:
Delete intermediate files after pipelines complete (e.g. raw BAM files after QC)
Compress large files: use .fastq.gz instead of .fastq
Remove duplicate or unused datasets
Keep only final results in the Vault; use temp workspace for intermediate work
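A minimal housekeeping sketch with standard Unix tools. The Vault path is the one given above; the commands here operate on a throwaway `demo_vault` directory so they run anywhere — substitute your real paths:

```shell
# Build a throwaway demo directory with one uncompressed FASTQ-like file.
mkdir -p demo_vault
printf 'ACGT%.0s' $(seq 1000) > demo_vault/sample.fastq

# See which entries use the most space (on the real Vault:
# du -sh /home/user/user_data/* | sort -rh | head).
du -sh demo_vault/* | sort -rh | head

# Compress a raw FASTQ in place (sample.fastq -> sample.fastq.gz).
gzip demo_vault/sample.fastq

# List remaining files over 1 GB -- candidates for deletion or archiving.
find demo_vault -type f -size +1G
```

Running the `du` check periodically catches forgotten intermediate files before they accumulate.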


