
Vizgen MERSCOPE Starter Tutorial

Input Data

The input is a spatial gene expression (SGE) dataset from the adult mouse hippocampus, extracted by masking a coronal brain section (Slice Number: 2; Replicate Number: 1; file: detected_transcripts.csv) from the Vizgen MERSCOPE Neuroscience Showcase.

File Format

The MERSCOPE input SGE includes one comma-delimited text file with the following format:

CSV file format

,barcode_id,global_x,global_y,global_z,x,y,fov,gene
0,22,56.930107,3851.851,5.0,147.80061,1711.9067,0,Adgre1
1,22,183.60107,3874.0085,5.0,1320.6799,1917.0692,0,Adgre1
2,22,59.750736,3666.5576,5.0,132.66754,1844.2372,1,Adgre1
  • Column 1: Unique numeric index for each transcript within a field of view (non-consecutive, ascending).
  • barcode_id: Zero-based index of the transcript barcode in the codebook; forms a composite key with fov.
  • global_x: Transcript x coordinates (µm) in the experimental region; may be negative due to alignment.
  • global_y: Transcript y coordinates (µm) in the experimental region; may be negative due to alignment.
  • global_z: Zero-based integer index of the z-position.
  • x: The x-coordinate of the transcript (µm), within the coordinate space of the field of view.
  • y: The y-coordinate of the transcript (µm), within the coordinate space of the field of view.
  • fov: Zero-based field of view index; forms a composite key with barcode_id.
  • gene: Gene name associated with the transcript.
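
To orient yourself before converting, you can peek at the raw table directly. The sketch below assumes the input is the plain-text detected_transcripts.csv named above; adjust the file name if yours differs.

# Show the header and the first two records
head -n 3 detected_transcripts.csv

# Tally transcripts per gene (gene is the 9th comma-separated field)
tail -n +2 detected_transcripts.csv | cut -d',' -f9 | sort | uniq -c | sort -rn | head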

Data Access

The example data is hosted on Zenodo (https://zenodo.org/records/15701394).

Follow the commands below to download the example data.

work_dir=/path/to/work/directory
cd $work_dir
wget https://zenodo.org/records/15701394/files/merscope_starter.raw.tar.gz
tar --strip-components=1 -zxvf merscope_starter.raw.tar.gz
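
To confirm the download and extraction succeeded, you can inspect the archive and the working directory (this tutorial does not enumerate the archive contents, so the listing itself is the check):

tar -tzf merscope_starter.raw.tar.gz | head   # files inside the archive
ls -lh $work_dir                              # files extracted into the work directory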

Set Up the Environment

Define paths to all required binaries and resources, and the target AWS S3 bucket. Optionally, specify a fixed color map for consistent rendering.

# ====
# Replace each placeholder with the actual path on your system.  
# ====
work_dir=/path/to/work/directory        # path to work directory that contains the downloaded input data
cd $work_dir

# Define paths to required binaries and resources
spatula=/path/to/spatula/binary         # path to spatula executable
punkst=/path/to/punkst/binary           # path to FICTURE2/punkst executable
tippecanoe=/path/to/tippecanoe/binary   # path to tippecanoe executable
pmtiles=/path/to/pmtiles/binary         # path to pmtiles executable
aws=/path/to/aws/cli/binary             # path to AWS CLI binary

# (Optional) Define path to color map. 
cmap=/path/to/color/map                 # Path to the fixed color map for rendering. cartloader provides a fixed color map at cartloader/assets/fixed_color_map_256.tsv.

# AWS S3 target location for cartostore
AWS_BUCKET="EXAMPLE_AWS_BUCKET"         # replace EXAMPLE_AWS_BUCKET with your actual S3 bucket name

# Activate the bioconda environment
conda activate BIOENV_NAME              # replace BIOENV_NAME with your bioconda environment name
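
Before proceeding, it can be worth verifying that every path defined above actually resolves; a minimal sketch, assuming each variable points at an executable:

# Warn about any binary that is missing or not executable
for bin in "$spatula" "$punkst" "$tippecanoe" "$pmtiles" "$aws"; do
  [ -x "$bin" ] || echo "WARNING: $bin is missing or not executable"
done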

Define data ID and analysis parameters:

# Unique identifier for your dataset
DATA_ID="merscope_hippo"                # change this to reflect your dataset name
PLATFORM="vizgen_merscope"              # platform information
SCALE=1                                 # scale factor from coordinate units to micrometers

# LDA parameters
train_width=12                           # LDA training hexagon width (comma-separated if multiple widths are used)
n_factor=6,12                            # number of factors in LDA training (comma-separated if multiple values are used)

How to Define the Scaling Factor for MERSCOPE?

The MERSCOPE example data used here provides SGE coordinates in micrometer units, so set the coordinate-to-micrometer scaling factor to 1.
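
If you are unsure about the units of your own data, one rough sanity check is to compute the coordinate extent and compare it to the known physical size of the imaged region. The sketch below assumes the plain-text detected_transcripts.csv described earlier, with global_x and global_y as the 3rd and 4th fields:

# For µm-scaled MERSCOPE data, the extent should span thousands of µm
tail -n +2 detected_transcripts.csv | awk -F',' '
  NR==1 {minx=maxx=$3; miny=maxy=$4}
  {if($3<minx)minx=$3; if($3>maxx)maxx=$3; if($4<miny)miny=$4; if($4>maxy)maxy=$4}
  END {print "x:", minx, "-", maxx, "  y:", miny, "-", maxy}'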

SGE Format Conversion

Convert the raw input to the unified SGE format. See the Reference page for more details.

cartloader sge_convert \
  --makefn sge_convert.mk \
  --platform ${PLATFORM} \
  --in-csv ./input.tsv.gz \
  --units-per-um ${SCALE} \
  --out-dir ./sge \
  --exclude-feature-regex '^(BLANK|NegCon|NegPrb)' \
  --sge-visual \
  --spatula ${spatula} \
  --n-jobs 10
  • --platform (string, required): Platform (options: "10x_visium_hd", "seqscope", "10x_xenium", "bgi_stereoseq", "cosmx_smi", "vizgen_merscope", "pixel_seq", "generic")
  • --in-csv (string, required): Path to the input TSV/CSV file
  • --units-per-um (float, required): Scale to convert coordinates to microns (default: 1.0)
  • --out-dir (string, required): Output directory for the converted SGE files
  • --makefn (string): File name for the generated Makefile (default: sge_convert.mk)
  • --exclude-feature-regex (regex): Pattern to exclude control features
  • --sge-visual (flag): Enable SGE visualization step (generates diagnostic image) (default: FALSE)
  • --spatula (string): Path to the spatula binary (default: spatula)
  • --n-jobs (int): Number of parallel jobs for processing (default: 1)
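
When the conversion finishes, the output directory should contain the unified SGE files consumed by the next step. The three names below are taken from the FICTURE command that follows; other outputs may also be present:

ls ./sge
# Expected, among others:
#   transcripts.unsorted.tsv.gz   transcript-level SGE
#   feature.clean.tsv.gz          cleaned per-feature list
#   coordinate_minmax.tsv         coordinate bounding box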

FICTURE Analysis

Compute spatial factors using punkst (FICTURE2 mode). See the Reference page for more details.

cartloader run_ficture2 \
  --makefn run_ficture2.mk \
  --main \
  --in-transcript ./sge/transcripts.unsorted.tsv.gz \
  --in-feature ./sge/feature.clean.tsv.gz \
  --in-minmax ./sge/coordinate_minmax.tsv \
  --cmap-file ${cmap} \
  --exclude-feature-regex '^(mt-.*$|Gm\d+$)' \
  --out-dir ./ficture2 \
  --width ${train_width} \
  --n-factor ${n_factor} \
  --spatula ${spatula} \
  --ficture2 ${punkst} \
  --n-jobs 10 \
  --threads 10
  • --main (flag, required [1]): Enable cartloader to run all five steps
  • --in-transcript (string, required): Path to input transcript-level SGE file
  • --out-dir (string, required): Path to output directory
  • --width (int or comma-separated list, required): LDA training hexagon width(s)
  • --n-factor (int or comma-separated list, required): Number of LDA factors
  • --makefn (string): File name for the generated Makefile (default: run_ficture2.mk)
  • --in-feature (string): Path to input feature file
  • --in-minmax (string): Path to input coordinate min/max file
  • --cmap-file (string): Path to color map file
  • --exclude-feature-regex (regex): Pattern to exclude features
  • --spatula (string): Path to the spatula binary (default: spatula)
  • --ficture2 (string): Path to the punkst directory (defaults to the punkst repository within the submodules directory of cartloader)
  • --n-jobs (int): Number of parallel jobs (default: 1)
  • --threads (int): Number of threads per job (default: 1)

[1]: cartloader requires the user to specify at least one action. Available actions include: --tile to run the tiling step; --segment to run the segmentation step; --init-lda to run the LDA training step; --decode to run the decoding step; --summary to run the summarization step; --main to run all five actions above.
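
For example, the tiling step could be run on its own by swapping --main for --tile in the command above and re-running with the remaining action flags later. This is only a sketch: which of the other arguments each individual action requires is not stated here.

# Run only the tiling step; resume with --segment, --init-lda, --decode,
# and --summary afterwards (same inputs as the --main command above)
cartloader run_ficture2 \
  --makefn run_ficture2.mk \
  --tile \
  --in-transcript ./sge/transcripts.unsorted.tsv.gz \
  --out-dir ./ficture2 \
  --spatula ${spatula} \
  --ficture2 ${punkst} \
  --n-jobs 10 \
  --threads 10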

cartloader Compilation

Generate PMTiles archives and web-compatible tile directories. See the Reference page for more details.

cartloader run_cartload2 \
  --makefn run_cartload2.mk \
  --fic-dir ./ficture2 \
  --out-dir ./cartload2 \
  --id ${DATA_ID} \
  --spatula ${spatula} \
  --pmtiles ${pmtiles} \
  --tippecanoe ${tippecanoe} \
  --n-jobs 10 \
  --threads 10
  • --fic-dir (string, required): Path to the input directory containing FICTURE2 output
  • --out-dir (string, required): Path to the output directory for PMTiles and web tiles
  • --id (string, required): Dataset ID used for naming outputs and metadata
  • --makefn (string): File name for the generated Makefile (default: run_cartload2.mk)
  • --spatula (string): Path to the spatula binary (default: spatula)
  • --pmtiles (string): Path to the pmtiles binary (default: pmtiles)
  • --tippecanoe (string): Path to the tippecanoe binary (default: tippecanoe)
  • --n-jobs (int): Number of parallel jobs (default: 1)
  • --threads (int): Number of threads per job (default: 1)
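
To spot-check a generated archive before uploading, the pmtiles CLI can print its header and metadata. The file name below is a placeholder; the actual names under ./cartload2 depend on the dataset ID and cartloader's output layout:

# Inspect one generated archive (replace the placeholder name)
${pmtiles} show ./cartload2/EXAMPLE_LAYER.pmtiles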

Upload to Data Repository

AWS Uploads

Copy the generated cartloader outputs to your designated AWS S3 catalog path:

cartloader upload_aws \
  --in-dir ./cartload2 \
  --s3-dir "s3://${AWS_BUCKET}/${DATA_ID}" \
  --aws ${aws} \
  --n-jobs 10
  • --in-dir (string, required): Path to the input directory containing the cartloader compilation output
  • --s3-dir (string, required): Path to the target S3 directory for uploading
  • --aws (string): Path to the AWS CLI binary
  • --n-jobs (int): Number of parallel jobs
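
After the upload completes, a standard AWS CLI listing can confirm that the files landed under the expected prefix (using the variables defined earlier):

# List the uploaded cartostore contents under the dataset prefix
${aws} s3 ls "s3://${AWS_BUCKET}/${DATA_ID}/" --recursive | head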