Usage¶
Note
The examples in this document use the non-container version of p3. If you are running p3 through Docker or Singularity, simply replace any reference to the p3proc command with the appropriate container call.
# These are equivalent to p3proc /dataset /output
docker run -it --rm p3proc/p3 /dataset /output
# or
singularity run p3proc_p3.simg /dataset /output
These are just examples. You still need to mount/bind the host volumes to access your data.
# mounting in docker is -v
docker run -it --rm \
    -v /path/on/host:/path/on/image \
    p3proc/p3 /path/on/image/dataset /path/on/image/output
# mounting in singularity is -B
singularity run \
    -B /path/on/host:/path/on/image \
    p3proc_p3.simg /path/on/image/dataset /path/on/image/output
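For instance, with illustrative host paths, you can mount the dataset and output directories separately; Docker volumes also accept a :ro suffix to mount a volume read-only:
# mounting the dataset (read-only) and output separately (illustrative paths)
docker run -it --rm \
    -v /home/me/dataset:/dataset:ro \
    -v /home/me/output:/output \
    p3proc/p3 /dataset /output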
Refer to the Docker Volume and Singularity Bind documentation for more details.
Overview¶
usage: p3proc [-h]
[--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[--skip_bids_validator] [-v] [-s SETTINGS]
[-g GENERATE_SETTINGS] [--summary] [--disable_run]
[-w WORKFLOWS [WORKFLOWS ...]] [-c CREATE_NEW_WORKFLOW] [-m]
[-d]
[bids_dir] [output_dir] [{participant,group}]
p3 processing pipeline
positional arguments:
bids_dir The directory with the input dataset formatted
according to the BIDS standard.
output_dir The directory where the output files should be stored.
If you are running group level analysis this folder
should be prepopulated with the results of the
participant level analysis.
{participant,group} Level of the analysis that will be performed. Multiple
participant level analyses can be run independently
(in parallel) using the same output_dir. (Note: the
group flag is currently disabled since this pipeline
has no group level functionality. It is set to
'participant' by default and can be omitted.)
optional arguments:
-h, --help show this help message and exit
--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
The label(s) of the participant(s) that should be
analyzed. The label corresponds to
sub-<participant_label> from the BIDS spec (so it does
not include "sub-"). If this parameter is not provided
all subjects should be analyzed. Multiple participants
can be specified with a space separated list.
--skip_bids_validator
Whether or not to perform BIDS dataset validation
-v, --version show program's version number and exit
-s SETTINGS, --settings SETTINGS
A JSON settings file that specifies how the pipeline
should be configured. If no settings file is provided,
the pipeline will use internally specified defaults.
See the docs for more details.
-g GENERATE_SETTINGS, --generate_settings GENERATE_SETTINGS
Generates a default settings file in the provided
directory for use/modification. This option will
ignore all other arguments.
--summary Get a summary of the BIDS dataset input.
--disable_run Stop after writing graphs. Does not run pipeline.
Useful for making sure your workflow is connected
properly before running.
-w WORKFLOWS [WORKFLOWS ...], --workflows WORKFLOWS [WORKFLOWS ...]
Other paths p3 should search for workflows. Note that
you should have at least an empty __init__.py so that
the directory is importable. See the documentation for
more details.
-c CREATE_NEW_WORKFLOW, --create_new_workflow CREATE_NEW_WORKFLOW
Creates workflow template at provided path.
-m, --multiproc Runs pipeline in multiprocessing mode. Note that it is
harder to debug when this option is on.
-d, --verbose Enable verbose debugging mode.
Running the pipeline¶
p3 uses the standard BIDS app interface for execution:
p3proc /dataset /output
Note
If you are familiar with the standard BIDS app interface, you may notice the omission of the analysis level argument (participant, group). p3 does not currently support group level analysis, so that option is disabled by default. The participant level is assumed implicitly when the argument is omitted.
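If you prefer the standard BIDS app form, you can still pass the participant level explicitly; the two calls below are equivalent:
# the participant level is implied, so these are equivalent
p3proc /dataset /output
p3proc /dataset /output participant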
You can run specific subjects with:
# this runs subjects 01, 02, and 03
p3proc /dataset /output --participant_label 01 02 03
These labels are the same as the sub- tags defined in the BIDS spec.
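For example, given a dataset with the following subject folders (an illustrative layout, not part of the commands above), the labels would be 01, 02, and 03:
# labels are the sub- folder names without the "sub-" prefix
/dataset/sub-01
/dataset/sub-02
/dataset/sub-03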
You can also run p3 in multiproc mode, which will use all available cores of the compute environment.
# run in multiproc mode
p3proc /dataset /output --multiproc
p3 includes the BIDS-Validator, which runs by default but can be disabled:
# this disables the bids validator
p3proc /dataset /output --skip_bids_validator
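These flags can be combined as needed; for example, to run subjects 01 and 02 in multiproc mode while skipping validation:
# combine options as needed
p3proc /dataset /output --participant_label 01 02 --multiproc --skip_bids_validator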
Settings¶
Without a settings argument, p3 will use its default settings. To customize the settings, you need to generate a settings file.
# This will generate settings at the specified path/file
p3proc --generate_settings /path/to/settings.json
This will generate a settings.json file at the specified path.
Warning
The generate settings option will overwrite any existing file at the path you specify. Make sure you aren’t about to lose a file you want to keep before running this!
The settings file will allow you to customize runtime settings. See the Settings section for information about the particular parameters in the default settings file.
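For orientation, here is a heavily trimmed sketch of the kind of fields the generated file contains. The workflows and bids_query fields are discussed later in this document, and the actual generated file includes more entries than shown here:
{
    "workflows": [
        "p3_bidsselector",
        "p3_freesurfer",
        ...
    ],
    "bids_query":
    {
        "T1":
        {
            "type": "T1w"
        },
        ...
    }
}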
You can use the settings file you generate with the following:
# This will use the settings defined in settings.json
p3proc /dataset /output --settings settings.json
Creating workflows and testing¶
p3 allows you to extend its capabilities by adding new workflows. You can create a template workflow with:
# This will create a new template workflow at the specified path
p3proc --create_new_workflow /path/to/mynewworkflow
# Template directory structure
/path/to/mynewworkflow
    __init__.py
    custom.py
    nodedefs.py
    workflow.py
See the Creating New Workflows section for more information on creating new workflows.
To use the new workflows you’ve created, you will need to import them on the command line.
# this will import your workflows and use your settings file
p3proc /dataset /output --workflows /path/to/workflows --settings /path/to/settings.json
where /path/to/workflows is a folder containing an empty __init__.py file and one or more workflow folders generated with the --create_new_workflow option.
# This is what your directory structure should look like
/path/to/workflows
    __init__.py
    mynewworkflow1
        __init__.py
        custom.py
        nodedefs.py
        workflow.py
    mynewworkflow2
        __init__.py
        custom.py
        nodedefs.py
        workflow.py
    ...
You also need to register them in the settings file.
# In your settings file, your workflows field should contain the workflow you are adding
"workflows": [
"p3_bidsselector",
"p3_freesurfer",
"p3_skullstrip",
"p3_stcdespikemoco",
"p3_fieldmapcorrection",
"p3_alignanattoatlas",
"p3_alignfunctoanat",
"p3_alignfunctoatlas",
"p3_create_fs_masks",
"mynewworkflow" # <-- Added workflow
]
Note
When registering the workflow in your settings file, you should use the name of the folder containing the workflow, not the name of the workflowgenerator class in the workflow.py file.
A useful option is --disable_run, which stops p3 just before running the pipeline.
p3proc /dataset /output --disable_run
This is useful for checking for errors and inspecting the connected graph output before actually running the pipeline.
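This check pairs naturally with custom workflows; for example, you can confirm that a new workflow is wired up correctly before committing to a full run:
# write the graphs for a custom setup without running it
p3proc /dataset /output --workflows /path/to/workflows --settings /path/to/settings.json --disable_run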
You can also run the pipeline in verbose mode, which will print useful messages for debugging purposes.
p3proc /dataset /output --verbose
Filtering the dataset¶
p3 has the option to print available keys in the BIDS dataset that you can filter on.
# Note that the only argument required here is the BIDS dataset itself
p3proc /MSC_BIDS --summary
This is what the output should look like:
Below are some available keys in the dataset to filter on:
Available subject:
MSC01
Available session:
func01 func02 func03 func04 func05 func06 func07 func08 func09 func10 struct01 struct02
Available run:
1 2 3 4
Available type:
angio BIS bold defacemask description events FFI group KBIT2 magnitude1 magnitude2 participants phasediff scans sessions T1w T2w Toolbox veno
Available task:
glasslexical memoryfaces memoryscenes memorywords motor rest
Available modality:
anat fmap func
The summary internally uses pybids to show available keys to filter on. If you are using the default p3_bidsselector workflow, you can use the keys here to specify what images to process. For example, in the settings file:
{
    "bids_query":
    {
        "T1":
        {
            "type": "T1w"
        },
        "epi":
        {
            "modality": "func",
            "task": "rest"
        }
    }
}
This will filter all T1 images by type “T1w”, while epi images will use modality “func” and task “rest”.
You can filter on specific subjects and runs by using [] character classes:
{
    "bids_query":
    {
        "T1":
        {
            "type": "T1w",
            "subject": "MSC0[12]"
        },
        "epi":
        {
            "modality": "func",
            "task": "rest",
            "run": "[13]"
        }
    }
}
This query will use subjects MSC01 and MSC02 and process epis with runs 1 and 3.