Version: 8.0.1

Data in and out

The Pathling library provides convenience functions that help you read FHIR data in ahead of querying it. Functions are also provided to assist with persisting FHIR data in various formats.

Reading FHIR data

There are several ways of reading FHIR data and making it available for query.

FHIR Bulk Data API

You can load data directly from a FHIR server that implements the FHIR Bulk Data Access API. This allows you to efficiently extract large amounts of data from a FHIR server for analysis.

# Basic system-level export
data = pc.read.bulk(
    fhir_endpoint_url="https://bulk-data.smarthealthit.org/fhir",
    output_dir="/tmp/bulk_export"
)

from datetime import datetime, timezone

# Customized group-level export
data = pc.read.bulk(
    fhir_endpoint_url="https://bulk-data.smarthealthit.org/fhir",
    output_dir="/tmp/bulk_export",
    group_id="BMCHealthNet",
    types=["Patient", "Condition", "Observation"],
    elements=["id", "status"],
    since=datetime(2015, 1, 1, tzinfo=timezone.utc)
)

# Patient-level export with specific patients
data = pc.read.bulk(
    fhir_endpoint_url="https://bulk-data.smarthealthit.org/fhir",
    output_dir="/tmp/bulk_export",
    patients=[
        "Patient/736a19c8-eea5-32c5-67ad-1947661de21a",
        "Patient/26d06b50-7868-829d-cf71-9f9a68901a81"
    ]
)

# Export with authentication
data = pc.read.bulk(
    fhir_endpoint_url="https://bulk-data.smarthealthit.org/fhir",
    output_dir="/tmp/bulk_export",
    auth_config={
        "enabled": True,
        "client_id": "my-client-id",
        "private_key_jwk": "{ \"kty\":\"RSA\", ...}",
        "scope": "system/*.read"
    }
)
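
The example above uses asymmetric (private key JWK) authentication. A minimal sketch of symmetric (client secret) authentication follows; the client_secret and token_endpoint keys are assumptions, not confirmed configuration names, and should be checked against the authentication configuration documentation.

# Sketch: symmetric (client secret) SMART authentication. The
# "client_secret" and "token_endpoint" keys are assumptions, not
# confirmed configuration names.
data = pc.read.bulk(
    fhir_endpoint_url="https://bulk-data.smarthealthit.org/fhir",
    output_dir="/tmp/bulk_export",
    auth_config={
        "enabled": True,
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
        "token_endpoint": "https://bulk-data.smarthealthit.org/auth/token",
        "scope": "system/*.read"
    }
)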

The Bulk Data API source supports all features of the FHIR Bulk Data Access specification, including:

  • System, group and patient level exports
  • Filtering by resource types and elements
  • Time-based filtering
  • Associated data inclusion
  • SMART authentication (both symmetric and asymmetric)

NDJSON

You can load all the NDJSON files from a directory, assuming the following naming scheme:

[resource type].ndjson OR [resource type].[tag].ndjson

Pathling will detect the resource type from the file name, and convert it to a Spark dataset using the corresponding resource encoder.

The tag can be any string, and is used to accommodate multiple different files that contain the same resource type. For example, you might have one file called Observation.chart.ndjson and another called Observation.lab.ndjson.

data = pc.read.ndjson("/usr/share/staging/ndjson")
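
Both tagged files contribute to the same resource type once decoded. As a sketch, assuming the returned data source exposes a read method that returns the Spark dataset for a given resource type, the combined Observation data could be retrieved like this:

# The directory contains Observation.chart.ndjson and Observation.lab.ndjson;
# both are decoded into the single Observation resource type.
data = pc.read.ndjson("/usr/share/staging/ndjson")

# Assumption: the data source exposes a read method that returns the
# Spark dataset for a resource type.
observations = data.read("Observation")
observations.show()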

You can also accommodate a custom naming scheme within the NDJSON files by using the file_name_mapper argument, which maps a file name to the resource type (or types) it contains. Here is an example of how to import the MIMIC-IV FHIR data set:

import re

data = pc.read.ndjson(
    "/usr/share/staging/ndjson",
    file_name_mapper=lambda file_name: re.findall(
        r"Mimic(\w+?)(?:ED|ICU|Chartevents|Datetimeevents|Labevents|"
        r"MicroOrg|MicroSusc|MicroTest|Outputevents|Lab|Mix|VitalSigns|"
        r"VitalSignsED)?$",
        file_name))

FHIR Bundles

You can load data from a directory containing either JSON or XML FHIR Bundles. The specified resource types will be extracted from the Bundles and made available for query.

data = pc.read.bundles("/usr/share/staging/bundles",
                       resource_types=["Patient", "Condition", "Immunization"])
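
For XML Bundles, the content type may need to be specified. The following is a sketch only; the mime_type parameter name is an assumption and should be checked against the library's documentation.

# Sketch: reading XML Bundles. The "mime_type" parameter name is an
# assumption, not a confirmed part of the API.
data = pc.read.bundles(
    "/usr/share/staging/bundles",
    resource_types=["Patient", "Condition", "Immunization"],
    mime_type="application/fhir+xml"
)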

Datasets

You can make data that is already held in Spark datasets available for query using the datasets method. You can pass a dictionary of resource type to dataset, or populate the returned object one pair at a time using its dataset method.

data = pc.read.datasets({
    "Patient": patient_dataset,
    "Condition": condition_dataset,
    "Immunization": immunization_dataset,
})
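
Alternatively, the returned object can be populated one pair at a time. This is a sketch that assumes the dictionary argument may be omitted:

# Sketch: populating the source incrementally via the dataset method,
# assuming the initial dictionary argument can be omitted.
data = pc.read.datasets()
data.dataset("Patient", patient_dataset)
data.dataset("Condition", condition_dataset)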

Parquet

You can load data from a directory containing Parquet files. The Parquet files must have been saved using the schema used by the Pathling encoders (see Writing FHIR data).

The files are assumed to be named according to their resource type ([resource type].parquet), e.g. Patient.parquet, Condition.parquet.

data = pc.read.parquet("/usr/share/staging/parquet")

Delta Lake

You can load data from a directory containing Delta Lake tables. Delta tables are a specialisation of Parquet that enable additional functionality, such as incremental update and history. The Delta tables must have been saved using the schema used by the Pathling encoders (see Writing FHIR data).

Note that you will need to use the enable_delta parameter when initialising the Pathling context.
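
For example, the context might be created with Delta support enabled like this:

from pathling import PathlingContext

# Delta support must be enabled when the context is created.
pc = PathlingContext.create(enable_delta=True)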

The files are assumed to be named according to their resource type ([resource type].parquet), e.g. Patient.parquet, Condition.parquet.

data = pc.read.delta("/usr/share/staging/delta")

Managed tables

You can load data from managed tables that have previously been saved within the Spark catalog. You can optionally specify a schema that will be used to locate the tables; otherwise, the default schema will be used.

The tables are assumed to be named according to their resource type, e.g. Patient, Condition.

This also works with the Unity Catalog feature of Databricks.

data = pc.read.tables("mimic-iv")
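
If you omit the schema, the default schema is used. A minimal sketch, assuming the schema argument is optional:

# Sketch: reading from the default schema (assumes the schema argument
# may be omitted).
data = pc.read.tables()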

Writing FHIR data

Once you have read data in from a data source, you can also optionally write it back out to a variety of targets. This is useful for persisting source data in a more efficient form for query (e.g. Parquet or Delta), or for exporting data to NDJSON for use in other systems.
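
For example, NDJSON source data might be read once, persisted as Parquet, and then read back from the Parquet copy in later sessions (each call is described in its own section):

# Read the source NDJSON once, and persist it in the more efficient
# Parquet form.
data = pc.read.ndjson("/usr/share/staging/ndjson")
data.write.parquet("/usr/share/warehouse/parquet")

# In later sessions, read from the Parquet copy instead.
data = pc.read.parquet("/usr/share/warehouse/parquet")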

NDJSON

You can write data to a directory containing NDJSON files. The files are named according to their resource type ([resource type].ndjson), e.g. Patient.ndjson, Condition.ndjson.

data.write.ndjson("/tmp/ndjson")

Parquet

You can write data to a directory containing Parquet files. The files are named according to their resource type ([resource type].parquet), e.g. Patient.parquet, Condition.parquet.

data.write.parquet("/usr/share/warehouse/parquet")

Delta Lake

You can write data to a directory containing Delta Lake tables. Delta tables are a specialisation of Parquet that enable additional functionality, such as incremental update and history.

Note that you will need to use the enable_delta parameter when initialising the Pathling context.

The files are named according to their resource type ([resource type].parquet), e.g. Patient.parquet, Condition.parquet.

data.write.delta("/usr/share/warehouse/delta")
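
As a sketch of what the history capability makes possible, the change history of a written table can be inspected with the Delta Lake API. This assumes the delta-spark package is available and that the underlying Spark session is accessible as pc.spark:

from delta.tables import DeltaTable

# Sketch: inspect the change history of the written Patient table.
# "pc.spark" (the underlying SparkSession) is an assumption about the
# context API.
patient_table = DeltaTable.forPath(
    pc.spark, "/usr/share/warehouse/delta/Patient.parquet")
patient_table.history().show()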

Managed tables

You can write data to managed tables that will be saved within the Spark catalog. You can optionally specify a schema into which the tables will be saved; otherwise, the default schema will be used.

The tables are named according to their resource type, e.g. Patient, Condition.

This also works with the Unity Catalog feature of Databricks.

data.write.tables("test")