Creates a Pathling context with the given configuration options.
pathling_connect(
  spark = NULL,
  max_nesting_level = 3,
  enable_extensions = FALSE,
  enabled_open_types = c("boolean", "code", "date", "dateTime", "decimal", "integer",
    "string", "Coding", "CodeableConcept", "Address", "Identifier", "Reference"),
  enable_terminology = TRUE,
  terminology_server_url = "https://tx.ontoserver.csiro.au/fhir",
  terminology_verbose_request_logging = FALSE,
  terminology_socket_timeout = 60000,
  max_connections_total = 32,
  max_connections_per_route = 16,
  terminology_retry_enabled = TRUE,
  terminology_retry_count = 2,
  enable_cache = TRUE,
  cache_max_entries = 2e+05,
  cache_storage_type = StorageType$MEMORY,
  cache_storage_path = NULL,
  cache_default_expiry = 600,
  cache_override_expiry = NULL,
  token_endpoint = NULL,
  enable_auth = FALSE,
  client_id = NULL,
  client_secret = NULL,
  scope = NULL,
  token_expiry_tolerance = 120,
  accept_language = NULL
)
A pre-configured SparkSession instance; use this if you need to control the way that the session is set up
Controls the maximum depth of nested element data that is encoded upon import. This affects certain elements within FHIR resources that contain recursive references, e.g., QuestionnaireResponse.item.
Enables support for FHIR extensions
The list of types that are encoded within open types, such as extensions.
Enables the use of terminology functions
The endpoint of a FHIR terminology service (R4) that Pathling can use to resolve terminology queries.
Setting this option to TRUE will enable additional logging of the details of requests to the terminology service.
The maximum period (in milliseconds) to wait for incoming data from the terminology service
The maximum total number of connections for the client
The maximum number of connections per route for the client
Controls whether terminology requests that fail for possibly transient reasons should be retried
The number of times to retry failed terminology requests
Set this to FALSE to disable caching of terminology requests
Sets the maximum number of entries that will be held in memory
The type of storage to use for the terminology cache (see the sketch following this argument list)
The path on disk to use for the cache; required when disk storage is used
The default expiry time for cache entries (in seconds)
If provided, this value overrides the expiry time provided by the terminology server
An OAuth2 token endpoint for use with the client credentials grant
Enables authentication of requests to the terminology server
A client ID for use with the client credentials grant
A client secret for use with the client credentials grant
A scope value for use with the client credentials grant
The minimum number of seconds that a token should have before expiry when deciding whether to send it with a terminology request
The default value of the Accept-Language HTTP header passed to the terminology server
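As an illustration only, a connection with a custom terminology server and a disk-backed cache might look like the sketch below. The server URL and cache path are placeholders, and StorageType$DISK is assumed to be the disk-backed storage option.

# Sketch: custom terminology server with a disk-backed cache.
# The URL, path, and StorageType$DISK value are illustrative assumptions.
pc <- pathling_connect(
  terminology_server_url = "https://tx.example.org/fhir",
  terminology_socket_timeout = 30000,
  cache_storage_type = StorageType$DISK,
  cache_storage_path = "/tmp/pathling-terminology-cache"
)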
A Pathling context instance initialized with the specified configuration
If a SparkSession is not provided and one is already running within the current process, it will be reused. If no session is present, a new one will be created.
It is assumed that the Pathling library API JAR is already on the classpath. If you are running your own cluster, make sure the JAR is included in the cluster's list of packages.
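For example, a session created through sparklyr can pull the JAR in via the Spark packages mechanism. This is a sketch only: the Maven coordinates below are an assumption, and the version placeholder must be replaced with the Pathling release that matches this package.

# Sketch: adding the Pathling library JAR to a sparklyr-managed session.
# The coordinates are an assumption; check the Pathling documentation and
# substitute the correct artifact and version.
config <- sparklyr::spark_config()
config$sparklyr.shell.packages <- "au.csiro.pathling:library-runtime:<version>"
sc <- sparklyr::spark_connect(master = "local", config = config)
pc <- pathling_connect(spark = sc)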
Other context lifecycle functions:
pathling_disconnect(), pathling_disconnect_all(), pathling_spark()
# Create PathlingContext for an existing Spark connection.
sc <- sparklyr::spark_connect(master = "local")
pc <- pathling_connect(spark = sc)
pathling_disconnect(pc)
# Create PathlingContext with a new Spark connection.
pc <- pathling_connect()
spark <- pathling_spark(pc)
pathling_disconnect_all()
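# A further sketch: connecting to a terminology server protected by OAuth2
# client credentials. All endpoint, credential, and scope values below are
# placeholders and should be replaced with values from your environment.
pc <- pathling_connect(
  terminology_server_url = "https://tx.example.org/fhir",
  enable_auth = TRUE,
  token_endpoint = "https://auth.example.org/oauth2/token",
  client_id = "my-client-id",
  client_secret = Sys.getenv("TX_CLIENT_SECRET"),
  scope = "system/*.read"
)
pathling_disconnect(pc)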