Purveyor: Centers for Medicare & Medicaid Services
Years in the DataCore: 2012-2017
Years of data owned: 2012-2017
Unit of data: Claim
Dataset website: https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/Hospice/index.html
General description: This file contains final action claims submitted by hospice providers. Once a beneficiary elects hospice, all hospice-related claims will be found in this file, regardless of whether the beneficiary is in Medicare fee-for-service or in a Medicare managed care plan.
Common Key Linking Variables
- DESY_SORT_KEY is a unique identifier for a given beneficiary.
- CLAIM_NO is the unique identifier for a given claim.
- ORGNPINM can be used to identify the healthcare organization that submitted the claim.
- NPI and UPIN can be used to uniquely identify providers.
- CMS also provides an internal unique provider identifier (PROVIDER)
- State and county data are provided for every claim
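The linking variables above can be sketched in code. The following is a minimal illustration (not the DataCore pipeline, which used SAS and SQL) of grouping claims by beneficiary on DESY_SORT_KEY; the sample rows and values are hypothetical, and real files carry many more fields.

```python
# Sketch: linking claims to beneficiaries via DESY_SORT_KEY.
# Sample rows are hypothetical stand-ins for the base claim file.
from collections import defaultdict

claims = [
    {"DESY_SORT_KEY": "B001", "CLAIM_NO": "C100"},
    {"DESY_SORT_KEY": "B001", "CLAIM_NO": "C101"},
    {"DESY_SORT_KEY": "B002", "CLAIM_NO": "C102"},
]

# Group claim identifiers by beneficiary identifier.
claims_by_bene = defaultdict(list)
for row in claims:
    claims_by_bene[row["DESY_SORT_KEY"]].append(row["CLAIM_NO"])
```

The same grouping pattern applies to any of the per-claim linking variables (PROVIDER, NPI, state/county).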
Base Claim File
Every row of the claim file represents a claim submitted to CMS.
The primary key of the claim file is CLAIM_NO
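Because CLAIM_NO is the primary key, it should appear at most once per row in the base claim file. A minimal sanity check (with hypothetical sample rows) looks like this:

```python
# Sketch: verify that CLAIM_NO is unique in the base claim file.
# base_rows is a hypothetical stand-in for the loaded table.
from collections import Counter

base_rows = [
    {"CLAIM_NO": "C100"},
    {"CLAIM_NO": "C101"},
    {"CLAIM_NO": "C102"},
]

counts = Counter(row["CLAIM_NO"] for row in base_rows)
duplicates = [claim_no for claim_no, n in counts.items() if n > 1]
```

An empty `duplicates` list confirms the primary-key property on the sample.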
Condition Code File
Every row of the condition file is a condition code for a given claim.
The primary key for the condition file is CLAIM_NO and RLT_COND_CD_SEQ (a line number)
Occurrence Code File
Every row of the occurrence code file is a claim-related occurrence code for a given claim.
The primary key for the occurrence code file is CLAIM_NO and RLT_OCRNC_CD_SEQ (a line number)
Value Code File
Every row of the value code file is a value code and amount for a given claim.
The primary key for the value code file is CLAIM_NO and RLT_VAL_CD_SEQ (a line number)
Revenue Center File
Every row of the revenue center file is a revenue line for a given claim.
The primary key for the revenue center file is CLAIM_NO and CLM_LINE_NUM (a line number)
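Each of the line-level files (condition, occurrence, value code, revenue center) links back to the base claim on CLAIM_NO, with the composite key (CLAIM_NO, line number) unique within the file. A sketch with hypothetical revenue center rows:

```python
# Sketch: composite-key structure of a line-level file.
# (CLAIM_NO, CLM_LINE_NUM) is the primary key; CLAIM_NO links
# back to the base claim file. Sample rows are hypothetical.
revenue_lines = [
    {"CLAIM_NO": "C100", "CLM_LINE_NUM": 1},
    {"CLAIM_NO": "C100", "CLM_LINE_NUM": 2},
    {"CLAIM_NO": "C101", "CLM_LINE_NUM": 1},
]

# The composite key must be unique across the file.
keys = [(r["CLAIM_NO"], r["CLM_LINE_NUM"]) for r in revenue_lines]
assert len(keys) == len(set(keys))

# Collect the revenue lines attached to each claim.
lines_per_claim = {}
for r in revenue_lines:
    lines_per_claim.setdefault(r["CLAIM_NO"], []).append(r["CLM_LINE_NUM"])
```

The same pattern holds with RLT_COND_CD_SEQ, RLT_OCRNC_CD_SEQ, or RLT_VAL_CD_SEQ in place of CLM_LINE_NUM.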
DataCore Staff Errata
5/28/2019: No data errata, data exceptions or data corrections have been issued.
DataCore Purveyor Errata
5/28/2019: No data errata, data exceptions or data corrections have been implemented.
CMS sent the claims files as comma-separated value (.csv) files along with a SAS load script and a data dictionary. The data dictionary files were found to be incorrect and could not be used to load the data into SQL; instead, the process below was used.
For the code used for these processes, email email@example.com.
- The .csv files were loaded into SAS using the provided SAS load files.
- SQL tables were created using the PROC SQL "create table like" statement in SAS.
- SAS was then used to convert the .csv files into tab-separated value (.tsv) files.
- A bulk copy program (BCP) was used to upload the .tsv files into SQL.
- The provided data dictionary was used to generate metadata about the dataset fields, which in turn was used to generate the DataCore data dictionary.
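The .csv-to-.tsv conversion step above was performed in SAS; as an illustration only, the same transformation can be sketched with Python's csv module (the sample input is hypothetical):

```python
# Sketch of the .csv -> .tsv conversion step, mirroring what the
# SAS step did: re-emit each record with tab delimiters so BCP can
# load it. Hypothetical two-column sample; real files are wider.
import csv
import io

csv_text = 'CLAIM_NO,PROVIDER\nC100,"Acme, Inc."\n'

reader = csv.reader(io.StringIO(csv_text))
out = io.StringIO()
writer = csv.writer(out, delimiter="\t", lineterminator="\n")
for row in reader:
    writer.writerow(row)

tsv_text = out.getvalue()
```

Note that values containing commas (quoted in the .csv) no longer need quoting once the delimiter is a tab, which is one reason a tab-delimited intermediate is convenient for bulk loading.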