Tokenization of a Delimited-Type Encrypted Input File Using the Migration.properties File
In this sample, the CT-V Bulk Utility tokenizes a delimited-type encrypted data file using the migration.properties file.
Creating the Input Data File
Below is the data that will be used to populate the Encrypted_del.csv file:
1000401ECF7F7C434E2A7BA6B8D639A5A444632A6EB98FA3C1EDD8D895B3BA970ABB87
100040952418109D4D55B82B3D7203F34BB3321178CB4933CA5311B284385C49FFAD2B
1000400028D1F05385C8A8084307B0549F710D885305AE5F166D99E7D22FF487E59577
100040BC2E5718823CFA9836166E0DD86DEF9335DD21E8AE8F139034C2269E5A719390
100040A42CBF6C762B6ADFDBA643096E8B4C3CF77577DF52C6555195FFFFF65A66DBAD
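For reference, ciphertext rows of this kind (AES/CBC/PKCS5Padding output, Base16-encoded) can be produced with standard JCE calls. The sketch below is illustrative only: it generates a local key and IV and encrypts a made-up sample value, so it will not reproduce the exact rows above, which were encrypted with the token_key held on the key server.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;

public class EncryptSampleRow {
    public static void main(String[] args) throws Exception {
        // Locally generated key and IV, for illustration only; the sample
        // file above was produced with the key server's token_key.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        // Same transformation as Decryptor.Column0.Algorithm below
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ct = cipher.doFinal("1234567890123456".getBytes("UTF-8")); // hypothetical value

        // Base16 (hex) encoding, matching Decryptor.Column0.Encoding below
        StringBuilder hex = new StringBuilder();
        for (byte b : ct) hex.append(String.format("%02X", b));
        System.out.println(hex); // one candidate row for Encrypted_del.csv
    }
}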
Setting Parameters in the Migration.properties File
Below are the parameters set for the delimited format input data file:
#####################
# Input Configuration
# Input.FilePath
# Input.Type
#####################
#
Input.FilePath = C:\\Desktop\\migration\\Encrypted_del.csv
#
Input.Type = Delimited
###############################
# Delimited Input Configuration
# Input.EscapeCharacter
# Input.QuoteCharacter
# Input.ColumnDelimiter
###############################
#
Input.EscapeCharacter = \\
#
Input.QuoteCharacter = "
#
Input.ColumnDelimiter = ,
###############################
# Decryption Configuration
# Decryptor.Column0.Key
# Decryptor.Column0.Algorithm
# Decryptor.Column0.Encoding
# ...
# Decryptor.ColumnN.Key
# Decryptor.ColumnN.Algorithm
# Decryptor.ColumnN.Encoding
###############################
#
Decryptor.Column0.Key = token_key
#
Decryptor.Column0.Algorithm = AES/CBC/PKCS5Padding
#
Decryptor.Column0.Encoding = Base16
###########################################
# Tokenization Configuration
# Tokenizer.Column0.TokenVault
# Tokenizer.Column0.CustomDataColumnIndex
# Tokenizer.Column0.TokenFormat
# Tokenizer.Column0.LuhnCheck
# ...
# Tokenizer.ColumnN.TokenVault
# Tokenizer.ColumnN.CustomDataColumnIndex
# Tokenizer.ColumnN.TokenFormat
# Tokenizer.ColumnN.LuhnCheck
############################################
#
Tokenizer.Column0.TokenVault = BTM
#
Tokenizer.Column0.CustomDataColumnIndex = -1
#
Tokenizer.Column0.TokenFormat = LAST_FOUR_TOKEN
#
Tokenizer.Column0.LuhnCheck = true
######################
# Output Configuration
# Output.FilePath
# Output.Sequence
######################
#
Output.FilePath = C:\\Desktop\\migration\\tokenized.csv
# Specifies the file path where the intermediate temporary chunks of the
# output are stored.
#
# Note: If no intermediate file path is set, then the path specified in
# Output.FilePath is used as the intermediate file path.
#
Intermediate.FilePath =
# Set a positive value for the columns to be tokenized. For example, column 0
# has been set below, so only this column will be tokenized.
#
Output.Sequence = 0
# TokenSeparator
#
# Specifies if the tokens are space separated or not.
# Note: This parameter is ignored if Input.Type is set to Delimited.
#
# Valid values
# true
# false
# Note: Default value is set to true.
#
TokenSeparator = true
#
# StreamInputData
#
# Specifies whether the input data is streamed or not.
#
# Valid values
# true
# false
# Note: Default value is set to false.
# Note: This parameter is ignored if Input.Type is set to Delimited.
#
StreamInputData = false
#
# CodePageUsed
#
# Specifies the code page in use.
# Used with the EBCDIC character set; for example, use "ibm500" for EBCDIC
# International.
# https://docs.oracle.com/javase/7/docs/api/java/nio/charset/Charset.html
#
# Note: If no value is specified, the ASCII character set is used by default.
#
CodePageUsed =
#
# FailureThreshold
#
# Specifies the number of errors after which the Bulk Utility aborts the
# tokenization operation.
# Valid values
# -1 = Tokenization continues irrespective of the number of errors during the
# operation. This is the default value.
# 0 = Bulk Utility aborts the operation on occurrence of any error.
# Any positive value = Indicates the failure threshold, after which the Bulk
# Utility aborts the operation.
#
# Note: If no value or a negative value is specified, the Bulk Utility continues
# irrespective of the number of errors.
#
FailureThreshold = -1
###############################################################################
# END
###############################################################################
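A note on the delimited input settings above: Input.ColumnDelimiter separates fields, Input.QuoteCharacter protects delimiters that appear inside a field, and Input.EscapeCharacter protects a literal quote or backslash. The sketch below is not the Bulk Utility's parser, just a minimal illustration of how those three characters typically interact:

import java.util.ArrayList;
import java.util.List;

public class DelimitedLineSplitter {
    // Splits one line using the delimiter, quote, and escape characters
    // configured above (',', '"', and '\').
    static List<String> split(String line, char delim, char quote, char escape) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == escape && i + 1 < line.length()) {
                cur.append(line.charAt(++i)); // escaped character taken literally
            } else if (c == quote) {
                inQuotes = !inQuotes;         // toggle quoted state
            } else if (c == delim && !inQuotes) {
                fields.add(cur.toString());   // field boundary
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        fields.add(cur.toString());
        return fields;
    }

    public static void main(String[] args) {
        // "a,b" keeps its comma; \\ yields a literal backslash
        System.out.println(split("\"a,b\",c\\\\d,e", ',', '"', '\\'));
        // prints: [a,b, c\d, e]
    }
}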
Running CipherTrust Vaulted Tokenization Bulk Utility
Enter the following command to tokenize the input file with the CT-V Bulk Utility in a Windows environment:
java -cp SafeNetTokenService-8.12.3.000.jar com.safenet.token.migration.main migration.properties -t DSU=NAE_User1 DSP=qwerty12345 DBU=DB_User1 DBP=abcd1234
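The doubled backslashes in the file paths above are needed because .properties files treat a single backslash as an escape character. If the utility cannot locate the input or output file, a quick sanity check, assuming the standard java.util.Properties loading semantics that the doubled backslashes suggest, is to print the values as Java will read them:

import java.io.FileReader;
import java.util.Properties;

public class CheckMigrationProps {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        try (FileReader r = new FileReader("migration.properties")) {
            p.load(r);
        }
        // "\\" in the file is read back as a single backslash
        System.out.println("Input.FilePath  = " + p.getProperty("Input.FilePath"));
        System.out.println("Output.FilePath = " + p.getProperty("Output.FilePath"));
    }
}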
Reviewing the Output File
The output data file is saved at the path specified in the migration.properties file, with the name tokenized.csv. Only the first column is decrypted and then tokenized, as per the output sequence set in the properties file.
Here is the data from the output file:
9624890608688710
8477359828188810
6122480626598910
7470263339509010
0289479816759110
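The Tokenizer.Column0.LuhnCheck parameter above refers to the standard Luhn (mod 10) checksum used for card numbers. The following generic sketch (not CT-V code) can be pointed at the output tokens to see how each fares under that check:

public class LuhnCheck {
    // Standard Luhn (mod 10) validation: starting from the rightmost digit,
    // double every second digit, subtract 9 if the result exceeds 9, and
    // require the total to be divisible by 10.
    static boolean luhnValid(String digits) {
        int sum = 0;
        boolean dbl = false;
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (dbl) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            dbl = !dbl;
        }
        return sum % 10 == 0;
    }

    public static void main(String[] args) {
        // Tokens from the output file above
        String[] tokens = { "9624890608688710", "8477359828188810",
                            "6122480626598910", "7470263339509010",
                            "0289479816759110" };
        for (String t : tokens) {
            System.out.println(t + " -> " + (luhnValid(t) ? "passes" : "fails") + " Luhn");
        }
    }
}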